
At this point it's a sunk cost fallacy.


Yeah, I'm just waiting for this to finally play out.

If you understand product design, economics, corporate finance, and various other disciplines well, it's obvious what is going on.

MSFT wants to juice their numbers for the next earnings call to keep the mania going. Zzzzzzzzzzz

I've spoken to people who work in the finance sector, in portfolio management and tax audit, and they laugh hysterically at how bad the tools are (Copilot in particular) and resent how hard they are being pushed on them.


I think it's more nuanced than that. Microsoft is desperate for growth in a growth-challenged macro, and Nadella has guided valuations and stock targets that are simply unobtainable without this growth. That's one part of the AI slop shovel. The second part is institutions that are desperate to cut labor costs or find other efficiency gains to maintain the historical financial performance targets of the ZIRP era, which are simply no longer obtainable. They are willing to push their workers through the sausage machine of "AI" to try to make it happen.

And so the performance art must continue, at least until the AI investment music stops. Your options are to get rich if you're in a position to (due to irrational exuberance and unsophisticated capital investment), or to play along until the facade falls in order to keep your job. "It is what it is."


> Nadella has guided valuations and stock targets that are simply unobtainable without this growth

I read a couple of articles, and he seems to have been completely suckered in by the damn thing.

from https://archive.is/oWbZB#selection-2387.572-2401.668

> Copilot consumes Nadella’s life outside the office as well. He likes podcasts, but instead of listening to them, he loads transcripts into the Copilot app on his iPhone so he can chat with the voice assistant about the content of an episode in the car on his commute to Redmond. At the office, he relies on Copilot to deliver summaries of messages he receives in Outlook and Teams and toggles among at least 10 custom agents from Copilot Studio. He views them as his AI chiefs of staff, delegating meeting prep, research and other tasks to the bots. “I’m an email typist,” Nadella jokes of his job, noting that Copilot is thankfully very good at triaging his messages.

This is no different from ending up in an alternate reality populated by your AI "girlfriends",

and will produce similar results


Sounds like a dogfooding sales pitch to me. Nadella has learned from embarrassing prior incidents where Windows Phone developers were found to be using iPhones or Android devices as their personal phones, or when pictures of Microsoft employee offices showed Macs littering the workspace.


But if real, it indicates that he does take it seriously. Eat your own dog food and all that.

I do not see it as an AI girlfriend. I see it as pushing the limits of the technology for productivity.

" Copilot is thankfully very good at triaging his messages"

But... I would never do that without also having a human check that important bits don't get lost.


At some point a mistake will be made that will be very embarrassing for someone who matters.


People have "satired" similar hypotheticals about Sam Altman [0], and we know it's happened with people like Blake lemoine.

It makes me curious how much, if any, of the current LLM hype is simply coming from people who are dealing with the ELIZA effect.

[0] https://medium.com/where-thought-bends/the-7-trillion-delusi...


How'd ELIZA do on the International Math Olympiad?


It's quite fortunate that the ELIZA effect isn't defined by IMO scores, isn't it?


How is that point relevant? The ELIZA effect involves humans incorrectly attributing intelligence to a simple computer program. Meanwhile, a complex ML model that solves difficult problems it hasn't seen before is intelligent, whether you or I like it or not.

That part is no longer up for debate. The question is how useful and scalable this particular form of intelligence will turn out to be.


Yes, but the sunk cost is also driving that: the investment has to show growth in cash flows to the extent that it beats the hurdle rate and adds value.


Ahh, Microsoft's AI capital investment sunk cost fallacy; certainly, I agree with that. The stock will be punished when the investment does not show cash flow returns, for sure.

MSFT: Microsoft CEO Offloads $75 Million in Stock Amid AI Boom - https://finance.yahoo.com/news/msft-microsoft-ceo-offloads-7... - September 5th, 2025


> institutions that are desperate to cut labor costs...

I wonder if those who resist Copilot aren't driven by the nascent feeling that they are training their replacement.

> ... to maintain historical financial performance targets during ZIRP that are simply no longer obtainable.

If all else fails, they can ZIRP back to fantasy land and avoid being the losers by shifting it to the public once again.


Microsoft really wants that AI company valuation.


Google's decision to add developer verification killed my interest in handset development entirely. But hey, at least I know what to focus my time on now rather than third-party app development (i.e., F-Droid). I look at my Android phone differently now that this is on the table, which sucks, but hey, they made me switch my development time to Linux drivers instead.


After 15 years of professional development on Android, I too am now thinking about switching my focus to something different. And it sucks.

I just wish there were a viable* FOSS Linux-based mobile OS project out there that I could offer my time and energy to instead.


Aren't Graphene and Lineage exactly that?

I have been running Graphene on a Pixel for a while now, and I don't think Linux phones are a viable alternative. The vast majority of Android apps just work on Graphene, and there are millions of them. The UI experience is polished, and everything just works, with the exception of apps that require Google Play Integrity. And of course these projects aren't affected by Google's restrictions on sideloading.


Look, I love that GrapheneOS exists, and I have used it in the past (as I have Lineage).

But GrapheneOS lives at the mercy of Google. Pixel devices being reference devices makes it unlikely that Google will lock them down completely.

However, as can be seen with this verification move, Google is willing to go very far to accomplish its aims. They already delayed delivery of Android 16 images, causing GrapheneOS some headaches.

Who is to say more isn't to come?


Google also announced that Pixel devices are no longer reference devices. The reference device is now some VM.


Waydroid exists, and a mobile distro that provided Waydroid OOTB would be as usable as a full-on Android phone. You could even build it to remove the app verification stuff if that found its way into AOSP.


It's just bizarre to me. I have always been fully aware that when I give input to an LLM, I am conversing with a statistical model. It has never crossed my mind to actually talk to an LLM. But I guess it seems possible when you haven't grown up with technology or don't know how it works. This fate for that poor boy is awful, and OpenAI should be held responsible.


Interesting approach! The title is misleading, though. I do not understand how this relates to pixel art. Pixel art is placing pixels on a constrained canvas and choosing colors from a limited palette, which this does not seem to be. Maybe this could be considered a blurring effect or image reconstruction/approximation?


Thank you for your feedback! You're absolutely right. This should be called image reconstruction using a genetic algorithm.
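
For anyone curious, "image reconstruction with a genetic algorithm" usually means evolving a candidate image toward a target by repeated mutation and selection. Here's a minimal sketch in Python (my own hypothetical code, not this project's; strictly speaking it's a (1+1) hill-climbing variant of a GA, with a synthetic stand-in target):

    import numpy as np

    # Hypothetical sketch: evolve an image toward a target by mutating
    # random pixels and keeping the child only if it scores better.
    rng = np.random.default_rng(0)
    H, W = 32, 32
    target = rng.integers(0, 256, (H, W, 3), dtype=np.uint8)  # stand-in for a loaded image

    def fitness(img):
        # Negative mean squared error against the target (higher is better).
        return -np.mean((img.astype(float) - target.astype(float)) ** 2)

    def mutate(img, n_pixels=20):
        # Re-roll a handful of random pixels with random colors.
        out = img.copy()
        ys = rng.integers(0, H, n_pixels)
        xs = rng.integers(0, W, n_pixels)
        out[ys, xs] = rng.integers(0, 256, (n_pixels, 3))
        return out

    best = rng.integers(0, 256, (H, W, 3), dtype=np.uint8)  # start from noise
    best_fit = fitness(best)
    for step in range(20000):
        child = mutate(best)
        f = fitness(child)
        if f > best_fit:
            best, best_fit = child, f

A fuller GA would keep a whole population and add crossover, but the mutate/score/select loop is the core of the idea.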


I am more interested in what this article and project do not seem to mention.

> During this process I've learned a lot

Yes, but what exactly? I mean, I guess you don't have to touch the project once it's finished, so there is less value in familiarizing yourself with the source. The source is roughly 15,135 lines. That is quite a chunk, and it most likely would have taken more than 30 hours to write from the standpoint of knowing the basics of TypeScript and the Phaser library.


I've put some of the things I've learned in the README, but in general I feel more familiar with how Phaser works, how to achieve different outcomes, how to properly prompt the agents to get good results when making a game, and what to watch for when creating more complex game systems. The tutorial system in particular proved to be challenging, and if I had to make it again I would be more specific about how exactly it should be architected.


Not every interest comes with age. I am interested in some antique stuff that's way older than me.


I cannot love this development project enough; it's the peak of Android custom ROMs. I am curious, though, what they have changed so much that it is going to be more difficult.


Sounds like there are lots of causes: the project losing a senior dev who got conscripted to fight in a war, not getting access to an OEM ROM early on, and Google changing a lot of the code around the lock screen and other features (which makes porting their custom changes on top of it take more time).


Oh, they also had the issue that one of the leading devs got forcibly conscripted.

Quickly scanning GrapheneOS's posts, I couldn't find any details about the technical challenges. They'll probably post about it in the coming months.


I mean, conscription is by definition forced, isn't it?


Is such nitpicking necessary? It derails the conversation.


Yes and no. In some countries and depending on peace or war times, you are forced into service, but you can choose to do military service or be a conscientious objector and work as a hospital aide.


Yes, but the connotation here was "conscripted which is a bad thing that made them unhappy."


Even if it is mandatory in some form, there is plenty of nuance in the actual meaning before we get to calling it "forced", isn't there?


Looks neat and simple! Gonna try it for a few days at least, thanks.


Thank you! Any feedback is welcome, have fun.


Helix was great until I discovered something that was a dealbreaker for me. It treats the newline character as a normal character, which is just very, very unintuitive. I just wish there were an option for the same behavior as Vim. https://github.com/helix-editor/helix/issues/2956


I actually use the fact that newline is a pseudo-character pretty often (e.g., t-Return-d to "delete till newline"). I have the opposite issue: I use Helix most of the time, but sometimes have to compromise by rebinding a "vim mode", and little things like Esc-i causing the cursor to move one character left drive me up the wall.


> I see the "theft" as being democratized now- large studios/entities with large resources have always been able to legally "steal" so with these AI tools I guess we all can now?

You don't see any issue with machine learning models trained on huge amounts of copyrighted and patented material basically scraped from the internet? Yes, you can make your animated film and audio, but at the cost of hugely controversial and non-transparent generative models.

> Aesthetics are dead now imo because of generative AI as anyone can be any "style" so now it is all about ideas- original human ideas.

This argument kind of conflicts with itself, no? Aesthetics are inferred from ideas, whether inspired or original.


The proposition that "aesthetics is dead" because of [insert new thing here] is one of practically comical ignorance. I have never seen someone with at least a modicum of competence make it; it carries the same desperate, self-validating stench as "history is dead" or "truth is dead", etc.


No, I don't see it as theft, in that I choose not to be a part of the formal film industry. I don't "monetize" any of my work, so I have nothing that can be stolen but my aesthetics, and aesthetics are dead imo; they no longer have value. As an artist, my ideas are what is valuable, and no ML/AI etc. can "steal" my ideas, as I haven't thought of them yet. My art and ideas are an expression of my soul, and no machine will ever have a soul, so I'm not threatened one bit by AI.


If I as an artist make a great piece of art, you don't think I should be compensated for it? If I complete a surgery or drive a truck, which are all byproducts of ideas and knowledge, I should be compensated for them, not just for thinking of them. You see the problem here? I don't want to pay for the idea of art; I want the complete and original piece made by the artist. And if they generate the art using some machine learning model, they need to disclose that so I can make a conscious choice.


Yes, I do think we should be compensated for it, and we are. When we talk of compensation, that usually refers to money; I'm referring to other forms of compensation.

I feel compensated on a spiritual level for my work. I do other things for money, and my art can stay "pure" in a sense.

It's said that if you "sell your soul" you are never able to "buy it back", so why sell it in the first place? Just so you can try to get "rich" and attempt to buy it back later because you are soulless and miserable? Doesn't make sense to me.

Maybe art is something higher? A higher cause? I was really inspired by the Rudolf Steiner book "The Arts and Their Mission" early in my career; it's worth looking into if you are in the arts.


Fair use?

