I see where you're coming from, and I agree with the implication that this is more of an issue for inexperienced devs. Having said that, I'd push back a bit on the "legacy" characterization.
For me, if I check in LLM-generated code, it means I've signed off on the final revision and feel comfortable maintaining it to a similar degree as though it were fully hand-written. I may not know every character as intimately as that of code I'd finished writing by hand a day ago, but it shouldn't be any more "legacy" to me than code I wrote by hand a year ago.
It's a bit of a meme that AI code is somehow an incomprehensible black box, but if that is ever the case, it's a failure of the user, not the tool. At the end of the day, a human needs to take responsibility for any code that ends up in a product. You can't just ship something that people will depend on not to harm them without any human ever having had the slightest idea of what it does under the hood.
Take responsibility by leaving behind good documentation and a beefy set of tests; that way, future agents and humans will have something to bootstrap from, not just plain code.
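To make that concrete, here's a minimal sketch of the kind of artifact I mean. The function, the rounding rule, and the test names are all hypothetical, just to show the shape of "documented intent plus pinned-down behavior":

    # pricing.py: document intent, not just mechanics, so a future reader
    # (human or agent) knows why the code behaves this way.
    def apply_discount(price_cents: int, percent: float) -> int:
        """Return the discounted price in cents.

        Rounds down to the nearest cent so we never overcharge; this is a
        deliberate policy choice, not an accident of integer math.
        """
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return int(price_cents * (100 - percent) // 100)

    # test_pricing.py: the tests encode the contract future maintainers can trust.
    import pytest

    def test_rounds_down_in_customers_favor():
        assert apply_discount(999, 33) == 669

    def test_rejects_nonsense_percentages():
        with pytest.raises(ValueError):
            apply_discount(1000, 150)

The specific function doesn't matter; the point is that the docstring captures the "why" and the tests pin down the behavior, which is exactly the bootstrap point mentioned above.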
Gemini is similar. It insists that information from before its knowledge cutoff is still accurate unless explicitly told to search for the latest information before responding. Occasionally it disagrees with me on the current date and makes sarcastic remarks about time travel.
One nice thing about Grok is that it attempts to make its knowledge cutoff an invisible implementation detail to the user. Outdated facts do sometimes slip through, but it at least proactively seeks out current information before assuming user error.
> It seems to me like this is yet another instance of just reading vibes, like when GPT 5 was underwhelming and people were like "AI is dead"
This might be part of what you meant, but I would point out that the supposed underwhelmingness of GPT-5 was itself vibes. Maybe anyone who was expecting AGI was disappointed, but for me GPT-5 was the model that won me away from Claude for coding.
I'm not going to dogpile criticism on Tailwind or Adam, whose behavior seems quite admirable, but I fundamentally agree with the thrust of the parent comment. It's unfortunate for Tailwind and anyone who was invested in the project's pre-2022 trajectory, but no one is entitled to commercial engagement by unaffiliated third parties.
Here's a similar example from my own experience:
* Last week, I used Grok and Gemini to help me prepare a set of board/committee resolutions and legal agreements that would have easily cost $5k+ in legal fees pre-2022.
* A few days ago, I started a personal blog and created a privacy policy and ToS that I might otherwise have paid lawyers money to draft (linked in my profile for the curious). Or more realistically, I'd have cut those particular corners and accepted the costs of slightly higher legal risk and reduced transparency.
* In total, I've saved into the five figures on legal over the past few years by preparing docs myself and getting only a final sign-off from counsel as needed.
One perspective would be that AI is stealing money from lawyers. My perspective is that it's saving me time, money, and risk, and therefore allowing me to allocate my scarce resources far more efficiently.
Automation inherently takes work away from humans. That's the purpose of automation. It doesn't mean automation is bad; it means we have a new opportunity to apply our collective talents toward increasingly valuable endeavors. If the market ultimately decides that it doesn't have sufficient need for continued Tailwind maintenance to fund it, all that means is that humanity believes Adam and co. will provide more value by letting it go and spending their time differently.
Laws are not the intellectual property of individuals or companies; they belong to the public. That's a fundamentally different type of content to "learn" from. I totally agree that AI can save a lot of time, but I don't agree that the creators of Tailwind should see no form of compensation.
It does not feel right to me that revenue is being taken from Tailwind and redirected to Google, OpenAI, Meta, and Anthropic with zero compensation.
I'm not yet sure how this should be codified in law, or what the correct words are to describe it properly.
I see what you're getting at, but CSS is as much an open standard as the law. Public legal docs written against legal standards aren't fundamentally dissimilar to open source libraries written against technical standards.
While I am all for working out some sort of compensation scheme for the providers of model training data (even if indirect via techniques like distillation), that's a separate issue from whether or not AI's disruption of demand for certain products and services is per se harmful.
If that is the case, it's a very different claim than that AI is plagiarizing Tailwind (which was somewhat of a reach, given the permissiveness of the project's MIT license). Achieving such mass adoption would typically be considered the best case scenario for an open source project, not harm inflicted upon the project by its users or the tools that promoted it.
The problem Tailwind is running into isn't that anything has been stolen from them, as far as I can tell. It's that the market value of certain categories of expertise is dropping due to dramatically scaled up supply — which is basically good in principle, but can have all sorts of positive and negative consequences at the individual level. It's as if we suddenly had a huge glut of low-cost housing: clearly a social good on balance, but as with any market disruption there would be winners and losers.
If Tailwind's primary business is no longer as competitive as it once was, they may need to adapt or pivot. That doesn't necessarily mean that they're a victim of wrongdoing, or that they themselves did anything wrong. GenAI was simply a black swan event. As a certain captain once said, "It is possible to commit no mistakes and still lose. That is not a weakness; that is life."
Depends. If you're talking about a free online test I can take to prove I have basic critical thinking skills, maybe, but that's still a slippery slope. As a legal adult with the right to consent to all sorts of things, I shouldn't have to prove my competence to someone else's satisfaction before I'm allowed autonomy to make my own personal decisions.
If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitution, and alcohol.
Where I'd suggest you go too far is implying that saturated fat and sugar are similarly bad. Technically you do hedge the claim with "excess", which is effectively a tautology, so the claim isn't outright false. You also don't qualify whether you mean excess in absolute terms (i.e. caloric intake) or as a proportion of macronutrients.
In practical terms, I don't consider it useful guidance based on the available evidence. As far as I can tell, there's little to no evidence that saturated fat is unhealthy (but lots of bad studies that don't prove what they claim to prove). Meanwhile, the population-wide trial of reducing saturated fat consumption over the past half-century has empirically been an abject failure. Far from improving health outcomes, the McGovern committee may well have triggered the obesity epidemic.
I think the benefits of "low fat" may have been dulled by how literally people took that message, and what companies replaced the fat with.
Most available "low fat" products compensated by adding sugar. Lots of sugar. That way it still tastes nice, but it's healthy, right?
Just like fruit juice with "no added sugar" (concentration via evaporation doesn't count) is a healthy alternative to soda, right?
In truth your body is perfectly happy converting sugar to weight, with the bonus that it messes up the insulin cycle.
At a fundamental level, consuming more calories than you burn makes you gain weight. Reducing refined sugar is the simplest way to reduce calories (and it solves other health issues). Reducing carbohydrates is next (since carbs are just sugar, but take a bit longer to digest). The more unprocessed the carb, the better.
Reducing fat (for some, by a lot) is next (although reduce, not eliminate).
Both sides want to blame the other. But the current pendulum is very much on the "too much sugar/carbs" side of things.
Agreed, this is a big part of the problem. The average person doesn't have anything resembling a coherent mental model of nutrition, and vague conflicting nutritional advice only adds to the confusion. The average person doesn't even know what a carb is, much less understand the biochemistry of how their body processes one.
Does "reduce fat consumption" mean a proportional reduction (i.e. increase carb/protein consumption) or an absolute reduction (i.e. decrease overall caloric intake)? In either case, what macros and level of caloric intake relative to TDEE are the assumed starting point? Who knows, but the net effect has been multiple generations hooked on absurd concentrations of sugar and UPFs.
I'll go further and say they should 100% use version control. I understand why they don't right now, but accommodating that seems like a relatively obvious lay-up for Microsoft. They could open source a docx-diffing/merging git extension, integrate it into GitHub, and integrate GitHub into Word.
I'd also suggest that Word gain first-class Markdown/.md support. That seems like a pretty natural direction to go in as AI-assisted drafting/editing becomes increasingly commonplace, and it would further simplify the GitHub integration.
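Just to sketch what the diffing half can look like even today, here's the pandoc-as-textconv trick people already use with git; this is a rough approximation of the idea, not whatever Microsoft would actually ship:

    # .gitattributes
    *.docx diff=docx

    # .git/config (or ~/.gitconfig)
    [diff "docx"]
        textconv = pandoc --to=markdown

With that in place, git diff shows readable text changes for .docx files instead of binary noise. Merging is the genuinely hard part, which is why a first-class, open source extension from Microsoft would be such a lay-up.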
I did some investigation into this the other day. The short answer seems to be that if you like MacBooks, you aren't willing to accept a downgrade along any axis, and you really want to use Linux, your best bet today is an M2 machine. But you'll still be sacrificing a few hours of battery life, Touch ID support (likely unfixable), and a handful of hardware support edge cases. Apple made M3s and M4s harder to support, so Linux is still playing catch-up on getting those usable.
Beyond that, Lunar Lake chips are evidently really really good. The Dell XPS line in particular shows a lot of promise for becoming a strict upgrade or sidegrade to the M2 line within a few years, assuming the haptic touchpad works as well as claimed. In the meantime, I'm sure the XPS is still great if you can live with some compromises, and it even has official Linux support.
> Linux is still playing catch-up on getting those usable
This is an understatement. It is currently impossible to install Linux at all on an M3 or M4, and AFAIK there have been no public reports of progress or of anyone working on it. (Maybe there are people working on it; I don't know.)
In his talk a few days ago, one of the main Asahi developers (Sven) shared that there is someone working on M3 support. There are screenshots of an M3 machine running Linux and playing DOOM at around 31:34 here: https://media.ccc.de/v/39c3-asahi-linux-porting-linux-to-app...
Sounds like the GPU architecture changed significantly with M3. With M4 and M5, the technique for efficiently reverse-engineering drivers using a hypervisor no longer works.
What I mean is: on a normal laptop, when you scroll with two fingers on the touchpad, the distance you scroll is nearly a continuous function of how much you move your fingers; that is, if you only move your fingers a tiny bit, you will only scroll a few pixels, or even just one.
Most VM software (at least all of it that I've tried) doesn't properly emulate this. Instead, after you've moved your fingers some distance, it's translated to one discrete "tick" of a mouse scroll wheel, which causes the document to scroll a few lines.
The VM software I use is UTM, which is a frontend to QEMU or Apple Virtualization framework depending on which setting you pick when setting up the VM.
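To illustrate the mismatch, here's a toy sketch of the kind of translation layer involved (not UTM's or QEMU's actual code; the threshold value is made up):

    # The host delivers smooth, high-resolution trackpad deltas, but the guest's
    # emulated mouse only understands discrete wheel "ticks". A naive bridge
    # accumulates deltas and emits whole ticks, which is where the smoothness dies.
    TICK_THRESHOLD = 120.0  # host scroll units per emitted wheel tick (arbitrary)

    class ScrollBridge:
        def __init__(self) -> None:
            self.accumulated = 0.0

        def on_trackpad_delta(self, delta: float) -> int:
            """Return the number of discrete wheel ticks to inject into the guest."""
            self.accumulated += delta
            ticks = int(self.accumulated / TICK_THRESHOLD)  # truncates toward zero
            self.accumulated -= ticks * TICK_THRESHOLD
            return ticks  # tiny two-finger movements return 0, 0, 0... then suddenly 1

Each emitted tick then becomes "scroll a few lines" inside the guest, so all the sub-tick precision from the trackpad is simply thrown away.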
As the saying goes, "If I'd had more time, I would have written a shorter letter". Of course AI can be used to lazily stretch a short prompt into a long output, but I don't see any implication of that in the parent comment.
If someone isn't a good writer, or isn't a native speaker, using AI to compress a poorly written wall of text may well produce a better result while remaining substantially the prompter's own ideas. For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.
We should all be more understanding, and not assume that only people with certain cognitive and/or physical capabilities can have something valuable to say. If AI can help someone articulate a fresh perspective or disseminate knowledge that would otherwise have been lost and forgotten, I'm all for it.
> For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.
These are the exact kinds of cases I think are ok, but let's not pretend even 10% of the AI writing out there fits this category.