> The CLICK: "Critiques kill". You want a live internet? Don't critique.
I don't see the connection. Critiques are also content.
The issue is not related to the type of content, but to what is producing it. Dead Internet is the (proven) idea that most content on the internet is produced and consumed by machines, not humans.
First, your country has been divided since at least the mid-19th century. Every war has a winning and losing side, but the losers don't simply vanish. Their mentality persists throughout generations, even if it remains in the background, and is ignored by the other side.
Secondly, all this technology you've built and allowed the world to use can be, and has been, exploited by your enemies to your own detriment. The same systems you've built that allow manipulating people into buying things are also ideal channels for spreading propaganda and disinformation. Information warfare is not new, but modern technology has made it more effective than ever at manipulating groups of people, sowing dissent, and generally causing chaos and confusion within a nation.
So, putting those two together, it's not difficult to see how acts of information warfare could be used to fuel the deeply rooted social divide, directly causing or strongly contributing to the internal sociopolitical instability you've been experiencing for the past decade.
Meanwhile, your enemies can sit back and enjoy the show of an imploding nation. They know that you're untouchable via traditional warfare, which is why these tactics are so perfect. They do require a long time to come into effect, but they're highly effective, very cheap to deploy, and the best part is that they're completely untraceable to the attacker. It's still debatable whether there was Russian interference in your elections, and how effective it actually was, even though there is evidence for it. It's still debatable whether Chinese-operated social media platforms are a national security threat or not. Were J6 protesters rioters or patriots? And so on about every controversial sociopolitical topic.
This confusion is exactly the intended effect. Your regular checks and balances, your laws, ideals and values, make no difference if your communication channels are corrupted.
I don't see how you can get out of this mess, and I expect things will get much worse before they get better. Not just for you, but globally. These same tactics are also deployed in other countries, by the US as well. Though, ironically, countries that are cut off from the global internet have an upper hand in this conflict.
I agree, this is the way. To be clear: I'm not a mobile developer, and have only dabbled with it over the years, but I'm generally familiar with the stacks.
If you want to simplify development of a cross-platform app, your work should start by architecting the software in a way that the core business logic is agnostic to the user interfaces. Then your web, mobile, and desktop GUIs, CLI, TUI, API, and any other way a user interacts with your program are simply thin abstractions that call your core components.
The complexity of each UI will inevitably vary, and it might not be possible to offer a consistent experience across platforms, or to leverage each platform's unique features, without making your core a tangled mess. So this is certainly not "easy" to do in practice, but it should spare you the bulk of the work of targeting each platform individually, and it avoids depending on a 3rd-party framework that may or may not exist a few years down the line.
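A minimal sketch of that separation, in Python. Everything here is hypothetical (the "unit converter" domain, all function names): the point is only that the core is pure logic with no UI or I/O dependencies, and each front end is a thin adapter around it.

```python
def miles_to_km(miles: float) -> float:
    """Core business logic: pure, no UI or I/O dependencies."""
    return miles * 1.609344

# Thin adapters: each UI layer only parses input, calls the core,
# and formats output for its medium.

def cli_adapter(argv: list[str]) -> str:
    """CLI front end: turn args into a print-ready string."""
    return f"{miles_to_km(float(argv[0])):.2f} km"

def api_adapter(payload: dict) -> dict:
    """HTTP/JSON front end: same core, different envelope."""
    return {"km": miles_to_km(payload["miles"])}
```

Adding a GUI or TUI later means writing one more adapter of this shape, not touching the core.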
One extra clarification: if the quality of your app is business-critical, you should really use the native UI toolkits to offer the best platform integration and user experience.
If your app is not business-critical (you just have to offer it, e.g. a dishwasher app), you might get away with using a cross-platform toolkit like Flutter or React Native. But even then, as you mentioned, this adds a 3rd-party dependency, which adds risk.
Writing an App in Swift on iOS is boring. The same thing is true for writing an Android app using Kotlin/Java. This is a good thing. Now your developers can concentrate on shipping great features.
I would push back on the idea that alien civilizations might somehow be more enlightened so as to avoid internal conflict altogether. Unless they were artificially designed by a creator who explicitly factored out these traits, they likely also evolved from primitive beginnings. If we know anything from our single sample of living organisms, it's that competition and survival play a key role in driving evolution. Even if their planet had ample resources for everyone—which can also be said about Earth—those resources might not be accessible to everyone equally. This would inevitably lead to hoarding, tribalism, and conflicts. Besides, physical resources aren't the only cause of conflict. If they're social creatures, relationships, hierarchies, and politics also play a role.
So all of these things would be embedded in their organisms even after they've evolved to a technological civilization, just like they are in ours. Therefore it is not difficult to imagine that they would also struggle to balance their use of technology with their nature to distrust each other. I don't think this is a human-centrist viewpoint, but one we observe from nature itself. However limited that may be, it's the only place we can draw any kind of conclusions from. Thinking otherwise is interesting, but the realm of science fiction.
> Unfortunately it’s in the interests of search and AI companies to keep you inside their portals, so they may be less than willing to link to the outside even when it would improve the experience.
This is true, but aren't "AI" summaries directly opposed to this interest? The user will usually get the answer they need much more quickly than if they had to scroll down the page, hunt for the right result, and get exposed to ads. So "AI" summaries are actually the better user experience.
In time I'm sure that we'll see ads embedded in these as well, but in the current stage of the "AI" hype cycle, users actually benefit from this feature.
Yes, users can rely on "AI" summaries if they want a quick answer, but they've been able to do that for years via page snippets underneath each result, which usually highlight the relevant part of the page. The same argument was made when search engines began showing page snippets, yet we found a balance, and websites are still alive.
On the contrary, there's an argument to be made that search engines providing answers is the better user experience. I don't want to be forced to visit a website, which will likely have filler, popups, and be SEO'd to hell, when I can get the information I want in a fraction of the time and effort, within a consistent interface. If I do need additional information, then I can go to the source.
I do agree with the idea you mention below of search engines providing source links, but even without it, "AI" summaries can hardly be blamed for hurting website traffic. Websites are doing that on their own with user hostile design, SEO spam, scams, etc.
There is a long list of issues we can criticize search engines for, and the use of "AI" even more so, but machine-generated summaries on SERPs is not one of them IMO.
I guess you didn't take up my offer to search for how AI is killing traffic. There are numerous studies that repeatedly prove this to be true; this relatively recent article links to a big pile of them[0]. Why would anyone visit a website, if the AI summary is seemingly good enough?
My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic. Someone else posted this wonderful evidence[1] in the comments. LLMs are sycophantic and agree with you all the time, even if it means making shit up. Maybe things will improve, but for the last 2 years, I have not seen much progress regarding hallucinations or deterministic, i.e. reliable/trustworthy, responses. They are still stochastic token guessers with some magic tricks sprinkled on top to make results slightly better than last month's LLMs.
And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated. Where will AI summarize data, if there is no new data to summarize? I guess they can just keep rehashing the new AI-generated websites, and it will be one big pile of endlessly recycled AI shit :)
p.s. I don't disagree with you regarding SEO spam, hostile design, cookie popups, etc. There is even a hilariously sad website[2] which points out how annoying websites have become. But using non-deterministic sycophantic AI to "summarize" websites is not the answer, at least not in the current form.
> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.
Who cares if it's deterministic? Google changes its algorithms all the time; you don't know what its devs will come up with next, when they'll release it, when they'll deploy it, or when the previous cache gets cleared. It doesn't matter.
Haha, I suppose the problem is that LLM outputs are unreliable yet presented as authoritative (disclaimers do little to counteract the boffo confidence with which LLMs bullshit) — not that they are unreliable in unpredictable ways.
I'm well aware of the studies that "prove" that "AI" summaries are "killing" traffic to websites. I suppose you didn't consider my point that the same was said about snippets on SERPs before "AI"[1].
> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.
I am firmly on the "AI" skeptic side of this discussion. And yet, if there's anything this technology is actually useful for, it's summarizing content and extracting key points from it. Search engines contain massive amounts of data. Training a statistical model on it that can provide instant results to arbitrary queries is a far more efficient method of making the data useful for users than showing them a sorted list of results which may or may not be useful.
Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.
> LLMs are sycophantic and agree with you all the time, even if it means making shit up.
Those are issues that plague conversational UIs and long context windows. "AI" summaries answer a single query, and the context is volatile.
> And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated.
That's baseless fearmongering and speculation. Websites might be impacted by this feature, but they will cope, and we'll find ways to avoid the doomsday scenario you're envisioning.
Some search engines like Kagi already provide references under their "AI" summaries. If Google is pressured to do so, they will likely do the same as well.
So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic. I do think that "AI" is a net negative for the world in general, but that's a separate discussion.
Sorry, I didn't mean to discount your argument. I don't think SERPs are a valid comparison; to me, SERPs vs. AI is apples vs. oranges, or rather rocks vs. turtles :)
btw, your linked article/study doesn't support your argument - featured snippets are definitely stealing clicks (just not nearly as many as AI):
> In other words, it looks like the featured snippet is stealing clicks from the #1 ranking result.
I should maybe clarify: I have been using LLMs since the day they arrived on the scene and I have a love/hate relationship with them. I do use summaries sometimes, but I generally still prefer to just at least skim TFA unless it's something where I don't care about perfect accuracy. BTW did you click on that imgur link? It's pretty damning - the AI summary you get depends entirely on how you phrase your query!
> Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.
What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying after reaching the consensus that X/Y were accurate enough? I can imagine that, as a tech-savvy individual, you still verify from time to time and remain skeptical. But think of the 99% of users who don't care or won't bother, who just assume AI summaries are fact. That's where the crux of my issue lies: they are selling AI output as fact, when it's actually query-dependent, which is just insane. This will cost (or surely has cost) plenty of people dearly. Sure, reading a summary of the daily news is probably not gonna hurt anyone, but I can imagine people have gotten (or will get) into trouble believing a summary for some queries, e.g. renters' rights. I did exactly that recently (a combination of summaries and paid LLMs) and almost believed it, until I double-checked with a friend who works in this area, who pointed out a few minor but critical mistakes, which saved my ass from signing some bad paperwork. I'm pretty sure AI summaries are still just inaccurate, non-deterministic LLMs with some special sauce to make them slightly less sketchy.
> Those are issues that plague conversational UIs, and long context windows. "AI" summaries answer a single query and the context is volatile.
Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.
> So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic.
I agree the web will survive in some form or other, but as my Register link shows (with MANY linked studies), it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.
Just to add fuel to the fire: AI output is non-deterministic even with the same prompt, so users searching for the same thing may get different results. The output is not just query-dependent.
> What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying after reaching the consensus that X/Y were accurate enough?
I don't verify the accuracy regularly, no. And I do concede that I may be misled by the results.
But then again, this was also possible before "AI". You can find arguments on the web supporting literally any viewpoint you can imagine. The responsibility of discerning fact from fiction remains with the user, as it always has.
> Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.
I'm not any better at it than any proficient search engine user.
The issue I see with that Imgur link is that those are not search queries. They are presented as claims, and the "AI" will pull from sources that back up those claims. You would see the same claims made by web sites listed in the results. In fact, I see that there's a link next to each paragraph which will likely lead you to the source website. (The source website might also be "AI" slop, but that's a separate matter...) So Google is already doing what you mentioned as a good idea above.
All the "AI" is doing there is summarizing content you would find without it as well. That's not proof of hallucinations, sycophancy, or anything else you mentioned. What it does is simplify the user experience, like I said. These tools still suffer from these and other issues, but this particular use case is not proof of it.
So instead of phrasing a query as a claim ("NFL viewership is up"), I would phrase it using keywords ("NFL viewership statistics 2025"). Then I would see the summarized statistics presented by "AI", drill down and go to the source, and make up my mind on which source to trust. What I wouldn't do is blindly trust results from my biased claim, whether they're presented by "AI" or any website.
> it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.
I don't disagree that this feature can impact website traffic. But I'm saying that "killing" is hyperbole. The web is already a cesspool of disinformation, spam, and scams. "AI" will make this even worse by enabling website authors to generate even more of it. But I'm not concerned at all about a feature that right now makes extracting data from the web a little bit more usable and safer. I'm sure that this feature will eventually also be enshittified by ads, but right now, I'd say users gain more from it than what they lose.
E.g. if my grandma can get the information she needs from Google instead of visiting a site that will infect her computer with spyware and expose her to scams, then that's a good thing, even if that information is generated by a tool that can be wrong. I can explain this to her, but can't easily protect her from disinformation, nor from any other active threat on the modern web.
I don't see such a niche use case as a design failure.
By your logic, the lowercase "v" should extend even higher to meet the pipe. The caret has conventionally been higher for a long time, and IMO would look out of place making it the inverse "v".
I agree with not introducing abstractions prematurely, but your suggestion hinges on the design of the S3 client. In practice, if your code is depending on a library you have no control over, you'll have to work with interfaces if you want to prevent your tests from doing I/O. So in unit tests you can pass an in-memory mock/stub, and in integration tests, you can pass a real S3 client, and connect to a real S3 server running locally.
So I don't see dependency injection with interfaces as being premature abstractions. You're simply explicitly specifying the API your code depends on, instead of depending on a concrete type of which you might only use one or two methods. I think this is a good pattern to follow in general, with no practical drawbacks.
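To make the pattern above concrete, here's a hedged Python sketch. `ObjectStore`, `InMemoryStore`, and `archive_report` are all hypothetical names (the real choice depends on the S3 library in use); the idea is that business logic depends on a narrow interface covering only the one or two methods it actually calls, so unit tests can pass an in-memory fake while integration tests pass a wrapper around the real client.

```python
from typing import Protocol

class ObjectStore(Protocol):
    """The narrow interface our code depends on: just the methods it uses."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test double: satisfies ObjectStore with no I/O."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> str:
    """Business logic: depends on the interface, not a concrete S3 type."""
    key = f"reports/{report_id}"
    store.put(key, body)
    return key
```

In unit tests, `archive_report(InMemoryStore(), ...)` runs with zero network access; in integration tests you'd pass an adapter wrapping the real S3 client, pointed at a locally running S3-compatible server.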
Yes, this is absolutely dependent on the design of the S3 client.
The reality of development is we have to merge different design philosophies into one code base. Things can get messy. 100% agreed.
The approach I advocate for is more for a) organizing the code you do own, and b) designing in a way that you play nice with others who may import your code.
I like that it's relatively compact horizontally. If I had to nitpick, the curly braces look a bit too "wavy" for my taste, which doesn't quite match the hard angles on some other glyphs.
My favorite monospace font for the past 10+ years has been Iosevka Term ss08. I've tried many others over the years, and Iosevka is just perfect IMO.
Out of curiosity: what are the tools and the process to create a font today? It would be interesting to read a bit about that.
thanks for the feedback. about the braces please see another comment below. the issue of needlessly complicated braces has been raised quite a few times now. a variant could be considered if there is more interest.
this particular font is quite simple and doesn't contain any ligatures, etc. so most of the design is in Fontforge.
i didn't start from scratch. it started out as a customised version of Source Code Pro (released as Hera and currently archived in my profile) but i borrowed many glyphs from other fonts and modified many others to the point it became a different font. you can open the .sfd file directly in Fontforge to edit and modify it yourself.