The stunning revelation in the video was that Kurz tried using AI to write a script about Brown Dwarf stars. Kurz has a robust fact-checking process with multiple independent experts, and about 20% of the 'facts' in the AI script couldn't be verified by Kurz staff from the listed sources. When the script was given to the independent experts, they flagged the exact same suspect facts, which the AI seemed to have made up in order to fulfill the general expectations of the script. Kurz ultimately shelved the AI script.
Kurz later observed that another creator on YouTube published a well-edited video about Brown Dwarfs that received a fair bit of viewership. Kurz was horrified to find that it incorporated the very 'facts' about Brown Dwarfs that Kurz had determined were AI hallucinations. It was clear the other creator had used AI somewhere in the script-writing process but failed to identify and remove the hallucinations, and the video was becoming a prominent source of public information on the topic.
I watched the video right before seeing this post and I wasn't surprised at all. They don't care. AI script, graphics, voice over. There was 0 skin in the game. To paraphrase Michael Scott, if you cancel the channel over fake news they'll just make another one and another one and another one. It's effortless to make slop, so they have no incentive to care
I kid, but then we have to change the incentive structure. Or rather, the disincentive: if you publish blatant misinformation, you're banned from creating content for a while. YouTube accounts are linked to Google accounts, which aren't that easy to mass-create.
Note I'm not saying you go to jail or pay fines. I don't trust the public system to not abuse this as fake news gets redefined by the dictator du jour.
Even if you have personal priorities for privacy, surely you can understand that many users' first expectation of a web browser is for websites to work correctly.
We've kind of lost the plot if we get too far away from the core notion that a web browser is for correctly and completely rendering websites. The user population doesn't use web browsers to hide; they use them to look at the internet and do internet work. If a browser has any problems doing this, it's not going to be relevant.
Observationally, Google search and Kagi are fundamentally different business models.
Google followed/trailblazed the "enshittification" arc: provide a free service that sees widespread adoption by the public, then financially exploit that adoption by leveraging usage of the service to serve ads like in the screenshot.
Kagi is a subscription service you pay for, and they use the money you give them to make their best effort at an ideal service for you.
The Google model of providing a free service more or less requires enshittification in order to close the circle on the business case. Reliance on VC money in this model is likely a further aggravating factor, adding pressure to aggressively exploit usage of the service once widespread adoption is achieved.
The Kagi model has an opposite pressure: if it tries to exploit adoption of the service in a way that users don't appreciate, users will simply abandon their subscriptions, putting at risk a core revenue stream the business has built itself around.
Is it possible for Kagi or a business like it to become shitty? Sure: a new manager who misunderstands core realities can show up anywhere and ruin the business, or sagging financials could require a VC injection, which then pressures further financial extraction from users. But the structural pressures on a Kagi-style model certainly seem to steer it in the right direction, while Google's structural model invariably steered it into something less pleasant than what we all initially knew.
The real difference is market dominance. If Kagi were the dominant search engine used by 90% of all users, they could enshittify while still collecting subscriptions, and while that would lose them some users, it would still be profitable when everything is added up.
It's even worse for niches where there's some way to lock people in. E.g. look at streaming providers - everyone has either rolled out ads on paid plans or is planning to do so. Why? Because if you happen to have X as an exclusive in your catalog, then people who want to see X either have to suck it up or else figure out how to pirate it without getting caught.
Grounding aircraft types when there is a significant airworthiness concern has been a phenomenon from the very beginning of commercial jet aviation in the 1950s. The pioneering de Havilland Comet, the very first jet airliner, suffered ironically similar failures of the pressurized cabin, albeit more serious than the pictured incident. Just as in the current article, airlines at the time voluntarily grounded their Comet fleets in 1954.
The only difference is an uptick in risk aversion. It took multiple serious incidents to trigger the grounding of the Comet, but here we see a grounding after one moderate incident. Then again, the greedy and selfish decision by Boeing executives to install only a single angle-of-attack sensor without redundancy as a cost-cutting measure, which caused fatal crashes when paired with poorly designed software (on this very model, I believe), took multiple fatal crashes before a grounding occurred.
There could potentially be some application for VR. I know a lot of the VFR scan is checking visual landmarks relative to wingtips, etc. I wonder if a good stick-and-rudder setup with a VR headset, worked through alongside an actual private pilot's manual, might make for a more helpful experience.
That said, I don't even know if VR is supported by MS Flight Simulator.
Perhaps it's a reference to a relativistic quirk having to do with cosmic rays. When a high-energy cosmic ray hits the atmosphere, it can create a meson or something. This subatomic particle isn't stable and has a tiny half-life; really, it's mostly just an intermediary midway through the Feynman diagram of the cosmic-ray collision, where the actual end result is a typical output of stable electrons or neutrinos or what have you.
It gets weird because you see evidence of these mesons or whatever way down below the part of the atmosphere where cosmic rays impact. Like, if you multiply the half-life of the meson by the speed of light, you get a distance that should be way shorter than the distance you actually see the mesons (or what have you) traveling. Depending on how you look at it, it's as if the mesons are traveling faster than the speed of light, given how far they get before decaying.
It turns out these particles are traveling so close to the speed of light that the passage of time is different for the particles than for an observer on Earth. Their half-life elapses within their own frame of reference, which is different from ours. So (fake numbers: say the speed of light is 1 km/s) a particle traveling at essentially 1 km/s can cover 3 kilometers even though it only had a lifespan of 1 second. It plays out this way because 3 seconds passed on Earth during the particle's own 1 second, so we watched it travel 3 kilometers during that 1 second of particle-time, despite the speed of light being just 1 km/s. Depending on how you look at the numbers, it can seem like the particle went 3x faster than the 1 km/s speed of light.
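If it helps make that concrete, here's a rough back-of-the-envelope sketch in Python. I'm assuming the particle in question is the muon (the textbook version of this example), with a proper lifetime of about 2.2 microseconds; the 0.9997c speed is just an illustrative pick:

    import math

    C = 299_792_458.0  # speed of light, m/s
    TAU = 2.2e-6       # muon proper (rest-frame) mean lifetime, ~2.2 microseconds

    def decay_lengths(beta):
        """Mean decay length in meters, with and without time dilation,
        for a particle moving at beta * c."""
        v = beta * C
        naive = v * TAU                         # what you'd expect ignoring relativity
        gamma = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor
        dilated = gamma * v * TAU               # lab-frame lifetime is gamma * tau
        return naive, dilated

    naive, dilated = decay_lengths(0.9997)      # a plausible cosmic-ray muon speed
    print(f"naive:   {naive / 1000:.2f} km")    # ~0.66 km: should decay high in the atmosphere
    print(f"dilated: {dilated / 1000:.2f} km")  # ~27 km: comfortably reaches sea level

The naive number is the paradox in a nutshell: rest-frame lifetime times roughly the speed of light says the particle should die within a kilometer or so, yet detectors at ground level see them after tens of kilometers of atmosphere.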
I realize this almost creates more (and bigger) questions than answers.