The title is misleading, and HN comments don't seem to relate to the article.
The misleading part: the actual finding is that organoid cells fire in patterns that are "like" the patterns in the brain's default mode network. That says nothing about whether there's any relationship between the phenomena of a few hundred organoid cells and millions of cells in the brain.
As a reminder, cardiac pacemaker cells fire automatically long before anything like a heart forms. It would be silly to call that a heartbeat, because they're not actually driving anything like a heart.
So this is not evidence of "firmware" or "prewired" or "preconfigured" or any instructions whatsoever.
This is evidence that a bunch of neurons will fall into patterns when interacting with each other -- no surprise since they have dendrites and firing thresholds and axons connected via neural junctions.
The real claim is that organoids are a viable model since they exhibit emergent phenomena, but whether any experiments can lead to applicable science is an open question.
I think a helpful conclusion is that while the firing pattern in organoids doesn't preclude a wetware of complex programmed instructions, it could just be an emergent property of the underlying physics and electrochemistry of the neurons, analogous to pendulum clocks synchronizing when mounted on a common support.
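The pendulum analogy has a standard toy model: the Kuramoto model of coupled oscillators. A minimal sketch (all parameters hypothetical) shows order emerging from the dynamics alone, with no stored "program":

```python
import math
import random

# Kuramoto-style sketch: N oscillators with slightly different natural
# frequencies, coupled through the mean phase. With enough coupling they
# phase-lock -- synchronization emerges without any instructions.
random.seed(0)
N = 50
K = 2.0   # coupling strength (well above the synchronization threshold)
dt = 0.05
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]        # natural frequencies
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """Magnitude of the mean phase vector: ~0 = incoherent, ~1 = synchronized."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

r0 = order_parameter(phases)
for _ in range(2000):
    re = sum(math.cos(p) for p in phases) / N
    im = sum(math.sin(p) for p in phases) / N
    r, psi = math.hypot(re, im), math.atan2(im, re)
    # Each oscillator is pulled toward the mean phase, weighted by coherence.
    phases = [p + dt * (w + K * r * math.sin(psi - p))
              for p, w in zip(phases, freqs)]
r1 = order_parameter(phases)
print(f"coherence before: {r0:.2f}, after: {r1:.2f}")
```

Starting from random phases, coherence climbs toward 1 — the "pattern" is a consequence of the coupling, which is the point about the organoids.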
"Bad" regulation just raises the question what would be better for all concerned. Sometimes that means reducing the weight and impact of a concern (redefining the problem), but more often it means a different approach or more information.
In this case, pumping possibly toxic, first-of-their-kind substances into the ground could be destructive and irreversible, in ways that are hard to test or understand in a field with few experts. The benefit is mainly a financial quirk: satisfying carbon accounting, with uncertain gains for the environment. It's not hard to see why there's a delay, which would only be made worse by an oppositional company on a short financial leash pushing the burden back onto regulators.
The regulation that needs attention is not the unique weird case, but the slow expansion of under-represented, high-frequency burdens — exactly like cellular roaming charges, housing permits, or cookie banners. It's all too easy to learn to live with small burdens.
Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
The traffic stop is for breaking some kind of traffic law, usually.
I suppose you could have a reasonable suspicion stop, but it would have to be something like "a hit and run just happened nearby, no vehicle description", and you witness a car with a smashed grill and leaking radiator fluid, but not breaking any traffic laws.
Reasonable suspicion might develop over the course of the stop, e.g. driver is super nervous, the back seat is full of overstuffed black duffel bags, there is a powerful chemical air freshener odor, and the vehicle has just crossed the Mexico border.
It may be that some media or some alcohol is more toxic than others, but it's still fair to test whether the mode of administration has an independent or enhancing effect.
E.g., crack cocaine is more addictive than nasal cocaine, and extended-release Adderall is less addictive than immediate-release. So there's good reason to hypothesize that short-form video (SFV) has similar addiction-enhancing effects over long-form, and the article's meta-analysis says deficits in inhibition and cognition are among the strongest findings.
wrt choice, the thing about addiction is that while becoming addicted results from a series of choices, being addicted impairs your choice-making executive functions. Addicts use even when they don't like it, and to the exclusion of other things they prefer, and often switch from expensive drugs to cheap ones just to maximize use.
So in the same way that society would prefer to prevent rather than treat legions of fentanyl addicts infecting cities or meth addicts roaming the countryside, society would like to avoid the cognitive decline and productivity loss of a generation lost to scrolling.
> More useful words are "negligible" and "problematic".
Yes, thank you! Worth emulating.
By comparison:
> A characteristic of these systems spanning so many orders of magnitude is that it is very frequently the case that one of the things your system will be doing is in fact head-and-shoulders completely above everything else your system should be doing, and if you have a good sense of your rough orders of magnitudes from experience, it should be generally obvious to you where you need to focus at least a bit of thought about optimization, and where you can neglect it until it becomes an actual problem.
TLDR: [AI promises] a future where we prize instant answers over teaching and understanding
But what this article and the comments don't say: open-source is mainly a quality metric. I re-use code from popular open-source repos in part because others have used it without complaint (or documented the bugs), in part because people are embarrassed to publish poor-quality open-source so it's above-par code, and in part because if there are issues in this corner of the world, this dependency will solve them over time (and I can watch and wait when I don't have time to fix and contribute).
The quality aspect drives me to prefer dependencies over AI when I don't want full ownership, so I'll often ask AI to show open-source projects that do something well.
(As an aside, this article is about AI, but AI is so pervasive now as an issue that it doesn't even need saying in the title.)
Seems great: I looked at two largish code bases I'm familiar with, and learned something each time.
But is this just a summary for the impatient, or can it reduce the effort for developers writing docs?
Docs have always been the mirror of code, and thus hard to get and keep right. Can we do without the mirror, or parts of it?
Does it work when you haven't written documentation for your code? Let's say one is fanatical about writing such clear code that names are sufficient to convey what's happening (i.e., no documentation and no comments). Does it work?
If not, does it work when there are only (clear) comments?
Does it tell you when documentation, comments, or code is unclear or missing?
I.e., I'd like it to go beyond summarization to fill easy gaps and point developers to the hard ones.
It's worth highlighting the conditions under which this can help:
> in domains where the taxonomy drifts, the data is scarce, or the requirements shift faster than you can annotate
It's not actually clear if warranty claims really meet these criteria.
For warranty claims, the difficulty is in detecting false negatives, when companies have a strong incentive and opportunity to hide the negatives.
Companies have been trusted to do this kind of market surveillance (auto warranties, drug post-market reporting) largely based on faith that the people involved would do so in earnest. That faith is misplaced when the process is automated (not because the implementors are less diligent, but because they are too removed to tell).
Then the backlash to a few significant injuries might be a much worse regime of bureaucratic oversight, right when companies have replaced knowledge with automation (and replacement labor costs are high).
An independent Europe is easier for China to dominate.
Now that NATO is in question, you'll start to hear about US manipulation of the SWIFT banking system, so Europe will start pushing for an alternative outside US control — which China will eventually dominate.
It's just a detail, but the international financial market/banking system is effectively under active US control. Just look at what happened to Wegelin & Co. (at that point the oldest bank in Switzerland) when it assumed otherwise.