Why should it have to be computationally expensive? How do brains do it with so little energy? I think matching the brain abilities of even a bug might be very hard, but that does not mean there isn't a way to do it with little computational power. It requires having the right structures/models/algorithms, or whatever the precise jargon is.
> How do brains do it with such a low amount of energy?
Physical analog chemical circuits whose physical structure is itself the network, and which use chemistry/physics directly for the computations. For example, a sum is typically represented by the number of physical ions present within a space, not by an ALU that takes in two binary numbers, each with some large number of bits, and shifts electrons to and from buckets through a bunch of clocked logic operations.
There are a few companies working on more "direct" implementations of inference, like Etched AI [1] and IBM [2], for massive power savings.
This is the million dollar question. I'm not qualified to answer it, and I don't really think anyone out there has the answer yet.
My armchair take would be that watt usage probably isn't a good proxy for computational complexity in biological systems. A good piece of evidence for this is from the C. elegans research that has found that the configuration of ions within a neuron--not just the electrical charge on the membrane--record computationally-relevant information about a stimulus. There are probably many more hacks like this that allow the brain to handle enormous complexity without it showing up in our measurements of its power consumption.
My armchair is equally comfy, and I have an actual paper to point to:
Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics [1]
They basically created software to simulate real neurons and ran realistic models on typical AI learning tasks:
"The model had nine different channels in the apical and basal dendrite, the soma, and the axon [39], with a total of 19 free parameters, including maximal channel conductances and dynamics of the calcium pumps."
So yeah, real neurons are a bit more complex than a ReLU or a sigmoid.
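To make the gap concrete, here is a minimal sketch: not the paper's Jaxley code, just the standard textbook Hodgkin-Huxley model in plain Python/NumPy. Even this single-compartment toy tracks a membrane voltage plus three voltage-dependent gating variables with their own kinetics, while the models in the paper add multiple compartments, nine channel types, and calcium pump dynamics on top of that. A ReLU, by comparison, is max(0, x).

    import numpy as np

    # Textbook Hodgkin-Huxley single-compartment neuron (standard squid-axon
    # parameters, NOT the multi-compartment Jaxley model from the paper).
    # State: membrane voltage V plus three gating variables m, h, n, each
    # with its own voltage-dependent kinetics.

    C_m = 1.0                            # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    def simulate(I_ext=10.0, T=50.0, dt=0.01):
        """Forward-Euler integration with a constant injected current (uA/cm^2)."""
        V, m, h, n = -65.0, 0.05, 0.6, 0.32
        trace = []
        for _ in range(int(T / dt)):
            I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
            I_K  = g_K * n**4 * (V - E_K)         # potassium current
            I_L  = g_L * (V - E_L)                # leak current
            V += dt * (I_ext - I_Na - I_K - I_L) / C_m
            m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
            h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
            n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
            trace.append(V)
        return trace

    print(f"final membrane voltage after 50 ms: {simulate()[-1]:.1f} mV")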
My whole point is that it may be possible to do perception using a lot of computational power, or, alternatively, there could be other smart ideas that allow doing it in a different way with much less computation. It is not clear that it requires a lot.
There could definitely be a chance. I was just responding to what in your comment sounded like a question.
That said, I think there is good reason to be skeptical that the chance is a good one. The consistent trend of finding higher-than-expected complexity in biological intelligences (as in C. elegans), combined with the fact that digital and biological architectures are physically very different, is a good reason to bet that emulating them on our current computing systems will be really hard.
Obviously there is a way to do it physically--biological systems are physical after all--but we just don't understand enough to have the grounds to say it is "likely" doable digitally. Stuff like the Universal Approximation Theorem implies that in theory it may be possible, but that doesn't say anything about whether it is feasible. Same thing with Turing completeness too. All that these theorems say is our digital hardware can emulate anything that is a step-by-step process (computation), but not how challenging it is to emulate it or even that it is realistic to do so. It could turn out that something like human mind emulation is possible but it would take longer than the age of the universe to do it. Far simpler problems turn out to have similar issues (like calculating the optimal Go move without heuristics).
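To put a number on the "longer than the age of the universe" point, here is a back-of-the-envelope sketch. The ~2.1e170 legal 19x19 Go positions (Tromp's count) and the ~13.8-billion-year age of the universe are published figures; the 1e18 evaluations per second is a deliberately generous assumption, and this only counts positions, not the far larger game tree.

    legal_positions  = 2.1e170                      # legal 19x19 Go positions (Tromp)
    evals_per_second = 1e18                         # assumed, absurdly generous rate
    universe_age_s   = 13.8e9 * 365.25 * 24 * 3600  # ~4.35e17 seconds

    seconds_needed = legal_positions / evals_per_second
    print(f"{seconds_needed:.1e} s, i.e. ~{seconds_needed / universe_age_s:.0e} "
          "times the age of the universe just to visit each position once")
    # -> on the order of 5e134 universe lifetimes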
This is all to say that there could be plenty of smart ideas out there that break our current understandings in all sorts of ways. Which way the cards will land isn't really predictable, so all we can do is point to things that suggest skepticism, in one direction or another.
Following the trend of discovering smaller and smaller phenomena that our brains use for processing, it would not be surprising if we eventually find that our brains are very nearly "room temperature" quantum computers.
I don't know much about this, but I understand Project Jupyter is a nonprofit. If I go to "jupyter.org" I see a tab "Community" and another "Governance". If I go to "deepnote.com" I see "Customers" and "Pricing".
Why would people want a standard to be controlled by a private company? I don't think the "Open-Sourcing" of it says enough. How does licensing work with formats or standards?
People don't want that. This article is largely empty marketing. Claiming they have "the successor" is all you need to read before you can infer it's hot air.
All standards are ultimately controlled by private companies. Even non-profits require funding.
Open source has always depended on a viable business model (of one or many companies) that can sustain not just the release but also the ongoing maintenance of the standard.
The problem with corporate control isn't that corporations require funding or are private; the problem is that they are motivated first by profit. Sometimes exclusively. So when "what's best" is at odds with "what's profitable", they tend to make the wrong choice.
Take this project for instance. If one day their choice is to forgo all future profits, or to close the source to continue operating, it's very likely they will close the source to continue operating, rather than forgoing profits. We've seen it happen enough to be wary from the project structure alone.
But that has nothing to do with the development, maintenance, or enforcement of the standards, since the corporations have no involvement in any of the standards, and are probably opposed to their very existence.
It's a great counterexample to "corporate money and influence are required to develop, maintain, and enforce standards", because it shows that it sprang up on its own in the absence of money and has persisted for decades.
Yes, monkeys could write Shakespeare's works given enough time.
But in this case, it is really hard to know whether a model is identifying "correct answers" reliably. A lot of answers are hard to judge as correct or not even when written by humans, much more so when written by a machine trying to trick readers into thinking the answer is correct. It can be done, but I doubt LLMs are being trained to identify the subtle differences between those types of potential answers.
I agree with most of what you said. However, it is not correct to say they are executing algorithms, just as it is not correct to say that a water fountain is executing an algorithm.
It is correct to say that, in theory, a water fountain can be modeled by an algorithm: either at a high level, with a simplified model, or by simulating every atom that makes up the fountain.
The models behind those simulations are certainly algorithms.
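To illustrate the high-level, simplified end of that spectrum: a few lines of ideal projectile motion already model a fountain jet, and that code is plainly an algorithm even though the water itself is just obeying physics. A toy sketch with made-up numbers:

    G = 9.81  # gravitational acceleration, m/s^2

    def jet_height(v0: float, t: float) -> float:
        """Height (m) of water launched straight up at v0 m/s, after t seconds.
        Ignores drag, droplet breakup, turbulence -- everything the atoms actually do."""
        return max(0.0, v0 * t - 0.5 * G * t * t)

    # Sample the trajectory of a jet launched at 5 m/s (arbitrary value)
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"t={t:.2f} s  height={jet_height(5.0, t):.2f} m")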
I agree that the metaphor is good; the point is understood. However, the specific clothes that are considered OK in one context or another are always changing, and they are based on criteria that most of the time make no sense.
But the moderator AI does not need to understand the meme. Ideally, it should only care about text that violates the law.
I don't think current LLMs need to improve that much to distinguish actual threats of harm or hate speech from any other type of communication. And I think those should be the only sorts of banned speech.
And if Facebook wants to impose additional censorship rules, then it should at least clearly list them, make the moderator AI explain which rules were violated, and offer the possibility to appeal in case it gets it wrong.
Any other type of bot moderation should be unacceptable.
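For concreteness, here is a rough sketch of the shape I mean: the rules are published up front, every removal cites one of them with an explanation, and every decision is appealable. `call_llm` is a placeholder for whatever model would actually be used; nothing here is a real API.

    from dataclasses import dataclass

    # Hypothetical moderation flow: published rules, a cited rule plus an
    # explanation on every removal, and an appeal path. Nothing here is a real API.

    RULES = {
        "R1": "Credible threats of harm against a person or group.",
        "R2": "Hate speech targeting a protected characteristic.",
        # Any additional platform-specific rules would have to be listed
        # here, visibly, rather than applied silently.
    }

    @dataclass
    class Decision:
        allowed: bool
        rule_id: str | None = None      # which published rule was violated
        explanation: str | None = None  # the model's stated reasoning, shown to the user
        appealable: bool = True         # every automated removal can be appealed

    def call_llm(prompt: str) -> dict:
        """Placeholder for a real model call, assumed to return {'rule_id', 'rationale'}."""
        raise NotImplementedError

    def moderate(post: str) -> Decision:
        rule_list = "\n".join(f"{rid}: {text}" for rid, text in RULES.items())
        verdict = call_llm(
            "Check this post ONLY against the following rules and cite one if violated:\n"
            f"{rule_list}\n\nPost: {post}"
        )
        if verdict.get("rule_id") in RULES:
            return Decision(False, verdict["rule_id"], verdict.get("rationale"))
        return Decision(True)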
I would normally agree with you, but there are cases where what was said and its meaning are disconnected.
Example: Picture of a plate of cookies.
Obese person: “I would kill for that right now”.
Comment flagged. Obviously the person was being sarcastic, but if you took the words at face value, it's about the most negative sentiment score you could have: to kill something. Moderation bots do a good job of detecting the comment but a pretty poor job of detecting its meaning, at least the current models. Only Meta knows what's cooking in the oven to tackle it; I'm sure they are working on it with their models.
I would like a more robust appeal process. Something like: a bot flags the comment, you appeal, an appeal bot runs it through a more thorough model and upholds the flag, you appeal again, and then a human or "more advanced AI" actually determines whether it was a joke or sarcasm, or whether you have a history of violent posts and the flag was justified.
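Roughly the following escalation ladder, where the three review functions are stubs (not real Meta APIs): a cheap first-pass flagger, a more thorough model for the first appeal, and a human or stronger model for the second, which also gets context such as sarcasm cues and the poster's history.

    # Hypothetical tiered appeal process; the review functions are stubs.

    def cheap_flagger(text: str) -> bool:
        """First-pass sentiment/violence classifier (stub)."""
        raise NotImplementedError

    def thorough_model(text: str, context: dict) -> bool:
        """Slower, more capable model used on the first appeal (stub)."""
        raise NotImplementedError

    def human_or_advanced_review(text: str, context: dict) -> bool:
        """Human or stronger model that weighs sarcasm, joke framing, and
        whether the account has a real history of violent posts (stub)."""
        raise NotImplementedError

    def review(text: str, context: dict, appeal_round: int = 0) -> str:
        if appeal_round == 0:
            return "flagged" if cheap_flagger(text) else "allowed"
        if appeal_round == 1:
            return "upheld" if thorough_model(text, context) else "restored"
        return "upheld (final)" if human_or_advanced_review(text, context) else "restored"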