'It's not AGI yet' - the implication is insufferable. It's a language model that is incapable of any kind of reasoning; the talk of 'AGI' is glib utopianism, a very heavy kind of Kool-Aid. If we had referred to this tech as anything other than 'intelligence' - 'adaptive algorithms' or 'weighted node storage', for example - we'd likely have a completely different popular mental model for it.
There will be no 'AI model' that is 'AGI', rather, a large swath of different technologies and models, operating together, will give the appearance of 'AGI' via some kind of interface.
It will not appear as an 'automaton' (aka a single processing unit) and it certainly will not be an 'aha moment'.
In 10 years, you'll be able to ask various kinds of agents, which will use varying kinds of AI to interpret speech and infer context, and which will interface with various AI APIs. In many ways it'll resemble what we have today, but with more nuance.
The net effect will evolve over time to appear a bit like 'AGI', but there won't be an 'entity' to identify as 'it'.
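To make that 'many models behind one interface' picture concrete, here's a rough sketch. All of the names and handlers are made up for illustration; a real system would call out to separate speech, intent and task services:

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Assistant:
        # Each capability is its own model/service; the 'entity' the user
        # talks to is really just this thin layer of glue code.
        transcribe: Callable[[bytes], str]          # speech model
        infer_context: Callable[[str], str]         # context/intent model
        handlers: Dict[str, Callable[[str], str]]   # task-specific models/APIs

        def ask(self, audio: bytes) -> str:
            text = self.transcribe(audio)
            intent = self.infer_context(text)
            handler = self.handlers.get(intent, lambda q: "sorry, no idea")
            return handler(text)

    # Stub implementations so the sketch runs end to end.
    assistant = Assistant(
        transcribe=lambda audio: audio.decode(),
        infer_context=lambda text: "weather" if "weather" in text else "chat",
        handlers={"weather": lambda q: "Cloudy, 14C", "chat": lambda q: "Hello!"},
    )

    print(assistant.ask(b"what's the weather like"))  # routed to the weather handler

There is no 'it' anywhere in that code, only routing, which is the point: the appearance is composed out of parts rather than embodied in one entity.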
There is no reasoning, which is why it will be impossible to move LLMs past certain kinds of tasks.
They are 'next word prediction models' which elicit some kinds of reasoning embedded in our language, but it's a crude approximation at best.
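For anyone unfamiliar with what 'next word prediction' means mechanically, here is a deliberately crude toy - a bigram counter, nothing like a real transformer - just to show the shape of the operation:

    from collections import Counter, defaultdict

    # Count which word follows which in a tiny corpus, then always emit the
    # most frequent successor. Real LLMs are vastly more sophisticated, but
    # the output contract is the same: given context, emit the next token.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    def predict_next(word: str) -> str:
        options = successors.get(word)
        return options.most_common(1)[0][0] if options else "<unk>"

    word = "the"
    for _ in range(5):
        print(word, end=" ")
        word = predict_next(word)
    # Any 'reasoning' in the output is whatever statistics the corpus carried.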
The AGI metaphors are Ayahuasca Kool-Aid, like a magician duped by his own magic trick.
There will be no AGI, especially because there will be no 'automaton', aka distinct entity, that exhibits those behaviours.
Imagine if someone proposed 'Siri' were 'conscious' - well nobody would say that, because we know it's just a voice-based interface onto other things.
Well, Siri is about to appear much smarter thanks to LLMs, and be able to 'pass the bar exam' - but ultimately nothing has fundamentally changed.
Whereas each automaton in the human world had its own distinct 'context', the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be spread across the various systems that we use daily.
I know it's hard, but you have to choose here. Are they reasoning or are they not reasoning?
> next word prediction models
238478903 + 348934803809 = ?
Predict the next word. What process do you propose we use here? "Approximately" reason? That's one hell of a concept you conjured up there. Very interesting one. How does one "approximately" reason, and what makes it so that the approximation will forever fail to arrive at its desired destination?
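For reference, here is what computing that 'next word' exactly looks like as a procedure - schoolbook carry addition, a minimal sketch:

    def add_digit_strings(a: str, b: str) -> str:
        # Schoolbook long addition: right to left, one digit at a time,
        # propagating the carry.
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            carry, d = divmod(int(da) + int(db) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_digit_strings("238478903", "348934803809"))  # 349173282712
    print(238478903 + 348934803809)                        # 349173282712, same answer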
> Whereas each automaton in the human world had its own distinct 'context', the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be spread across the various systems that we use daily.
Human context is fleeting as well. Time, dementia and ultimately death can attest to that. Even in life, identity is complicated and multifaceted, without a singular 'I'. For all intents and purposes, we too are composed of massive amounts of loosely linked subsystems vaguely resembling some sort of unity. I agree with you on that one. General intelligence IMO probably requires some form of cooperation between disparate systems.
But you see some sort of fundamental difference here between "biology" and "tech" that I just cannot. If RAM was implemented biologically, would it cease to be RAM? I fail to see what's so special about the biological substrate.
To be clear, I'm not saying LLMs are AGI, but I have a hard time dismissing the notion that some combination of systems - of which LLMs might be one - will result in something we just have to call generally intelligent. Biology just beat us to it, like it did with so many things.
> It's just tech, that's it.
The human version is: it's just biology, that's it. What's the purpose of stating that?
The 'Apple Way' is more user-centric than 'the short-term profit way'.
One of Jobs' primary philosophies was 'creating great products', aka 'craftsmanship', which is de facto a user-centric orientation.
If companies made products the Apple Way, they'd probably have better products.
The Apple Way is of course going to be a culty foundation for keeping behaviours oriented around a common sentiment; it helps the Apple org focus. It probably will feel a bit odd and constraining in some ways, but that has more advantages than downsides.
> The charging port of the second-generation Magic Mouse is located on its underside, preventing the mouse from being used while charging.
I've never seen another wireless mouse with that problem. Never. Only Apple, it seems, could dream up that specific misdesign. There's no benefit to anyone in that, only detriment, and I am at a loss every time I think about it.
Onwards:
> The Magic Mouse uses its acrylic multi-touch surface for 360-degree scrolling, replacing the rubber scroll ball on the Mighty Mouse. The mouse does not support left and right-clicking simultaneously, and also removes the ability to middle click without third-party software workarounds.
I will be honest: I straight-up don't like the idea of a mouse having a touch pad on its top. I think a touch pad is a poor imitation of a mouse, such that if you have a mouse, using a touch pad (literally) on top of it is using a better interface to replicate a worse one.
However, my personal likes and dislikes pale in comparison to the simple fact the touch pad was also a usability regression for clicking. I absolutely cannot see this as user-focused.
Did it have good build quality? Maybe, but it hardly matters if something is a good implementation of a bad design.
There is obviously some legit cred here, and it's a definite downgrade - but it's not being sent to the woods either.
Given that 'nobody else cares at all, whatsoever', that adds context which we should include in our understanding. Nobody is missing out on grant money, getting a stain on their resume, losing a job, or being publicly dragged, meaning the slight is ultimately very personal.
There is a legit grievance here, but it's overstated.
Most gripes have a kernel of truth, the issue is to match up the size of the truth, with the size of the kernel.
If nobody cared, then why did someone make such a big deal out of the talk making them uncomfortable and therefore needing to downgrade it? It obviously did matter to more than just the speaker.
You're not speaking for everyone. I care about the Rust leadership behaving properly for example, because issues like this may be a sign of other problems with respecting the community. And I would like to trust them to resolve issues fairly and quickly since I will rely on the project's progress in the future and don't want people leaving because of mishandling social issues.
I think the framing of 'uncomfortable' is just a poor choice of words among a bunch of people who, it would seem, have difficulty with these things. My gosh, this is all out of proportion.
My gosh no, this is toxic - it is not up to people to have to contextualise or defend their behaviour given others' sensitivities.
It is fundamentally bigoted to assume racism, and fundamentally up to people to provide at least some evidence or context if they suspect there is.
The commenter is not going on the offensive, but rather the defensive, as someone else brought the issue up.
"makes you look (see, optics again) either incredibly naïve or covering for some pretty bad behaviour."
You have arbitrarily (and repulsively) accused a commenter of 'covering for some crime' - typical of the social justice fanaticism that otherwise empathic people have come to loathe. You may want to contemplate why you would say such a thing.
This isn't an issue of race, it's not particularly part of the dialogue, it's an issue of perceived slight and professional victimhood. Not everything is hyper intersectional.
"makes you look (see, optics again) either incredibly naïve or covering for some pretty bad behaviour."
It is point blank what you said. It's plainly ridiculous that you would deny what is right there as though we're misinterpreting it; this is repulsive gaslighting.