
'It's not AGI yet' - the implication is insufferable. It's a language model that is incapable of any kind of reasoning; the talk of 'AGI' is a glib utopianism, a very heavy kind of koolaid. If we had referred to this tech as anything other than 'intelligence' - for example, if we had chosen 'adaptive algorithms' or 'weighted node storage' - we'd likely have a completely different popular mental model for it.

There will be no 'AI model' that is 'AGI', rather, a large swath of different technologies and models, operating together, will give the appearance of 'AGI' via some kind of interface.

It will not appear as an 'automaton' (aka a single processing unit) and it certainly will not be an 'aha moment'.

In 10 years, you'll be able to ask various agents, of different kinds, which will use varying kinds of AI to interpret speech and infer context, and which will interface with various AI APIs; in many ways it'll resemble what we have today, but with more nuance.

The net appearance will evolve over time to appear a bit like 'AGI' but there won't be an 'entity' to identify as 'it'.


> incapable of any kind of reasoning

If this were true the debate would be a hell of a lot easier. Unfortunately, it is not.


In fact, comments like the one you are responding to are the most effective way to respond to ‘it hallucinates’.


There is no reasoning, which is why it will be impossible to move LLMs past certain kinds of tasks.

They are 'next word prediction models' which elicit some kinds of reasoning embedded in our language, but it's a crude approximation at best.

The AGI metaphors are Ayahuasca Koolaid, like a magician duped by his own magic trick.

There will be no AGI, especially because there will be no 'automaton', aka a distinct entity, that elicits those behaviours.

Imagine if someone proposed 'Siri' were 'conscious' - well nobody would say that, because we know it's just a voice-based interface onto other things.

Well, Siri is about to appear much smarter thanks to LLMs, and be able to 'pass the bar exam' - but ultimately nothing has fundamentally changed.

Whereas each automaton in the human world has its own distinct 'context' - the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be spread across the various systems that we use daily.

It's just tech, that's it.


> There is no reasoning

> elicit some kinds of reasoning

I know it's hard, but you have to choose here. Are they reasoning or are they not reasoning?

> next word prediction models

238478903 + 348934803809 = ?

Predict the next word. What process do you propose we use here? "Approximately" reason? That's one hell of a concept you conjured up there. Very interesting one. How does one "approximate" reasoning, and what makes it so that the approximation will forever fail to arrive at its desired destination?
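For what it's worth, the sum itself is easy to check mechanically. Here is a minimal Python sketch of the grade-school addition algorithm (this is an illustration of the contrast being argued over, not a claim about how an LLM works internally: the carry propagates right-to-left, while a next-word predictor has to emit digits left-to-right):

```python
def add_digitwise(a: str, b: str) -> str:
    """Grade-school addition: one digit at a time with a carry,
    working from the least significant digit (right-to-left) --
    the opposite order from left-to-right token emission."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digitwise("238478903", "348934803809"))  # 349173282712
```

So the answer to the prompt above is 349173282712; the open question in the thread is only whether a next-word predictor can reliably arrive at it.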

> Whereas each automaton in the human world had it's own distinct 'context' - the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be across various systems that we use daily.

Human context is fleeting as well. Time, dementia and ultimately death can attest to that. Even in life, identity is complicated and multifaceted, without a singular 'I'. For all intents and purposes we too are composed of massive amounts of loosely linked subsystems vaguely resembling some sort of unity. I agree with you on that one. General intelligence IMO probably requires some form of cooperation between disparate systems.

But you see some sort of fundamental difference here between "biology" and "tech" that I just cannot. If RAM was implemented biologically, would it cease to be RAM? I fail to see what's so special about the biological substrate.

To be clear, I'm not saying LLMs are AGI, but I have a hard time dismissing the notion that some combination of systems - of which LLMs might be one - will result in something we just have to call generally intelligent. Biology just beat us to it, like it did with so many things.

> It's just tech, that's it.

The human version is: it's just biology, that's it. What's the purpose of stating that?


Sam is responsible for marketing at the team which popularized a certain kind of AI product. No need for a personal attack.


It's absurd that people still think that a language model in which a bunch of tokens are indexed is some kind of 'AGI'.


It's not just UX, it's everything customer-facing.

Only organisations with a specific cultural focus on user centrism will have great products.

Apple at least has some of this, but not always.

Have faith: there is a payoff in the long run for many product focused companies. Not all though.


Apple makes products the Apple Way.

Some people think the Apple Way defines good UX, so they think Apple products have good UX.

Anyone who prefers a non-Apple UX is setting themselves up for a very condescending rant.


The 'Apple Way' is more user centric than 'the short term profit way'.

One of Jobs's primary philosophies was 'creating great products', aka 'craftsmanship', which is a de facto user-centric orientation.

If companies made products the Apple Way, they'd probably have better products.

The Apple Way is of course going to be a culty foundation for keeping behaviours oriented around a common sentiment, it helps the Apple org focus. It probably will feel a bit odd and constraining in some ways, but that has more advantages than downsides.


Let's look at the Magic Mouse:

https://en.wikipedia.org/wiki/Magic_Mouse

> The charging port of the second-generation Magic Mouse is located on its underside, preventing the mouse from being used while charging.

I've never seen another wireless mouse with that problem. Never. Only Apple, it seems, could dream up that specific misdesign. There's no benefit to anyone in that, only detriment, and I am at a loss every time I think about it.

Onwards:

> The Magic Mouse uses its acrylic multi-touch surface for 360-degree scrolling, replacing the rubber scroll ball on the Mighty Mouse. The mouse does not support left and right-clicking simultaneously, and also removes the ability to middle click without third-party software workarounds.

I will be honest: I straight-up don't like the idea of a mouse having a touch pad on its top. I think a touch pad is a poor imitation of a mouse, such that if you have a mouse, using a touch pad (literally) on top of it is using a better interface to replicate a worse one.

However, my personal likes and dislikes pale in comparison to the simple fact the touch pad was also a usability regression for clicking. I absolutely cannot see this as user-focused.

Did it have good build quality? Maybe, but it hardly matters if something is a good implementation of a bad design.


Cherry picking.

Apple is almost the biggest company in the world, and they command very high margins for a reason that is 'more than marketing'.

Ironically, you're poking at the 'mouse' when their trackpad is the reason I stay on Mac notebooks.

Their 'hit rate' is much higher than most.


I spend much of my time looking at dusty old code, arguing with my former self, undoing tangled knots of hitherto maligned bits of supposed genius.


This is not the right analogy.

'Criminally adjacent' means literally, adjacent to organised crime.

This is not the same thing as above-board businesses being 'adjacent' to anything that is not, or taking 'aggressive competitive measures'.


'feels rigged'.

My friend, the entire thing is rigged from top to bottom, from the time you see the ad, until you are back at home.

The only way to win is to accept the grift on those terms.


Nobody cares but the speaker themselves.

There is obviously some legit cred here, and it's a definite downgrade - but it's not being sent to the woods either.

Given that 'nobody else cares at all, whatsoever' - that adds context which we should include in our understanding. Nobody is missing grant money, getting a stain on their resume, losing a job, or being publicly dragged, meaning the slight is ultimately very personal.

There is a legit grievance here, but it's overstated.

Most gripes have a kernel of truth; the issue is to match the size of the reaction to the size of the kernel.


If nobody cared, then why did someone make such a big deal out of the talk making them uncomfortable and therefore needing to be downgraded? It obviously did matter to more than just the speaker.


> Nobody cares but the speaker themselves.

You're not speaking for everyone. I care about the Rust leadership behaving properly for example, because issues like this may be a sign of other problems with respecting the community. And I would like to trust them to resolve issues fairly and quickly since I will rely on the project's progress in the future and don't want people leaving because of mishandling social issues.


Yes, we care that people behave themselves; we're referring to 'who cares' about whether a talk is a keynote or not.

If Jim Smith is keynote, great; if they're giving a regular conference talk, great. Nobody is going to care one way or the other but the speaker themselves.


I think the framing of 'uncomfortable' is just a poor choice of words among a bunch of people who, it would seem, have difficulty with these things. My gosh, this is all out of proportion.


My gosh no, this is toxic - it is not up to people to have to contextualise or defend their behaviour given others' sensitivities.

It is fundamentally bigoted to assume racism; it is up to those who suspect it to provide at least some evidence or context.

The commenter is not going on the offensive, but rather the defensive, as someone else brought the issue up.

"makes you look (see, optics again) either incredibly naïve or covering for some pretty bad behaviour."

You have arbitrarily (and repulsively) accused a commenter of 'covering for some crime' - typical of the social justice fanaticism that otherwise empathic people have come to loathe - you may want to contemplate why you might say such a thing.

This isn't an issue of race, it's not particularly part of the dialogue, it's an issue of perceived slight and professional victimhood. Not everything is hyper intersectional.


That’s a lot of words you just put in my mouth.

I’m not going to bother responding to every point you just hallucinated, but I do want to point out that I don’t think a crime has been committed.


You very literally accused someone of 'covering something bad up' because of a random post.


Not at all what they said.


That’s not what I said.


"makes you look (see, optics again) either incredibly naïve or covering for some pretty bad behaviour."

It is point blank what you said, it's plainly ridiculous that you would deny what is right there as though we're misinterpreting it, this is repulsive gaslighting.


The post is about optics. Read again what I wrote.

