
Airbnb wanted access to my bank account transaction details (via Plaid) a while ago, "to verify my credit card". Hotels have never looked so appealing.

At some point, booking.com decided it doesn't want to accept my money because I'm apparently a fraud, so I use it to search and then book directly with the hotel, and booking.com doesn't get its commission.

Or until they’ve successfully “demonstrated” that it always was impossible.

> Apple is guaranteed to have lawyers, admins, and executives already on the payroll for this task.

As both a shareholder and user, I really wish they’d invest their resources into feature development instead of manufacturing obstacles.


Does it really matter how the LLM got to a (correct) conclusion?

As long as the explanation itself is sound and I can follow it, I don't really care if the internal process looked quite different, provided it's not outright deceptive.


I'm just quoting the author of TFA, who did in fact appear to want periodic explanations of how their "agent" arrived at its decisions.

That would arguably not be artificial intelligence, but rather simulated natural intelligence.

It also seems orders of magnitude less resource efficient than higher-level approaches.


What’s the difference? Arguably, the latter will be better than the former.

How many orders of magnitude? Nearly as many as it would be less efficient?

It’s like comparing apples and oranges.

I think we're noticing that our goalposts for AGI were largely "we'll recognize it when we see it", and now that we're getting to some interesting places, it turns out that different people actually understood very different things by that.

I think many people are now learning that their definition of intelligence was actually not very precise.

From what I've seen, in response to that, goalposts are then often moved in the way that requires least updating of somebody's political, societal, metaphysical etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)


I remember when the goal posts were set at the "Turing test."

That's certainly not coming back.


The goal posts were never set at the "Turing test".

It's not a real thing. You do not remember the goal posts ever being there.

Turing put forth a thought experiment in the early days of discussions about "artificial" thinking machines, on a very philosophical level.

Add to that, nobody who claims to have "passed" the Turing test has ever run an actual instance of that thought experiment, which is about taking two respondents and finding out which is human. It is NOT talking to a single respondent and deciding whether they are an LLM or not.

It also has never been considered a valid "test" of "intelligence", as it was obvious from the very beginning that tricking a person wasn't really meaningful, since most people can be tricked by even simple systems.

ELIZA was the end of any serious thought around "the Turing test": it was able to "trick" tons of people and showed how useless the Turing thought experiment was. Anyone who claims ELIZA is intelligent would be very silly.


If you know the tricks, won't you be able to figure out whether some chat is done by an LLM?

Arguably there isn't even a widely shared, coherent definition of intelligence: to some people, it might mean pure problem solving without in-task learning; others equate it with encyclopedic knowledge, etc.

Given that, I consider it quite possible that we'll reach a point where even more people consider LLMs to have reached or surpassed AGI, while others still only consider them "sufficiently advanced autocomplete".


I'd believe this more if companies weren't continuing to use words like reason, understand, learn, and genius when talking about these systems.

I buy that there's disagreement on what intelligence means in the enthusiast space, but "thinks like people" is pretty clearly the general understanding of the word, and the one that tech companies are hoping to leverage.


The defining feature of true AGI, in my opinion, is that the software itself would decide what to do and do it, with no external prompts beyond environmental input.

Doubly so if the AGI writes software for itself to accomplish a task it decided to do.

Once someone has software like that, not a dog that is sicced on a task, but a bloodhound that seeks out novelty and accomplishment for its own personal curiosity or to test its capabilities, then you have a good chance of convincing me that AGI has been achieved.

Until then, we have fancy autocomplete.


Have you tried asymptotically approaching the speed of light?

I’m quite certain you can approach it in any convenient manner

6 alphanumeric, case-insensitive characters only allow for about 2 billion unique combinations. I'd have guessed there were more reservations made than that?

Or are PNR locators recycled after a while?
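
(A quick sanity check of that figure, as a minimal Python sketch. The 36-symbol alphabet of 26 letters plus 10 digits is an assumption here; real PNR alphabets often exclude look-alike characters such as 0/O or 1/I, which would shrink the space further.)

    # 6 positions, each drawn from 26 letters + 10 digits (case-insensitive)
    combos = 36 ** 6
    print(f"{combos:,}")  # 2,176,782,336, i.e. about 2.2 billion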


Yes, I've got two physical boarding passes with the same PNR in my drawer.

Electromagnetic waves have perfect/lossless superposition, so radiation can’t really degrade a signal that way.

The big limiting factors are free space path loss and noise.
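
(To put a number on the path-loss part, here's a minimal Python sketch of the standard free-space path loss formula, FSPL = 20·log10(4·pi·d·f/c) in dB. The function name and example values are illustrative, not from the thread.)

    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
        c = 299_792_458.0  # speed of light in m/s
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    # Example: a 2.4 GHz signal over 100 m of free space
    print(round(fspl_db(100, 2.4e9), 1))  # ~80.1 dB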


That's only true in classical electrodynamics, as it happens. If you're in a very strong B-field, like you might find near a compact object, you'll get nonlinear QED effects.

You can get a low order correction with Euler-Heisenberg: https://en.wikipedia.org/wiki/Euler%E2%80%93Heisenberg_Lagra...
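
(For reference, the leading-order correction has roughly this form in natural units, sketched from memory of the weak-field expansion; double-check the prefactor against the linked article:)

    \mathcal{L}_{\mathrm{EH}} \approx \frac{2\alpha^2}{45\, m_e^4}
      \left[ \left(\mathbf{E}^2 - \mathbf{B}^2\right)^2
           + 7 \left(\mathbf{E} \cdot \mathbf{B}\right)^2 \right]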


