Hedepig's comments

... and Oracle lost


And the lesson is not to trust Oracle.


Except on HN

HAHA!

Our servers are still down, though


HAHAHAHA


I'm not sure whether to laugh or cry

Maybe I'll do both


YEP that's the case nowwwww


I mean, this could be phase 2 of career growth. Focus on personal growth first by surrounding yourself with experts.


Maybe.

I've seen some pretty incompetent people just show up, play the game, and make it decently far. I prioritized learning first and got great offers right off the bat, but I definitely overdid it.

Different strategies for different people.


It all just seems too random and personal to me to make any blanket statement in this area.

The biggest thing to me is to not blow the small handful of opportunities that randomly present themselves. In this context, there is a good chance those won't be in agents or robotics.

At 50, I am happy with my life but I have blown almost all my opportunities. I have grinded out a decent life but it is pretty minimized vs what could have been.

I was offered a nice career in the financial markets at 21 through my part-time job, but I was going to college for CS, so I turned it down. CS was a delusion given my math skills and I dropped out. I ended up spending a decade after college grinding to get into the financial markets.

Before CS I thought I would get a PhD in psychology. A real Freudian fan boy professor made me abandon that. I thought Freud was so absurd on first encounter. At this point I have read almost all of Freud's work on my own. I should have stuck with the original plan in psychology.

On the other hand, maybe the path I have traveled is the optimal path because at 50 I am not done. I still have huge dreams to make this all be the right path. The bar is not that high.

It is all a relative valuation.


This is not totally my experience. I've debated a successful engineer who by all accounts has good reasoning skills, but he will absolutely double down on unreasonable ideas he's made up on the fly if he can find what he considers a coherent argument behind them. Sometimes, if I can absolutely prove him wrong, he'll change his mind.

But I think this is ego getting in the way, and our reluctance to change our minds.

We like to point to artificial intelligence, explain how it works differently, and then say it is therefore not "true reasoning". I'm not sure that's a good conclusion. We should look at the output and decide. As flawed as it is, I think it's rather impressive.


> ego getting in the way

That thing which was in fact identified thousands of years ago as the evil to ditch.

> reluctance to change our minds

That is clumsiness in a general drive that makes sense and is a recognized part of Belief Change Theory: epistemic change is conservative. I.e., when you revise a body of knowledge you do not want to lose valid notions. But conversely, you do not want to be unable to see change or errors, so there is a balance.

> it's not "true reasoning"

If it can be shown not to explicitly check its "spontaneous" ideas, then it is a correct formula to say 'it's not "true reasoning"'.


> then it is a correct formula to say 'it's not "true reasoning"'

why is that point fundamental?


Because just as you do not want a human interlocutor to speak out of their dreams, uttering the first ideas that come to mind unvetted, and instead want them to have thought long and hard, properly and diligently, you will equally want the same from an automation.


If we do figure out how to vet these thoughts, would you call it reasoning?


> vet these thoughts, would you call it reasoning

Probably: other details may be missing, but checking one's ideas is a requirement. The sought engine must have critical thinking.

I have expressed this very many times in the past two years, sometimes at length, always rephrasing it on the spot: the intelligent entity refines a world model iteratively by assessing its contents.


I do see your point, and it is a good point.

My observation is that the models are better at evaluating than they are generating; this is the technique used in the o1 models. They use unaligned hidden tokens as "thinking" steps that include evaluation of previous attempts.

I thought that was a good approach to vetting bad ideas.
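
A rough sketch of what I mean, in Python: draft several candidates, then use a separate evaluation pass to keep the strongest one. The generate and score callables here are placeholders for whatever model API you use; this illustrates the generate-then-evaluate idea, not o1's actual hidden-token mechanism.

    # Sketch: draft several candidate answers, then use the model as a judge
    # to keep the strongest one.
    from typing import Callable, List

    def best_of_n(prompt: str,
                  generate: Callable[[str], str],
                  score: Callable[[str, str], float],
                  n: int = 4) -> str:
        """generate(prompt) returns a candidate answer;
        score(prompt, candidate) returns a higher number for better answers."""
        candidates: List[str] = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: score(prompt, c))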


> My observation is that the [o1-like] models are better at evaluating than they are generating

This is very good (it is a very good thing that you see the out-loud reasoning working well as judgement),

but at this stage we face an architectural problem. The "model, exemplary" entities will iteratively judge and both (a) approximate the world model towards progressive truthfulness and completeness, and (b) refine their judgement abilities and general intellectual proficiency in the process. That (in a way) requires that the main body of knowledge (including "functioning", proficiency in the better processes) is updated. The current architectures I know of are static... Instead, we want them to learn: to understand (not memorize), e.g., that Copernicus is better than Ptolemy, and to use the gained intellectual keys in subsequent relevant processes.

The main body of knowledge - notions, judgements and abilities - should be affected in a permanent way, to make it grow (like natural minds can).


The static nature of LLMs is a compelling argument against their reasoning ability.

But they can learn, albeit in a limited way, using the context, though to my knowledge that doesn't scale well.


I read the original comment as hyperbole, but I can see why it was confusing.

Edit: that came out way more condescending than I intended


Have you had a go with the o1 range of models?


Yesterday, I got into an argument on the internet (shocking, I know), so I pulled out an old gravitation simulator that I had built for a game.

I had chatGPT give me the solar system parameters, which worked fine, but my simulation had an issue that I actually never resolved. So, working with the AI, I asked it to convert the simulation to constant-time (it was locked to the render path -- it's over a decade old). Needless to say, it wrote code that set the simulation to be realtime ... in other words, we'd be waiting one year to see the planets go around the sun. After I pointed that out, it figured out what to do but still got things wrong or made some terrible readability decisions. I ended up using it as inspiration instead, and then was able to have the simulation step at one-second resolution (which was required for a stable orbit) but render at 60 fps and compress a year into a second.
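
The shape of that fix is the standard fixed-timestep pattern: physics advances in fixed 1-second steps while the renderer runs at 60 fps, with a time-scale factor deciding how much simulated time each real second represents. A minimal sketch in Python, where step_orbit and render are placeholders and the constants are illustrative assumptions, not the actual code:

    # Sketch: fixed 1 s physics step, 60 fps rendering, one simulated
    # year compressed into roughly one real second of wall time.
    SIM_DT = 1.0                       # simulated seconds per physics step
    TIME_SCALE = 365.25 * 24 * 3600.0  # simulated seconds per real second
    FRAME_DT = 1.0 / 60.0              # real seconds per rendered frame

    def run(step_orbit, render, state, frames):
        """step_orbit(state, dt) advances the physics; render(state) draws a frame."""
        owed = 0.0                          # simulated time not yet stepped
        for _ in range(frames):
            owed += FRAME_DT * TIME_SCALE   # simulated time owed this frame
            while owed >= SIM_DT:           # many fixed steps per rendered frame
                step_orbit(state, SIM_DT)
                owed -= SIM_DT
            render(state)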


This sums up my experience as well. You can get an idea or just a direction from it, but the AI itself stumbles over its own feet instantly on any non-tutorial task. Sometimes I envy, and at the same time feel sorry for, successful AI-enabled devs, because it feels like they do boilerplate and textbook features all day. What a relief if something can write it for you.


I think you're right

The danger is twofold:

1. People don't eat fatty foods that have solid evidence for their benefits (e.g. virgin olive oil).

2. People substitute the lack of fat with sugar, which I believe (not an expert) has a lot of literature linking it to obesity.


Argh, what has Black Mirror done to our sense of optimism?

(I agree with you).


As I mentioned in another comment, I enjoyed the structure and found it much easier to process. This is despite mostly reading long-form articles and never having been a huge Twitter user.


On the contrary, I have never been a heavy twitter user, yet I would dearly love all articles I read to be broken down like this. I definitely find it easier to process a list structure like this.

