Hacker News | chii's comments

apparently, 40MWh of capacity is enough to travel 40 nautical miles. The distance between Tasmania and South America is around 6,500–7,500 nautical miles.

For comparison, a wide body airliner needs ~0.15MWh to travel 1 nautical mile.
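Spelling out the arithmetic (a back-of-envelope sketch using only the figures quoted above):

```python
# Back-of-envelope figures from the thread (all approximate).
ship_capacity_mwh = 40        # battery capacity, MWh
ship_range_nmi = 40           # range on that capacity, nautical miles
airliner_mwh_per_nmi = 0.15   # wide-body energy use per nautical mile

ship_mwh_per_nmi = ship_capacity_mwh / ship_range_nmi   # 1.0 MWh/nmi

# Energy a battery ship would need for the quoted 6,500-7,500 nmi crossing
crossing_nmi = (6500, 7500)
crossing_mwh = [d * ship_mwh_per_nmi for d in crossing_nmi]
print(ship_mwh_per_nmi)   # 1.0
print(crossing_mwh)       # [6500.0, 7500.0]
```

So at these rates a crossing that long would need on the order of 160-190 times the ship's quoted capacity.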

A wide body airliner doesn't carry "up to 2,100 passengers and 225 vehicles".

It also does so in a medium where the main drag force comes from air rather than water, which is probably a comparably significant factor.

It also needs to beat up that air enough to make the resultant forces overcome gravity acting on the airliner whereas the ship just gets to float there.

Apples to oranges.


Yup.

Or to structure it as the earlier comment did: for comparison, it takes me about 0.000065 MWh to cycle 1 nautical mile.

That's a couple of apples.
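Putting the three figures on a per-passenger basis makes the comparison a little less apples-to-oranges. Note the ~400-seat wide-body load below is my assumption, not a figure from the thread:

```python
# Per-passenger energy in MWh per nautical mile, using the thread's
# figures. The 400-seat wide-body load is an assumed illustrative value.
ship_mwh_per_nmi = 40 / 40        # 1.0, from 40 MWh over 40 nmi
ship_passengers = 2100            # "up to 2,100 passengers"
airliner_mwh_per_nmi = 0.15
airliner_passengers = 400         # assumption: typical wide-body load
cyclist_mwh_per_nmi = 0.000065

per_pax = {
    "ship": ship_mwh_per_nmi / ship_passengers,
    "airliner": airliner_mwh_per_nmi / airliner_passengers,
    "cyclist": cyclist_mwh_per_nmi,
}
for mode, e in per_pax.items():
    print(f"{mode}: {e:.6f} MWh per passenger-nmi")
```

Under these (assumed) loads, the fully loaded ship and airliner land within about 25% of each other per passenger-mile, while the cyclist beats both handily.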



You also aren’t doing so while carrying 2,100 passengers and 225 cars, I imagine.

Plus they are going to get very waterlogged cycling that nautical mile.

Some dedicated cyclists will cycle in any weather.

I would be extremely surprised if the ship were designed to use 100% of its capacity in one leg of its intended route.

But robotics has had the means to do the majority of the physical labour already - it's just not worth the money to replace humans, as human labour is cheap (and more flexible than robots).

With knowledge work becoming less high-paying, the supply of physical labour should increase as well, which drops its price. This means it's actually less likely that the advent of LLMs will make physical labour more automated.


The "usefulness" excuse is irrelevant, and the claim that phones/the internet were "immediately useful" is just a post hoc rationalization. It's basically trying to find a reasonable-sounding reason why opposition to AI is valid and not merely self-interested.

The opposition to AI comes from people who feel threatened by it: it either threatens their livelihood (or their family's/friends'), or they feel they are unable to benefit from AI in the way they did from the internet and mobile phones.


The usefulness of mobile phones was identifiable immediately, and it is absolutely not 'post hoc rationalization'. The issue was the cost - once low-cost mobile phones were produced they almost immediately became ubiquitous (see Nokia's share price from the release of the Nokia 6110 onwards, for example).

This barrier does not exist for current AI technologies, which are being given away free. A minor thought experiment: just how rapid would the uptake of mobile phones have been if they had been given away free?


It's only low cost for casual chat users. If you are using it for anything beyond that, you are paying or sitting in a long queue (likely both).

You may just be a little early to the renaissance. What happens when the models we have today run on a mobile device?

The Nokia 6110 was released 15 years after the first commercial cell phone.


Yes although even those people paying are likely still being subsidized and not currently paying the full cost.

Interesting thought about current SOTA models running on my mobile device. I've given it some thought and I don't think it would change my life in any way. Can you suggest some way that it would change yours?


Eh, quite the contrary. A lot of anti-AI people genuinely wanted to use AI but ran into the factual reality of the software's limitations. It's not that it's going to take my job, it's that I was told it would redefine how I do work and was exponentially improving, only to find out that it just kind of sucks and hasn't gotten much better this year.

> in turn act irrationally

It isn't irrational to act in self-interest. If an LLM threatens someone's livelihood, it matters not one bit that it helps humanity overall - they will oppose it. I don't blame them. But I also hope that they cannot succeed in opposing it.


It's irrational to genuinely hold false beliefs about the capabilities of LLMs. But at this point I assume around half of the skeptics are emotionally motivated anyway.

As opposed to having skin in the game for llms and are blind to their flaws???

I'd assume that around half of the optimists are emotionally motivated this way.


> got down-skilled.

who's to say that it's a down?

Orchestrating and doing higher level strategic planning, such that the sub-tasks can be AI produced, is a skill that might be higher than programming.


> they don't actually understand how

but if it empirically works, does it matter if the "intelligence" doesn't "understand" it?

Does a chess engine "understand" the moves it makes?


It matters if AGI is the goal. If it remains a tool to make workers more productive, then it doesn't need to truly understand, since the humans using the tools understand. I'm of the opinion AI should have stood for Augmented (Human) Intelligence outside of science fiction. I believe that's what early pioneers like Douglas Engelbart thought. Clearly that's what Steve Jobs and Alan Kay thought computing was for.

AGI is such a meaningless concept. We can’t even fully define what human intelligence is (or whether a human failing at it means they lack human intelligence). It’s just philosophy.

AGI is about as well defined as "full self-driving" :D

It's a useless philosophical discussion.


If it empirically works, then sure. If instead every single solution it provides beyond a few trivial lines falls somewhere between "just a little bit off" and "relies entirely on core library functionality that doesn't actually exist" then I'd say it does matter and it's only slightly better than an opaque box that spouts random nonsense (which will soon include ads).

Those are 2024-era criticisms of LLMs for code.

Late 2025 models very rarely hallucinate nonexistent core library functionality - and they run inside coding agent harnesses, so if they DO, they notice that the code doesn't work and fix it.


get ready to tick those numbers over to 2026!

This sounds like you're copy-pasting code from ChatGPT's web interface, which is very 2024.

Agentic LLMs will notice if something is crap and won't compile and will retry, use the tools they have available to figure out what's the correct way, edit and retry again.


This is a semantic dead end when discussing results and career choices.

> I’m basically just the conductor of all those processes.

A car moves faster than you, can last longer than you, and can carry much more than you. But somehow, people don't seem to be scared of cars displacing them (yet)? Perhaps self-driving will in the near future, but there still needs to be someone deciding how best to utilize that car - surely it isn't deciding to go to destination A without someone telling it to.

> I feel like I’m doing the work of an entire org that used to need twenty engineers.

And this is great. A combine harvester does in a day what used to take an entire village a week. More output for fewer people/resources expended means more wealth produced.


> a car moves faster than you, can last longer than you, and can carry much more than you. But somehow, people don't seem to be scared of cars displacing them(yet)?

People whose lives were based around using horses for transportation were very scared of cars replacing them, though, and rightly so: riding horses is something people do for leisure today, not out of necessity. I feel that's a more apt analogy than comparing cars to any human.

> More output for less people/resources expended means more wealth produced.

This is true, but it probably also means that this "more wealth produced" will be more concentrated, because it's easier to convince one person using AI that you should have half of the wealth they produce than to convince 100 people that you should have half of what they produce. From where I'm standing, it seems to have the same effects (though not as widespread or impactful, yet) as industrialization, which had that side-effect as well.


Analogies are not going to work. But it's just as likely that, in the worst case, we are stagecoach drivers who have to use cars when we really just love the quiet slowness of horses.

And parent is scared of being made redundant by AI because they need their job to pay for their car, insurance, gas and repairs.

> a car moves faster than you, can last longer than you, and can carry much more than you. But somehow, people don't seem to be scared of cars displacing them(yet)?

???

Cars replaced horses, not people.

In this scenario you are the horse.


Well no, you'd be the horse driver who becomes a car driver

> Well no, you'd be the horse driver who becomes a car driver

Well, that's the crux of the argument. The pro-AI devs are making the claim that devs are the horse-drivers; the anti-AI devs are making the claim that devs are the horses themselves.

There is no objective way to verify who is right in this case, we just have to see it play out.


I don't really understand what you are saying... Anyways glad you got what I am saying at least

> Your software will be mangled

How maintainable the source code is has no bearing on how a user perceives the software's usefulness.

If the software serves a good function for the user, they will use it, regardless of how bad the data structures are. Of course, good function also means reliability from the POV of the user. If your software is so bad that you lose data, obviously no one will use it.

But you are conflating the maintainability and sensibilities of clean tools, clean code and clean workspaces, with output.

A messy carpentry workshop can still produce great furniture.


This is bean-counter mentality. Personally, I just don’t believe this is how it works.

The intention/perspective of development is something on its own and doesn’t correspond to the end result directly.

This is such a complex issue that everything comes down to what someone believes


> not sure about ventilation

I assume you'd always need ventilation, because if the space is enclosed, the produced heat will just accumulate and go above 70°C eventually.

But ventilation is much cheaper than heating/cooling (and can be a passive vent rather than anything active, thus taking zero power to run except for maintenance/replacement).
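The sizing intuition here is a simple steady-state heat balance, Q = ṁ·cp·ΔT: the airflow must carry away the waste heat at whatever temperature rise you can tolerate. A sketch with illustrative numbers (the 5 kW load and 20 K rise are assumptions, not figures from the thread):

```python
# Steady-state ventilation sizing: airflow needed so a waste-heat load
# produces at most a given temperature rise. All figures illustrative.
cp_air = 1005    # J/(kg*K), specific heat of air at constant pressure
rho_air = 1.2    # kg/m^3, air density near sea level

def airflow_m3_per_s(heat_w: float, dt_k: float) -> float:
    """Volumetric airflow to carry heat_w watts at a dt_k kelvin rise."""
    mass_flow = heat_w / (cp_air * dt_k)   # kg/s, from Q = m_dot*cp*dT
    return mass_flow / rho_air

# e.g. 5 kW of equipment heat, allowing a 20 K rise over ambient
flow = airflow_m3_per_s(5000, 20)
print(round(flow, 3))   # ~0.207 m^3/s
```

A fraction of a cubic metre per second is well within what passive stack or wind-driven vents can move, which is the commenter's point about ventilation being the cheap option.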


> find a buyer

This buyer would rather buy from GOG than from you, unless you give a significant discount (and even then, trust is hard to establish).

Therefore, even if you might have a legal right to re-sell (which you really don't unfortunately), the actual sale won't happen.


That's not relevant to the issue of "ownership"
