sorokod's comments

The Armorials section in Wikipedia:

The armorials of the South Sea Company, according to a grant of arms dated 31 October 1711, were: Azure, a globe whereon are represented the Straits of Magellan and Cape Horn all proper and in sinister chief point two herrings haurient in saltire argent crowned or, in a canton the united arms of Great Britain. Crest: A ship of three masts in full sail. Supporters, dexter: The emblematic figure of Britannia, with the shield, lance etc all proper; sinister: A fisherman completely clothed, with cap boots fishing net etc and in his hand a string of fish, all proper.[61]


Sounds interesting; unfortunately, for some reason a login is required, so I'll pass.

As long as the submissions are on behalf of humans, we should. The humans should accept the consequences too.

I shared this one with my son; the step where the 2ab expressions cancel out gave him a little aha moment.


Many go off-world to create real estate opportunities?


The author is stretching an analogy; it's the price to pay for starting with R^3 as a motivational example. There is nothing in the general definition of a vector space that requires its elements to be "indexed".
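
For reference, the standard definition makes the point directly (textbook material, not something from the article): none of the axioms mention coordinates or indices, and "unindexed" examples such as spaces of functions qualify just as well as R^3.

    % A vector space over a field F is a set V with two operations,
    % addition V x V -> V and scalar multiplication F x V -> V, such that
    % for all u, v, w in V and a, b in F:
    \begin{align*}
      & u + (v + w) = (u + v) + w, \qquad u + v = v + u, \\
      & \exists\, 0 \in V :\ v + 0 = v, \qquad \forall v\ \exists\, (-v) :\ v + (-v) = 0, \\
      & a(bv) = (ab)v, \qquad 1 \cdot v = v, \\
      & a(u + v) = au + av, \qquad (a + b)v = av + bv.
    \end{align*}
    % No axiom refers to an index; the set of all functions f : X -> R,
    % for an arbitrary set X, satisfies every one of them.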


"junior developers" is a convenient label, it is incorrect but it will take a bit until we come up something that describes entities that:

- can write code

- are tireless

- have no aspirations

- have no stylistic or architectural preferences

- have a massive, but at the same time not well defined, body of knowledge

- have no intrinsic memories of past interactions.

- change in unexpected ways when underlying models change

- ...

Edit: Drones? Drains?


- don't learn from what you tell them

- don't have career growth that you can feel good about having contributed to

- don't have a genuine interest in accomplishment or team goals

- have no past and no future. When you change companies, they won't recognize you in the hall.

- no ownership over results. If they make a mistake, they won't suffer.


Sounds like my teammates.


- don't learn from what you tell them

Whenever I have a model fix something new, I ask it to update the markdown implementation guides I have in the docs folder in my projects. I add these files to context as needed. I have one for implementing routes and one for implementing backend tests and so on.

They then know how to do stuff in the future in my projects.
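
For illustration (the path and contents here are invented, not taken from the parent's actual project), one of those guides might look like:

    docs/guides/implementing-routes.md

    # Implementing routes
    - Put new route handlers in src/routes/, one file per resource.
    - Register each route in src/routes/index.ts; routes are not auto-discovered.
    - Validate request bodies with the shared validator before touching the DB.
    - Add a request-level test next to the handler (see implementing-backend-tests.md).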


They still aren't learning. You're learning and then telling them to incorporate your learnings. They aren't able to remember this so you need to remind them each day.

That sounds a lot like '50 First Dates' but for programming.


> They aren't able to remember this

Yes, this is something people using LLMs for coding probably pick up on the first day. They're not "learning" as humans do, obviously. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and then restart from the beginning. The "learning" is you keeping track of what you need to include in the context; how exactly that process works is up to you. For some it's very automatic, and you don't add/remove things yourself; for others it's keeping a text file around that they copy-paste into a chat UI.

This is what people mean when they say you can kind of do "learning" (not literally) with LLMs.
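
A minimal sketch of that manual loop, in Python (the notes file name and the call_model stub are placeholders, not any particular vendor's API):

    # Poor man's "memory": keep a notes file and prepend it to every fresh session.
    from pathlib import Path

    NOTES = Path("llm_notes.md")  # lessons learned, curated by the human

    def call_model(prompt: str) -> str:
        # stand-in for whatever chat API or CLI you actually use
        raise NotImplementedError

    def build_prompt(task: str) -> str:
        notes = NOTES.read_text() if NOTES.exists() else ""
        return f"Project notes:\n{notes}\n\nTask:\n{task}"

    def remember(lesson: str) -> None:
        # when the model gets something wrong, the *human* records the fix here
        with NOTES.open("a") as f:
            f.write(f"- {lesson}\n")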


While I hate anthropomorphizing agents, there is an important practical difference between a human with no memory, and an agent with no memory but the ability to ingest hundreds of pages of documentation nearly instantly.


That is true, but does it actually matter if the outcome is the same? GP is saying they don't need to remind them.


The outcome is definitely not the same, and you need to remind them all the time. Even if you feed the context automatically, they will happily "forget" it from time to time. And you need to update that automated context again, and again, and again, as the project evolves.


They document how to do something they just figured out. They store/memorise it in a file.

Functionally, it works the same as learning.

If you look at it as a black box, you can't tell the difference from the inputs and outputs.


I believe LLMs ultimately cannot learn new ideas from their input in the same way as they can learn them from their training data, as the input data doesn't affect the weights of the neural network layers.

For example, let's say LLMs did not have any examples of chess gameplay in their training data. Would one be able to have an LLM play chess by listing the rules and examples in the context? Perhaps, to some extent, but I believe it would be much worse than if it were part of the training (which of course isn't great either).


50 first new Date()


Ah, so it's like you have a junior developer that can't learn


Can this additional prompt from you also be automated? I do this too, but I forget sometimes. I don't know if a general rule will be enough?


> I add these files to context as needed.

Key words: "I add" and "as needed".

> They then know how to do stuff in the future in my projects.

No. No, they don't. Every new session is a blank slate, and you have to feed those markdown files manually to their context.


The feeding can be automated in some cases. In GitHub Copilot you can put them under .github/instructions, and each instructions markdown file starts with a section containing a pattern that says which files the instructions apply to.
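
From memory, so double-check the current Copilot docs (the exact file naming and frontmatter key may have changed), an instructions file looks roughly like this:

    .github/instructions/backend-tests.instructions.md

    ---
    applyTo: "src/**/*.test.ts"
    ---
    Use the shared test harness in test/setup; never hit a real database.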


You can also have an index file that describes when to use each file (nest with additional folders and index files as needed) and tell the agent to check the index for any relevant documentation it should read before it starts. Sometimes it will forget and not consult the docs, but often it will consult the relevant docs first to load just the things it needs for the task at hand.
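
For example (file names invented for illustration), the index can be as simple as:

    docs/INDEX.md

    - guides/routes.md          read before adding or changing an HTTP endpoint
    - guides/backend-tests.md   read before writing or changing tests
    - guides/rules-engine.md    read before touching anything under src/rules/

    Check this index and read the relevant guide(s) before starting a task.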


So, again, they don't learn.


Do you want them to?


I would. I'm getting tired of redirecting them in the right direction from scratch every time.


I tend to think it would lead to them forming opinions about the people they interact with as they learn what it's like to interact with them, and that this would also influence their behaviour/outputs. Just imagining the day where copilot's chain of thought starts to include things like "Greg is bossy and often unkind to me in PR reviews. I need to set clear boundaries with him and discontinue the relationship if he will not respect them."


Doesn't this also consume context?


Having a good prompt file ("memory") is an artform.

The AI hype folks write massive fan fiction style novellas that don't have any impact.

But there's a middle ground where you tell the agent the specific things about your repo that it doesn't know from its training. Like if your application has a specific way to run tests headless, or it's compiled in a certain way that's not the default.


This works surprisingly well for Claude: https://github.com/obra/superpowers (in the context of rather small side projects in Elixir).

Unless, of course, the phase of the moon is wrong and Claude itself is stupid beyond all reason


Yes.


https://agents.md/

AGENTS.md exists; Codex and Crush support it directly. Copilot, Gemini and Claude have their own variants, and their /init commands look at AGENTS.md automatically to initialise the project.

Nobody is feeding anything "manually" to agents. Only people who think "AI" is a web page do that.
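
AGENTS.md itself is nothing exotic, just plain markdown the agent reads at startup; a minimal example (contents invented for illustration):

    # AGENTS.md

    ## Build & test
    - Install: npm ci
    - Run tests headless: npm test -- --run

    ## Conventions
    - TypeScript strict mode; no default exports.
    - Never edit generated files under src/gen/.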


Ah yes. AGENTS.md is a magical file that just appears out of thin air. No one creates it, no one keeps it updated, and LLMs always, without fail, not only consult it but never forget it, and in every new session know precisely what changed in the project and how to continue.

All of them often can't even find/read relevant docs in a new session without prompting


Literally every single CLI-based agent will show you a suggestion to run /init at startup.

And of course it's up to the developer to keep the documentation up to date. Just like when working with humans. Stuff doesn't magically document itself.

Yes "good code is self-documenting", but it still takes ages to find anything without docs to tell you the approximate direction.

It's literally a text file the agent can create and update itself. Not hard. Try it.


> Just like when working with humans. Stuff doesn't magically document itself.

Humans actually learn from the codebases they work with. They don't start with a clean slate every time they wake up in the morning. They know where to find information and how to search for it. They don't need someone to constantly update docs to point to changes.

> but it still takes ages to find anything without docs to tell you the approximate direction.

Which humans, unsurprisingly, can do without wiping their memory every time.


Imagine having these complaints about a screwdriver

It's a tool, not an intelligent being


Yeah, if my screwdriver undid the changes I just made to my mower, constantly ignored my desire to unscrew screws, and instead punched a hole in my carb, I'd be throwing that screwdriver in the garbage.


I do not need to babysit my screwdriver.


Yet.

Next year there will be an AI screwdriver your employer forces you to use.


At first, I thought “ponector’s forgotten to add the /s”

Then I realised that this will actually happen, and was sadly reminded we’re now in the post-sarcasm era.


And you can buy an AI screwdriver today from Amazon!


- don't learn from what you tell them

We'll fix that, eventually.

- don't have career growth that you can feel good about having contributed to

Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

- don't have a genuine interest in accomplishment or team goals

Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.

- have no past and no future. When you change companies, they won't recognize you in the hall.

Or on the picket line.

- no ownership over results. If they make a mistake, they won't suffer.

Good deal. Less human suffering is usually worth striving for.


> We'll fix that, eventually.

> Humans are on the verge of building machines that are smarter than we are.

You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".


That particular future exists now, it's just not evenly distributed. Gemini 2.5 Pro Thinking is already as good at programming as I am. Architecture, probably not, but give it time. It's far better at math than I am, and at least as good at writing.


Computers beat us in maths decades ago, yet LLMs are not able to beat a calculator half of the time. The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. You claiming much success in asking LLMs for math makes me question if you have actually asked an LLM about maths.

Most AI experts not heavily invested in the stocks of inflated tech companies seem to agree that current architectures cannot reach AGI. It's a sci-fi dream, but hyping it is real profitable. We can destroy ourselves plenty with the tech we already have, but it won't be a robot revolution that does it.


> The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. You claiming much success in asking LLMs for math makes me question if you have actually asked an LLM about maths.

What I really need to ask an LLM for is a pointer to a forum that doesn't cultivate proud exhibition of ignorance, Luddism, and general stupidity at the level exhibited by commenters in this entire HN story, and in this subthread in particular.

We already had one Reddit, we didn't need two.


Replace suffering with caring and have your AI write that again.


Why would I do a goofy thing like that?


> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

Have you ever spent any time around children? How about people who think they're accomplishing a great mission by releasing truly noxious ones on the world?

You just dismissed the entire notion of accountability as an unnecessary form of suffering, which is right up there with the most nihilistic ideas ever said by, idk, Dostoevsky's underground man or Raskolnikov.

Don't waste your life on being the Joker.


> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.

It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.


(Shrug) There are other sources of inspiration besides dystopic sci-fi movies. There's the Biblical story of the Tower of Babel, for instance. Better not work on language translation, which after all is how the whole LLM thing got started.


Sometimes fiction went in the wrong direction. Sometimes it didn't go far enough.

In any case, The Matrix wasn't my inspiration here, but it is a pithy way to describe the concept. It's hard to imagine how humans maintain relevancy if we really do manage to invent something smarter than us. It could be that my imagination is limited, though. I've been accused of that before.


> It's what we're supposed to be doing.

Why?


Because venture capital managers say so.


You'd also have no intrinsic memory of past interactions if we removed your hippocampus.

Coincidentally, the hippocampus looks like a seahorse (emoji). It's all connected.


> the hippocampus looks like a seahorse

Not to mention, hippocampus literally means "seahorse" in Greek. I knew neither of those things before today, thanks!


- constantly ignore your advice

- constantly give wrong answers, with surprising confidence

- constantly apologize, then make the same mistake again immediately

- constantly forget what you just told them

- ...


Sounds like a junior developer?

They can usually write code, but not that well. They have lots of energy and little to say about architecture and style. They don't have a well-defined body of knowledge and have no experience. Individual juniors don't change, but the cast members of your junior cohort regularly do.


The problem with AI agents like Claude is that they write VERY good code, and very fast.

But they don't have a grasp of the project's architecture and will reinvent the wheel for feature X even when feature Y has it or there is an internal common library that does it. This is why you need to be the "manager of agents" and stay on top of their work.

Sometimes it's just about hitting ESC and going "waitaminute, why'd you do that?" and sometimes it's about updating the project documentation (AGENTS.md, docs/) with extra information.

Example: I have a project with a system that builds "rules" using a specific interpreter. Every LLM wants to "optimise" it by using a pattern that looks correct but will in fact break immediately when there's more than one simultaneous user, and I have a unit test that catches it.

I got bored of LLMs trying to optimise that bit the wrong way, so I added a specific instruction, with the reasoning why it shouldn't be attempted and a note that it has been tried and has failed multiple times. And now they've stopped doing it =)
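
For illustration (not the actual wording from that project), such a guardrail entry in AGENTS.md or the docs can be as blunt as:

    ## Rules interpreter (src/rules/)
    Do NOT "optimise" the rule-building code by caching rule state in shared
    module-level variables. It looks correct but breaks as soon as there is more
    than one simultaneous user. This has been tried and reverted several times;
    the concurrency unit test will catch it. Keep rule state per request.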


I describe them in the Claude training I'm doing for my company as: super smart, infinitely patient, overeager interns.


Sometimes smart, sometimes the opposite, though. Perhaps due to memory loss.


Not sure "smart" or "dumb" are even the right axis to be judging them by, seems like intrinsically human traits.


Robots?


clankers.


"brooms"


*distant sound of the sorcerer's apprentice is heard*


Would a mention of loot boxes move you?


At least for 4D, would you not consider 3D-over-time a four-dimensional model? Doesn't watching the evolution as seen here allow for building up an intuition?


Well, what's interesting about 4D is that it's not just an extra dimension slapped on top; it comes with extra rotational degrees of freedom. You can't really get that with time (at least not until you get relativistic, and even then it would be a hyperbolic rotation, not a Euclidean one).
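
To make that concrete (standard linear algebra, not anything specific to the article): rotations act in 2D planes, and an n-dimensional space has n(n-1)/2 of them, so three in 3D (xy, xz, yz) but six in 4D (xy, xz, xw, yz, yw, zw). In particular, 4D allows a "double rotation" through two independent angles at once, which 3D-plus-time can't reproduce:

    % A 4D double rotation: angle alpha in the xy-plane and an independent
    % angle beta in the zw-plane, applied simultaneously.
    R(\alpha, \beta) =
    \begin{pmatrix}
      \cos\alpha & -\sin\alpha & 0 & 0 \\
      \sin\alpha &  \cos\alpha & 0 & 0 \\
      0 & 0 & \cos\beta & -\sin\beta \\
      0 & 0 & \sin\beta &  \cos\beta
    \end{pmatrix}
    % For generic nonzero alpha and beta there is no fixed rotation axis at all,
    % only the fixed origin, unlike any rotation you can watch happen in 3D.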


Sure you do: waves only exist in 4D, as they have a time vector (frequency).


What I'm talking about is something like this: https://en.wikipedia.org/wiki/Rotations_in_4-dimensional_Euc...

You can either sweep a cutting hyperplane through time or rotate a fixed projection or cut through time, but not both simultaneously.


The commercial models are not designed to win the imitation game (that is what Alan Turing named it). In fact, they are very likely to lose every time.

