I have some context here, as my dad used to work at a state college running "the systems". There was an era of thin clients and a centralized VAX machine or similar that did all the work. I remember weekends when my dad had to work because they were "running the numbers", which involved calculating grades and producing end-of-semester reports and such. Somehow this took more than a day of processing for a few thousand students and ran on a big tape machine. Sometimes it would crash, so someone had to be there to keep things moving.
I don't remember all the details, but this is what they used up til the mid-90s. By then, I could probably run something on my 486 home computer that would complete in half an hour. But there were decades of process and customization embedded in these systems.
When modernization happened, it was swift. My dad was lucky with the timing, as he was retiring during the transition, so he even made bonus money coming back as a consultant. But you can imagine that even if the new software was pricey and not as customizable, the speed improvements and reduction in staff made sense.
Once the old staff was cleared out, there was no longer a department being paid to build computing services, only the smaller staff needed to maintain and use them. The issue was that hardware/Internet usage expanded too fast, the importance of and reliance on tech grew, and having the newest systems in place became a selling point for universities.
It makes sense now for the pendulum to swing in the other direction, as customization and cost are wildly out of balance with AI and the latent tech workforce available at every college.
I would say the blocker now is the same as what allowed creaky old systems to persist into the 90s: administration doesn't give a shit about any of this, and it is only viewed as a cost center. Until differentiating through customization provides an obvious and immediate fiscal benefit to the admins themselves, most unis won't consider moving off their shitty landlord systems until they are basically forced to by the market.
In 5th grade, my best friend at the time was on a basketball team, just a small-town league for kids. I never really played basketball, so I was planning to watch the game and then we'd hang out. It was the first game of the season, and my friend was getting his uniform from a table when a dad running things asked me what team I was playing on. I said none, I was just there to hang out with my friend.
He shook his head and said, "No, that won't do. You're on his team, too" and handed me a jersey. Then he went ahead and paid my registration fee.
More than the money, it was the proactive nature of it that struck me at the time. The thing is, if I had asked my parents, they probably would have signed me up. But it was one of those things where it would never have crossed my mind to ask. I was one of those kids that needed a push every now and then and rarely got one.
I never got very good at basketball but I never missed a game and had a great time with my friend. So not a tragic or desperate story, but still meaningful to me all these years later.
I agree with the replies to this saying that the fact it could lead to drama should not prevent people from doing things like this, but I can see it causing trouble/resentment too.
I think a lot of the other unasked-for examples given could also cause resentment. Perhaps the right thing to do is often just taking the risk.
Things have always been able to go wrong. That's not a reason to stop doing things. Oh no, you might get an earful from an angry parent once in a while. Boo hoo.
The "world model" is what we often refer to as the "context". But it is hard to anticipate bad assumptions when the right ones seem obvious because of our existing world model. One of the first bugs I scanned past in LLM-generated code was something like:
if user.id == "id":
...
I wasn't anticipating that it would arbitrarily put quotes around a variable name. Other times it will do all kinds of smart logic, generate data with ids, then fail to use those ids for lookups, or something equally obvious.
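A hypothetical sketch of that second failure mode (the names and data here are made up, not from the original code): the model builds a dict keyed by id, then looks records up by name, so every lookup silently misses.

```python
# Sketch of the id-mismatch bug: records are keyed by id...
users = [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]
records = {u["id"]: u for u in users}

def find_user(name):
    # ...but the generated lookup uses the name as the key.
    # This never raises; it just always comes back empty.
    return records.get(name)

print(find_user("ada"))  # -> None, even though "ada" exists
```

Nothing crashes, which is exactly why this kind of bug is easy to scan past.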
The problem is that LLMs guess so much correctly that it is nearly impossible to predict how or why they might go wrong. We can mitigate this with heavy validation, iterative testing, etc. But the guardrails we need to actually make the results bulletproof go far beyond normal testing. LLMs can make such fundamental mistakes while easily completing complex tasks that we need to reset our expectations for what "idiot proofing" really looks like.
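One cheap guardrail in this spirit (a hypothetical sketch, not a prescription from the post): rather than trusting a generated lookup, fail loudly the moment an assumption about the data breaks.

```python
# Validate the assumption instead of silently returning nothing.
def lookup(records, key):
    if key not in records:
        sample = next(iter(records), None)
        raise KeyError(f"{key!r} not in records; keys look like {sample!r}")
    return records[key]

records = {1: "ada", 2: "bob"}
print(lookup(records, 1))  # -> "ada"
```

A plain `records.get(name)` would have returned None silently; the explicit check turns a wrong-key bug into an immediate, debuggable failure instead of quietly wrong output.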
To whatever extent this is true, it has very little to do with sticking it to "coders"; it is about magically solving/automating processes of any kind. Replacing programmers is small potatoes, and they are ultimately not a good candidate for replacement anyway. Programmers are ideal future AI operators!
What AI usage has underlined is that we are forever bound by our ability to communicate precisely what we want the AI to do for us. Even if LLMs were perfect, squishy instructions would get us squishy results. Given a well-crafted objective, appropriate context, and all the rest, they can respond just about perfectly. Then again, that is a lot of what programming has always been about in the first place: translating human goals into actionable code. Only the interface and abstraction level have changed.
Don't forget scuttling all the projects the staff has been working overtime to complete so that they can focus on "make it better!" *waves hands frantically*
For whatever reason, I can't get into Claude's approach. I like how Cursor handles this, with a directory of files (even subdirectories allowed) where you can define when it should use specific documents.
We are all "context engineering" now, but Claude expects one big file to handle everything? Seems like a dead-end approach.
CLAUDE.md should only be for persistent reminders that are useful in 100% of your sessions. Otherwise, you should use skills, especially if CLAUDE.md gets too long.
Also, just as a note, Claude already supports lazily loaded separate CLAUDE.md files that you place in subdirectories. It will read those if it dips into those dirs.
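A sketch of the layout being described, assuming a typical repo split (the directory names here are hypothetical):

```
repo/
├── CLAUDE.md          # always loaded: persistent, universal reminders
├── backend/
│   └── CLAUDE.md      # lazily loaded when Claude works inside backend/
└── frontend/
    └── CLAUDE.md      # lazily loaded when Claude works inside frontend/
```

This keeps the root file small while subdirectory-specific guidance only enters context when it's relevant.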
I think their skills have the ability to dynamically pull in more data, but so far I've not tested it much, since it seems more tailored towards specific actions. E.g., converting a PDF might translate nicely to the agent pulling in the skill doc, but I'm not sure it will translate as well to pulling in some rust_testing_patterns.md file when it writes Rust tests.
E.g., I toyed with the idea of thinning out various CLAUDE.md files in favor of targeted skill.md files. In doing so, my hope was to have less irrelevant data in context.
However, the more I thought this through, the more I realized the agent is doing "everything" I wanted to document each time. E.g., I wasn't sure that creating skills/writing_documentation.md and skills/writing_tests.md would actually result in less context usage, since both of those would be in memory most of the time. My CLAUDE.md is already pretty hyper-focused.
So yeah, anyway, my point was that skills might have the potential to offload irrelevant context, which seems useful. Though in my case, I'm not sure it would help.
This is good for the company; chances are you will eat more tokens. I liked Aider's approach: it wasn't trying to be too clever. It used the files added to the chat and asked if it figured out that something more was needed (like, say, settings in the case of a Django application).