VincentEvans's comments | Hacker News

You haven’t quite come to grips with mortality, I think.


I think OP is not entirely incorrect. Reproductive cells undergo processes like epigenetic reprogramming, which basically strips away many of the chemical marks (like DNA methylation patterns) that accumulate with age. That's one of the reasons babies don't start with the cellular age of their parents. Researchers can take adult cells and reprogram them back to an embryonic-like state using Yamanaka factors (a set of four genes), effectively erasing their biological age.

I think scientists currently are testing ways to "partially" reprogram cells to make them younger while keeping their function. Early studies in mice have shown some reversal of aging signs.

Seems like an engineering problem more than an absolute limitation.


DNA damage inevitably accumulates. The big reason children are younger than their parents, DNA-wise, is that the parents' DNA undergoes random recombination to create something that is a mixture of the two.

This doesn't help overall. Mixing two roughly equally broken things just yields the mean of the two. But the trick is that roughly 60 to 70% of conceptions will not survive to birth. This rejection sampling is ultimately what makes children younger.

If you had a population of single cells that didn't undergo this rejection sampling at some point, entropy and Muller's ratchet would actually age the entire population and kill it.
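
To make the rejection-sampling point concrete, here is a toy simulation (Python, made-up parameters, nothing like a real population-genetics model): each conception gets the mean of its two parents' mutation load plus some new damage, and optionally only the least-loaded ~35% of conceptions survive.

    import random

    NEW_MUTATIONS_MAX = 5      # new mutations per conception (made-up number)
    SURVIVAL_FRACTION = 0.35   # ~60-70% of conceptions don't survive, as above
    POP = 200
    GENERATIONS = 50

    def next_generation(pop, reject):
        # Produce an excess of conceptions, then optionally keep only the least loaded.
        n = int(len(pop) / SURVIVAL_FRACTION) if reject else len(pop)
        kids = []
        for _ in range(n):
            a, b = random.sample(pop, 2)
            kids.append((a + b) / 2 + random.randint(0, NEW_MUTATIONS_MAX))
        kids.sort()
        return kids[:len(pop)]

    def mean_load(reject):
        pop = [0.0] * POP
        for _ in range(GENERATIONS):
            pop = next_generation(pop, reject)
        return sum(pop) / len(pop)

    print("mean load with rejection sampling:   ", round(mean_load(True), 1))
    print("mean load without rejection sampling:", round(mean_load(False), 1))

Without the rejection step the load just climbs by the average mutation rate every generation; with it, the load grows far more slowly.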


You are right that DNA damage inevitably accumulates and that selection (including miscarriages) weeds out embryos with severe defects, but that doesn't fully explain why a newborn's biological age is near zero.

What scientists usually mean by "cellular age" isn’t mutation load, it’s the epigenetic and functional state of cells. During gametogenesis and early embryonic development DNA undergoes extensive repair, telomere maintenance and global epigenetic reprogramming that wipes and rewrites methylation patterns. This resets the cellular "clock" even though some mutations are passed on.

So while mutation load drifts slightly each generation, the reason babies start biologically young is this large scale reprogramming. That’s also why researchers are trying to mimic this process in adult cells (Yamanaka factors etc) to reverse aspects of aging.


Fully agree! I don't think life is much more than a sort of chemical engineering, "designed" with the "purpose" of self-replication. Our engineer, natural selection, didn't have "healthspan" in mind; insofar as we are human-making machines, we're pretty well built. I fail to see any reason that necessarily precludes a retooling of our internal machinery to accomplish our desires, not nature's.


Moreover, babies can clearly grow from limited cells into "young" versions of fully differentiated human tissues, which means that from some initial stock you can replace the vast majority of the body's cells with younger versions - i.e. with plausible, attainable technology we would generally expect to be able to grow immunologically identical replacement organs and major tissues. That definitely is an engineering problem, more than anything else.

The only truly troubling one is the brain, and we're very much not sure whether it actually is one or whether it, for example, simply suffers from the degradation of the body it's attached to - likely both - but we also know that the brain is not a static structure, and so replacement or rejuvenation of key systems would definitely be possible (certainly finding any way to protect the small blood vessels in the brain would greatly help with dementia).


Social shaming is a big way humans deal with unchangeable things. It imposes a cost on anyone expressing a desire for that thing to be different.

And it makes sense, really. You can't have a functioning society if everyone is running around freaking out about death all the time.

But we're entering a weird time where we might actually be able to add more good years to our lives. One of the steps towards getting there is being a little more okay with people seriously exploring these ideas.


I don't see the point in doing that


If mortality is just a tradition and you're the first to realize you needn't acquiesce, sure.

If not, the point in doing that is the enormous amount of suffering you create while thrashing against an inevitability.

That is not to say you should take naps and wait patiently for death, but it's a line to walk.


> If not, the point in doing that is the enormous amount of suffering you create while thrashing against an inevitability.

This is absurd. Of course mortality is inevitable -- eternity is a very long time -- but working to increase lifespan, prolong one's youth and vigor, and delay the inevitable doesn't cause an "enormous amount of suffering" (far less than the diseases of aging cause) and it's unfair to characterize it as "thrashing" when it can be approached in ways which are thoughtful and reasonable.


You aren't wrong. I was replying to a flippant declaration "I don't see the point [in coming to grips with mortality]", which is quite different from your nuanced reply.

I tried to convey that I'm not saying "this is as good as it gets and it's wrong to try for longer life". Your "thoughtful and reasonable" approach was exactly what I had in mind.

The suffering I say this leads to arises from denying that mortality is inevitable and tarring those who accept it as defeatists. Death is another part of life, as you acknowledged. It unnerves me to see denying that truth cast as a virtue.


> You haven’t quite come to grips with mortality

This is unfair, and akin to branding anyone who takes medicine as being unhinged.

There is evidence we can extend our health spans. By how much, and how, are open questions. And whether we can actually stop aging, versus slow it down, has not been demonstrated. Some people engage with this unhealthily, just as many terminally-ill cancer patients unhealthily engage with long-shot treatment options. That doesn't make everyone taking those treatments delusional.

I'd hope we're more mature as a society than to decry real medical research that could materially increase our health spans as heretical.


It's the 'indefinite' part that I react negatively to. I don't have a good impression of people who are obsessed with abolishing death, as opposed to your example of maximizing quality of life (or minimizing illness) without getting too hung up on overall age.


The person just said aging isn't a law of physics. They are right, you are the fool here.


Actually they said: "Aging isn’t a law of nature." But it kind of is. Almost all biological organisms age and the ones that don't are much simpler than us. That's not to mention entropy which is both a law of physics and dictates an inescapable form of aging for the universe as a whole.


They also said there isn't a physical reason, which is often taken to mean "it isn't a law of physics".

The fact that something happens doesn't mean it's a law of anything. Cars didn't exist before we built them - no law of "no cars". People died of TB before we had a cure - no law of "TB". Same for various types of cancer.

In practice, when someone says "live forever", they don't mean to imply they'll live the 10^100 (or whatever the guesstimates are) years to the end of the universe. They mean they'll stop aging in the sense that we do now. Maybe we could live to 10,000 or 50,000 or whatever. You can always get hit by a bus, or get some strange disease from a bat, or whatever.


The last time Meta blocked my account, it was because I gave away free framing lumber after demolishing my poorly framed basement. Somehow it got flagged and that was that. Thankfully I don't give a damn, and now never will.

PS: some couple happily picked up 100 or so 2x4 studs of various lengths to build a greenhouse for their garden with.


I got blocked for sending a post asking if anyone wants to grab a lunch when I'm back in (location).


Tangential, but a decade ago I lost my original Amazon account, because I bought bandages. Yes bandages.

I'd had it for 5 years, no excess returns, no issues. I click add and go to checkout... banned.

Some reason about religious icons flashed on my screen. It was Red Cross bandages, ffs!

And why ban me, and not the seller?!?

Calls, emails resulted in confused but unhelpful people.


They must have assumed it was a scam, like those Nigerian princes offering free gold.


There will be a new kind of job for software engineers, sort of like a cross between working with legacy code and toxic site cleanup.

Like back in the day being brought in to "just fix" an amalgam of FoxPro-, Excel-, and Access-based ERP that "mostly works" and only "occasionally corrupts all our data" that ambitious sales people put together over the last 5 years.

But worse - because "ambitious sales people" will no longer be constrained by the sandboxes of Excel or Access - they will ship multi-cloud edge-deployed Kubernetes micro-services wired with Kafka, and it will be harder to find someone to talk to in order to understand what they were trying to do at the time.


I met a guy on the airplane the other day whose job is to vibe code for people who can't vibe code. He showed me his Discord server (he paid for plane wifi), where he charges people $50/month to be in the server and he helps them unfuck their vibe coded projects. He had around 1000 people in the server.


So wait is he an actual software engineer doing this as a side hustle? Or like a vibe coder guru that basically only works with AI tools?


He said he used to be a software dev. Then he started consulting on the side making websites, doing SEO, and he just started doing that fulltime. But then SEO died because of AI (according to him anyways). Then he started vibe coding like a year or two ago and saw all these people posting in forums about how everything they made broke and they don't know what to do. So he started helping people for money and it turned into a thing.

I watched him text people and say "set up a lovable account, put in your credit card info then send me the login". Then he would just write some prompts for them on lovable to build their websites for them. Then text them back on discord and be like "done".

He said he had multiple tiers, like $50/month got you into the Discord and he would reply to your questions and whatever. But for $500/month he would do everything you want and just chat with you about what you wanted for your incredible Facebook-replacement app or whatever. But I mean most of the stuff seemed like it was just some small business trying to figure out a way to use the internet in 2025.

All this gave me anxiety because I'm here as an academic scientist NOT making $50/month × 1000 signups to vibe code for people who can't vibe code, when I definitely know how to vibe code at least. Haha. Maybe I should listen to all my startup friends and go work at a startup instead.


>> But then SEO died because of AI (according to him anyways).

Former web dev here, and I still do some SEO; for the most part, he's correct. I've posted on here multiple times over the last two to three years about how easy it is now to manipulate search engines.

Back in the day, when you needed content for SEO and needed it to be optimized, you had to find a content writer who knew how to do this, or write it yourself and hope that Google doesn't bury your site for stuffing your content with keywords.

Now? Any LLM can spin out optimized content in a few seconds. Any LLM can review your site, compare it to a competitor and tell you what you should do to rank better. All of the stuff SEO people used to do? You can do it now in the span of a few minutes with any LLM. This is lower hanging fruit than vibe coding and Google has yet to adjust their algorithm to deal with it.
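
To be concrete about how low the bar is now, here is roughly all it takes (a sketch assuming the OpenAI Python client; any LLM API works the same way, and the business and keyword are invented):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Write a 600-word services page for a residential plumber in Denver. "
        "Target the keyword 'emergency plumber Denver', use it naturally in the H1, "
        "two H2s, and the first paragraph, and end with a call to action."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)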

A few years ago, I cranked out an entire services area page for a client. I had AI write all the content. Granted, it was pretty clunky and I had to clean some of it up, but it saved me hours of trying to write it myself. We're talking some 20-30 pages that I gradually posted over the course of several months. Within days, every new page was ranking on page 1, within the top ten results.


I need to start hanging out in more lucrative forums, apparently.


You just might be in the right place. I'm asking the same question; wait until someone makes a directory website to sell you access to find those forums.


Any pointers on which forums these people hang out in?


I wish. He said that in the beginning he built a core group just with direct contacts, but then he started a YouTube channel to drive traffic to the Discord. He paid via my buymeacoffee.com link because I showed him my windowfied.com tool I made to let you have dir commands on osx instead of ls.

I hope you can meet him on a plane too.


Why would anyone want dir?


A big part of the reason that people develop solutions in Excel is that they don’t have to ask anyone’s permission. No business case, no scope, no plan, and most importantly no budget.

Unless a business allows any old employee to spin up cloud services on a whim we’re not going to see sales people spinning up containers and pipelines, AI or not.


What about a sales person interacting with an LLM that is already authz'd to spin up various cloud resources? I don't think that scenario is too far-fetched...


I imagine something along the lines of cloud platforms rolling out functionality that caters to the vibe-coding crowd - a one-stop shop: you enter your prompts and it spins up your code along with the infra. I mean why wouldn't they - seems like a goldmine.


Given how easy it is to spin up GCP resources with a text file, I'm surprised Gemini doesn't already offer this service. The prompt below gave me a 167-line file that uses Cloud Run, Cloud Build, Artifact Registry, Firestore, Maps, and IAM.

>I'm creating an app for dog walkers to optimize their routes. It should take all client locations and then look for dog-friendly cafes for the walker to get lunch and then find the best route. I'm vibe coding this on GCP. Please generate a Terraform file to allocate the necessary resources.


So very true.

And then over time these Excel spreadsheets become a core system that runs stuff.

I used to live in fear of one of these business analyst folks overwriting a cell or sorting by just the column and not doing the rows at the same time.

Also, VLOOKUPs are the devil.


Why is sorting by just the column a problem? And why are VLOOKUPs the devil? My undergrad was in finance, but I've self-learned a lot of CS.


It's possible to sort just a single column, leaving all the columns beside it in their original sort order. That's very bad if you want to keep your rows in one piece.


Oh, duh yeah. That's such a natural thing to avoid I hadn't considered it


Unless they're on Linux with LibreOffice, I fail to see how there's no budget involved for Excel. Initially you have to keep up with Windows licenses, then Office.


An Office license is a must in most companies. So it will be there beforehand, you don't have to have a special budget for it.


> and it will be harder to find someone to talk to in order to understand what they were trying to do at the time.

This will be the big counter to AI generated tools; at some point they become black boxes and the only thing people can do is try to fix them or replace them altogether.

Of course, in theory, AI tooling will only improve; today's vibe coded software that in some cases generate revenue can be fed into the models of the future and improved upon. In theory.

Personally, I hate it; I don't like magic or black boxes.


> or replace them altogether.

Before AI, companies were usually very reluctant to do a rewrite or major refactoring of software because of the cost, but that calculus may change with AI. A lot of physical products have ended up in this space where it's cheaper to buy a new product and throw out the old broken one rather than try and fix it. If AI lowers the cost of creating software then I'm not sure why it wouldn't go down the same path as physical goods.


Every time software has gotten cheaper to create the end result has been we create a lot more software.

There are still so many businesses running on pen and paper or excel spreadsheets or off the shelf software that doesn't do what they need.

Hard to say what the future holds but I'm beginning to see the happy path get closer than it looked a year or two ago.

Of course, on an individual basis it will be possible to end up in a spot where your hard earned skills are no longer in demand in your physical location, but that was always a possibility.


The prevailing counter narrative around vibe coding seems to be that "code output isn't the bottle neck, understanding the problem is". But shouldn't that make vibe coding a good tool for the tool belt? Use it to understand the outermost layer of the problem, then throw out the code and write a proper solution.


> [create prototype], then throw out the code and write a proper solution.

Problem is that, in everyone's experience, this almost never happens. The prototype is declared "good enough, just needs a few small adjustments", the rewrite is declared too expensive, too time-consuming. And crap goes to production.


Watching what was supposed to be a prototype become the production code is one of the most constant themes of my 20-year career.


Software takes longer to develop than other parts of the org want to wait.

AI is emerging as a possible solution to this decades old problem.


Everything takes longer than people want to wait. But when building a house, people are more patient and tolerant about the time taken, because they can physically see the progress, the effort, the sweat. Software is intangible and invisible except maybe for beta-testers and developer liaisons. And the visual parts, like the nonfunctional GUI or web UI, are often taken as "most of the work is done", because that is what people see and interact with.


It's product management's job to bridge that gap. Break down and prioritize complex projects into smaller deliverables that keep the business folks happy.

It's better than houses, IMO - no one moves into the bedroom once it's finished while waiting for the kitchen.


No, the org will still have to wait for the requirements, which is what they were waiting for all along.


Until the whole company fails because of a lack of polish and security in the software. Think of the Tea app's openly accessible databases...


Is there any evidence the Tea app failure was due to AI use?


Or as a new problem that will persist for decades to come.


I don't really see this as a universal truth, with corporate customers stalling the process for up to 2 years or end users being reluctant to change.

We were deploying new changes every 2 weeks and it was too fast. End users need training and communication, pushback was quite a thing.

We also just pushed back the aggressive timeline we had for migration to new tech. Much faster interface with shorter paths - but users went all pitchforks and torches just because it was new.

But with AI fortunately we will get rid of those pesky users right?


Different situation. You already had a product that they were quite happy with, and that worked well for them. So they saw change as a problem, not a good thing. They weren't waiting for anything new, or anything to improve, they were happy on their couch and you made them move to redo the upholstery.


They were not happy, otherwise we would not have had new requirements.

Well maybe they were happy but software needs to be updated to new business processes their company was rolling out.

Managers wanted the changes ASAP - their employees not so much, but they had to learn that the hard way.

The not-so-fun part was that we got the blame. Just like I got downvoted :), not my first rodeo.


Yes, that's how it is. And that is a separate problem. And it also shifts the narrative a bit more towards 'the bottleneck is writing good code'.


This is the absolute reality.

I think we'll need to see some major f-ups before this current wave matures.


> Problem is

How much is it a problem, really ?

I mean, what are the alternatives ?


The alternative is obviously: Do it right on the first try.

How much of a problem it is can be seen with the tons of products that are crap on release and only slowly get patched to a half-working state when the complaints start pouring in. But of course, this is the status quo in software, so the perception of this as a problem among software people isn't universal, I guess.


Sure.

How about the tons of products we don't even see? Those that tried to do it right on the first try, then never delivered anything because they were too slow and expensive. Or those that delivered something useless because they did not understand the users' needs.

If "complaints start pouring in", that means the product is used. This in turns can mean two things: 1/ the product is actually useful despite its flaws, or 2/ the users have no choice, which is sad.


> How about the tons of products we don't even see? Those that tried to do it right on the first try, then never delivered anything because they were too slow and expensive.

I would welcome seeing fewer new crappy products.

That dynamic leads to a spiral of ever crappier software: You need to be first, and quicker than your competitors. If you are first, you do have a huge advantage, because there are no other products and there is no alternative to your crapware. Coming out with a superior product second or third sometimes works, but very often doesn't, you'll be an also-ran with 0.5% market share, if you survive at all. So everyone always tries to be as crappy and as quick as possible, quality be damned. You can always fix it later, or so they say.

But this view excludes the users and the general public: Crapware is usually full of security problems, data leaks, and harmful bugs that endanger people's data, safety, security and livelihood. Even if the product is actually useful at first, in the long term the harm might outweigh the good. And overall, by the aforementioned spiral, every product that wins this way damages all other software products by being a bad example.

Therefore I think that software quality needs some standards that programmers should uphold, that legislators should regulate and that auditors should thoroughly check. Of course that isn't a simple proposition...


I agree. Crapware is crapware by design, not because there was a good idea but the implementation was lacking. We're blessed that poor ideas were bogged down by poor implementation. I'm sure a few good things may have slipped through the cracks, but it's a small price to pay.


Exactly. There is a reason for the push. The natural default of many engineers is to "do things properly", which often boils down to trying to guess all kinds of possible future extensions (because we have to get the foundations and the architecture right), then everything becomes abstracted and there's this huge framework that is designed to deal with hypothetical future needs in an elegant and flexible way with best practices etc. etc. And as time passes the navel-gazing nature of the project grows, where you add so much abstraction that you need more stuff to manage the abstraction, generate templates that generate the config file to manage the compilation of the config file generator etc.

Not saying this happens always, but that's what people want to avoid when they say they are okay with a quick hack if it works.


Coding is how I build a sufficiently deep understanding of the problem space--there's no separating coding and understanding for me. I acknowledge there's different ways of working (and I imagine this is one of the reasons a lot of people think they get a lot more value out of LLMs than I do), but like, having Cursor crank code out for me actually slows me down. I have to read all the stuff it does so I can coach it into doing better, and also use its work to build a good mental model of the problem, and all that takes longer than writing the code myself.


Well, actually there could be a separate step: understanding is done during and after gathering requirements, before and while writing specifications. Only then are specifications turned into code.

But almost no-one really works like that, and those three separate steps are often done ad-hoc, by the same person, right when the fingers hit the keys.


I can use those processes to understand things at a high level, but when those processes become detailed enough to give me the same level of understanding as coding, they're functionally code. I used to work in aerospace, and this is the work systems engineers are doing, and their output is extremely detailed--practically to the level of code. There's downsides of course, but the division of labor is nice because they don't need to like, decide algorithms or factoring exactly, and I don't need to be like, "hmm this... might fail? should there be a retry? what about watchdog blah blah".


> Well, actually there could be a separate step: understanding is done during and after gathering requirements, before and while writing specifications. Only then are specifications turned into code.

The promise of coding AI is that it can maybe automate that last step so more intelligent humans can actually have time to focus on the more important first parts.


We used to call that Waterfall, and it has been frowned upon for a while now.

So we went full circle, again.


Waterfall is a caricature, a straw-man process where you can never ever go back to the drawing board and change the requirements or specifications. The defining characteristic is the big design up front: you can never go back, and you really, really have to do everything in strict order for the whole of the project.

Just having requirements and a specification isn't necessarily waterfall. Almost all agile processes at least have requirements, the more formal ones also do have specifications. You just do it more than once in a project, like once per sprint, story or whatever.


Waterfall certainly has processes for going back and adjusting previous steps after learning things later in the process. The design was updated if something didn't work out during implementation, and of course implementation was changed after errors were found during testing.

Now that agile practitioners have learned that requirements and upfront design actually are helpful, the only difference seems to be that the loops are tighter. That might not have been possible earlier without proper version control, without automated tests, and with the software being delivered on physical media. A tight feedback loop is harder when someone has to travel to your customer and sit down at their machines to do any updates.


That thinking and understanding can be done before coding begins, but I think we need to understand the potential implementation layer well in order to spec the product or service in the first place.

My feeling is that software developers will end up working in this type of technical consultant role once LLM dominance has been universally accepted.


> Personally, I hate it; I don't like magic or black boxes.

So, no compilers for you either?

(To be fair: I'm not loving the whole vibe coding thing. But I'm trying to approach this wave with an open mind, and looking for the good arguments on both sides. This is not one of them.)


Apart from various C UB fiascos, the compiler is neither a black box nor magic, and most of the worthwhile ones are even deterministic.


Sorry for the off-topic question, but are there any non-deterministic compilers you can name? I'd been wondering for a while if they actually exist.


Accidental non-determinism in compilers is fairly easy to get if you use sort algorithms and containers that aren't "stable". You can then get situations where OS page allocation and things like different filenames give different output. This is why "deterministic builds" weren't just the default.

Actual randomness is used in FPGA and ASIC compilers which use simulated annealing for layout. Sometimes the tools let you set the seed.
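
A toy example of the "accidental" kind - not a compiler, but the same mechanism of emitting output from an unordered container (Python, hypothetical symbol names):

    # Run twice and diff the output. Unless PYTHONHASHSEED is pinned, the set's
    # iteration order can differ between interpreter runs, so identical input
    # produces differently-ordered "generated code".
    symbols = {"init", "update", "render", "teardown"}

    for name in symbols:   # unordered container -> non-deterministic emission order
        print(f"void {name}(void);")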


I think you're misunderstanding. AI is not a black-box, and neither is a compiler. We(as a species) know how they work, and what they do.

The 'black-boxes' are the theoretical systems non-technical users are building via 'vibe-coding'. When your LLM says we need to spin up an EC2 instance, users will spin one up. Is it configured? Why is it configured that way? Do you really need a VPS instead of a Pi? These are questions the users, who are building these systems, won't have answers to.


If there are cryptographically secure program obfuscation (in the sense of indistinguishability obfuscation) methods, and someone writes some program, applies the obfuscation method to it, publishes the result, deletes the original version of the program, and then dies, would you say that humanity "knows how the (obfuscated) program works, and what it does"? Assume that the obfuscation method is well understood.

When people do interpretability work on some NN, they often learn something. What is it that they learn, if not something about how the network works?

Of course, we(meaning, humanity) understand the architecture of the NNs we make, and we understand the training methods.

Similarly, if we have the output of an indistinguishability obfuscation method applied to a program, we understand what the individual logic gates do, and we understand that the obfuscated program was a result of applying an indistinguishability obfuscation method to some other program (analogous to understanding the training methods).

So, like, yeah, there are definitely senses in which we understand some of "how it works", and some of "what it does", but I wouldn't say of the obfuscated program "We understand how it works and what it does.".

(It is apparently unknown whether there are any secure indistinguishability obfuscation methods, so maybe you believe that there are none, and in that case maybe you could argue that the hypothetical is impossible, and therefore the argument is unconvincing? I don't think that would make sense though, because I think the argument still makes sense as a counterfactual even if there are no cryptographically secure indistinguishability obfuscation methods. [EDIT: Apparently it has in the last ~5 years been shown, under relatively standard cryptographic assumptions, that there are indistinguishability obfuscation methods after all.])


> AI is not a black-box

Any worthwhile AI is non-linear, and its output is not able to be predicted (if it were, we'd just use the predictor).


> There will be a a new kind of job for software engineers

New? New!?

This is my job now!

I call it software archeology — digging through Windows Server 2012 R2 IIS configuration files with a “last modified date” about a decade ago serving money-handling web apps to the public.


WebForms?


Yes, and classic ASP, WCF, ASP.NET 2.0, 3.5, 4.0, 4.5, etc…

It’s “fun” in the sense of piecing together history from subtle clues such as file owners, files on desktops of other admins’ profiles, etc…

I feel like this is what it must be like to open a pharaoh’s tomb. You get to step into someone else’s life from long ago, walk in their shoes for a bit, see the world through their eyes.

“What horrors did you witness brother sysadmin that made you abandon this place with uneaten takeaway lunch still on your desk next to the desiccated powder that once was a half drunk Red Bull?”


When Claude starts deploying Kafka clusters I’m outro



Still don't know why you need an MCP for this when the model is perfectly well trained to write files and run kubectl on its own.


If it can run kubectl it can run any other command too. Unless you're running it as a different user and have put a bit of thought into limiting what that user can do, that's likely too much leeway.

That's only really relevant if you're leaving it unattended though.


You can control it with hooks. Most people I know run in yolo mode in a docker container.


What about being in a docker container lets you `kubectl get pod` but prevents you from `kubectl delete deployment`?


This is more about the service account than the runtime environment, I think. If you put your admin service account in Docker, the agent can still wreak havoc. Docker lets you hide the admin service account on your host FS from the agent.


Keeping the powerful credentials where the agent can't reach them does buy you a bit of safety. But I still think it's a bit loose when compared with exposing an API to the model which can only do what you intend for that model to do.


Sure, fair enough. I guess I'm mostly being pragmatic here.

Plus I'm not convinced that generating "kubectl"...json..."get"...json..."pod"... is easier for most models than "bash"...json..."kubectl get pod"...
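
i.e. the two payloads the model has to emit look something like this (illustrative shapes only, not any vendor's exact tool-call schema):

    # Dedicated MCP-style tool vs. a generic bash tool (field names are illustrative).
    mcp_style_call = {
        "name": "kubectl_get",
        "arguments": {"resource": "pod", "namespace": "default"},
    }

    bash_style_call = {
        "name": "bash",
        "arguments": {"command": "kubectl get pod -n default"},
    }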


Yes... a docker container...


Not sure about the MCP, but I find that using something (RAG or otherwise provide docs) to point the LLM specifically to what you're trying to use works better than just relying on its training data or browsing the internet. An issue I had was that it would use outdated docs, etc.


Claude is, some models aren't. In some cases the MCPs do get the models to use tools better as well due to the schema, but I doubt kubectl is one of them (using the git mcp in claude code... facepalm)


Yeah, fair enough lol… usually I end up building model-optimized scripts instead of MCPs, which just flood the context window with JSON and UUIDs (looking at you, Linear) - much better to have Claude write 100 lines of TS to drop a markdown file with the issue and all the comments and no noise.


> on its own

Does it? Did you forget the prompts? MCP is just a protocol for tool/function calling, which in turn is part of the prompt, quite an important part actually.

Did you think AI works by prompts like "make magic happen" and it... just happens? Anyone who makes dumb arguments like this doesn't deserve a job in tech.


I’ve literally asked Claude Code to look at and fix an issue on a cluster and it knows to use the cli utils.


Because Claude has that as a built-in tool. Try Claude on web and see how useless AI is without tools.

And don't even get me started on giving AI your entire system in one tool; it's good for toying around only.


Why would I use Claude on web to do that? Why would I use the wrong tool for the job?


I am not saying you should. I am pointing out AI without tools (which I believe is what you think of when you refer to MCP) is useless.


I allowed Claude to debug an ingress rule issue on my cluster last week for a membership platform I run.

Not really the same since Claude didn’t deploy anything — but I WAS surprised at how well it tracked down the ingress issue to a cron job accidentally labeled as a web pod (and attempting to service http requests).

It actually prompted me to patch the cron itself, but I don't think I'm bullish enough yet to let CC patch my cluster.


Oh yeah, we had Claude diagnose a production k8s Redis outage last week (it figured out that we needed to launch a new instance in a new AZ to pick up the previous Redis' AZ-scoped EBS PVC after a cluster upgrade).


I have seen a few dozen Kafka installs.

I have seen one Kafka install that was really the best tool for the job.

More than a handful of them could have been replaced by Redis, and in the worst cases could have been a table in Postgres.

If Claude thinks it's fine, remember it's only a reflection of the dumb shit it finds in its training data.


Superfund repos.


Now that's an open source funding model governments can get behind.


A lot of big open source repos need to be given the superfund treatment


A whole bigger lot of closed source software needs to be given the superfund treatment!


What makes you so sure it will have a repo?

I don’t recall the last time Claude suggested anything about version control :-)


Claude will give you what you asked for. My sensible-chuckle moment was when I asked it to create a demo ASP.NET web API and it did everything but add the [Authorize] attribute or any kind of authentication. I asked what was missing, and until I mentioned it, it didn't mention authentication or authorization at all.


> Claude will give you what you asked for.

And how many know they need to ask for version control?


"As per my last email that contained the code claude wrote in a .pdf file I would like you to ask to fix two different users being able to see each others data if they are logged in at the same time, thank you for your attention in this matter."


Does anyone remember the websites that FrontPage and Dreamweaver used to generate from their WYSIWYG editors? It was a nightmare to modify manually and convinced me never to rely on generated code.


I agree that the code that Dreamweaver generated was truly awful. But compilers and interpreters also generate code, and these days they are very good at it. Technically the browser's rendering engine is a code generator as well, so if you're hand-coding HTML you're still relying on code generation.

Declarative languages and AI go hand in hand. SQL was intended to be a ‘natural’ language that the query engine (an old-school AI) would use to write code.

Writing natural language prompts to produce code is not that different, but we’re using “stochastic” AI, and stochastic means random, which means mistakes and other non-ideal outputs.


I definitely remember that. Got paid $400 for my very first site in the early 00s.

But we also didn't have an AI tool to do the modifying of that bad code. We just had our own limited-capacity-brain, mistake-making, relatively slow-typing selves to depend on.


I still remember that FrontPage exploit in which a simple Google search would return websites that still had the default FrontPage password, and thus you could log in and modify the webpage.


Developers do that too. Consultants have been doing rescue projects for quite a long time. I don't think anything has changed or will change on that front.


Agreed, sometimes it seems like there are only two types of roles: maintaining / updating hot-mess legacy code bases for an established company, or working 100 hours a week building a new hot-mess code base for a startup. Obviously oversimplifying, but just my very limited experience scoping out postings and talking to people about current jobs.

Regardless this just made me shudder thinking about the weird little ocean of (now maybe dwindling) random underpaid contract jobs for a few hours a month maintaining ancient Wordpress sites...

Surely that can't be our fate...


> Developers do that too.

Not at that speed. Scale remains to be seen, so far I'm aware only of hobby-project wreck anecdotes.


>it will be harder to find someone to talk to in order to understand what they were trying to do at the time.

IMHO, there's a strong case for the opposite. My vibe coding prompts are along the lines of "Please implement the plan described in `phase1-epic.md` using `specification.prd` as a guide." The specification and epics are version controlled and a part of the project. My vibe coded software has better design documentation than most software projects I've been involved in.


I assume you have some software engineering fundamentals training.


Training? Not a lick. I took AP Pascal back in High School...


Do we have a method to let AI analyze the data within the DBs and figure out how to port it to a well-designed DB? I'm a fan of the philosophy of writing strong data structures and stupid algorithms around them, your data will outlive your application, etc. A simple example is a MongoDB field which stores the same thing as an int or a string, relationships without foreign keys in Postgres, etc. Then frustrating shit like somebody creating an entire table since he can't `ALTER TABLE ADD COLUMN`.
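
For the mixed-type example, even a quick profiling pass makes the mess visible before any AI gets involved. A minimal sketch with pymongo (connection string, db and collection names are hypothetical):

    # pip install pymongo
    from collections import Counter, defaultdict
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

    field_types = defaultdict(Counter)
    for doc in coll.find({}, limit=10_000):        # sample, don't scan everything
        for field, value in doc.items():
            field_types[field][type(value).__name__] += 1

    for field, counts in sorted(field_types.items()):
        if len(counts) > 1:                        # more than one type seen -> inconsistent
            print(field, dict(counts))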


"Claude, connect to DB A via FOO and analyze the data, then figure out to to port it to well designed DB B, come back to me with a proposal and implementation plan"


I think we're already there [0].

[0] https://x.com/PovilasKorop/status/1959590015018652141

I'm really curious about what other jobs will pop up. As long as there is an element of probability associated with AI, there will need to be manual supervision for certain tasks/jobs.


> it will be harder to find someone to talk to in order to understand what they were trying to do at the time.

These are my favorite types of code bases to work on. The source of truth is the code. You have to read it and debug it to figure it out, and reconcile the actual behaviors with the desired or expected behaviors through your own product oriented thinking


The description makes it sound like someone wanted to deploy a single static site and followed a how to article they found on hacker news.


It's alright because you can shove all of that into an LLM and have it fixed instantly.


See, you're using the definition of "Fixed" from the future, not the current definition of fixed.


FoxPro, the horror.


This whole discussion is blowing my mind!

When I hit your comment:

1. I thought, "YES! Indeed!"

2. Then, "For Sale: Baby Shoes."

3. The similar feel caused me to do a rethink on all this. We are moving REALLY fast!

Nice comment


Sorry all. The short comment parent to mine tells a very suggestive story with high brevity. This is similar to Hemingway writing a story in a few words: "For Sale: Baby Shoes."

The hook aspect of these appears similarly suggestive and brief and I thought that intriguing and thought provoking given the overall subject matter.

And that just gave me some reference to the speed this whole tech branch has.


I for one can’t wait. It will be absolutely spectacular!


Who are these trades going to sell their services to when a large proportion of people employed in white collar work are looking at a prospect of reduced income or loss of jobs?


The economy functioned without large numbers of office workers in the past, and there are regions of the country where this is still the case. To an extent they will sell their services to each other. To another extent they will be selling to the owners of AI (imagine an electrician building out a data center). The economic surplus will still be there - it will be larger in fact - and there will still be a need for their services. The players involved will change however.


"In the past" trades did not enjoy nearly the income levels they do now. The rise in demand for their services and the corresponding rise in their compensation are linked to the wealth of the other half of the economy.


Maybe they are all mostly dead, and ever-more-feral survivors riddled with crippling radiation- and pollution-borne genetic sicknesses are birthing stillborns and slowly dying out while picking through the debris left from the civilizational collapse caused by global warming, AI, and the resulting world wars.

And the last stronghold of civilization are genetically superior, warlike, numerous, but illiterate Tate descendants hidden in the mountains of Romania, unable to build anything more advanced than a cudgel used in the rituals to determine the alpha leader.


Most time travel theories ignore the fact that the earth is not fixed in space. It is moving relative to the sun in the solar system, the solar system is moving relative to the center of the galaxy, and the galaxy is… etc. The motion in each of these systems is not 100% accurately predictable forward or backward in time.

This fact alone means that any time traveler is most likely to arrive in the middle of empty space.
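
Some rough numbers on how far "where Earth was" drifts (back-of-the-envelope, approximate speeds):

    # Earth's displacement over a time jump, using approximate speeds.
    ORBITAL_SPEED_KM_S = 30      # Earth around the Sun, ~30 km/s
    GALACTIC_SPEED_KM_S = 230    # Solar System around the galactic centre, ~230 km/s
    EARTH_RADIUS_KM = 6_371

    for label, seconds in [("1 hour", 3_600), ("1 day", 86_400), ("1 year", 31_557_600)]:
        orbital = ORBITAL_SPEED_KM_S * seconds
        galactic = GALACTIC_SPEED_KM_S * seconds
        print(f"{label}: ~{orbital:,.0f} km along the orbit "
              f"(~{orbital / EARTH_RADIUS_KM:,.0f} Earth radii), "
              f"~{galactic:,.0f} km around the galaxy")

Even a one-hour jump leaves the planet roughly a hundred thousand kilometres from where it "was", so any miss in the trajectory calculation puts you in vacuum.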


> Most time travel theories ignore the fact that the earth is not fixed in space

This is a misconception that bugs me. The problem isn't that the Earth isn't fixed in space, it's that there's no such thing as a fixed point in space. Position is only defined relative to other objects. If you're going to use time travel in a story or something then it has to use something like an anchor object to determine destination. I.e. the relative location of the traveler and the anchor is replicated from the future to the past.


We would assume that the time-space traveler would have to tell the machine both the time and space directions from their current position in time and space. Assuming the time-space traveler cannot stop to observe his or her "location" in time-space coordinates along the way in small increments, he would have to calculate the entire travel trajectory beforehand.

I am saying this trajectory calculation relative to current coordinates is impossible. Even modern satellites with super precise instruments still need regular ongoing "adjustments." Time travel requires many orders of magnitude more precision than satellite orbital maintenance.


Go backwards in time 15 minutes at a time. At this short distance your calculation error will be small, and you can land your hovercraft back on earth to correct for any drift. Then go backwards another 15 minutes, repeat ad infinitum. Even present day aircraft have autopilot, so surely this can be automated too.


I think you have a good premise for a science fiction story right here: Say some "magic" (i.e. invented) physics quirk allows you to travel both into the future and the past, but all you can do is essentially accelerate and rewind time drastically. You don't "jump" to a time, there's still a physical presence, and colliding would have catastrophic results.

The logistical impacts from that would yield plenty of storytelling material: If you want to travel back in time, you need some ancient cellar that has been undisturbed since the target timeframe. If you want to skip forward, you need to establish that cellar, and round trips are limited by the space available.


That is - essentially - how 2002's _The Time Machine_ showed travel: Alexander's machine was 'stationary' on the earth, but time passed around him in a massively accelerated manner


To take this one step further - go back one infinitesimal in time and adjust position by one infinitesimal; thusly, a fixed time machine.


Relativity just says "nothing about space seems to require a preferred reference frame", not "such a thing as a preferred reference frame can't possibly exist". If we're allowing for the discovery of time travel in the story, I'm willing to allow for such a discovery as well.

In reality I'd bet neither are realistic, but that's what makes the stories interesting.


Even if you could magically arrive at the right point, how would you get the right momentum? If the Earth were standing completely still, it would still be spinning at a horrendous speed.


I'll give a half-baked counter to this: we know gravity impacts the flow of time through relativity. There is currently no evidence that time travel wouldn't be impacted by gravity in some way. Maybe the way time in time travel interacts with gravity protects you from this problem? Probably not, but it has just as much evidence to support it as your claim of time travel will dump you in empty space.


You're positing that some unknown influence will cause everything to work out well in the end, without any evidentiary basis. Occam's Razor suggests that you're more likely to be wrong than the parent.

Of course the idea that your point of origin must be fixed from time A to time Z if you’re willing to allow for time travel is itself flawed. If you could somehow move an object to an arbitrary time you could move them to an arbitrary point in space, and your ability to calculate may be significantly greater on the grounds that you’d have more advanced technology than us. It’s all scifi woo though until someone actually time travels.


I disagree with this interpretation of what I said. We HAVE evidence that time and gravity interact. It's actually more of a violation of Occam's Razor to suggest that time travel is somehow exempt from that interaction than to claim that, yes, time travel should in some way be subject to the influence of gravity.


It's even crazier if you imagine that the whole universe might be countless universe-lengths away from its starting point every microsecond, for all we know. Acceleration is the only thing we feel.

You’re very likely to travel into an undefined void even if you map out and calibrate the whole system.


That's why it is important not to mess up the coordinate system. With wrong calculations they fall to the ground. Or they are buried underground. And space is full of frozen bodies.


1. Genetically superior 2. Tate Descendants

Pick One.


Tall, powerful, beautifully bald, multitudinous and disease resistant!


Why are you giving Tate free advertising on a completely unrelated post? Just saying that asshole's name risks exposing more people to him.

Streisand Effect, people. If you hate someone and want them to go away you have to completely stop mentioning them online.


Seems to me your comment has more of a Streisand Effect quality than the one you're replying to.


I always thought that the passages that talk about Smeagol before he was corrupted by the ring made it rather easy to think of him as a hobbit or maybe a human.


Those come from the Lord of the Rings where Gandalf makes it clear that Gollum is/was a Hobbit or a very close relation.


I can’t even tell how this would be different from what is commonly termed as human trafficking.


willing participants perhaps


Curious, just based on the facts of what has been stated - does this pass the sniff test for you, not exploitative at all?


While until recently I had to, in the name of flight safety, carefully pack my bags while consulting the sizes of shampoo containers allowed in the carry-on baggage, surrender my unapproved nail clippers, and with my shoes in hand and pants belt-less - stand in line to be x-rayed and patted down on my way to board a plane…

… someone can - without anyone ringing any alarm bells and without fazing the local law enforcement one bit - take off multiple times unnoticed and unidentified in a private plane, and, if they choose to, fly it straight into a freshly refueled jet that I am sitting in waiting to take off.

Shhh, hope “terrorists” don’t read this comment. Or the article in LA Times.


Well, yeah. Anyone can own a small plane if they have the money. There’s plenty of uncontrolled airspace and uncontrolled airports.

Good God! What would happen if someone rented a box truck and bought some fertilizer?

Oh, and civilians can own muzzle-loading black powder cannons. Imagine what someone could do with a 32-pound cannonball.

The reality is anyone with the proper skills can crash a plane into anything they like. Unless you have someone on the roof with a MANPADS, no one is going to be able to stop them in time.


> Oh, and civilians can own muzzle-loading black powder cannons. Imagine what someone could do with a 32-pound cannonball.

At many historical locations, said cannons are just sitting around entirely unguarded! Anyone[0] could just come and take one.

[0]...equipped with heavy equipment and maybe a hefty grinder or a stout set of bolt cutters.


They could steal a smaller cannon first and use that on any chains or locks.


They're all plugged with concrete


Concrete plug, steel barrel.

Easily remedied!


This is rather hysterical. "Alarm bells" - both metaphorical and physical - would absolutely be going off if a Cessna was not responding on radio and headed anywhere near an airport operating passenger jets. Corona Muni isn't LAX.


To be fair, few people know anything about aviation other than being miffed at the grand inconvenience of obeying the rules of scheduled passenger flight services.


To be accurate I am “miffed” at the blasé response of airport admin and local police. No “criminal negligence”, no “dereliction of duty”. Not even administrative punishment for utter incompetence at a primary job with rather serious potential consequences.


Isn't this a bit like being mad at Joe's Unstaffed Parking Garage because someone "borrowed" your car that you left there for the week?


Probably closer to a rent-a-car lot. Most GA pilots rent, they don't own. Owning only (kinda) makes sense if you fly A TON. Otherwise all the timed maintenance eats you alive. On a plane you have lots of "every N months" work items, even if you don't use it.


That’s basically the right analogy.


...which betrays a lack of knowledge of aviation beyond the inconveniences of scheduled passenger flight services.

There is an entire world of aviation outside of commercial airlines flying airliners out of large, towered airports with fancy terminal buildings. An aircraft is a vehicle like any other, and operating one is regulated in tiers like any other type of vehicle. It's about as inane to gripe that an untowered recreational airport is not regulated to the same extent as the airports you fly commercially out of, as it would be to gripe that you driving your car out of your home is not regulated to the same extent as driving a school bus.


Or, to make the point more salient, a rowboat in a lake, vs a containership in a deep water port.


*in the name of security theater

General aviation, in the U.S. at least, runs largely on the honor system. To fly in controlled airspace these days, ADS-B out is required, and there are definitely records of where people go


There is a big difference between a Cessna 172 with a gross weight of 2,450 pounds, including the 56 gallons of fuel, and an A380 with a maximum takeoff weight of 1,268,000 pounds and 65,000 gallons of fuel.

Did you know the Twin Towers were actually designed to withstand a jet? https://archive.seattletimes.com/archive/19930227/1687698/tw...

Except they assumed that it'd be a 707 and also that it'd be at a landing speed of about 180 mph ... not a 767 (which could be as much as 2x a 707 in takeoff weight) doing almost 600 mph.

A plane larger than a Cessna, but still no jumbo jet, crashed into a mall https://www.eastbaytimes.com/2010/12/21/the-sunvalley-mall-p... - 7 people died. Tragic, but it goes to show that SIZE DOES MATTER.
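
Rough kinetic-energy numbers to back that up (0.5 x m x v^2 with approximate weights and speeds; all figures are ballpark):

    LB_TO_KG = 0.4536
    MPH_TO_MS = 0.44704

    def kinetic_energy_mj(weight_lb, speed_mph):
        m = weight_lb * LB_TO_KG
        v = speed_mph * MPH_TO_MS
        return 0.5 * m * v * v / 1e6          # megajoules

    cessna = kinetic_energy_mj(2_450, 140)    # C172 near max gross, ~140 mph
    b767 = kinetic_energy_mj(300_000, 590)    # loaded 767, ~590 mph at impact

    print(f"Cessna 172: ~{cessna:.0f} MJ")
    print(f"767:        ~{b767:.0f} MJ (~{b767 / cessna:,.0f}x)")

Three orders of magnitude apart, before you even count the fuel.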

Also also. "no alarm bells" is highly dependent on location. If this "stolen" plane were to have flown into highly controlled airspace without approvals, you can bet your ass that alarm bells would have gone off. But the person flying the plane knew what they were doing and where they were going. They went away from busy areas and didn't anything out of the ordinary.

Are there still many reasons this could be a problem? Sure. But invoking the terrorism word is full FUD, the likes of which the media loves to use. And it ends us up with security theater like shampoo size limits.


Some years ago I was flying home, and it turned out someone had called in a bomb threat. So, naturally, they had ramped up the security scan to the max, causing a massive queue.

The landside area was packed to the brim with travelers waiting in line, mixed with the people on their way to check-in pushing their trollies loaded with suitcases.

I've never been so scared, waiting in line for security, as I imagined how easily anyone could pack 6-7 huge suitcases filled with explosives onto a trolley in the parking lot right outside, move into the packed crowd without suspicion and set it off.

No need to get on a plane.


Don't need an airport for that. Any busy city sidewalk would do just fine.


Did Patrick write anything about negotiating a raise?

Or maybe someone came across any actionable advice they’d like to share?


Get a job offer from somewhere else that pays more. Redact it for source information and provide it during the negotiation. Explain that they can’t just match it, they need to beat it, or you switch jobs.


For whatever it’s worth, I’ve worked at places that have a policy that if someone starts looking elsewhere, they’re unhappy and are going to end up moving anyway, so they’ll just assume you’re handing in your notice. I don’t like this mindset at all, but I assume it’s fairly common.


…then you just end up switching jobs and taking the lesser of two raises.


It seems that many did not come across https://en.m.wikipedia.org/wiki/The_Emperor%27s_New_Clothes … and are still busy trying to figure out the allegedly intricate but evidently incorporeal designs this administration is wearing.

