Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Obviously a sensationalised title, but it's a neat illustration of how you'd apply the language models of the future to real tasks.


Would be ridiculously inefficient, while also being nondeterministic and opaque. Impossible to debug, verify, or test anything, and thus would be unwise to use for almost any kind of important task.

But maybe for a very forgiving task you can reduce developer hours.

As soon as you need to start doing any kind of custom training of the model, then you are reintroducing all developer costs and then some, while the other downsides still remain.

And if you allow users of your API to train the model, that introduces a lot of issues. see: Microsoft's Tay chatbot

Also you would need to worry about "prompt injection" attacks.


> Would be ridiculously inefficient, while also being nondeterministic and opaque. Impossible to debug, verify, or test anything, and thus would be unwise to use for almost any kind of important task.

Not to defend a joke app, but I have worked in “serious” production systems that for all intents and purposes were impossible to recreate bugs in to debug. They took data from so many outside sources that the “state” of the software could not be easily replicated at a later time. Random microservice failures littered the logs and you could never tell if one of them was responsible for the final error.

Again, not saying GPT backend is better but I can definitely see use-cases where it could power DB search as a fall-through condition. Kind of like the standard 404 error - did you mean…?


> They took data from so many outside sources that the “state” of the software could not be easily replicated at a later time.

By definition, that's a complex system, and reproducing errors would be equally complex.

A GPT author would produce that for every system. Worse, you would not be able to reproduce bugs in the author itself.

While humans do have bugs that cause them to misunderstand the problem, at least humans are similar enough for us to look at their wrong code and say "Hah, he thought the foobar worked with all frobzes, but it doesn't work with bazzed-up frobzes at all".

IOW, we can point to the reason the bug was written in the first place. With GPT systems it's all opaque - there's no reason or rhyme for why it emitted code that tried to work on bazzed-up frobzes the second time, and not the first time, or why it alternates between the two seemingly randomly ...


> They took data from so many outside sources that the “state” of the software could not be easily replicated at a later time.

Oh, I have fixed systems like those so that everything is deterministic and you can fake the state with a reasonably low amount of effort. It solved a few very important problems.

(But mine were data integration problems. For operations interdependence ones the common advice is to write a fucking lot of observability into it. My favorite minoritary one is "don't create it". I understand there are times you can do neither.)


Wow I did not consider last ditch effort error handling, but that makes a lot of sense. Thank you for giving me something to think about!


Absolutely this. It's a solution looking for a problem.

If the developer task is really so trivial why not just have a human write actual code?

And even if it is actual code instead of a Rube Goldberg-esque restricted query service, I still don't think there's ever any time saved using AI for anything. Unless you also plan on assigning the code review to the AI, a human must be involved. To say that the reviews would be tedious is an understatement. Even the most junior developer is far more likely to comprehend their bug and fix it correctly. The AI is just going to keep hallucinating non-existent APIs, haphazardly breaking linter rules, and writing in plagiarized anti-patterns.


Guys, this is a joke. Don't take it so seriously. Literally the first thing in the README is a meme.


You may not take it seriously, and I may not take it seriously, but it takes one person to read this seriously, convince another person to invest, and then hire a third person and tell them, "make it so", for the joke to no longer be a joke.


A developer getting paid because an investor misunderstands a technology isn’t anything we need to get too worried about, I think. It seems to be a big part of our industry, and I don’t know if that’s ever going to change. I sometimes think of all the crapware dApps that got shoveled out in the last boom - little of meaning was created from a technical standpoint, but smart people got to do what they love to put bread on the table.

Perhaps I’m being overly simplistic, but I don’t see it as all that different from contractors getting paid to do silly and tasteless renos on McMansions. Objectively a bad way to reinvest one’s money, but it’s a wealth transfer in the direction I prefer, so I’ll hold my judgement.


Fair enough. I'm not going to complain much about money moving towards the workers, but I also hate obvious waste as a matter of principle. I also hate being dragged into bullshit work against my will.

I had a close call many years ago - my co-workers and I had to talk higher-ups out of a desperate attempt to add something, anything, that is even tangentially related to AI or blockchains, so either or both of those words could be used in an investor pitch...

That's when I fully grokked that buzzword-driven development doesn't happen because someone in management reads a HBR article and buys into the hype - it happens because someone in management believes the investors/customers buy into the hype. They're probably not wrong, but it still feels dirty to work on bullshit, so I steer clear.


Investors know to "sell the shovels" [to use a gold-rush concept] and are investing into well-diversified positions; which include the likes of GPT's capacity: nVIDIA, AMD, TSMC, MSFT &c — these are the shovels which speculators must buy (or utilize via kWh / price of another's GPT-instance), and I assure you is the case.


If somebody putting a few millions into making this widespread were enough to make it a problem, then software development would already be doomed and we would better start learning woodwork right now.


The argument is stochastic. Maybe this joke will get ignored, but then we could've had the same conversation few years ago about "prompt engineering" becoming a job, and here we are.

Or about launching a Docker container implementing a single, short-lived CLI command.

Or about all the other countless examples of ridiculously complicated and/or wasteful solutions to simple problems that become industry standards simply because they make it easier to do something quickly - all of them discussed/criticized regularly here and elsewhere, yet continuing to gain adoption.

Nah, our industry values development velocity much more than correctness, performance, ergonomics, or any kind of engineering or common sense.


> Maybe this joke will get ignored, but then we could've had the same conversation few years ago about "prompt engineering" becoming a job, and here we are.

The joke is on all of us if we only treat this as a joke. Rails pioneered simple command line templates and convention over configuration, and it took over the world for awhile.

An AI as backend is the logical conclusion of that same trend.


The title is a play on "Attention is All You Need", which is the paper that introduced transformers


Thank you for this human-generated connection [I can still safely and statistically presume].

I already know personally how incredible and what GPT-like systems are capable, and I've only "accepted" this future for about six weeks. Definitely having to process multitudes (beyond technical) and start accepting that prompt engineering is real and that there are about to be more jobless than just losing the trucking industry to AI [largest employer of males in USA] — this is endemic.

The sky is falling. The sky is also blue (this is the stupidest common question GPT is getting right now; instead ask "Why do people care that XYZ is blue/green/red/white/bad/unethical?"




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: