I disagree with much of this. Programming isn't just a tool we use in pursuit of being an "Engineer" or whatever aggrandizing title is applied. I can't help but smile at the pretension of it.
Currently, AI models are inconsistent and unpredictable programmers, though less so when applied to small, focused, non-novel programming tasks. Maybe that will change, and they will be able to do your job. Are you just writing lines of code, organized into functions and modules, using a "hack it till it works" methodology? If so, I suggest being open to change.
Professionally, programming is just a means to an end: solving a business problem. Hammering nails or turning screws is likewise not a job in itself. The craft is important, but ultimately what people pay for is a final product (a house, a bridge, software that does X, etc.), and our job is to build something that matches their constraints. What you describe is the equivalent of complex, novel construction projects, not what the vast majority of the construction industry is about.
Building a house is like a CRUD app: it was already a solved problem technically. AI is like prefabs or power tools. If your job, and what you were interested in, was building houses, AI is great. If you were a bricklayer, not so much.
Engineer is not an aggrandizing title; it's the job. Being paid for the hobby of writing code was just an anomaly, one that AI will close out across the majority of the industry, IMO.
> “AI always thinks and learns faster than us, this is undeniable now”
No, it neither thinks nor learns. It can give the illusion of thinking, and an AI model itself learns nothing; it simply produces a result based on its training data and context.
I think it is important that we do not ascribe human characteristics where they are not warranted. I also believe that understanding this can help us use AI more effectively.
Failure typically comes from two directions: unknown and changing requirements, and management that relies on (often external) technical/engineering leadership that is too often incompetent.
These projects are often characterized by very complex functional requirements, yet are undertaken by people who primarily know (and endlessly argue about) only the non-functional requirements.
This seems to derive from the "skills" feature: a set of "meta tools" that supports granular discovery of tools. Whereas you would normally write the (optional) skill code yourself, a second meta tool can generate it for you, drawing on (optional) examples you provide.
But many computer applications are models of systems, real or imagined, and those systems are not mathematical models. That everything is an "algorithm" is the mantra of programmers who haven't been exposed to different types of software.
I see this a lot in what LLMs know and promote in terms of software architecture.
All seem biased toward recent buzzwords and approaches. Discussions include the same hand-waving about DDD, event sourcing, and hexagonal architecture, i.e. the current fashion. Apparently nothing of worth preceded them.
I fear that we are condemned to a future with no novel progress, just a regurgitation of current fashions and biases.
This is effectively a product, not a feature (or bug). To start with, ask the submitter how you can determine whether it meets the functional and non-functional requirements.
My first reaction is: how do they know? Are these all people willingly sharing their chats with OpenAI, or is opting out of "helping improve the model" for privacy a farce?
Do OpenAI's terms prevent them from looking at chats at all? I assumed that declining to "help improve the model" just means they won't feed your chats in as training data, not that they won't look at your chats for other purposes.