Treating it like a thinking partner is good. When programming, I don't treat it like a person, but rather like a very high-level programming language. Imagine exactly how you want the code to be, then find a way to express that unambiguously in natural language; you'll still have some work to do writing things out, but it's a lot quicker than typing all the code by hand. Combine that with iterations of feedback, having the AI build and run your program (at least as a sanity check), and asking it to check the program's behaviour the same way you would, and you get quite far.
A limitation is the lack of memory. If you steer it from style A to style B over multiple rounds of feedback and none of that is written down, you'll have to explain it all over again in the next session.
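One low-tech workaround for this is to do the writing down yourself: keep accumulated style feedback in a file and prepend it to the next session's prompt. A minimal sketch (the file name, helper names, and prompt wording are all my own assumptions, not any particular tool's API):

```python
from pathlib import Path

# Hypothetical notes file that outlives individual AI sessions.
NOTES = Path("style-notes.md")

def record_feedback(note: str) -> None:
    """Append one piece of style feedback to the persistent notes file."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_system_prompt(base: str) -> str:
    """Prepend accumulated feedback so a fresh session starts with it."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    if not notes:
        return base
    return f"{base}\n\nStyle conventions from earlier sessions:\n{notes}"

# During a session, jot down the corrections you had to make:
record_feedback("Prefer early returns over nested if/else.")

# At the start of the next session, fold them back in:
prompt = build_system_prompt("You are a coding assistant.")
```

Tools are starting to do this automatically with project-level instruction files, but the principle is the same: if the feedback isn't persisted somewhere, it's gone with the session.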
Deepseek is about 1 TB in weights; maybe that's why LLMs don't remember things across sessions yet. I think everybody could have a personal AI (hosted remotely, unless you own lots of compute) that remembers what happened yesterday, in particular the feedback it was given during development. As an AI layman, I do think this is the next step.