ChatGPT-3.5 isn’t even worth touching as an end-user application. Bard is better (due to having some integrations), but it’s still barely useful.
ChatGPT-4 is on another level entirely compared to either 3.5 or Bard. It is genuinely useful for a wide range of tasks.
ChatGPT-3.5 can still serve a purpose in API automations where you provide all the data in the prompt and have it help with parsing or transforming that data, but not as a complete chat application on its own.
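To make that concrete, here is a minimal sketch of that kind of automation, assuming the `openai` Python package and the "gpt-3.5-turbo" model name; the task (contact extraction) and field names are hypothetical examples, and the point is that all data goes into the prompt and the reply is validated rather than trusted:

```python
# Sketch: using ChatGPT-3.5 via the API purely as a parse/transform step.
# Assumes the `openai` package and the "gpt-3.5-turbo" model name; the
# contact-extraction task and JSON fields are illustrative, not a real API.
import json

PARSE_INSTRUCTIONS = (
    "Extract every contact from the text below and reply ONLY with a JSON "
    'array of objects like {"name": ..., "email": ...}.'
)

def build_messages(raw_text: str) -> list[dict]:
    """Assemble the chat messages; all the data is supplied in the prompt."""
    return [
        {"role": "system", "content": PARSE_INSTRUCTIONS},
        {"role": "user", "content": raw_text},
    ]

def parse_reply(reply: str) -> list[dict]:
    """Validate the model's reply instead of trusting it blindly."""
    records = json.loads(reply)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array")
    for rec in records:
        if not {"name", "email"} <= rec.keys():
            raise ValueError(f"missing fields in {rec!r}")
    return records

# The actual call would look roughly like this (untested, needs an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-3.5-turbo", messages=build_messages(raw_text))
#   contacts = parse_reply(resp.choices[0].message.content)
```

With the prompt fully specifying the input and the output checked in code, 3.5's weaknesses as a conversationalist matter much less.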
Given the poor experience ChatGPT-3.5 regularly delivers as a chat application, I don't even know why OpenAI offers it for free. It seems like a net negative for ChatGPT/OpenAI's reputation.
I think it is worth paying for a month of ChatGPT-4. Some people get more use out of it than others, so it may not be worth it to you to continue, but it’s hard for anyone to know just how big of a difference ChatGPT-4 represents when they haven’t used it.
I provided a sample of ChatGPT-4’s output in my previous response, so you can compare that to your experiences with ChatGPT-3.5.
With a proper grammar, you can require the "subject" field to be one of several valid entity names. In the prompt, you would tell the LLM what the valid entity names are, which room each entity is in, and a brief description of each entity. Then it would be able to infer which entity you meant if one reasonably matches your request.
If you're speaking through the kitchen microphone (which should be provided as context in the LLM prompt as well) and there are no controllable lights in that room, you could leave room in the grammar for the LLM to respond with a clarifying question or an error, so it isn't forced to choose an entity at random.
Preventing your house from burning down belongs on the output-handling side, not the instruction-processing side. If any output an LLM could possibly produce would burn your house down, you have already messed up.
I'd go as far as saying it should be handled on the "physics" level. Any electric apparatus in your home should be able to be left on for weeks without causing fatal consequences.
I'm not taken in by the current AI hype, but having LLMs as an interface for voice commands really is revolutionary and a good fit for this problem. The LLM is just an interface to your API, which provides the functionality as you see fit. And you can program it in natural language.
The Celestia project was dormant for nearly a decade, and it becomes hard to package unmaintained software. I haven't kept up with who the new owner is or what they're doing, so there could be reasons it hasn't been picked up again by the distros.
Back in the late 2000s, Celestia was certainly an amazing experience for me. I see there’s a mobile version now, which makes me happy. It works pretty well on my iPhone, although the UX is not perfect.
Mistral Medium is available via their API but not for download, for example, so I find that confusing if their CEO really said the plan is to be open for all models.