That doesn’t logically follow. It got this very straightforward thing correct; that doesn’t prove their response was cynical. It sounds like they know what they’re talking about.
A couple of times per month I give Gemini a try at work, and it is good at some things and bad at others. If there is a confusing compiler error, it will usually point me in the right direction faster than I could figure it out myself.
However, when it tries to debug a complex problem, it jumps to conclusion after conclusion: “a-ha, now I DEFINITELY understand the problem”. Sometimes it has an OK idea (worth checking out, but not conclusive yet), and sometimes it has very bad ideas. Most of the time, after I humor it by gathering further info that debunks its hypotheses, it gives up.
Keep in mind that some LLMs are better than others. I have experienced this "Aha! Now I definitely understand the problem" quite often with Gemini and GPT, much more than with Claude, although it's not unheard of there either, of course... but I have gone back and forth with the first two: pasted the error -> response from the LLM, "Aha! Now I definitely understand the problem" -> pasted the new error -> ... ad infinitum.