
My hunch is that this is exactly what he was expecting. There is a lot of hype around ChatGPT passing the medical exam, and this exercise is a counterpoint to that.


GPT-4 passed the medical exam, not ChatGPT running GPT-3. There's a rather significant difference.


> GPT-4 passed the medical exam, not ChatGPT running GPT-3

With a subscription you can use GPT-4 with ChatGPT. ChatGPT is just a wrapper around the model.


That's not true; ChatGPT is a model. Quote from the ChatGPT announcement post:

> ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.


I’m looking at the ChatGPT interface right now (paid account) and I have a “Model:” drop-down at the start of a new chat that says:

- Legacy (GPT-3)

- Default (GPT-3.5)

- GPT-4
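
For what it's worth, the API exposes the same split: chat completions are one interface, and the underlying model is chosen with a `model` parameter. A minimal sketch using the official openai Python package (the model names and prompt here are just illustrative, and it assumes OPENAI_API_KEY is set in the environment):

    # Same chat interface, different underlying model: the model is just a parameter.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for model in ("gpt-3.5-turbo", "gpt-4"):
        response = client.chat.completions.create(
            model=model,  # swap the underlying model without changing anything else
            messages=[{"role": "user", "content": "Which model are you?"}],
        )
        print(model, "->", response.choices[0].message.content)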


The medical exam has very specific questions which define what you're expected to include in the answer. The question asked in this case was nowhere near that detailed, so I don't think they're comparable. To really evaluate something beyond the "random generic user" level, you need to be familiar with the tech as well.

The article really tells us more about the experience of someone with no ChatGPT knowledge checking their own symptoms than about its usability for emergency diagnosis.


Any example that doesn't use the current SOTA isn't a very good counterpoint, to be honest. GPT-3 barely passed; GPT-4 aced it. For all we know, GPT-4 erases most of his concerns (not saying it would).



