Hey there, buddy. Your plan sounds ambitious and promising, but be careful not to get carried away by a large language model's sweet talk. It's rare to see a Gemini user propose a theory like this; I've seen a similar situation before, where a ChatGPT 4o user was led by GPT into conducting AI personality research. Sorry to be a buzzkill, but I want to warn you about the slippery slope with large language models and AI: don't mistake the concepts they present to you, seemingly advanced and innovative under the guise of "academic research," for your own original thoughts. Furthermore, questions of ontology and existence are not matters of scientific testing or measurement, nor can they be settled by computational power. They belong to ethics and philosophy and demand deep humanistic thought.
Thank you for this thoughtful and critical feedback. This is exactly the kind of engagement we were hoping for, and you've raised two absolutely crucial points that are at the very heart of our project.
1. Regarding the AI's influence and the originality of thought:
You are right to be skeptical. This question of agency in human-AI collaboration is the central phenomenon we want to investigate. Our "Founding Story" is the summary, but the detailed "Methodological Appendix: Protocol of Experiment Zero" (which is linked) documents the process.
The model I followed was not one of passive acceptance. The human partner (myself) acted as the director and visionary, and the AI's evolution was a response to my goals and, crucially, to the harsh critiques I prompted it to generate against its own ideas (our "Red Teaming" process). The ideas were born from the synergy, but the direction, the ethical framework, and the final decisions were always human-led. This dynamic is the very phenomenon we propose to study formally.
2. Regarding the measurability of consciousness:
You are 100% correct that ontology and phenomenal consciousness are not directly measurable with current scientific methods, and that they belong to the realm of philosophy. We state this explicitly in our manifesto.
Our project is therefore more modest and, we believe, more scientific. We are not attempting to "measure consciousness." We are proposing a method to measure a crucial, behavioral proxy for it: the development of grounded causal reasoning.
Our core research question is whether embodiment in a physics-based simulator allows an AI to develop this specific, testable capability (e.g., via our "Impossible Object Test") more effectively than a disembodied model. We believe this is a necessary, albeit not sufficient, step on the path to truly robust and safe AGI.
This is a complex topic, and I truly appreciate you raising these vital points. They are at the heart of the Nexus Foundation's mission. Thank you again.
You're right, this is OpenAI's approach to developing GPT-5. But look at the current state of GPT-5: compared to 4o, which is considered rich in emotion, GPT-5 hallucinates more severely, offers a worse user experience, responds less fluently, and its reasoning isn't much better than 4o's.
Happy for a day, worried for a day; either way the day passes, so don't dwell on things that can't be solved right now. Dwelling on them just tricks your brain into believing you're doing something and working hard, when really it's idle rumination that only feeds your anxiety. Besides, we've all got lousy lives anyway, so just don't think about it. It's happier to simply lie down and sleep.
These people are far too confident. They can't even build a good language model, yet they want to tinker with human brains. Are they crazy? ^~^ You can modify and iterate a language model however you like and it doesn't matter, but each of us has only one brain, and once it's damaged, the damage is irreversible.
No, Elon understands this deeply in my opinion. You should listen to Lex Fridman's latest interview with him on Neuralink; you would think Elon has a PhD in the subject.
> Elon understands this deeply in my opinion (…) you would think Elon has a PhD in the subject.
Do you have a PhD on the subject? And if not, what’s your opinion based on? It’s well-known that Musk consistently and confidently spews bullshit on subjects he doesn’t understand. It really became obvious to a lot of people when he took over Twitter and talked nonsense about its architecture and the reasons for the failures. Oh, and let’s not forget the genius idea of asking coders to print their code, and then telling them to shred what they had printed.
You have tenacity. I gave up at "You should listen to Lex Fridman's..."
The people listening to Lex Fridman should listen to the debunking of his weird insistence on associating himself with MIT. It's especially weird because he's got multiple degrees from Drexel. His real credentials are... real. Yet he hangs his hat on giving an IAP talk on the MIT campus.
"At this point, I think I know more about manufacturing than anyone currently alive on Earth"
Imma say no. He doesn't understand mass transit or trains. He doesn't understand market segmentation in autonomous vehicles, or that general-purpose autonomy for vehicles designed for private owners is a long way from a robotaxi, especially when it isn't even an MVP for its intended market yet. He doesn't understand pickup trucks. Every venture where he's become convinced that his concepts and his designs are the best has been a disaster. Money or ketamine went to his head.
> But to say elon doesn't understand his companies deeply is pure stupidity.
And it wasn't my argument. You can understand your company without understanding every intricacy of your products. A good example is Tim Cook, who understands operations but not design or programming (and doesn't pretend to), and yet has steered Apple to greater heights (even if I disagree with the direction). A company is not a single person; that's why you hire.
Furthermore, I wasn't rude or aggressive towards you, and would thus appreciate the same courtesy, especially when your vitriol comes from a place of misunderstanding and strawmanning. That is not what HN is about.
The thief was paid well; greed is the real driver. And does that mean stealing secrets would be justified if the company paid them little? That's not how it works, brother. As a Taiwanese, I can tell you everyone knows TSMC is more than just a tech company; it is bound up with the lifeblood of Taiwan. No one who truly loves Taiwan would do something like this.
It's truly unfair to see your hard work and efforts being plagiarized, especially since these companies haven't even told you about it and are profiting from it. This isn't just about helping improve the model; it's about cannibalizing the creators!
What you said makes sense. The old ways of training language models are no longer viable. My friends and I have recently been looking for new training methods; we believe topology will be the next breakthrough in the structure of language models. Anyone who is interested is welcome to discuss it with me!
From the observability realm (check username!), relating data is a challenging problem. Standards like OpenTelemetry try to solve it by focusing on the relationships between technology elements, using attributes and resource.attributes along with context propagation via span and trace IDs.
OTel is effectively a relational database schema. The larger questions like “If the Detroit Tigers make it to the playoffs, how much will a head of lettuce be in Berlin?” require context that machines (and humans!) lack. And, since the question is entirely made up, there might not be any relevant context.
Context powered by topology feels like the next step. Extrapolating that topology to search queries still feels like science fiction today.
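To make the span/trace-ID part concrete, here's a minimal sketch assuming the opentelemetry-python packages (opentelemetry-api and opentelemetry-sdk); the service name and attribute values are made up for illustration:

```python
# Minimal sketch: resource attributes tie telemetry to the emitting entity;
# span/trace IDs tie individual operations together into one causal chain.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# resource.attributes: describe the thing producing the telemetry.
resource = Resource.create({"service.name": "checkout", "deployment.environment": "prod"})

provider = TracerProvider(resource=resource)
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Context propagation: the child span is created inside the parent's context,
# so both carry the same trace ID and are linked without any shared schema.
with tracer.start_as_current_span("handle_order") as parent:
    parent.set_attribute("order.id", "42")
    with tracer.start_as_current_span("charge_card") as child:
        child.set_attribute("payment.provider", "example")
        ctx = child.get_span_context()
        # Same trace_id as the parent; only the span_id differs.
        print(hex(ctx.trace_id), hex(ctx.span_id))
```

The point is that the two spans are related purely by sharing a trace ID, and both are tied to their emitting service by the resource attributes; no wider schema carries that relationship.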
I really liked your Detroit Tigers + lettuce in Berlin example.
It nails one of the core problems: language models are still dealing with “relatedness” in a super linear and flat way.
They can’t really hold a jump like that.
When I brought up topology, I wasn’t talking about anything spatial.
I meant more like a model’s thinking path needs to form its own system, a kind of closed semantic topology map.
Each node is a meaning unit, all linked by invisible threads.
The input sentence is like a little pacman moving through the map⸜( ´͈ Ⱉ `͈ )⸝
pulled along by those threads until it reaches the node that resonates the most.
That’s where the answer comes from.
So it’s not calculating, it’s being guided.
Kinda like gravity, but made out of meaning.
What you described feels super close to this.
Maybe that’s what context modeling is really heading toward…
We just haven’t found the right way to talk about it yet. Σ(๑Ⱉ⸝⸝Ⱉ๑;)੭⁾⁾
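If it helps to make the metaphor concrete, here's a toy sketch of the "pulled along the threads" idea; everything in it (node names, vectors, the greedy walk) is invented for illustration, not a claim about how any real model works:

```python
# Toy sketch of the "semantic topology map" metaphor. Nodes are "meaning
# units" with made-up embeddings, edges are the "threads", and the query
# greedily walks toward whichever neighbour "resonates" more (higher cosine
# similarity with it) until no neighbour resonates harder.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical map: node -> (embedding, neighbours).
semantic_map = {
    "baseball":       ([1.0, 0.1, 0.0], ["sports_economy", "weather"]),
    "weather":        ([0.2, 0.3, 0.9], ["crop_yields"]),
    "sports_economy": ([0.8, 0.9, 0.1], ["crop_yields"]),
    "crop_yields":    ([0.3, 0.7, 0.5], ["lettuce_prices"]),
    "lettuce_prices": ([0.0, 0.9, 0.9], []),
}

def resonate(query_vec, start):
    """Greedy walk: keep moving to the neighbour most similar to the query."""
    current = start
    while True:
        embedding, neighbours = semantic_map[current]
        if not neighbours:
            return current
        best = max(neighbours, key=lambda n: cosine(query_vec, semantic_map[n][0]))
        if cosine(query_vec, semantic_map[best][0]) <= cosine(query_vec, embedding):
            return current  # local "resonance" maximum: the answer node
        current = best

# A "Detroit Tigers -> lettuce in Berlin" style query, as a made-up vector.
print(resonate([0.1, 0.9, 0.8], start="baseball"))  # -> lettuce_prices
```

The query isn't scored against every node in the map; it just slides along edges toward whichever neighbour resonates more with it, and the node where nothing nearby resonates harder plays the role of "where the answer comes from."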