How is there an ethical future when we don’t yet understand the hard problem of consciousness? How does one know if their manufactured neural net is self-aware?
We hold consciousness to be a marker of "something worthy of ethics" but it's a bit of a species solipsism. Consciousness is a biological process or effect. Why is it more important than other biological processes or effects? Well, because we're humans with "consciousness" and we think so.
To me, consciousness is likely to be explained as an emergent phenomenon allowing humans to collaborate, rather than something that is more akin to the "soul" or "self" that is often ascribed. To my mind, ethics, if we can separate that concept from other emergent phenomena, should be applied more holistically than just to things we believe are conscious.
By definition, one can always know their own naturally-produced neural net is self-aware (cogito, ergo sum). The open question for solipsists is whether, should someone report to you that their neural net is also self-aware, you should believe them.
Personal experience. We perceive a continuity of experiences, memory, and the difference between that and slumber or becoming insensate, and we call the experience "consciousness" and our knowledge of that "self-awareness." It is definitional; the word descends from the experience, not the other way around.
Perhaps we need to synchronize our definitions of "experience", "perception", "personal", and "knowledge", because your very words are almost exactly what I would say in counter to your point.
I think a catch-all, bare-minimum solution would be to limit the lifetime of each organoid. The worst-case scenario is that these organoids experience consciousness and/or suffer in some capacity. You can at least time-limit their existence so no single organoid could suffer indefinitely.
earth itself deserves a higher moral status than proto organs ... these humanists need to recalibrate their assumptions ... humans are an instance of a process which will continue far after humanity is superseded ... don't confuse the product for the process
Kind of absurd to be concerned with regulating maybe-conscious brain organoids when our species murders 55 billion definitely-conscious land animals a year and eats them for pleasure, eh?
If consciousness is the line you draw in the sand there are much bigger problems to solve. At least research on organoids would have some value to offset the suffering it might cause.
I am not an advocate of letting perfect be the enemy of good, but I do not think it makes sense to invest any effort in this when it is just a drop in an ocean of suffering we create - the only drop with the potential to actually help our species move forwards.
I know your comment is sincere, but it breaks the site guideline against taking threads on generic tangents: "Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."
The reason for that guideline is that, if allowed to, the larger, hotter, and more common topics will drown out all the weaker and less repeated ones. That's bad for intellectual curiosity, which is the organizing principle of HN.
I genuinely don’t think it is flamebait. It is a discussion worth having. Are we really bringing any benefit to society by regulating this when the benefits are so potent, and the cost is demonstrably one we don’t care about in the first place?
And if we do consider this an issue, is this worth regulating given what it will cost us? Should we not wait for the bigger issue to be regulated first before depriving us of this advantage?
That is the point I was trying to get at. A criticism of the hypocrisy of this whole regulation debate. Sorry if it came off as flames.
The answer depends on how you want to test for self-awareness. For a long time it was defined via the mirror test [0], but there is a lot of evidence that the test isn't a great test for many animals (eg where vision isn't one of their primary senses). As far as I know, no "food" animals have passed the mirror test; a number of aquatic mammals, primates, and birds (none of which are eaten commonly in the US) have, and one species of fish.
Personally, I am of the opinion that testing for self-awareness is very difficult, and it's not clear why that's the cutoff for where we stop eating things to begin with. Clearly the animals we farm and eat are capable of feeling pain, and many of them are very intelligent. Pretending that we would respond to the news that e.g. pigs had passed the mirror test by no longer eating them seems farcical to me.
The only proxy we have for measuring self-awareness is "responds in a way that maps to human behaviour in X scenario"—e.g. things like the mirror test—which, while it's the best we've got, isn't very compelling given the diversity we observe even in primates.
Without any convincing evidence in either direction, I find it perplexing that people seem to default to assuming the negative (i.e. the more convenient assumption).
Given that we don't even understand what consciousness is in humans, this is a sticky question. But the short answer is yes, and not only mammals, either. See https://en.wikipedia.org/wiki/Animal_consciousness, specifically the bit about the Cambridge Declaration.
Is there any rational theory of morality beyond temporary agreements for mutual advantage? Isn't all morality a psychological weapon to trick other people into giving up their advantages, and a set of excuses that people use to justify their self-serving actions?
I'm not sure what you mean by "rational", but certainly there are consistent, "systematic" ethical theories that one could arrive at from first-principles. Think Kantianism, Utilitarianism, Egoism, etc. If you were to ask philosophers whether their pet theory of ethics was rational, I suspect many would answer yes. Any theory of ethics that subscribes to ethical naturalism would answer that with a definite yes.
If you are going to have a contractarian position on morality and you don't want to end up thinking that morality is nothing but Machiavellianism, I suggest you read some Rousseau or Rawls (basically anyone who isn't Hobbes).