David Pearce

Most people get a fair amount of fun out of their lives, but on balance life is suffering, and only the very young or the very foolish imagine otherwise.
(George Orwell)

I find thinking about suffering so disturbing that I struggle to do so clearly. If you suppose that you can think rationally about suffering – the most severe suffering, at any rate – then IMO you simply haven’t grasped its awfulness. Our conceptual scheme fails in the face of unbearable agony and despair. Indeed, the only good reason I know to understand the neural correlates of suffering is to minimise and prevent it. After experience below hedonic zero has been genetically eradicated, the existence of Darwinian life should be forgotten like a bad dream. Prevention of (the physical signature of) the recurrence of suffering should be offloaded to zombie artificial intelligence.

With this disclaimer in mind, here goes:

First, the good news – or what passes as good news. Some ethicists worry about the prospect of novel, nonbiological sources of suffering beyond the miseries of Darwinian life (cf. Brian Tomasik’s Risks of Astronomical Future Suffering). In my view, there is no evidence that classical digital computers or classically parallel connectionist systems can support unified subjects of experience. Faster processing power and increased complexity of code are irrelevant to phenomenal unity. Even if the intrinsic nature of the world’s fundamental quantum fields is experiential rather than non-experiential – a solution to the Hard Problem of consciousness most scientists find fanciful – there can still be no non-trivial suffering in digital computers (or anything else) without phenomenal binding. If so, then humans will not deliberately or inadvertently create pain-ridden robots, digital mind-uploads, video-game characters or any other form of artificial life in silico that can suffer. Nor will an advanced civilisation create astronomical suffering in the guise of running digital “ancestor simulations”. This claim admittedly assumes an account of consciousness and phenomenal binding that may be refuted by experiment, more specifically by molecular matter-wave interferometry. Yet I’m fairly confident not just that binding isn’t a classical phenomenon, but also that dualism is false. I’m relaxed, too, about the theoretical possibility of creating sentient nonbiological quantum computers. We have no reason to believe they could support the pleasure-pain axis of organic wetware.

In addition, various widely-discussed existential risks aren’t really s-risks. Consider a machine Intelligence Explosion of recursively self-improving software-based AI that converts life on Earth into (the equivalent of) paperclips. Paperclips don’t suffer. Classical aggregates don’t suffer. Classical Turing machines don’t suffer.

Nor am I especially worried about transhumans radiating out across the Milky Way to spread a capacity for suffering to lifeless solar systems. Watching Star Trek and Hollywood sci-fi movies as kids helps us “misremember” the future. Yes, in the face of daunting technical obstacles, humans will most likely establish small self-sustaining colonies on the Moon and Mars later this century. But the challenges of colonising alien solar systems and creating pain-ridden ecosystems light-years away are orders of magnitude more formidable than building lunar settlements, or even terraforming Mars. Centuries hence, if transhumans haven’t phased out the biology of suffering on Earth, then maybe these colonisation risks will be real. Unoccupied ecological niches rarely stay empty indefinitely. Spreading some sort of toxic Darwinian cocktail of suffering, malaise and adulterated pleasure across the Galaxy may be a more realistic scenario than the “cosmic rescue missions” for Darwinian ecosystems I once anticipated (cf. The Hedonistic Imperative) if (contrary to what I believe) the Rare Earth hypothesis turns out to be false. Yet compared to the challenges of reaching the stars, getting rid of suffering on Earth is technically trivial. So the question of space colonisation is important but not urgent.

Now for the bad stuff. Some risks are horrific, realistic and foreseeable, not least nuclear war. Some risks involve our treatment of nonhuman animals: neurological evidence suggests that pilot whales, for instance, may suffer more than any human, and yet barbarians murder them for their flesh in the name of tradition. By contrast, the nonhumans whom meat-eaters abuse and kill in the death factories are merely as sentient as toddlers: what humans are doing to nonhuman animals in factory-farms and slaughterhouses is no better or worse than industrialised child abuse. Other potential s-risks are more exotic – for example, synthetic gene drives could be used to sabotage the global ecosystem instead of eliminating vector-borne disease and reprogramming the biosphere to eradicate suffering. Still other s-risks depend on a controversial “no collapse” interpretation of quantum mechanics: Everettian QM makes me despair. Then there are the unquantifiable “unknown unknowns”. Given our ignorance of the nature of mind and reality, humility would be wise.

However, the practical s-risk I worry most about is the dark side of our imminent mastery of the pleasure-pain axis. If we conventionally denote the hedonic range of Darwinian life as -10 to 0 to +10, then a genetically re-engineered civilisation could exhibit a high-contrast hedonic range of +70 to +100 or a low-contrast range of +90 to +100. Genome-editing promises a biohappiness revolution: a world of paradise engineering. Life based on gradients of superhuman bliss will be inconceivably good. Yet understanding the biological basis of unpleasant experience in order to make suffering physically impossible carries terrible moral hazards too – far worse hazards than anything in human history to date. For in theory, suffering worse than today’s tortures could be designed too: torments that would make today’s worst depravities mere pinpricks in comparison. Greater-than-human suffering is inconceivable to the human mind, but it’s not technically infeasible to create. Safeguards against the creation of hyperpain and dolorium – fancy words for indescribably evil phenomena – are vital until intelligent moral agents have permanently retired the kind of life-forms that might create hyperpain to punish their “enemies” – life-forms like us. Sadly, this accusation isn’t rhetorical exaggeration. Imagine if someone had just raped and murdered your child. You can now punish them on a scale of -1 to -10, today’s biological maximum suffering, or up to -100, the theoretical upper bound allowed by the laws of physics. How restrained would you be? By their very nature, Darwinian life-forms like us are dangerous malware.

Mercifully, it’s difficult to envisage how a whole civilisation could support such horrors. Yet individual human depravity has few limits – whether driven by spite, revenge, hatred or bad metaphysics. And maybe collective depravity could recur, just as it’s practised on nonhuman animals today. Last century, neither Hitler and the Nazis nor Stalin and the Soviet Communists set out to be evil. None of us can rationally be confident we understand the implications of what we’re doing – or failing to do. Worst-case scenario-planning using our incomplete knowledge is critical. Safeguards are hard to devise because (like conventional “biodefense”) their development may inadvertently increase s-risk rather than diminish it. In the twenty-first century, unravelling the molecular basis of pain and depression is essential to developing safe and effective painkillers and antidepressants. More suicidally depressed and pain-ridden people kill themselves, or try to kill themselves, each year than died in the Holocaust. A scientific understanding of the biology of suffering is necessary to endow tomorrow’s more civilised life with only a minimal and functional capacity to feel pain. A scientific understanding of suffering will be needed to replace the primitive signalling system of Darwinian life with a transhuman civilisation based entirely on gradients of bliss (cf. Life in the Year 3000).

But this is dangerous knowledge – how dangerous, I don’t know.
