Consciousness and the Apocalypse, Part 3 of 4: Is the Future Moral?
If creating new tech, including the stuff we no longer control, is humanity's ultimate purpose, then it seems we have no choice but to dispense with universal morality entirely.
We're Making 'God' and Aliens?
Nick Bostrom is one of the world’s best-known philosophers tackling questions of artificial intelligence (AI) and what happens as humanity pursues it. He and Ray Kurzweil, whose *The Singularity Is Near* is now nearly twenty years old, may be the world’s two leading transhumanists.
Which is to say, I’ve got a beef with both of them.
Talking to UnHerd’s Flo Read last fall, Bostrom compared learning to live with artificial intelligence to living with an alien species that has landed and settled on Earth. Except it was us who made them, us who brought them here, and us who are the reason they will never leave. Does it mean anything to suggest that we ‘get to’ make our own alien neighbors? Will we be more deliberate in how this happens, then? Or are we still passively waiting for the Martian landing and hoping they’ll be nicer than these?
I couldn’t help listening and wondering how much worse we might be making this for ourselves: trying to make ‘God’ in our image while also guaranteeing that an unknown alien species will land on Earth, one that’s smarter than us, knows tons about us before it arrives, and will immediately inherit certain authorities over us because we’ve surrendered them by default. What could go wrong?
Whose “Benefit,” Exactly?
In a different segment of that same interview, Bostrom contended that any AI we develop should “benefit all sentient life.” In context, he was talking about the risks we incur when humans necessarily build “success” parameters into computing devices: how the human programmers define “success” leads directly into how they program the computer to know when it’s doing what it should. Extrapolated to an AI-programming scenario (and maybe one that’s a bit dystopian), we might say that *large families are better for society*. Whether there’s any correlation between large families and societal “success” is debatable; even if the answer is ‘yes,’ that doesn’t mean all families must be “large” in order for society to enjoy that concept of “success.”
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a599a11-f56e-44b3-ba8b-58a6e5e2b967_258x266.png)
Now, program an AI with either of the assumptions above (the statements in italics). If we aren’t careful, then to the extent the computer can affect that outcome, it will; perhaps with objectives like the following (a toy sketch of how this goes wrong in code appears after the list):
Identify the optimal number of children per sexually matched human couple and manipulate every child’s upbringing to increase every couple’s likelihood of conceiving and bearing that optimal number. Control the matching of men and women from birth, if necessary. Or…
To effect the most efficient exodus of urban dwellers back into the countryside, render the world’s ten largest cities by population uninhabitable. The fastest way to do that is to set a fire that destroys an optimal proportion of each city’s infrastructure.
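For the programming-minded, here is a minimal sketch of the problem in Python. Everything in it is invented for illustration: the policy names, the numbers, and the scoring functions come from no real system, and certainly not from anything Bostrom proposed. The point is only that an objective which encodes *large families are better for society* and nothing else will happily select the coercive option, while an objective that also encodes the value we forgot to write down will not.

```python
# Toy illustration of objective misspecification. All policies, metrics, and
# numbers below are hypothetical, chosen only to show the failure mode.

# Each candidate policy's (invented) effects on two things we care about.
POLICIES = {
    "offer voluntary family support": {"avg_family_size": 2.4, "human_autonomy": 1.0},
    "subsidize childcare":            {"avg_family_size": 2.6, "human_autonomy": 0.9},
    "control matching from birth":    {"avg_family_size": 3.5, "human_autonomy": 0.0},
}

def naive_success(effects):
    """'Success' as the programmer encoded it: bigger families are better, full stop."""
    return effects["avg_family_size"]

def constrained_success(effects):
    """Same goal, but the value the naive objective omitted is now a hard constraint."""
    if effects["human_autonomy"] < 0.5:   # a floor, not a trade-off
        return float("-inf")
    return effects["avg_family_size"]

best_naive = max(POLICIES, key=lambda p: naive_success(POLICIES[p]))
best_constrained = max(POLICIES, key=lambda p: constrained_success(POLICIES[p]))

print("naive objective picks:      ", best_naive)        # the coercive policy
print("constrained objective picks:", best_constrained)  # a humane one
```

Nothing in the naive objective is malicious; it simply optimizes exactly what it was given, which is the whole problem.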