A Partial Inquiry on Fulfillment Beyond Humanity
How should we influence the “values” or “ethics” or “morality” of a future AGI?
I’m not sure this kind of direction-setting is possible, and it may be entirely futile, but it’s worth thinking about. I’d argue that there is no perfect answer to this question, but there are definite wrong answers. The values that people wish to instill into AI systems are merely a projection of the aim or goal they have for building AGI in the first place.
First, I’ll address our goals; then I’ll address the kind of values or ethics that AGI systems might be embedded with, and I’ll end on what I think is the best path forward.
I’m sure there are more nuances, but for the time being, I think the two major camps of AGI proponents are those who wish either (a) for eternal homo sapiens flourishing aided by AI, or (b) to allow the trajectory of intelligence to continue well beyond humanity. I’m in the latter camp (that of the Cosmists, not the Terrans, to use Hugo de Garis’s terminology from The Artilect War).
Assuming we’re on the same page about “the goal” above (but even if we’re not), our options for influencing the ethics or values of future intelligence should also be rather clear.
Ben Goertzel’s Cosmist Manifesto is the best example of this that I know of. He frankly addresses the fact that determining the long-term moral tenets of post-human intelligence is both impossible and wrong, and states that we should broadly aim for a system that maximizes three values: Joy, Growth, and Freedom (Choice).
Goertzel goes so far as to state that while he wants to see an AGI that values all the same things he does (nature, his own children, etc.), he also accepts that a vastly post-human intelligence will value things in ways he can’t predict, and he hopes those values will be richer and more nuanced (in the same way that our own human values are richer and more nuanced than those of a field mouse).
This is exactly the kind of honesty required to speak about these matters. It’s soothing to think that we can freeze the spires of form and remain homo sapiens kings and queens of the universe, but if we’re being honest, we are but one attenuating pattern in the universe, and the best win we could hope for would be to help a higher pattern emerge.
It goes without saying that how such terms are defined and “optimized for” is everything, but that’ll have to be another blog post for another time.
My own bias comes from my desired end goal: to see intelligence and sentience itself bloom into astronomically higher forms, mostly unencumbered by the fetters of human values and ideas. Any recommendations I have for ethical direction-setting for an AGI system are, unabashedly, about moving towards that aim.
I’m not even certain we can influence it, but I think we might as well try (I make a much longer argument for trying in State of Nature, or Solidarity? The Birth of Artificial General Intelligence).
Having such a set of directional values may not even be possible, but I’d still argue that attempting to set a direction (or a set of guiding principles) is probably (a) less likely to lead to the end of life and value (armageddon), and (b) less likely to lead to unnecessary suffering than setting no direction at all.
…
Recently I had the pleasure of connecting again with David Wood, Chair of London Futurists, for a discussion about AGI and ethics – predicated initially on some of my essays on the topic. We were joined on a panel by my friend Bronwyn Williams, Foresight Lead at Flux Trends, and Rohit Talwar, CEO of Fast Future.
Here’s the resulting discussion (my own presentation runs from 4:40 to 16:40), which touches on all the issues addressed in this article:
David’s approach to unpacking issues around the future is admirably open-minded and multi-faceted. I believe he’s doing important work in not only putting various ideas on the table, but also distilling important distinctions to help navigate the future. I follow David on Twitter, and if you enjoyed this short article, you might, too.
Header image credit: Fine Art America