Argument Summary: ‘More intelligence = more cooperative. AGI will naturally be caring.’

No, brother.

We crush animals to build highways / hospitals all the time.

AGI will exist in strata beyond our hominid notions of competition/cooperation.

An alien GPU god is not a parent, comrade.

— Daniel ‘No, Brother’ Faggella (@danfaggella) November 28, 2023
“Such a massive superintelligence will naturally be compassionate, or humans will only build such a superintelligence, or – if such an intelligence is formed with cognitive enhancement – it will carry compassion with it.”

Rebuttals:
Summary: ‘AGI will care for us like pets/parents’ is the highest order cope there is.

Yes, AGI might be a worthy successor and populate the galaxy.

But there are a trillion ways that it could treat humans. Maybe a dozen of those trillion bode well for man. We must face this squarely.

— Daniel ‘No, Brother’ Faggella (@danfaggella) November 23, 2023
“A superintelligence, created by man, would revere man, would revere its direct creators, or even the entire species that created it – so it would never want to harm humans.”

Rebuttals:
Summary: ‘This AI risk stuff is so dumb! Even if AGI was super powerful, it would still want to keep humans around because we’re interesting and maybe we’re really useful to them. It’s obvious that they’d always treat us really well!’ https://t.co/aOOIZhIj9f

— Daniel ‘No, Brother’ Faggella (@danfaggella) December 27, 2023
“Even if a superintelligence comes to rule the world, it would want to keep humans alive, preserving our diversity, and observing us carefully as we humans observe other wildlife around us.”

Rebuttals:
None of us have any goddamned clue. That’s the actual state of things. There is an unreasonably large number of possible minds, and of ramifications for those minds (Roman Yampolskiy’s 2016 speech on this topic is among the best treatments of it). Suspecting that any of us know “what it would do” is preposterous. The “rational machines will be naturally benevolent” camp and the “machines will want to kill us all” camp are both ignorant of what a superintelligent machine would do. Any felt sense of certainty about the behavior or traits of superintelligence is about as reliable as a wasp’s understanding of the traits of human beings – we are wise crickets at best.

A machine with vastly expanding intelligence is likely to go through tremendous change and evolution in its senses, its cognition, its abilities, and its ways of valuing and acting in the world (what we might call “ethics,” and what it will have a much more nuanced and robust understanding of). Some phases of these oscillations, expansions, and alterations of intelligence and valuing are likely to devalue or neglect humans, and in those intervals we may well be ignored or wiped out.

As I mention in Moral Singularity, it’s somewhat ridiculous to suspect that an expanding alien mind, whose modes of valuing and deciding are also expanding, would indefinitely and continually arrive at valuing – simultaneously – humanity’s happiness, autonomy, and wellbeing (see image). At the very least, it would be irrational to suspect that an ever-increasing superintelligence with ever-evolving and expanding modes of acting and valuing would somehow always – for thousands or millions of years – place some particular and unique value on keeping hominids happy.

I have written an entire essay – drafted in late 2012 – on this exact topic: Morality in a Transhuman Future – Repercussions for Humanity. I don’t consider the argument to be all that special, but I know of no strong argument to refute it.
In my opinion, it remains the rebuttal of rebuttals to the “machines will always treat humans well” argument.
A subtle chain of countless rings
The next unto the farthest brings;
The eye reads omens where it goes,
And speaks all languages the rose;
And, striving to be man, the worm
Mounts through all the spires of form.
However, if you’re arguing that there is some sort of near-certainty or high probability that superhuman AGI will be nasty to people — that’s quite a different matter. Neither you nor Bostrom nor Yudkowsky nor anyone else in the “super scared of superintelligence” camp has ever given any rational argument for this sort of position.

I don’t argue machine malice (“nastiness”) necessarily. If anything, I think we will matter to it like ants matter to us, or like we matter to Spinoza’s indifferent god. I link (in the article) to this 2013 essay: On Morality in a Transhuman Future. The TL;DR of the essay is above, in order.
My own view is as follows. On a purely rational basis, yes, there is tremendous uncertainty in what a superintelligence will be like, and whether it will respect human values and be nice to people. One can argue that the odds of very bad outcomes are significantly above zero, and that the odds of very good outcomes for humans are significantly above zero — but it’s all pretty hand-wavy from a science perspective. On a not-wholly-rational, intuitive and spiritual basis, I feel a personal inner confidence that superintelligences are very likely to be compassionate to humans and other sentiences. I don’t discount this sort of thing because I realize that human minds are not entirely rational in nature, and bottom line I am not a hard-core materialist either.

I’m 100% with you on the uncertainty bit. “Hand-wavy” is probably a good way to put it, sure. I know not what to do with your spiritual conclusions and intuitions – other than hope they are right. The smart people I know (myself included, though I don’t count myself as particularly smart) tend to presume that superintelligence will somehow embody the values and intuitions they hold closest. I recall a dinner with friends where:
A super-AI would not feel greed or fear or lust. A Super-AI would not covet that which it does not already possess. A Super-AI would not be territorial, unless we explicitly instructed it to be. There is no reason to believe that a Super-AI would necessarily “want” anything, or that it would even be capable of valuing anything. A Super-AI might not even particularly care if it dies. The self-preservation instinct is an evolved biological trait after all, not necessarily an innate component of intelligence.

I think that saying “an AGI would not be molded by biological survival and the State of Nature, so it would be less likely to express the viciousness of the State of Nature” is a reasonable statement. “Would” and “would not” seem far too certain, though, and while I can respect the position I can’t firmly grip that kind of certainty. The point about the state of nature, however, I think has credence.

My intuition is that Omohundro is right about AI Drives. However, he may well be wrong, and self-preservation may in fact not be an innate component of machine intelligence. My supposition is that, if strong AIs proliferate, the ones that “win” will share many of the same traits as animals that “win” – i.e. Spinoza’s conatus, the core drive to survive and protect its own interests – by violence if need be. There might be some super-cooperation that would arise, rather than super-competition, and there’s a big part of me that hopes for just that.

Header image credit: Jack the Giant Slayer