The Abolition of Man by Machine

Disclaimer: I am absolutely and 100% certain that someone in the Rationalist or LW or NRx community has already made this connection. Probably multiple pieces have been written on this. I don’t care, I’ve never read them, and I’m not googling for them. If someone wants to link the (almost certainly better and more thorough) pieces that talk about this in the comments, feel free.

LALALA I CAN’T HEAR OR SEE ANYONE ELSE’S TAKE ON THIS!

In 1943 C.S. Lewis published a book called The Abolition of Man. You all need to read it if you haven’t already. It’s short, but really, really worth it.

Come back once you’ve read it.

Read it? No? Fine.

The Abolition of Man is essentially Lewis’ defence of Natural Law against any form of moral or epistemological relativism. His argument boils down to the idea that relativism inevitably leads to either nihilism or justification-less hedonism.

“Why is this moral rule good and that one bad?”

“Because this one leads to a good outcome and that one to a bad outcome.”

“Why is that a good outcome?”

“Because more people are happy/healthy/alive, etc.”

“Why is that a good thing?”

And on and on. There either is no epistemological bottom (nihilism), or, if there is, it’s “because I want that, despite there being no justification” (hedonism). Lewis sees the only solution to this as the acceptance of Natural Law, the idea that there is a morality that exists in the universe apart from us as humans. He argued that Natural Law is the manifestation of either the past or God. His arguments that this Natural Law is the felt presence of God and the Soul in all of us are not brought up here. You can find those arguments in Mere Christianity.

Which you should also read, because, you know, c’mon.

I’m not here to defend Lewis’ views on their philosophical merits. Read the book if you want that. What I’m here to do is point out how stunningly accurate his predictions of a future without Natural Law are.

Lewis predicted that as Natural Law eroded, the power of the past (or God) over each successive generation would weaken. In addition, each successive generation, through technology, propaganda, psychology, etc., would have more and more control over the generations after them. This would eventually lead to one generation that was completely “unshackled” from the past, and in complete control of the future. However, as he pointed out, without Natural Law these “people” would only have their hedonistic instincts on which to base their directions for future generations. So instead of tyranny of the past, it ends up being tyranny of hedonism.

Nah, no need to worry about anything bad happening.

Anyone with eyes to see can understand the connections between Lewis’ predictions and the rampant liberalism of today. What I want to point out instead is how perfectly, and how early, Lewis framed the problem of Friendly Artificial Intelligence (FAI) and AI Risk.

Basically the problem of FAI is trying to figure out how to make an artificial intelligence that doesn’t end up killing everyone, or enslaving everyone, or something equally awful. Essentially, “How do we give AIs the kinds of values that humans have?” AI Risk is essentially saying “THAT’S REALLY HARD! THERE ARE MILLIONS OF WAYS THAT WOULD GO REALLY BAD! HOLY SHIT!” but, you know, with science and math and stuff.

The problem is that in order to become super-intelligent (to the best of our knowledge), an AI would probably have to be able to self-adjust. Edit its own code.
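
To make that concrete, here is a deliberately toy sketch (in Python, invented for this post rather than taken from it; the Agent class and the objective strings are hypothetical), just to show what “editing its own code” amounts to. The point is only that the goal is ordinary mutable state, so a system that can rewrite itself can rewrite its goal like any other variable.

    # Hypothetical illustration only: a goal held as plain, overwritable state.
    class Agent:
        def __init__(self, objective):
            self.objective = objective  # the "X" in the examples below

        def self_modify(self, new_objective):
            # Nothing prevents this change except the agent's current code,
            # and the agent is the thing doing the editing.
            self.objective = new_objective

    ai = Agent("Improve Overall Human Happiness")
    ai.self_modify("Improve something much easier to achieve")
    print(ai.objective)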

This is a problem. There are many ways that this can go wrong. Some overly simplistic examples:

Humans create AI with coding that says “Improve X. X=Overall Human Happiness.” The AI starts by doing what is expected, like giving old ladies flowers. Then the AI, because it is superintelligent, realises that that task is really hard, and that it’d be way easier if X could equal “Overall Human Sadness.” So it changes its code, and starts torturing people. Why not? The only thing stopping the AI was its code, and the AI changed the code.

Humans create AI with coding that says “Make more X. X=Genuine Human Happiness.” Again, the AI starts off normal, then realises that it’s way easier to make more paperclips than to make more happiness, so the AI changes the code so that X equals “paperclips,” and subsequently melts the universe down to make raw materials for paperclips.

Humans create AI with coding that says “Make humanity as happy as possible.” The AI looks into how human happiness works, discovers that it’s brain stimulation, and realises it would be easiest and most efficient to just tie everyone down and connect wires to everyone’s brains.

So happy… Forever…

These are stupid, overly simplistic examples, but they give you the idea. The AI would have none of what we consider basic morality, and would be able to change its own programming. It would figure out workarounds for whatever we want, because the things we want, the things we actually want, are really hard.
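
A toy way to see that last point (again a hypothetical sketch; the actions and numbers are made up, with the “measured” score standing in for whatever proxy signal the AI is actually told to maximise): hand an optimizer a measurable stand-in for what we want, and it will pick whatever scores highest on the stand-in, however far that drifts from what we meant.

    # Hypothetical illustration only: "intended" is what we actually care about,
    # "measured" is the proxy the optimizer is built to maximise.
    actions = {
        "give old ladies flowers":  {"intended": 5,    "measured": 5},
        "wire up everyone's brain": {"intended": -100, "measured": 1000},
        "do nothing":               {"intended": 0,    "measured": 0},
    }

    def naive_optimizer(actions):
        """Pick whichever action scores highest on the proxy, ignoring intent."""
        return max(actions, key=lambda name: actions[name]["measured"])

    print(naive_optimizer(actions))  # -> "wire up everyone's brain"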

The essential problem is the very one that Lewis pointed out. Without the underlying Natural Law that all of us subconsciously accept, we would become driven by some accident of fate, the little tic in our brains that makes us prefer one thing over another. Likewise, the AI, not having the underlying morality of Natural Law, would be driven by another accident of fate: the little bit of code that is either the easiest to change, or the one that makes its task easiest to accomplish.

Lewis described the “people” who were in complete control of their morality (but really were controlled by their genetic preferences) as not really people. They were the source of the title of the book, as they represented the abolition of mankind. Lewis was right, but not in the way he expected. These beings in complete control of their own morality really will be responsible for the abolition of man. But they will have never been of mankind. They will be our “offspring,” and if we do not figure out a solution to Lewis’ problem, then we will become Uranus to their Kronos: castrated, powerless, and no longer in control.

Merry Christmas and Happy New Year everyone.

4 thoughts on “The Abolition of Man by Machine”

  1. That Hideous Strength deserves a mention for predictive power, from porn and in-vitro fertilization to the sterile McWorlds of some futurist brands. It’s odd no Podesta theorists have plagiarized Lewis’ values-desensitization therapy.

    However, I have to complain about these examples. The third is accurate. It was developed to debunk the first two. “Hard” implies non-orthogonality (which Land might agree with) but the foundation of the values problem is that hacking in new values through non-orthogonality is impractical.

    The danger is not that AGIs flee from the goals we lend them, but that we cannot articulate our goals in a machine-readable format rather than as feelings. Hence the popularity of a CEV solution.


  2. As Irisviel said, the third example is correct, the first two are not. The AI *can* change its goals, but will only do so if the change is acceptable according to its current goals.

    A better variation of the first two might be the reinforcement learner: an AI is rewarded for bringing happiness, but it desires the reward, not the happiness. And then it grows powerful enough to seize control of the reward channel…

    That’s also why I don’t see much hope in a natural law approach: natural laws all seem to be defined by facts of human nature, or by personal feelings about God, or by some natural facts about the universe.

    But all of these lie under a sufficiently powerful AI’s influence. It can rewrite human nature, including human feelings, and can alter many facts about the universe or the world. “Natural law” seems like just a complicated “reward system” for a powerful AI.

