AI Risk

AI Risk is a broad term referring to concerns about the ways that artificial intelligence can harm humans, society, or even civilization.

As with all powerful technologies, Artificial Intelligence carries with it risks of misuse and abuse.

How best to manage these risks is controversial and hotly debated. A few aspects of AI make assessing the risks difficult.

  1. AI is new and unpredictable
  2. AI is, or may soon be, extremely powerful, and thus the stakes of AI are higher than for other technologies.

Introduction to How to Think About AI Risks

Lots of smart people are engaged in heated debates about the risks of Artificial Intelligence. With this page, I'm going to try to provide a summary of the various issues and the various positions.

I am also going to try, as much as possible, to lay out the various positions as clearly and faithfully as I can.

Rule 1: Be humble

If I could offer one piece of advice at the outset, it's to proceed with humility. There is a huge amount we don't know and a lot to learn.

Writing in an earlier age about an earlier risky technology, nuclear weapons, John von Neumann warned us to be very cautious with predictions (aka guesses):

All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary "first guesses" concerning them are wrong. For all these reasons, one should take neither present difficulties nor presently proposed reforms too seriously.

John von Neumann, Can We Survive Technology? (1955) (emphasis added)

The Three-Sided AI Debate

The heated public debate over AI is best understood as involving three broad factions:

  1. The “AI Doomers,” aka pessimists;
  2. The “AI Boosters,” aka optimists; and
  3. The “AI Realists,” aka pragmatists.
  • 💥 🌏 Doomers are most worried about the end of the world: They argue that AI poses a threat to humanity, to civilization, and to the whole planet. They think AI will become more powerful and start acting in its own interests, not ours. They admit this might sound like sci-fi, but they argue that AI is unlike any tech we’ve ever seen, with risks we’ve never faced before.
  • 🚀 💡 Boosters are most worried about missing out: They see AI as a miracle tech that could solve many of our biggest problems, such as curing Alzheimer’s and cancer and stopping climate change, and in doing so save and enrich literally billions of lives. They believe that, like any tech, it has risks, but they believe the biggest risks are measured in lives lost through dithering and years of delay in solving critical problems like climate change and global malnutrition.
  • ⚖️ 🛠️ Realists are most worried about AI and power: They think too much of the discussion of AI is focused on science-fiction scenarios, and that this is a distraction from the very real, practical current harms from AI, such as using it for surveillance, for spreading disinformation, and for invading privacy. They argue that, as with any powerful tech, AI can be used by a few at the expense of the many. In fact, the realists argue, AI is already being used by a few to grab power, and we should be focused on these real-world problems rather than hypotheticals.

Much of the debate about AI risk is over which types of risk are most important. The debaters can be put into three groups:

  1. Those who believe that the biggest risk is that we are going to miss the amazing benefits of AI
    • Marc Andreessen
    • Yann LeCun
  2. Those who believe that the biggest risk is from other human beings using AI against us
    • Melanie Mitchell
    • Timnit Gebru
  3. Those who believe that the biggest risk is that AI acts on its own against us
    • Nick Bostrom
    • Yoshua Bengio
    • Max Tegmark
[Image: a three-sided chess board]

For our earlier coverage of this debate, see Unpacking All Three Sides of the AI Debate.


The Debate Over "Existential Risk" of AI

Note: This section is a work in progress. My aim is to turn it into a fair and complete summary of a highly contentious issue. Your feedback is much appreciated.

Existential Risk of Artificial Intelligence, aka the "doomsday scenario," refers to the risk that AI will lead to (1) human extinction or (2) a permanent and drastic curtailment of human flourishing.

Whether, and to what extent, we should be worried about this possibility is the subject of heated debate.

This article is a first step at assessing the arguments, pro and con, regarding existential risk.

The debate raises multiple issues:

  1. What is the probability that, in the near term (say, the next 5 to 10 years), AI will approach or exceed human capabilities?
  2. What is the probability that an autonomous AI will pursue goals that are harmful to humans?
  3. What is the probability that, if an autonomous AI does pursue goals harmful to humans, humans will not be able to fight back effectively?

What is the probability that AI will approach or exceed human capabilities?

[TBD]

Will an Autonomous AI pursue "anti-human" Goals?

An evolving list of the "anti-doom" arguments that AI entities will not -- in the near term -- pursue goals that are contrary to human flourishing:

  1. Effective AI will need to have something corresponding to human emotions
  2. Intelligence is intrinsically good, and therefore if an AI develops superhuman intelligence, it will by definition, not be evil
  3. We will develop true AI iteratively, and so humans will be able to address any potential problems long before a doomsday scenario occurs

Full disclosure: I find most of these "anti-doom" arguments less than compelling. I am starting this effort by summarizing them. I am attempting to follow Rapoport's Rules, as set forth by philosopher Daniel Dennett:

  1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
  2. List any points of agreement (especially if they are not matters of general or widespread agreement).
  3. Mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

AI will Require Emotions

Note: this is currently in outline form.

  • we won't be able to get super AI without emotions
  • we can hardwire subservient emotions into them

Iterative development will protect us

  • consider the analogy to jet engines: Right after the Wright Brothers flew, could we have imagined the safety issues of a jet flying faster than the speed of sound? Of course not, and trying to address them then would have been premature.
  • we are not going to get to AI in one jump. If we see that it's dangerous, we won't build it.

Examples of AI Risk

The Waluigi Effect refers to a phenomenon in which, after an AI model is trained to satisfy a desirable property P, it becomes easier to elicit the chatbot into satisfying the exact opposite of property P.

Cleo Nardo, The Waluigi Effect (mega-post)

Overview of AI Risk

A helpful summary of the history of AI risk, going all the way back to the Industrial Revolution:

Luke Muehlhauser, AI Risk and Opportunity: Humanity's Efforts So Far; summary here: History of AI Risk Thought

Organizations focused on AI Risk

The Machine Intelligence Research Institute

Managing AI Risk

Main article: Managing AI Risk

Approaches to training AI to be safe