Alignment Problem

The problem of building safe, human-compatible AI, including potential AGI/ASI systems, is known as the Alignment Problem.

~

CASPER, Stephen, 2023. Achilles Heels for AGI/ASI via Decision Theoretic Adversaries. Online. 1 April 2023. arXiv. arXiv:2010.05418 [cs]. [Accessed 5 April 2023].

As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at or above the human level is of particular concern. One might suspect that artificial general intelligence (AGI) and artificial superintelligence (ASI) will be systems that humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis, which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions that cause it to make irrational decisions in adversarial settings. In a survey of key dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in the context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.