Getting AI Right: Introductory Notes on AI & Society

By J. Manyika.

On the Opportunities and Risks of Foundation Models
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities, including emergent properties, to their applications.
The brain is a computer is a brain: neuroscience's internal debate and the social significance of the Computational Metaphor
This essay invites the neuroscience community to consider the social implications of the field's most controversial metaphor, the Computational Metaphor, asking whom it helps and whom it harms.
Reward is enough
Highly accurate protein structure prediction for the human proteome
This work dramatically expands structural coverage by applying the state-of-the-art machine learning method AlphaFold at scale to almost the entire human proteome, covering 58% of residues with a confident prediction, of which a subset has very high confidence.
Human Compatible: Artificial Intelligence and the Problem of Control
Praised as "the most important book I have read in quite some time" (Daniel Kahneman), "a must-read" (Max Tegmark), and "the book we've all been waiting for" (Sam Harris); longlisted for the 2019 Financial Times and McKinsey Business Book of the Year Award.
Open Problems in Cooperative AI
This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms into Cooperative AI, a research agenda that bets on the productivity of conversations spanning these and other areas.
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
It is found that pretrained LMs can degenerate into toxic text even from seemingly innocuous prompts; an empirical assessment of several controllable generation methods finds that while data- or compute-intensive methods are more effective at steering away from toxicity than simpler solutions, no current method is failsafe against neural toxic degeneration.
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
It is argued that a system trained only on form has a priori no way to learn meaning, and a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.
Artificial Intelligence, Values and Alignment
This paper examines philosophical questions that arise in the context of AI alignment and defends three propositions, including that the central challenge for theorists is not to identify 'true' moral principles for AI, but rather to identify fair principles for alignment that receive reflective endorsement despite widespread variation in people's moral beliefs.