A jaw-dropping exploration of everything that goes wrong when we build AI systems—and the movement to fix them.
Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill.
When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel.