Melanie Mitchell, professor of Computer Science at Portland State University, isn’t worried that AI (artificial intelligence) systems have become too smart. Instead, she worries that AI systems remain so limited even as people abdicate decision-making to them. “A.I. programs that lack common sense and other key aspects of human understanding are increasingly being deployed for real-world applications,” she explains.

Mitchell elaborates on this point in the following excerpt from her New York Times essay:

As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.

[…]

While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations. As the A.I. researcher Pedro Domingos noted in his book “The Master Algorithm,” “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

The race to commercialize A.I. has put enormous pressure on researchers to produce systems that work “well enough” on narrow tasks. But ultimately, the goal of developing trustworthy A.I. will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world. Unlocking A.I.’s barrier of meaning is likely to require a step backward for the field, away from ever bigger networks and data collections, and back to the field’s roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.