When it comes to life-or-death matters, people just don’t want to trust a purely AI-driven approach. Saying that the AI was developed by the government only makes the trust problem worse.
John Breeden II filed this report in Nextgov:
These days, the old “Trust us, we’re with the government” slogan is probably more of a meme than a way to ensure instant cooperation. People tend to question everything, and perhaps rightfully so. And when it’s not even a person, but a government-created artificial intelligence asking for their trust, that adds yet another potential barrier to the relationship.
That could be a big problem, especially for the military, as it strives to create the so-called third wave of AI. Third-wave AIs are supposed to be able to react to changing situations and reason toward the best solutions, supposedly faster and more accurately than a human. That is one of the main goals of the Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program, which is in turn part of a massive $2 billion investment the Defense Advanced Research Projects Agency is making in artificial intelligence. Called AI Next, that program may eventually field over 60 projects aimed at improving the intelligence and reliability of AI systems.
But none of that will do much good if people simply ignore the advice of their machine helpers, or even rebel against them. While attitudes could change over time, for now humans overall don’t seem willing to place their trust in AI, even when given assurances that doing so would be in their best interest. Just because someone says, “You can trust this AI. It was made in Washington and it’s here to help,” doesn’t mean people will believe it.