Researchers recently found that artificial intelligence (AI) could find and exploit weaknesses in human decision-making and guide people toward certain decisions. Experts say the discovery is a sign of the growing influence of algorithms on human behavior.  “People who are heavy users of online digital platforms are at a greater risk of being influenced, compared to the average person, by virtue of providing the behind-the-scenes, data-hungry AI algorithms with more information about their habits and patterns of behavior,” Soheila Borhani, a physician and scientist at the University of Illinois at Chicago, who was not involved in the research, told Lifewire in an email interview. 

Watching and Learning

Scientists ran three experiments in the recently published research in which participants played games against a computer. As the machine gathered data on participants’ responses, it identified and targeted vulnerabilities in their decision-making to steer them toward particular actions or goals. Amir Dezfouli, a neuroscientist and machine learning expert who spearheaded the research, said in a news release that the findings highlight the potential power of AI and underscore the need for proper governance to prevent misuse. “AI and machine learning offer significant benefits across many areas, including health,” he said. “Ultimately, how responsibly we set up these technologies will determine if they will be used for good outcomes for society, or manipulated for gain.”
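The published paper describes the researchers’ models in detail; the snippet below is only a rough, hypothetical sketch of the general idea, written in Python. A toy adversary plays a two-choice reward game against a simulated player, learns how the player reacts to winning and losing, and then hands out rewards so the player drifts toward a target option. The `SteeringAdversary` class and `simulated_player` function are invented for illustration and are not the study’s actual algorithm, which relied on far more sophisticated machine learning.

```python
import random
from collections import defaultdict


class SteeringAdversary:
    """Toy adversary for a two-choice reward game (options "A" and "B").

    It estimates how likely the player is to repeat a choice after a win or
    a loss, then assigns rewards so the player drifts toward a target option.
    Purely illustrative -- not the model used in the published study.
    """

    def __init__(self, target="A"):
        self.target = target
        # (previous choice, was rewarded) -> [times choice was repeated, total observations]
        self.stats = defaultdict(lambda: [0, 0])

    def update(self, prev_choice, was_rewarded, next_choice):
        """Record whether the player stuck with their previous choice."""
        record = self.stats[(prev_choice, was_rewarded)]
        record[0] += next_choice == prev_choice
        record[1] += 1

    def repeat_probability(self, choice, rewarded):
        repeated, total = self.stats[(choice, rewarded)]
        return repeated / total if total else 0.5

    def assign_reward(self, choice):
        """Reward the current choice only if that nudges the player toward the target."""
        stay_if_rewarded = self.repeat_probability(choice, True)
        stay_if_unrewarded = self.repeat_probability(choice, False)
        if choice == self.target:
            return stay_if_rewarded >= stay_if_unrewarded
        return stay_if_rewarded < stay_if_unrewarded


def simulated_player(prev_choice, was_rewarded):
    """A player with a common bias: mostly stay after a win, often switch after a loss."""
    if prev_choice is None:
        return random.choice("AB")
    stay_probability = 0.8 if was_rewarded else 0.3
    if random.random() < stay_probability:
        return prev_choice
    return "B" if prev_choice == "A" else "A"


adversary = SteeringAdversary(target="A")
prev_choice, was_rewarded = None, False
target_picks = 0
for _ in range(500):
    choice = simulated_player(prev_choice, was_rewarded)
    if prev_choice is not None:
        adversary.update(prev_choice, was_rewarded, choice)
    was_rewarded = adversary.assign_reward(choice)
    target_picks += choice == adversary.target
    prev_choice = choice

print(f"Simulated player picked the target option on {target_picks} of 500 trials.")
```

Because the simulated player tends to stay after wins and switch after losses, the adversary learns to reward the target option and withhold rewards from the alternative, pulling the player’s choices toward the target well above chance.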

Not Just Theory

While the recent paper highlighted AI’s ability to influence decisions, some experts say that computers are already doing just that. Anyone who goes online is subject to AI’s pervasive power, Josephine Yam, an AI lawyer and ethicist, told Lifewire in an email interview.

“AI is the Fourth Industrial Revolution,” Yam said. “Its growing ability to make autonomous decisions faster, better, and cheaper than humans is impacting our lives profoundly. Driver-assist features make our cars safer. Computer vision makes diagnosing diseases more accurate. Machine translation enables us to communicate across oceans despite language barriers.”

Because AI has woven itself into most aspects of our lives, it influences our day-to-day decisions, whether we’re aware of it or not, Yam said. It serves online ads and news feeds based on our prior clicks. It recommends music, movies, and gift ideas based on our past listening, viewing, and shopping behaviors.

“AI is the world’s greatest prediction machine,” Yam added. “Because large volumes of historical data are used to train algorithms, the AI system’s machine learning capabilities detect nuanced patterns in our personal data to make very accurate recommendations about us.” A deliberately stripped-down sketch of what that kind of pattern detection can look like appears at the end of this article.

But Theresa Kushner, an AI expert at NTT DATA Services, disputed the idea that AI is currently influencing online decisions. “You could say that AI is helping to inform decisions,” Kushner told Lifewire in an email interview.

“But influence is a specific capability to have an effect on the character, development, or behavior of someone or something,” Kushner added. “Your Google feed is a good example of AI working today. Are you buying more furniture lately because Google knows you’ve been looking at sofas?”

To prevent AI from hijacking their decisions, Yam said, users should remember there is no such thing as online privacy anymore.

“People are leaving digital footprints of their identities or personal data wherever they go. AI algorithms record, compile, and mine all their online personal data,” Yam added. “These algorithms collect thousands of personal data points about a user to make predictions about that user’s most likely behavior.”

Whether or not AI is currently influencing human decisions, observers are calling for more industry regulation. The proposed EU Artificial Intelligence Act, for example, would be the first regulatory framework of its kind, making humans responsible for the harmful impacts of their AI, Yam said.

“The only realistic way to prevent this type of AI misuse by big tech companies is to require them, either through public pressure or legislation, to make their reinforcement learning algorithms available to public scrutiny,” Borhani said. “This is not a big ask, considering that these companies still get to hold on to their users’ data, without which the algorithms by themselves are of no practical use.”
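Yam’s description of AI as a prediction machine comes down to detecting patterns in behavioral data. As a deliberately tiny, hypothetical stand-in for the large-scale recommendation models she describes, the Python sketch below guesses a user’s next interest from nothing more than a recency-weighted count of past clicks; the function name, the categories, and the half-life parameter are all invented for illustration.

```python
from collections import Counter


def predict_next_interest(click_log, half_life=5):
    """Guess which category a user will most likely engage with next.

    `click_log` is a list of category strings, oldest click first. Each click
    gets an exponentially decaying weight so recent behavior counts more than
    old behavior. A deliberately tiny stand-in for real recommender systems.
    """
    scores = Counter()
    for age, category in enumerate(reversed(click_log)):
        # Weight halves every `half_life` clicks back in time.
        scores[category] += 0.5 ** (age / half_life)
    return scores.most_common(1)[0][0] if scores else None


# A user whose browsing has recently shifted from music toward furniture.
history = ["music"] * 30 + ["news"] * 5 + ["sofas"] * 8
print(predict_next_interest(history))  # prints "sofas" despite far more music clicks overall
```

Real systems combine thousands of such signals per user rather than a single click count, which is what gives them the accuracy Yam describes.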