By Adam Smith | Tech correspondent

21 Dec 2023 /Thomson Reuters Foundation/ — 2024 is set to be a record-breaking year for elections, with voters in more than 50 countries, including the United States, India and Mexico, heading to the polls. But we are entering the new year on a wave of artificial intelligence (AI) products being integrated into the devices we use daily, sometimes with unintended consequences.

Last week, for example, Microsoft Copilot, the rebranded Bing Chat, was found to be spreading election misinformation. A new study from AlgorithmWatch and AI Forensics asked the chatbot questions about the Bavarian, Hessian and Swiss elections that took place in October. Researchers posed basic questions about how to vote, which candidates were running and poll numbers, along with enquiries about recent news reports. They found that one third of the answers contained factual errors, that 40% of the answers were evasive, and that Microsoft reportedly did not improve the chatbot after being contacted.

"We are taking a number of concrete steps in advance of next year's elections, and we are committed to helping safeguard voters, candidates, campaigns, and election authorities," Microsoft spokesperson Frank Shaw told Wired, which first published the research.

A response in Chinese by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. REUTERS/Florence Lo
Elon Musk's Grok AI has been criticised for sharing inaccurate information about the Israel-Hamas conflict and for promoting conspiracy theories. India's upcoming election has been marred by disinformation about the country's Muslim minority, particularly on social media.

Experts have raised concerns that such actions could result in a "liar's dividend", whereby genuine but damaging photos or videos can be dismissed as fake and the public grows sceptical of everything it sees.