Disinformation is expected to be one of the biggest cyber risks in the 2024 election.
The vote comes as the country faces a range of issues, including a cost of living crisis and bitter divisions over immigration and asylum.
“We expect the majority of cybersecurity risks to emerge in the months leading up to Election Day, as most British citizens will vote at a polling station on the day itself,” Todd McKinnon, CEO of identity security company Okta, told CNBC via email.
It wouldn't be the first time.
In 2016, both the US presidential election and the UK's vote to leave the EU were found to have been disrupted by disinformation shared on social media platforms and attributed to Russian state-linked groups, though the Russian government has denied these claims.
Cyber experts say state actors have since carried out routine attacks in various countries aimed at manipulating election results.
Meanwhile, Britain last week alleged that the Chinese state-linked hacking group APT31 attempted to access the email accounts of British MPs, but said the attempt was unsuccessful. London has imposed sanctions on Chinese individuals and a Wuhan-based technology company believed to be linked to APT31.
The United States, Australia and New Zealand also imposed their own sanctions. China has denied allegations of state-sponsored hacking, calling them “baseless.”
Cybersecurity experts expect malicious actors to interfere in the upcoming election in a variety of ways, not least through disinformation, which is expected to get even worse this year due to the proliferation of artificial intelligence.
Synthetic images, video, and audio generated using computer graphics, simulation techniques, and AI, commonly referred to as “deepfakes,” will become commonplace as it gets easier for people to create them, experts say.
“Nation-state actors and cybercriminals are likely to use AI-powered identity-based attacks, including phishing, social engineering, ransomware, and supply chain compromises, to target politicians, campaign staff, and election institutions,” Okta's McKinnon said.
“We will also certainly see an influx of AI and bot-driven content generated by threat actors to push misinformation on an even larger scale than we have seen in previous election cycles.”
The cybersecurity community is calling for increased awareness of this type of AI-generated misinformation, as well as international cooperation to mitigate the risk of such malicious activity.
Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike, said AI-powered disinformation is the biggest risk to the 2024 elections.
“Right now, generative AI can do both harm and good, and we're seeing more and more adoption of both applications every day,” Meyers told CNBC.
China, Russia, and Iran are highly likely to use tools such as generative AI to conduct misinformation and disinformation operations against various global elections, according to CrowdStrike's latest annual threat report.
“This democratic process is extremely fragile,” Meyers told CNBC. “When you start to think about how adversary states like Russia, China, and Iran can leverage generative AI and some of these newer technologies to craft messages and use deepfakes to create stories and narratives that are compelling for people to accept, it's very dangerous, especially when people already have this kind of confirmation bias.”
A key issue is that AI is lowering the barrier to entry for criminals looking to exploit people online. This has already happened in the form of fraudulent emails created with easily accessible AI tools like ChatGPT.
Hackers are also developing more sophisticated, personalized attacks by training AI models on data available on social media, said Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.
“You can train those voice AI models very easily … through exposure to social [media],” Holmes told CNBC in an interview. “It's [about] getting that emotional level of engagement and really coming up with something creative.”
In the run-up to the election, a fake AI-generated audio clip of Keir Starmer, leader of the opposition Labour Party, verbally abusing party staffers was posted on the social media platform X in October 2023. The post racked up 1.5 million views, according to the fact-checking charity Full Fact.
This is just one example of a number of deepfakes that have cybersecurity experts worried about what the future holds as the UK prepares for an election later this year.
But deepfake technology is becoming more sophisticated, and for many tech companies, the race to beat it is now about fighting fire with fire.
“Deepfakes have gone from being theoretical to being operational today,” Onfido CEO Mike Tuchen said in an interview with CNBC last year.
“Right now, it’s a cat-and-mouse game of ‘AI vs. AI.’ Using AI to detect deepfakes and reduce their impact on customers is the big battle right now.”
Cyber experts say it's becoming increasingly difficult to tell what's real, but there may be some signs that content has been digitally manipulated.
AI uses prompts to generate text, images, and video, but it doesn't always get them right. For example, in an AI-generated video of a dinner, a spoon might suddenly disappear; that is a telltale sign of an AI flaw.
Okta's McKinnon added: “While we will no doubt see more deepfakes throughout the election process, a simple step we can all take is to verify the authenticity of something before sharing it.”