Experts warn against “voting” for “deepfakes”

The 2024 elections will be the first held in the era of artificial intelligence (AI). With the spread of so-called clones, or “deepfakes”, in which AI copies a person’s appearance and speech, experts warn that voters risk casting ballots based on misinformation they hear from “fake candidates”.

The 2024 elections are the first to take place in the era of artificial intelligence

The challenges of AI abuse could affect 83 elections in 78 countries worldwide, involving more than 4 billion people in total.

These figures on an “explosive year for democracies” come from The New York Times, which published a “global election calendar” based on data collected by the company Anchor Change.

The evolution of voter influence: from propaganda, bots and a comedy series to deepfakes

Where influence on voters was once exerted through classic propaganda and disinformation, and later through social networks, bot farms and even a comedy series, as in Ukraine’s presidential election, the world now fears “interference” by artificial intelligence.

In the US, unscrupulous uses of AI began as early as the primaries

In New Hampshire, for instance, voters received calls from a bot imitating Joe Biden’s voice.

The fake message urged people not to vote in the local primaries. The state attorney general suggested the audio had been generated with artificial intelligence, NBC News reported.

Experts warn that such audio recordings, like the recently popular deepfakes, could spread during the election campaign and even influence voting results.

Republican Ron DeSantis has already dropped out of the race, but early in the campaign he was noted for circulating three images of his then fellow-party rival, ex-president Donald Trump, which forensic experts assessed as “almost certainly deepfakes created with the help of artificial intelligence,” the NYT writes.

The U.S. Democratic Party, in turn, has tested AI for writing first drafts of some fundraising messages, the newspaper reported, noting that such appeals are often more effective than those written by humans. Still, the rapid advance of artificial intelligence in politics is already blurring the line between fact and fake, the paper emphasizes.

Several US states have already drafted bills to regulate AI-generated political content to prevent voter fraud.

And last month the European Union passed the world’s first comprehensive law regulating artificial intelligence, which could set a global standard for classifying AI-related risks, ensuring transparency in the field and defining financial penalties for offending companies.

Before that, US President Joe Biden issued an executive order on “safe, secure and trustworthy artificial intelligence”.

Western voters are more vulnerable than… Ukrainians

At the same time, residents of Western countries could learn resistance to misinformation from Ukrainians, Olga Tokaryuk, a researcher at the Chatham House Ukraine Forum, told Voice of America.

The experience of confronting Russian disinformation has made Ukrainians more informed and cautious consumers of information, the expert says. Years of war, the fight against propaganda and educational fact-checking campaigns have also helped.

“Ukrainians already know well that Russia may try to interfere in our information space; it spreads propaganda and disinformation. We cannot generalize, but a significant share of Ukrainians has clearly developed this resistance to informational threats. The same cannot always be said of residents of many Western countries, who, firstly, are used to trusting information more and, secondly, live in societies where the overall level of trust is higher.

They simply do not live in a situation of constant information threats, as Ukrainians have today and for at least the last 10 years,” Tokaryuk explains.

According to her, the Ukrainian experience, “lessons from Ukraine”, is today a popular topic of analytical reports and presentations at international conferences.

AI can detect propaganda and other abuses in elections – expert

“Artificial intelligence is already widely used in communication campaigns. Its use as such is neither unequivocally good nor bad; it is simply already happening,” the expert emphasizes. “The question is rather who uses it and for what.”

The risk during an election period, Tokaryuk says, is that certain organizations or even states may use artificial intelligence in their own influence operations or disinformation campaigns, including to sway the course of elections.

Excessive demonization of artificial intelligence and the fear that “humanity is under threat” should be avoided

But artificial intelligence can also be used to counter such malicious intent, the expert emphasizes:

“In other words, artificial intelligence is only a tool. And then it depends on whose hands it will fall into and how it will be used.”

Therefore, it is necessary to avoid excessive demonization of artificial intelligence and the fear that “this tool will radically change everything” or that “humanity is under threat”, urges Olga Tokaryuk.

AI is a weapon

At the same time, the expert notes that Russia, for example, has included artificial intelligence and cyberattacks among the weapons with which it wages war, on a par with traditional military means.

And not only against Ukraine: these tools also serve, for example, to erode the unity of Western countries in their intention to help Ukraine.

“As we can see, there have already been numerous attempts in Ukraine to inject audio fakes into the information space, from the first rather crude attempts to quite high-quality ones,” says Tokaryuk. All this has taught Ukrainians to stay alert and verify information.

However, the expert does not agree with colleagues who predict a total era of mistrust:

“I consider it harmful to advise not to trust any information in principle. It’s just a matter of critically perceiving what we hear and see, thinking about how likely it is that the president or some other politician will personally call me, as well as being aware of the latest technological innovations and their possibilities in general with regard to potential disinformation.”

An experiment with a deepfake: an expert was “cloned” right during the discussion

The dangers of AI for the electoral process were the subject of a panel discussion at the Brennan Center for Justice, an American legal institute.

Experts identified several levels of potential danger. First is the “impersonation threat”: the use of deepfakes or chatbots like ChatGPT to write texts.

In addition, “phishing attacks” remain common: an employee, for example, receives an email purportedly from a manager asking for a password, other confidential information, or access to certain files.

The work of officials or election commission members can also be hampered by floods of artificially generated appeals or emails. Cyberattacks, actively used recently by countries such as Russia and China, have even worse consequences.

But what experts fear most is that voters will make their choice based on false information, believing that they heard it from a reliable source or even firsthand.

By the way, right during the expert discussion the organizers invited participants to listen to a “secret guest”: the surprise was a “performance” by a deepfake of one of them. The AI-generated video not only showed the expert’s face and spoke in his voice; the AI even generated an answer to a question the real expert had been asked earlier in the discussion.

To do this, the organizers of the experiment used the expert’s previous speeches and written materials, everything that had been published in the public domain.

The voice was then “cloned” from his earlier interviews, and lip movements were synchronized with the generated audio.

“Deepfakes and other artificial intelligence methods to create fake but realistic footage can mislead the public and undermine trust in democracy,” the generated clone of the expert “said”.

“I don’t think I’m quite ready to be replaced [by artificial intelligence] just yet. But it actually looks very impressive,” Larry Norden, senior director of the Brennan Center’s Elections and Government Program, commented on the experiment.

He also warned that the use of AI could affect people’s security and basic rights, as well as democratic norms, which could also suffer if AI regulation becomes overly stringent.

Norden believes that some AI capabilities will be recognized as an “unacceptable risk” and banned, such as facial recognition in public spaces, which democratic countries consider a serious encroachment on civil rights and democratic freedoms.

He also suggests that in today’s reality it will be more important to be able to confirm the authenticity of a given video or audio recording than to debunk AI-generated fakes.
