Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, an ad for Democratic US presidential candidate John Kerry claimed that his Republican opponent, George W. Bush, had said sending jobs overseas "makes sense" for America.
Bush never said such a thing.
The next day, Bush responded with an ad claiming Kerry "supported higher taxes over 350 times." This, too, was a false claim.
These days, the internet has become a wild frontier of deceptive political ads: many masquerade as polls or carry misleading clickbait headlines.
Campaign fundraising pitches are also rife with deception.
An analysis of 317,366 political emails sent during the 2020 US election found that deception was common. For example, one campaign manipulated recipients into opening its emails by lying about the sender's identity, using subject lines that tricked recipients into thinking the sender was replying to a donor, or claiming the email was "not asking for money" when that is exactly what it did. Both Republicans and Democrats do it.
How Artificial Intelligence Will Impact Election Campaigns
Campaigns are now rapidly adopting artificial intelligence to compose ads and donor solicitations. The results are impressive: Democratic campaigns have found that AI-written donor letters are more effective than human-written ones at producing personalized text that prompts recipients to click and send donations.
AI has benefits for democracy, such as helping staff organize emails from constituents or easing the workload of some government officials.
But there are fears, and they are not unfounded, that AI will make politics more deceptive than ever.
Here are six things to watch out for. I base this list on my own experiments testing the effects of political deception. As America heads into the next presidential campaign, I hope voters learn what to expect, what to watch for, and when to be more skeptical.
1. Fake personalized campaign promises
My research on the 2020 presidential election revealed that voters' choice between Biden and Trump was driven by their perceptions of which candidate "proposes realistic solutions to problems" and "says out loud what I am thinking," based on 75 items in a survey. These are two of the most important qualities a candidate must project in order to seem presidential and win.
AI chatbots such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard could be used by politicians to generate customized campaign promises that deceptively target voters and donors.
Currently, when people scroll through their news feeds, the articles they read are logged in their browsing history, which is tracked by sites such as Facebook. Each user is tagged as liberal or conservative, and also tagged as holding certain interests. Political campaigns can then place an ad with a customized headline on that person's feed in real time.
Campaigns can use AI to create a repository of articles written in different styles that make different campaign promises. They can then embed an AI algorithm in the process, driven by automated prompts already plugged in by the campaign, to generate fake tailored campaign promises at the end of an ad masquerading as a news article or donor solicitation.
ChatGPT, for instance, could hypothetically be prompted to incorporate material from the last articles the voter read online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments show that if a presidential candidate can align the tone of his word choices with a voter's preferences, the politician will seem more presidential and credible.
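The targeting pipeline described above can be sketched in a few lines. This is a hypothetical illustration only: the profile fields, the repository keys, and the `personalize_ad` function are all invented for this example, not taken from any real campaign tool.

```python
# Hypothetical sketch of the pipeline described above: a campaign keeps a
# repository of pre-written promise snippets keyed by inferred ideology and
# interest, then assembles an ad matching whatever tags a tracker has
# attached to the reader. All names and data here are invented.

PROMISE_REPOSITORY = {
    ("liberal", "healthcare"): "I will expand coverage for every family.",
    ("liberal", "climate"): "I will fund clean energy in your community.",
    ("conservative", "taxes"): "I will cut taxes for small businesses.",
    ("conservative", "security"): "I will secure the border on day one.",
}

def personalize_ad(profile: dict) -> str:
    """Assemble a tailored headline and promise from a tracked profile."""
    key = (profile["ideology"], profile["top_interest"])
    promise = PROMISE_REPOSITORY.get(
        key, "I will fight for you."  # generic fallback when no tag matches
    )
    # Mimic a news-article opener so the ad masquerades as editorial content.
    return f'Candidate speaks to {profile["top_interest"]} voters: "{promise}"'

# A tracker might tag a reader like this after scoring their reading history.
reader = {"ideology": "liberal", "top_interest": "climate"}
print(personalize_ad(reader))
```

In practice, the static repository lookup here would be replaced by a generative model producing the promise text on the fly, which is what makes the tactic scale so cheaply.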
2. Exploiting the tendency to believe one another
People tend to automatically believe what they are told. They have what scholars call a "truth-default." They even fall prey to seemingly implausible lies.
In my experiments, I found that people who are exposed to a presidential candidate's deceptive messaging believe the false statements.
Given that text produced by ChatGPT can shift people's attitudes and opinions, it would be relatively easy for AI to exploit voters' truth-default as bots stretch the limits of credibility beyond what humans can imagine.
3. More lies, less responsibility
Chatbots such as ChatGPT are prone to making things up that are factually inaccurate or outright nonsensical. AI can produce misinformation, delivering false statements and misleading ads. While the most unscrupulous human campaign operative may still have an ounce of accountability, an AI has none. And OpenAI acknowledges flaws in ChatGPT that lead it to provide biased information, disinformation, and outright false information.
If campaigns disseminate AI messaging without any human filter or moral compass, the lies could get worse and spiral further out of control.
4. Coaxing voters to cheat on their candidate
A New York Times columnist had a long conversation with Microsoft's Bing chatbot. Eventually, the bot tried to get him to leave his wife. "Sydney" told the reporter repeatedly, "I'm in love with you," and, "You're married, but you don't love your spouse … you love me. … Actually, you want to be with me."
Imagine millions of such encounters, but with a bot trying to coax voters into leaving their candidate for another.
AI chatbots can exhibit partisan bias. For example, they currently tend to skew politically left, holding liberal positions, expressing 99% support for Biden, and showing far less diversity of opinion than the general population.
In 2024, Republicans and Democrats alike will have the opportunity to develop models that inject political bias and sway voters.
5. Manipulation of candidate photographs
AI can transform images. So-called "deepfake" videos and pictures are common in politics, and they are highly advanced. Donald Trump has used AI to create a fake photo of himself kneeling in prayer.
Photos can be tailored more precisely to influence voters in subtler ways. In my research, I found that a communicator's appearance can be as influential, and as misleading, as what the person actually says. My research also revealed that voters in the 2020 election perceived Trump as "presidential" when they thought he seemed "sincere."
And getting people to think you "seem sincere" through your appearance and nonverbal cues is a deceptive tactic that is more convincing than saying things that are actually true.
Using Trump as an example, assume he wants voters to see him as sincere, trustworthy, and likable. Certain alterable features of his appearance make him seem insincere, untrustworthy, and unlikable: he bares his lower teeth when he speaks and rarely smiles.
A campaign could use AI to alter an image or video of Trump to make him appear smiling and friendly, which would make voters think he is more reassuring and a winner, and ultimately sincere and believable.
6. Avoiding blame
AI provides campaigns with a new scapegoat when they get caught in deception. Typically, when politicians get in trouble, they blame their staff. When staffers get in trouble, they blame the intern. If interns get in trouble, they can now blame ChatGPT.
A campaign can shrug off responsibility for its mistakes by blaming the technology for fabricating lies. When Ron DeSantis' campaign tweeted fake photos of Trump hugging and kissing Anthony Fauci, the staff neither acknowledged the deception nor responded to reporters' requests for comment. No human would have to, apparently, if a robot could hypothetically take the blame.
Of course, not all of AI's contributions to politics are harmful. AI can help voters, for example by making it easier to access information about issues. But plenty of terrible things could happen as campaigns deploy it. I hope these six points help you prepare for, and guard against, deception in ads and donor solicitations.