AI Disclaimers in Political Ads Backfire on Candidates, Study Finds - Slashdot


AI Disclaimers in Political Ads Backfire on Candidates, Study Finds (msn.com) 34

Many U.S. states now require candidates to disclose when their political ads use generative AI, reports the Washington Post.

Unfortunately, researchers at New York University's Center on Technology Policy "found that people rated candidates 'less trustworthy and less appealing' when their ads featured AI disclaimers..." In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them and how truthful their ads were. Ads containing AI labels largely hurt candidates across the board, with the pattern holding true for "both deceptive and more harmless uses of generative AI," the researchers wrote. Notably, researchers also found that AI labels were more harmful for candidates running attack ads than those being attacked, something they called the "backfire effect".

"The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad," said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid.

One other interesting finding... The article notes that study participants in both parties "preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous."


Comments:
  • Isn't that a good thing?
  • by dirk ( 87083 ) <dirk@one.net> on Saturday October 12, 2024 @03:27PM (#64859579) Homepage

    The ads don't specify where the AI was used, just that it was used. So anyone watching then questions everything in the ad and wonders what was real and what was generated. Sure, you may use it to make something innocuous, but the people watching the ad don't know that was the only thing it was used for. Candidates are better off not using AI, as people don't trust it in general. And this also means the disclaimers are working and should be kept, as they are making people question the ad.

  • This will probably hold true in general, but not for supporters of Trump.

    There's an old saying. "You can't beat an emotional argument with a logical one." And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything. They will not be swayed. They'll no more absorb the label than they would any fact-check. It's noise.

    • There's an old saying. "You can't beat an emotional argument with a logical one."

      Certainly explains why religion is still a Thing.

    • And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything. They will not be swayed. They'll no more absorb the label than they would any fact-check. It's noise.

      Sure. The Fascist Pig Party (aka Republicans) has spent decades ensuring its supporters are scared out of their minds, and people in a constant state of terror can't think straight; they'll flock to whoever has the loudest voice telling them "we can save you!" Sound familiar?

    • There's an old saying. "You can't beat an emotional argument with a logical one."

      Yeah, but Dr. House did it in every episode.

      And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything.

      Unfortunately a whole bunch of the fact checkers are operating from an emotional space, too. They try to pretend to be logical, but it's just motivated reasoning. That's why it doesn't work. Dr. House doesn't use motivated reasoning.

  • Meaning I create an AI ad, which I correctly disclose, about some made-up or even real thing about myself, run it, and get sympathy out of the AI disclaimer (and out of people unwilling or unable to think).

  • It is only "unfortunate" if you think people *should* be trusting convincingly faked content in polical ads.

    It isn't unfortunate - it's the REASON for the of labeling AI genned content in political ads

  • Thanks to the last 25 years of having a DVR, I don't even see political ads, but if I had to be subjected to them I wouldn't be very impressed by anyone from any party using AI generated crap in their ads.
  • Studies like this are of limited utility, as there is often a disconnect between what people say and what they actually do.

    Moreover, party allegiances are likely to override any negative inferences, and cause people to rationalize their choice despite their stated preferences or values.

  • ... require candidates to disclose ...

    There are really only two messages in campaigning: 1) look what I'm doing/did right, and 2) look what the other side does/did wrong. The problem is that using Point 1 messages helps the other side use more Point 2 messages. So politicking is a race to the bottom, where only attack adverts and negative adverts are used. The nature of the beast means those adverts contain much dishonesty.

    This year, campaigning contains a new menace: vindictive misinformation, most of which is currently produced by one side.

"Can you program?" "Well, I'm literate, if that's what you mean!"
