AI Advertising Politics

AI Disclaimers in Political Ads Backfire on Candidates, Study Finds (msn.com) 49

Many U.S. states now require candidates to disclose when political ads used generative AI, reports the Washington Post.

Unfortunately, researchers at New York University's Center on Technology Policy "found that people rated candidates 'less trustworthy and less appealing' when their ads featured AI disclaimers..." In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them and how truthful their ads were. Ads containing AI labels largely hurt candidates across the board, with the pattern holding true for "both deceptive and more harmless uses of generative AI," the researchers wrote. Notably, researchers also found that AI labels were more harmful for candidates running attack ads than those being attacked, something they called the "backfire effect".

"The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad," said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid.

One other interesting finding... The article notes that study participants in both parties "preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous."

Comments Filter:
  • Isn't that a good thing?
  • by dirk ( 87083 ) <dirk@one.net> on Saturday October 12, 2024 @02:27PM (#64859579) Homepage

    The ads don't specify where the AI was used, just that it was used. So anyone watching then questions everything in the ad and wonders what was real and what was generated. Sure, you may use it to make something innocuous, but the people watching the ad don't know that was the only thing it was used for. Candidates are better off not using AI, as people don't trust it in general. And this also means the disclaimers are working and should be kept, as they are making people question the ad.

  • True for some? (Score:2, Insightful)

    by Petersko ( 564140 )

    This will probably generally hold true, but will be invalid for supporters of Trump.

    There's an old saying. "You can't beat an emotional argument with a logical one." And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything. They will not be swayed. They'll no more absorb the label than they would any fact-check. It's noise.

    • There's an old saying. "You can't beat an emotional argument with a logical one."

      Certainly explains why religion is still a Thing.

    • And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything. They will not be swayed. They'll no more absorb the label than they would any fact-check. It's noise.

      Sure. The Fascist Pig Party (aka Republicans) has been keeping its base scared out of their minds constantly for decades now, and people in a constant state of terror can't think straight; they'll flock to whoever has the loudest voice telling them "we can save you!" Sound familiar?

    • There's an old saying. "You can't beat an emotional argument with a logical one."

      Yeah but Dr House did it on every episode.

      And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything.

      Unfortunately a whole bunch of the fact checkers are operating from an emotional space, too. They try to pretend to be logical, but it's just motivated reasoning. That's why it doesn't work. Dr. House doesn't use motivated reasoning.

      • A British comic actor playing an angry American doctor on a TV drama isn't reality.

        • Dr House is not real.

          Neither is the idea that you can't beat an emotional argument with a logical one.
          • Depends on the person, depends on the argument.

            In a lot of cases, however, it's impossible.

            See my sig. There's a guy here who is convinced the EM drive could be possible. It's simple to see why it would be a perpetual motion machine if it existed. We can usually get as far as F=ma, sometimes kinetic energy, but as soon as we reach energy=power times time, he just starts cussing me out and leaves the thread.

            It's three basic steps of high school physics to show the EM drive would be a perpetual motion machine (whi
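
            For reference, here's a sketch of that three-step argument, assuming an idealized reactionless drive that produces a constant thrust $F$ from a constant input power $P$ and pushes a mass $m$ starting from rest:

            \[
              v(t) = \frac{F}{m}\,t, \qquad
              E_k(t) = \tfrac{1}{2}\,m\,v(t)^2 = \frac{F^2 t^2}{2m}, \qquad
              E_{\mathrm{in}}(t) = P\,t
            \]

            Since $E_k$ grows as $t^2$ while $E_{\mathrm{in}}$ grows only linearly, for any fixed $F$ and $P$ the drive has delivered more kinetic energy than it consumed once $t > 2mP/F^2$, which makes it a perpetual motion machine.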

            • ok, well where is the hole in his understanding? There must be something he isn't seeing clearly.
              • Buggered if I know. Every time we get vaguely close he starts insulting me and then stops replying.

                My guess is he doesn't understand that you can't just turn off one bit of physics without having knock-on consequences elsewhere. However, he has now expended a lot of effort telling people who do understand that they are idiots and that he, by implication, is much smarter. In order to understand the physics he must first accept that he is the idiot (in his terms).

                That's a tough blow to the ego.

                • OK, you need to start doing more finding out. What are the contours of his knowledge, and where is he coming from? Ask him questions. These will lead to results faster, but they're not as much fun as insults.
              • Oh it's Angel'o'sphere. Feel free to take a crack.

  • Meaning I create an AI ad, which I correctly disclose, about some made-up or even real thing about myself, run it, and get sympathy out of it thanks to the AI disclaimer (and people not wanting or not being able to think).

  • It is only "unfortunate" if you think people *should* be trusting convincingly faked content in political ads.

    It isn't unfortunate - it's the REASON for labeling AI-generated content in political ads.

  • Thanks to the last 25 years of having a DVR, I don't even see political ads, but if I had to be subjected to them I wouldn't be very impressed by anyone from any party using AI generated crap in their ads.
  • Studies like this are of limited utility, as there is often a disconnect between what people say and what they actually do.

    Moreover, party allegiances are likely to override any negative inferences, and cause people to rationalize their choice despite their stated preferences or values.

  • ... require candidates to disclose ...

    There are really only 2 comments in campaigning: 1) Look what I'm doing/did correctly. 2) Look what the other side does/did wrongly. The problem is that using Point 1 comments helps the other side use more Point 2 comments. So politicking is a race to the bottom, where only attack adverts and negative adverts are used. The nature of the beast means those adverts contain much dishonesty.

    This year, campaigning contains a new menace: Vindictive misinformation, most of which is currently produced by one side.

  • AI-generated content is now equated with fakes and lies.

    We don't want that from politicians, though there's the unexplainable phenomenon called Trump.
