AI Technology

Amnesty International Criticised for Using AI-Generated Images (theguardian.com)

While the systemic brutality used by Colombian police to quell national protests in 2021 was real and is well documented, photos recently used by Amnesty International to highlight the issue were not. The international human rights advocacy group has come under fire for posting images generated by artificial intelligence to promote its reports on social media -- and has since removed them. From a report: The images, including one of a woman being dragged away by police officers, depict scenes from the protests that swept across Colombia in 2021. But anything more than a momentary glance at the images reveals that something is off. The faces of the protesters and police are smoothed-off and warped, giving the images a dystopian aura. The tricolour carried by the protester has the right colours -- red, yellow and blue -- but in the wrong order, and the police uniform is outdated.

Amnesty and other observers have documented hundreds of cases of human rights abuses committed by Colombian police during the wave of unrest in 2021, among them violence, sexual harassment and torture. Their research has raised awareness of the heavy-handedness of Colombian police and contributed to the growing acceptance of the need for reform. But photojournalists and media scholars warned that the use of AI-generated images could undermine Amnesty's own work and feed conspiracy theories.

Comments:
  • by TheMiddleRoad ( 1153113 ) on Thursday May 04, 2023 @05:28PM (#63497872)

    AA doesn't give a damn about truth. They care about their narratives. Using fake photos is no surprise at all.

    • by Tablizer ( 95088 ) on Thursday May 04, 2023 @05:35PM (#63497890) Journal

      > AA doesn't give a damn about truth. They care about their narratives.

      Hold on a sec, the article states this:

      Amnesty International said it had used photographs in previous reports but chose to use the AI-generated images to protect protesters [identity] from possible state retribution.

      To avoid misleading the public, the images included text stating that they were produced by AI.

      That sounds like a legitimate reason to me, although it's not clear if the disclaimer was added after the discovery of their origin.

      • Re: (Score:3, Insightful)

        by Tyr07 ( 8900565 )

        So they care so much about the truth that they're not willing to blur out identifying details with a little effort? I don't buy it; you could use that argument for anything.

        "I generated AI images about X target by X special group showing X target doing BAD things to X special group but I used AI to protect X special group from retribution. Super honest though, scouts honor, totally happened, and uh, we need more funding to support X group from such things...no I know they have a mansion now but uh, that life sty

        • Re: (Score:3, Insightful)

          Comment removed based on user account deletion
          • So they care so much about the truth that they're not willing to blur out identifying details with a little effort? I don't buy it; you could use that argument for anything.

            To be fair, if being identifiable in the image is enough reason for the person to be targeted, then blurring might not help. There's more footage than just the AI photos, and security forces and government allies could figure out the subject by looking at other faces and details in the photo.

            I think it's also the case that a blurred out face is less impactful than a (seemingly) human face.

            I definitely agree that Amnesty International shouldn't be using image generators in this way; if nothing else it becomes far too easy to c...

              To be fair, if being identifiable in the image is enough reason for the person to be targeted, then blurring might not help. There's more footage than just the AI photos, and security forces and government allies could figure out the subject by looking at other faces and details in the photo.

              That... doesn't make any sense. If there's more footage (and I'm not disputing that), then the security forces and government allies can just look at that anyway, the fact that an image from those events was published by Amnesty doesn't matter. The "crime" here that would trigger their retaliation isn't "being in an Amnesty photograph".

              I think it's also the case that a blurred out face is less impactful than a (seemingly) human face.

              Then use an AI generated face superimposed upon the actual photograph. Simple as.
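
              As a rough sketch of that idea (assuming OpenCV, with "protest.jpg" and "synthetic_face.jpg" as hypothetical filenames, and the synthetic face assumed to come from some separate generator), the mechanics could look like this:

                import cv2

                # Hypothetical inputs: a real photo and a pre-generated synthetic face.
                img = cv2.imread("protest.jpg")
                fake = cv2.imread("synthetic_face.jpg")

                # Detect faces with OpenCV's bundled Haar cascade.
                cascade = cv2.CascadeClassifier(
                    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

                # Paste the synthetic face over each detected face (naive, no blending).
                for (x, y, w, h) in faces:
                    img[y:y + h, x:x + w] = cv2.resize(fake, (w, h))

                cv2.imwrite("anonymised.jpg", img)

              A real pipeline would need blending and lighting matching; this only shows the basic mechanics.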

                To be fair, if being identifiable in the image is enough reason for the person to be targeted, then blurring might not help. There's more footage than just the AI photos, and security forces and government allies could figure out the subject by looking at other faces and details in the photo.

                That... doesn't make any sense. If there's more footage (and I'm not disputing that), then the security forces and government allies can just look at that anyway, the fact that an image from those events was published by Amnesty doesn't matter. The "crime" here that would trigger their retaliation isn't "being in an Amnesty photograph".

                Well, it depends on the motive for the retaliation. If Amnesty International asks for consent before publishing the photos, the retaliation could be for that. Or it could very well be as simple as "oh no, look what happened to those folks you pictured in your article" retaliation, meant to make Amnesty nervous about publishing photos.

                Again, I don't agree with Amnesty's approach, but I suspect they had legit concerns.

                I think it's also the case that a blurred out face is less impactful than a (seemingly) human face.

                Then use an AI generated face superimposed upon the actual photograph. Simple as.

                I'm not sure a non-obviously manipulated photo ...

          • Piss off the notorious They badly enough and you'll be found auto-erotically asphyxiated or stuck in a little concrete box while the owned press tells lies about you. How is Assange these days, anyway?
        • > they're not willing to blur out identifying details with a little effort?

          A fuzzy blob instead of a face doesn't evoke the same emotional response in many. I'm okay with AI images if they're identified as such using labels or footnotes. In my opinion a drawing would have been a better option than AI. (Do both, actually: use an art filter in Photoshop to paint-ify the AI image, as in the sketch after this comment.)

          Regardless, it's fair to apply Hanlon's Razor. People are new to these AI tools, making new mistakes.
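
          A minimal sketch of that paint-ify step, assuming OpenCV rather than Photoshop ("ai_image.jpg" is a hypothetical filename):

            import cv2

            # Hypothetical input: any AI-generated image that should read as an
            # illustration rather than a photograph.
            img = cv2.imread("ai_image.jpg")

            # Edge-preserving stylization produces an obviously painterly result.
            painted = cv2.stylization(img, sigma_s=60, sigma_r=0.45)
            cv2.imwrite("painted.jpg", painted)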

          • by Tyr07 ( 8900565 )

            A fuzzy blob instead of a face doesn't evoke the same emotional response in many

            Exactly, you hit the nail on the head. They're generating fictitious AI images to provoke the emotional response / agenda they want to push. What we hope is that it is done with integrity and correctly represents non-fictitious events that are actually happening, but made-up images and actual images are two very, very different things.

            To you this sounds great while it supports your own agenda; you may even imagine using it for such yourself, if you aren't already. You won't be a fan of it as soon as someone using the same ...

      • That sounds like a legitimate reason to me, although it's not clear if the disclaimer was added after the discovery of their origin.

        If you click through to the image it could easily be mistaken for an actual photo of misconduct. I think most people would assume it was. Of course it uses a young, pretty girl being manhandled by the police to maximize emotion.

        If you want to protect the protesters then blur their faces.

        • by pjt33 ( 739471 )

          Blurring doesn't protect them, though. Even if you guess the blur kernel perfectly, deconvolution may not recover the original exactly (edge effects get in the way), but it can recover enough detail to deanonymise.
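
          A minimal sketch of that attack, assuming scikit-image and SciPy (the library's astronaut test image stands in for a protest photo, and the attacker is assumed to guess the box-blur kernel):

            import numpy as np
            from scipy.signal import convolve2d
            from skimage import color, data, restoration

            # Stand-in image; imagine this is the face in the published photo.
            img = color.rgb2gray(data.astronaut())

            # "Anonymise" by blurring with a 9x9 box kernel.
            psf = np.ones((9, 9)) / 81.0
            blurred = convolve2d(img, psf, mode="same", boundary="symm")

            # Wiener deconvolution with the guessed kernel: edge artifacts remain,
            # but much of the facial detail comes back.
            recovered = restoration.wiener(blurred, psf, balance=0.01)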

      • Then you don't have a photo, or you have a drawing based on a photo. Simple as that. A fake photo? Misleading.

    • by wxjones ( 721556 )
      Most non-profits are about enriching their executives.
  • by Anonymous Coward

    Like quite a few "good cause" organisations, they no longer really care about the issues themselves, just about pushing the issues and perpetuating themselves. And the issues don't really mean what they used to anymore, either. So if that's all fake, why not make use of those fancy fakery tools that are all the rage these days? In a sense, that's at least honest. In a backward, back-handed sort of way. As a consequence, you gotta take anything with their name on it with a grain of salt. Or a little more ...

    • Amnesty is little more than a vehicle for a grab bag of left wing causes. I was a member some years back, then contacted them during the Count Dankula nonsense to seek their views on his prosecution.

      What I received was boilerplate text. It opened by explaining the importance of freedom of expression, followed by the 'but'. There is no 'but' when a then-small YouTuber is prosecuted for pranking his then-girlfriend by training her cute dog to raise a paw in a Nazi salute. This despite Amnesty directly stating ...

  • by gTsiros ( 205624 ) on Thursday May 04, 2023 @06:26PM (#63497964)

    For text, images, sound and video, it will be infeasible to tell whether they are real or fake. Sure, it may be possible by seeking sources and spending effort, but it's a losing battle. By the time you have some sort of proof one way or the other, the general populace will have already decided, quite arbitrarily, and formed their opinion.

    Generative algorithms (I refuse to use the common acronym) will keep becoming more versatile, easier to use and more accessible. Today you need a server farm with hundreds of thousands of CPUs. In a couple of years, with optimization, specialization and advances in hardware, thousands. A little later, you may be able to make something in a couple of hours on your home computer. 3D printing takes hours to shit out a part of dubious quality and finish, and yet it is hugely popular. But anyway.

    In a short while, a piece of text that reads perfectly, with references even, will be suspicious. There will be no guarantee it is legit, written by a person, *thought out* by a person. You could exchange emails with the author. You could exchange physical mail with the author, discussing the details in an effort to figure out whether the text is legit. Still no guarantee. A person is perfectly able to *handwrite* what the GPT outputs.

    Sooner or later, books that can be proven to have been printed before 2023 will start becoming more valuable.

    And inevitably, thinking, that is, the very process of thinking, will be affected by all this, directly or indirectly, by accident or intentionally. The line has been crossed and there is no turning back. We've created the closest thing to a mind virus we could hope for.

    I'm horrified.

    • "Sooner or later, books that can be proven to have been printed before 2023 will start becoming more valuable."

      It's already starting. The estate of Agatha Christie has authorized the rewriting of her books to Woke standards. Same with Roald Dahl. Certain Dr. Seuss books are censored.

      I'm glad I got my unclean versions already.

    • by Tony Isaac ( 1301187 ) on Thursday May 04, 2023 @08:52PM (#63498190) Homepage

      It's an arms race, yes, but it's not inevitable.

      It's been possible for quite a while now to make photos and videos that are hard to distinguish from reality. But the tools to analyze fakes have also improved.

      Remember when people wrote checks, and checks came with a funky dot pattern to make sure people couldn't photocopy them to steal money from check writers? Nowadays, those dot patterns are gone, and you literally deposit a check by taking a picture of it. The technology to copy images is better, but so is banks' ability to track specific payments, and that has compensated.

      It's going to be a turbulent transition to AI, but all is not lost.

      • You don't need any "tools" to do the analyzing; you just need to use your eyes.

        There are so many bugs in this image that it takes about two seconds to spot that it's fake.
        Yes, there are some images that cannot be identified (visually) as AI-generated, but as soon as they try to depict somewhat complex content, it's game over...
      • by gTsiros ( 205624 )

        I emphasize, it does not matter if an image is ultimately proven to be fake. If it is convincing enough, people will believe it.

        That there is an arms race _is the problem itself_; it is not about who is gonna win it.

        • There is always a segment of the population who will believe literally anything, like the QAnon rumors that certain politicians are cannibals.

          There is no escaping the arms race, whatever we think about it.

  • Ever since she took over it has gone downhill. I stopped donating.

    • I've been fooled before. There are certain things I check before giving money to a charity, mainly how they spend their money. It's amazing how the adverts will tell you that X cause "needs your money desperately", but funnily enough the charity chooses to hold onto your donation and invest it in the stock market, sometimes into things that cause the very problems they're trying to solve (Comic Relief holding shares in a company that makes landmines is the most disgusting example I've ever found). Anyway, in 20...

  • Once it has become established that an organization is willing to lie or mislead people to support its goals, everything it publishes becomes suspect. It gives the opposition plausible deniability, the ability to handwave away any atrocity with, "Why do you believe an organization with a history of faking photographs?"

    Amnesty International is done. This deception enables the oppressors. As much as their past work has been valued for exposing corruption, they can no longer effectively function as a f...

    • That only happens if the average Jane finds out about it, and that only happens if it goes viral on TikTok, so no.
