
Engineers Develop Colorful Printed Patch That Hides People From AI (theverge.com)

A group of engineers from KU Leuven in Belgium has come up with a way to make people invisible to one specific algorithm. "In a paper shared last week on the preprint server arXiv, these students show how simple printed patterns can fool an AI system that's designed to recognize people in images," reports The Verge. From the report: If you print off one of the students' specially designed patches and hang it around your neck, from an AI's point of view, you may as well have slipped under an invisibility cloak. As the researchers write: "We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras."

In the case of this recent research -- which we spotted via Google researcher David Ha -- some caveats do apply. Most importantly, the adversarial patch developed by the students can only fool one specific algorithm named YOLOv2. It doesn't work against even off-the-shelf computer vision systems developed by Google or other tech companies, and, of course, it doesn't work if a person is looking at the image.

  • And they need them again.

    But eventually probably not.

  • by Tablizer ( 95088 ) on Tuesday April 23, 2019 @05:56PM (#58479916) Journal

    Extra points if you make one that identifies me as Pikachu.

  • If yes, you've got a killer app.

    • There's no objective way to define gender as the term is (mis)used in today's language. It used to be a synonym for sex, but now simply means "whatever the fuck I feel like at this exact moment."

    • by Anonymous Coward

      AI only cares about facts.

  • Text would go here if I had something more to say, but since Slashdot won't let me leave this section empty I have to put something here, so blah blah blah.
    • The only reason this hack works is that there is nothing similar to it in the training set.

      Add some colorful patterns to the training image set, re-run, deploy, problem fixed.
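
      Something like this augmentation step, as a toy sketch in Python/PIL (the patch files, scale factor, and function name are all hypothetical):

        import random
        from PIL import Image

        def paste_random_patch(img, patches):
            """Paste a randomly chosen, randomly placed patch onto a training image."""
            patch = random.choice(patches)
            # Scale the patch to roughly 20% of the image width -- about the
            # size of a printed patch relative to a person.
            side = max(1, int(img.width * 0.2))
            patch = patch.resize((side, side))
            out = img.copy()
            out.paste(patch, (random.randint(0, img.width - side),
                              random.randint(0, img.height - side)))
            return out

      Keep the original "person" labels on the patched copies, retrain, and the detector learns that a colorful rectangle doesn't erase a person.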

      • Eventually someone will come up with a design with a mix of infrared LEDs that fools them all.
        • by ShanghaiBill ( 739463 ) on Tuesday April 23, 2019 @07:06PM (#58480254)

          Eventually someone will come up with a design with a mix of infrared LEDs that fools them all.

          If images incorporating IR LEDs are added to the training set, the system can adapt to that as well.

          These researchers are not identifying any flaws in an algorithm, just gaps in the training set that can be easily fixed.

          In fact, both detecting and fixing the gaps can be automated. That is exactly what a GAN [wikipedia.org] does.
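
          Roughly, in toy PyTorch form (all shapes and data here are placeholders, not YOLOv2):

            import torch
            import torch.nn as nn

            # Toy GAN: the generator plays the attacker, the discriminator
            # plays the detector being hardened.
            G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                              nn.Linear(256, 3 * 32 * 32), nn.Tanh())
            D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))
            g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
            d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)
            bce = nn.BCEWithLogitsLoss()
            real = torch.rand(16, 3 * 32 * 32) * 2 - 1  # stand-in for real images

            for step in range(1000):
                fake = G(torch.randn(16, 64))
                # Detector side: label real images 1, generated attacks 0.
                d_loss = (bce(D(real), torch.ones(16, 1)) +
                          bce(D(fake.detach()), torch.zeros(16, 1)))
                d_opt.zero_grad(); d_loss.backward(); d_opt.step()
                # Attacker side: hunt for patterns the detector still gets wrong.
                g_loss = bce(D(fake), torch.ones(16, 1))
                g_opt.zero_grad(); g_loss.backward(); g_opt.step()

          Each side automatically finds and patches the other's blind spots.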

    • I think the point of the image is to break the contrast at the waist between a person's shirt and pants, so just having the image on a T-shirt wouldn't be enough. You'd have to have the pattern bleed onto the top of your pants as well.

  • ART is dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. ART provides an implementation for many state-of-the-art methods for attacking and defending classifiers.

    https://github.com/IBM/adversa... [github.com]
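
    For instance, crafting an evasion attack on your own model looks roughly like this (the model and data below are placeholders; check the ART docs for exact signatures):

      import numpy as np
      import torch
      import torch.nn as nn
      from art.attacks.evasion import FastGradientMethod
      from art.estimators.classification import PyTorchClassifier

      # Placeholder 10-class classifier; substitute a real trained model.
      model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
      classifier = PyTorchClassifier(
          model=model,
          loss=nn.CrossEntropyLoss(),
          optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
          input_shape=(3, 32, 32),
          nb_classes=10,
      )

      x = np.random.rand(8, 3, 32, 32).astype(np.float32)  # stand-in images
      attack = FastGradientMethod(estimator=classifier, eps=0.1)
      x_adv = attack.generate(x=x)  # perturbed copies meant to flip predictions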

  • by fahrbot-bot ( 874524 ) on Tuesday April 23, 2019 @06:02PM (#58479938)

    Most importantly, the adversarial patch developed by the students can only fool one specific algorithm named YOLOv2. ...

    Only works against YOLOv2 -- got it.

    It doesn't work against even off-the-shelf computer vision systems developed by Google or other tech companies, ...

    Sure, super clear... It doesn't work against [not] YOLOv2 -- check.

    ... and, of course, it doesn't work if a person is looking at the image.

    Okay... super-duper, unnecessarily clear, it also doesn't work against a person -- (because a person is not an AI).

    • by Anonymous Coward

      This is from The Verge, don't expect real journalism.

    • Only works against YOLOv2 -- got it.

      And YOLOv3 is the version currently included in OpenCV.
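
      So testing the patch against the newer version yourself is only a few lines (assuming you've downloaded the standard yolov3.cfg and yolov3.weights files):

        import cv2

        # Load the Darknet YOLOv3 model via OpenCV's dnn module.
        net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

        img = cv2.imread("person_with_patch.jpg")  # hypothetical test photo
        blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outs = net.forward(net.getUnconnectedOutLayersNames())

        # Each row: [cx, cy, w, h, objectness, 80 class scores]; COCO class 0
        # is "person". Print any confident person detections.
        for out in outs:
            for det in out:
                if det[4] > 0.5 and det[5:].argmax() == 0:
                    print("person detected, score", float(det[5]))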

  • by SirAstral ( 1349985 ) on Tuesday April 23, 2019 @06:11PM (#58479980)

    Once again we are calling something that is not AI an AI.

    This is an algorithm; it is doing nothing more than matching up what it sees against a bunch of decisions that a human already coded it to test. It is not making decisions any more than a cook decides what a customer is going to order. Like a cook, this so-called "AI" just matches what it is about to do with data based on what something else told it. It will have to be touched by a human to correct the flaws in its "decision data/processes".

    Had this been an AI, it would have been able to adjust when a human walked up, called the computer a moron, and told it what was wrong, without resorting to reviewing its code!

    • by mark-t ( 151149 )

      Is it artificial? Check.

      Does it make intelligent decisions? Since I'm not familiar with the particulars, I can't evaluate that objectively, but assuming that it did, in what way would the phrase "Artificial Intelligence" not apply here?

      There is absolutely *nothing* in the scope of what A.I. literally means that suggests that an algorithm that makes deterministic decisions cannot be one.

      • A glorified table lookup is NOT intelligence.

        Artificial Ignorance would better describe it.

        • by mark-t ( 151149 )

          Except it's not just a table... it's an algorithm that *USES* such a table.

          There is absolutely nothing to suggest that much of human thought itself isn't governed by principles just as deterministic as the output of an algorithm. Arguing "we don't understand it, therefore quantum mechanics" is no better than invoking "god did it" when science can't explain something.

          • Except it's not just a table... it's an algorithm that *USES* such a table.

            Nope. It is an algorithm that BUILDS such a table.

            To be fair, YOLO uses the table, but another algorithm (not a human) builds the table, and it is this 2nd algorithm that is being "tricked" by the techniques described in TFA.
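
            In toy form (a made-up linear "detector", nothing like YOLO), the numbers in the table come out of an optimization loop, not a human:

              import torch
              import torch.nn.functional as F

              # The "table" (weight vector w) is built by gradient descent
              # from data; no human writes these numbers down.
              X = torch.randn(100, 4)  # made-up training features
              y = (X @ torch.tensor([1.0, -2.0, 0.5, 3.0]) > 0).float()

              w = torch.zeros(4, requires_grad=True)
              opt = torch.optim.SGD([w], lr=0.1)
              for _ in range(200):
                  loss = F.binary_cross_entropy_with_logits(X @ w, y)
                  opt.zero_grad()
                  loss.backward()
                  opt.step()

              print(w.detach())  # the learned "table" that a patch exploits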

    • it is doing nothing more than matching up what it sees against a bunch of decisions that a human already coded it to test.

      No, this is completely wrong.

      There is no human setting the criteria to test.

      A machine learning system identifies the important criteria and test parameters during the training phase.

      Humans just feed it data.

    • by ceoyoyo ( 59147 )

      This is an algorithm; it is doing nothing more than matching up what it sees against a bunch of decisions that a human already coded it to test.

      Whether or not you agree with labelling learning algorithms AI, at least get your facts straight. These algorithms are not coded with a bunch of heuristic rules. They learn based on training data.

      Had this been an AI, it would have been able to adjust when a human walked up and call the computer a moron and told it what was wrong

      This is precisely the strength of machine learning.

  • Repeat after me: image recognition is NOT AI.

  • Laughing Man from Ghost In The Shell (anime)

    https://www.youtube.com/watch?... [youtube.com]

  • 1. Get more money.
    2. Offer police/military/government in very different nations globally lots of free new CCTV and software.
    3. Test on populations and the movements of populations globally for "free".
    Free police support for 4th-, 3rd- and 2nd-world nations: at their train stations, bus stops, ports and airports, along their roads, in their malls and city centres.
    Match every face with existing passport and national photo ID databases.
    Then work on gait detection and 3D side-on/top/looking-up CCTV detection.
    Bring the advanc
  • The Light of Other Days (Arthur C. Clarke, Stephen Baxter)

    In the book, people can be tracked anywhere, and through time, with a wormhole camera (WormCam); people who want privacy start by hiding their faces and ultimately end up wearing full-body disguises that change their heat signatures to avoid being tracked.

    This is the beginning of a privacy arms race.

    • The aspect of these patterns confusing human operators looking at the cameras reminded me of BLIT [wikipedia.org] by David Langford, where a scientist discovers that certain visual patterns can short-circuit the human brain, resulting in death or madness.
  • by Solandri ( 704621 ) on Tuesday April 23, 2019 @08:41PM (#58480624)
    People start wearing military-style camouflage makeup before heading out in public?
    • by Anonymous Coward

      You haven't lived in the South, have you?
