
Poison Attacks Against Machine Learning

mikejuk writes "Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed. SVMs are used in security to detect abnormal behavior such as fraud and credit card use anomalies, and even to weed out spam. In many cases they need to continue learning on the job, and this raises the possibility of feeding them data that causes them to make bad decisions. Three researchers have recently demonstrated how to do this with the minimum of poisoned data for maximum effect. They discovered that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to steer the induced errors so as to produce particular types of mistake. For example, a spammer could send some poisoned data so as to evade detection for a while. AI-based systems may be no more secure than dumb ones."
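The core idea in the summary — that mislabeled training data can drag a learned decision boundary in an attacker-chosen direction — can be sketched with a toy classifier. This is a hypothetical illustration, not the researchers' gradient-based attack on SVMs: it uses a trivial 1-D "mean of class means" threshold as a stand-in for a filter that keeps learning on the job.

```python
# Toy poisoning sketch (assumption: NOT the paper's method). A spam filter
# learns a 1-D threshold from labeled scores; the attacker submits spam-like
# points falsely labeled "ham" to drag the threshold upward.

def train_threshold(samples):
    """samples: list of (score, label) with label 0 = ham, 1 = spam.
    Returns the midpoint between the two class means."""
    ham = [v for v, y in samples if y == 0]
    spam = [v for v, y in samples if y == 1]
    return (sum(ham) / len(ham) + sum(spam) / len(spam)) / 2

def classify(score, threshold):
    return 1 if score > threshold else 0  # 1 = spam

# Clean training data: ham clusters near 0, spam near 10.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (9.0, 1), (10.0, 1), (11.0, 1)]
t_clean = train_threshold(clean)              # → 5.5

# Poison: spam-like scores submitted with "ham" labels during retraining.
poison = [(8.0, 0), (8.5, 0), (9.0, 0)]
t_poisoned = train_threshold(clean + poison)  # → 7.375

# A borderline spam message at score 7.0 now evades detection.
print(classify(7.0, t_clean))     # → 1 (caught before poisoning)
print(classify(7.0, t_poisoned))  # → 0 (slips through after poisoning)
```

The point of the paper is that an attacker does not need many such points: chosen carefully, a small amount of poison moves the boundary a long way.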
  • Try this on humans (Score:5, Interesting)

    by s_p_oneil ( 795792 ) on Sunday July 22, 2012 @09:46AM (#40729433) Homepage

    Universities should run psychology experiments to see how this can be done to human intelligence, and how susceptible it is compared to AI. Or you could just study people who tune in to .

  • by Kanel ( 1105463 ) on Sunday July 22, 2012 @10:43AM (#40729663) Journal

    There's already a whole subfield of machine learning which concerns itself with these problems. It's called "adversarial machine learning".
    The approaches are very different from usual software security. Instead of busying oneself with patching holes in software or setting up firewalls, adversarial machine learning redesigns the algorithms completely, using game theory and other techniques. The premise is "How can we make an algorithm that works in an environment full of enemies trying to mislead it?" It's a refreshing change from the usual software-security paradigm, which is all about fencing the code into some supposedly 'safe' environment.

  • Shhhh.... (Score:3, Interesting)

    by ibsteve2u ( 1184603 ) on Sunday July 22, 2012 @12:33PM (#40730177)
    Stop talking about how easy it is to poison data collection efforts; you're going to kill the golden goose of those who insist that analyzing social data can allow you to pinpoint psychopaths [slashdot.org] and other "problematic" individuals before that goose ever takes to the air (on the wings of "black budget" funding, no doubt).
  • Re:Propaganda (Score:5, Interesting)

    by betterunixthanunix ( 980855 ) on Sunday July 22, 2012 @12:43PM (#40730257)
    I disagree; D.A.R.E. has been overwhelmingly successful at convincing people of the legitimacy of the war on drugs and the paramilitary police that were created in the name of that war. Hardly anyone questions the fact that we have soldiers (but with "POLICE" or "DEA" written on their uniforms) attacking unarmed civilians just to serve an arrest warrant. Hardly anyone questions the fact that the executive branch of government, through the Attorney General's office, now has the power to make and enforce drug laws, without democratic action. Hardly anyone questions the fact that the DEA, supposedly a law enforcement agency, has so much signals intelligence capability that the dictators of some nations have tried to demand the DEA's help in spying on political opponents.

    How many propaganda programs have been so successful at convincing people that this sort of unwinding of a democratic system is the right thing to do?
