mikejuk writes "Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed. SVMs are used in security to detect abnormal behavior such as fraud, credit card usage anomalies, and even to weed out spam. In many cases they need to continue to learn as they do the job, and this raises the possibility of feeding them data that causes them to make bad decisions. Three researchers have recently demonstrated how to do this with a minimum of poisoned data for maximum effect. What they discovered is that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to direct the induced errors so as to produce particular types of error. For example, a spammer could send some poisoned data so as to evade detection for a while. AI-based systems may be no more secure than dumb ones."