AI Facebook Communications

WhatsApp on How It's Fighting Bulk Messaging and Suspicious Accounts (venturebeat.com) 11

Ahead of India's national elections later this year, WhatsApp is trying to wrangle bulk messaging and suspicious accounts. From a report: At a press briefing in New Delhi early today, company executives said they have built a machine learning system to detect and weed out users who engage in inappropriate behavior, such as sending bulk messages and creating multiple accounts with the sole purpose of spreading questionable content on the platform. Automated suspicious accounts and people who seek to create havoc are barred from the platform at various stages -- at the time of registration, while messaging, and when they are reported by others, the company's executives said.
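
As a rough illustration of the staged enforcement the executives describe (checks at registration, while messaging, and on user reports), here is a minimal sketch; the account model, thresholds, and function names below are invented for illustration and are not WhatsApp's actual code.

    # Hypothetical sketch of staged enforcement: an account can be blocked at
    # registration, while messaging, or after user reports. All names and
    # thresholds are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Account:
        phone: str
        messages_last_minute: int = 0
        reports: int = 0
        banned: bool = False

    def allow_registration(phone: str, signups_per_prefix: dict) -> bool:
        # Stage 1: reject bursts of sign-ups that share a number prefix (a crude
        # stand-in for device/number fingerprinting).
        prefix = phone[:8]
        signups_per_prefix[prefix] = signups_per_prefix.get(prefix, 0) + 1
        return signups_per_prefix[prefix] <= 3

    def enforce_while_active(account: Account, rate_limit: int = 60, report_threshold: int = 10) -> None:
        # Stage 2: ban accounts that blast messages faster than a human could type.
        # Stage 3: ban accounts that accumulate too many user reports.
        if account.messages_last_minute > rate_limit or account.reports >= report_threshold:
            account.banned = True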

Overall, WhatsApp bans about 2 million accounts on its platform each month, a spokesperson said. To catch these accounts, a machine learning system draws on the company's past dealings with problematic accounts and on the specific steps engineers followed when taking them down, said Matt Jones, a software engineer at WhatsApp. The system is now sophisticated enough to ban 20 percent of bad accounts at the time of registration, according to the company. Seventy-five percent of the 2 million accounts WhatsApp bans each month are handled without human intervention or a user report, said Carl Woog, a spokesperson for WhatsApp.
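
Loosely in the spirit of the system Jones describes (learning from past takedowns and banning some accounts right at registration), a toy registration-time scorer might look like the sketch below; the features, weights, and threshold are made up here and say nothing about WhatsApp's real model.

    # Toy logistic scorer over registration-time signals. A higher score means
    # the account looks more like an automated/bulk sender. Everything here is invented.
    import math

    WEIGHTS = {
        "registrations_from_same_device": 0.9,
        "messages_in_first_minute": 0.04,
        "fraction_recipients_not_in_contacts": 1.5,
    }
    BIAS = -4.0
    BAN_THRESHOLD = 0.8  # auto-ban only when the score is very confident

    def abuse_probability(signals: dict) -> float:
        z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
        return 1.0 / (1.0 + math.exp(-z))

    new_account = {
        "registrations_from_same_device": 6,
        "messages_in_first_minute": 120,
        "fraction_recipients_not_in_contacts": 1.0,
    }
    p = abuse_probability(new_account)
    print(f"abuse probability {p:.2f}",
          "-> ban at registration" if p > BAN_THRESHOLD else "-> allow, keep watching")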


Comments Filter:
  • But I'm still surprised by the idea that someone purpose-built a device that can handle multiple SIM cards just so they can blast out mass messages. It's like with spam, I guess: the incredibly low click rate is enough for some people.

  • by Anonymous Coward

    Banning the account immediately only helps the abuser to switch tactics. Instead, put the account in isolation, so that the spammer wastes his time with a worthless account, and can't readily figure out why.

    • by Gravis Zero ( 934156 ) on Wednesday February 06, 2019 @05:57PM (#58081102)

      Banning the account immediately only helps the abuser to switch tactics. Instead, put the account in isolation, so that the spammer wastes his time with a worthless account, and can't readily figure out why.

      This is called a "shadow ban," and abusers become aware of it quickly because they get no feedback. For this reason, I propose a "hell ban": effectively a shadow ban, but one that uses AI to simulate feedback (preferably mostly negative feedback) to keep the abuser from recognizing he's been shadow banned.

      • I remember a bulletin board that had both shadowbanning and an Eliza-like bot that would constantly reply to a spammer's own stuff via an account that nobody could see except the top admin and the spammers. It kept their treadmills going for a while.

        What is needed are good old-fashioned honeypots, and extending teergrube (tarpit) functionality to suspected bots. That way, a CAPTCHA with no right answer serves as one delay. Or the account gets created, but its password never works.

        I am reminded of

      • Nice idea, but what happens to the false positives?

  • Yay censorship! Yay Faceboot for MOAR CENSORSHIP!! Yay!

  • In WhatsApp you cannot forward the same message to more than five people in our country. Sending bulk messages and creating multiple accounts with the sole purpose of spreading questionable content on the platform is restricted. If you need assistance repairing a printer in Windows 10, have a look at https://www.canonprintersuppor... [canonprint...ort247.com]
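
For what it's worth, the shadow-ban and "hell ban" ideas floated upthread could be sketched roughly like this; the message store, bot replies, and visibility rule are all made up for illustration, not any real platform's implementation.

    # Minimal shadow-ban sketch: a flagged sender keeps posting, but the posts
    # (and canned bot replies meant to simulate engagement) are visible only to
    # that sender. Hypothetical data model.
    import random

    shadow_banned = set()
    messages = []  # (sender, text, visible_only_to); None means public

    def post(sender: str, text: str) -> None:
        if sender in shadow_banned:
            messages.append((sender, text, sender))
            # Simulated (mostly negative) feedback so the spammer sees a reaction.
            reply = random.choice(["Not interested.", "Please stop.", "Reported."])
            messages.append(("helper_bot", reply, sender))
        else:
            messages.append((sender, text, None))

    def timeline_for(viewer: str):
        # Everyone sees public posts; shadow-banned senders also see their own
        # posts and the bot replies addressed to them.
        return [(s, t) for s, t, only in messages if only is None or only == viewer]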
