Google today launched a new technology to help news organizations and online platforms identify and swiftly remove abusive comments on their websites. The technology, called Perspective, reviews comments and scores them based on how similar they are to comments people said were "toxic" or likely to make them leave a conversation. From a report on BBC: The search giant has developed something called Perspective, which it describes as a technology that uses machine learning to identify problematic comments. The software has been developed by Jigsaw, a division of Google with a mission to tackle online security dangers such as extremism and cyberbullying. The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how "toxic" they are and whether similar language has led other people to leave conversations. The aim is to improve the quality of debate and make sure people aren't put off from joining in.
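For developers curious what scoring a comment looks like in practice, here is a minimal sketch of a request to Perspective's Comment Analyzer endpoint. The endpoint URL and JSON shapes reflect the publicly documented v1alpha1 API, but treat the exact field names as assumptions; the `API_KEY` placeholder and helper function names are invented for illustration.

```python
import json

# Publicly documented Perspective API endpoint (v1alpha1); a real call
# requires appending "?key=API_KEY" with a valid Google API key.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_analyze_request(comment_text):
    """Build the JSON body asking Perspective to score a comment's TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def toxicity_score(response_json):
    """Extract the summary toxicity probability (0.0 to 1.0) from a response."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    # Print the request body a client would POST to API_URL.
    body = build_analyze_request("What a thoughtful article, thanks!")
    print(json.dumps(body, indent=2))
```

A moderation tool would POST that body to the endpoint and compare the returned score against a site-specific threshold (say, flag anything above 0.8 for human review); Perspective supplies the probability, and the publisher decides what to do with it.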