Facebook is rolling out "proactive detection" artificial intelligence technology that will scan all posts on the site for patterns of suicidal thoughts and, when necessary, send mental health resources to the user at risk or their friends, or contact local first responders. The goal is to use AI to decrease how long it takes to get help to those in need. TechCrunch reports: Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech. Facebook will also use AI to prioritize particularly risky or urgent user reports so they're addressed more quickly by moderators, and will add tools that instantly surface local-language resources and first-responder contact info. It's also dedicating more moderators to suicide prevention, training them to handle these cases 24/7, and now has 80 local partners like Save.org, National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.