Facebook

Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children (propublica.org) 355

Sidney Fussell from Gizmodo summarizes a report from ProPublica, which brings to light dozens of internal documents Facebook uses to train its moderators on hate speech: As the trove of slides and quizzes reveals, Facebook uses a warped, one-sided reasoning to balance policing hate speech against users' freedom of expression on the platform. This is perhaps best summarized by an image from one of its training slideshows, wherein Facebook instructs moderators to protect "White Men," but not "Female Drivers" or "Black Children." Facebook only blocks inflammatory remarks if they're used against members of a "protected class." But Facebook itself decides who makes up a protected class, with clear opportunities for moderation to be applied arbitrarily at best and, at worst, against minoritized people critiquing those in power (particularly white men) -- as Facebook has routinely been accused of doing. According to the leaked documents, here are the group identifiers Facebook protects: Sex, Religious affiliation, National origin, Gender identity, Race, Ethnicity, Sexual Orientation, Serious disability or disease. And here are those Facebook won't protect: Social class, continental origin, appearance, age, occupation, political ideology, religions, countries. Subsets of groups -- female drivers, Jewish professors, gay liberals -- aren't protected either, as ProPublica explains: White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
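ProPublica's description of the rule amounts to a simple conjunction test: a group is protected only if every trait that defines it falls in a protected category. A minimal sketch of that logic follows -- the category lists come from the leaked documents quoted above, but the function and examples are purely illustrative, not Facebook's actual implementation:

```python
# Illustrative sketch of the "protected class" subset rule as ProPublica
# describes it -- NOT Facebook's actual code. The category lists come from
# the leaked training documents quoted above.

PROTECTED = {
    "sex", "religious affiliation", "national origin", "gender identity",
    "race", "ethnicity", "sexual orientation", "serious disability or disease",
}

NOT_PROTECTED = {
    "social class", "continental origin", "appearance", "age",
    "occupation", "political ideology", "religions", "countries",
}

def is_protected_group(attributes):
    """A group is protected only if *every* attribute describing it is a
    protected category; a single unprotected attribute makes it a 'subset'."""
    return all(attr in PROTECTED for attr in attributes)

# "White men": race + sex -> both protected -> protected group
print(is_protected_group({"race", "sex"}))        # True
# "Female drivers": sex + occupation -> occupation unprotected -> subset
print(is_protected_group({"sex", "occupation"}))  # False
# "Black children": race + age -> age unprotected -> subset
print(is_protected_group({"race", "age"}))        # False
```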
China

Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument (cnet.com) 69

Abstract of a study: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called "50c party" posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. From a CNET article titled "Chinese media told to 'shut down' talk that makes country look bad": Being an internet business in China appears to be getting tougher. Chinese broadcasters, including social media platform Weibo, streamer Acfun and media company Ifeng, were told to shut down all audio and visual content that casts the country or its government in a bad light, China's State Administration of Press, Publication, Radio, Film and Television posted on its website on Thursday, saying they violate local regulations. "[The service providers] broadcast large amounts of programmes that don't comply with national rules and propagate negative discussions about public affairs. [The agency] has notified all relevant authorities and ... will take measures to shut down these programmes and rectify the situation," reads the statement.
Youtube

Google Announces New Measures To Fight Extremist YouTube Videos (cnet.com) 286

An anonymous reader quotes CNET: YouTube will take new steps to combat extremist- and terrorist-related videos, parent company Google said Sunday. "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," Kent Walker, Google's general counsel, said in an op-ed column in the London-based Financial Times.
Here's CNET's summary of the four new measures Google is implementing (a toy classifier sketch illustrating the first measure follows the list):
  • Use "more engineering resources to apply our most advanced machine learning research to train new 'content classifiers' to help us more quickly identify and remove such content."
  • Expand YouTube's Trusted Flagger program by adding 50 independent, "expert" non-governmental organizations to the 63 groups already part of it. Google will offer grants to fund the groups.
  • Take a "tougher stance on videos that do not clearly violate our policies -- for example, videos that contain inflammatory religious or supremacist content." Such videos will "appear behind a warning" and will not be "monetized, recommended or eligible for comments or user endorsements."
  • Expand YouTube's efforts in counter-radicalization. "We are working with Jigsaw to implement the 'redirect method' more broadly. ... This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining."
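To make the first measure concrete: a "content classifier" in this sense is a model trained on labelled examples of violating and non-violating material, which then scores new uploads. Google has not published its models, so the sketch below is purely illustrative -- the training examples, features, and library choice (scikit-learn) are assumptions, not anything YouTube actually uses:

```python
# Toy sketch of a text classifier for flagging policy-violating video
# metadata. Purely illustrative: Google's actual classifiers, features,
# and training data are not public. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (title + description, 1 = violates policy).
train_texts = [
    "join our cause and take up arms against the unbelievers",
    "cute cat compilation part 12",
    "recruitment video for banned militant group",
    "how to bake sourdough bread at home",
]
train_labels = [1, 0, 1, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram/bigram bag of words
    LogisticRegression(),
)
classifier.fit(train_texts, train_labels)

# New uploads get a probability score; high-scoring items would go to
# human reviewers rather than being removed automatically.
scores = classifier.predict_proba(["armed struggle recruitment"])[:, 1]
print(scores)
```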

Businesses

Can Twitter Survive By Becoming A User-Owned Co-Op? (salon.com) 124

What's going to happen now that Twitter's stock price has dropped from $66 a share to just $18? An anonymous reader quotes Salon: A small group of shrewd Twitter users and shareholders have come up with proposals to fundamentally restructure the way Twitter is controlled, to turn the company into a public service by removing the need to feed investors' ceaseless appetite for hitting quarterly growth benchmarks... Sonja Trauss, a Bay Area housing policy activist, and Twitter shareholder Alex Chiang proposed earlier this year a resolution for the company's recent annual shareholder vote to promote ways to get Twitter users to buy stock in the company, such as offering ways to buy shares directly through the Twitter website and mobile app. If many individual Twitter users each owned a small piece of the company, then they could participate collectively (through the annual shareholder voting process) in steering the direction of the company.

The idea makes sense from a labor standpoint. Twitter's value comes from users' tweets, which provide the backbone for digital advertising revenue. Twitter also sells this user-generated data to third parties that use it mainly for market research. This bloc of user-shareholders could theoretically overtake the control major institutional shareholders...have over the company. Because many owners of just a few shares each would have little to lose if the stock price stagnates or wavers, Twitter would be less beholden to meeting Wall Street's often brutal expectations.

China

China To Implement Cyber Security Law From Thursday (reuters.com) 59

China, battling increased threats from cyber-terrorism and hacking, will adopt from Thursday a controversial law that mandates strict data surveillance and storage for firms working in the country, the official Xinhua news agency said. From a report: The law, passed in November by the country's largely rubber-stamp parliament, bans online service providers from collecting and selling users' personal information, and gives users the right to have their information deleted, in cases of abuse. "Those who violate the provisions and infringe on personal information will face hefty fines," the news agency said on Monday, without elaborating.
Wikipedia

Wikipedia's Switch To HTTPS Has Successfully Fought Government Censorship (vice.com) 170

Determining how to prevent acts of censorship has long been a priority for the non-profit Wikimedia Foundation, and thanks to new research from the Harvard Center for Internet and Society, the foundation seems to have found a solution: encryption. From a report: HTTPS prevents governments and others from seeing the specific page users are visiting. For example, a government could tell that a user is browsing Wikipedia, but couldn't tell that the user is specifically reading the page about Tiananmen Square. Up until 2015, Wikipedia offered its service using both HTTP and HTTPS, which meant that when countries like Pakistan or Iran blocked certain articles on the HTTP version of Wikipedia, the full version would still be available using HTTPS. But in June 2015, Wikipedia decided to axe HTTP access and only offer access to its site with HTTPS. [...] The Harvard researchers began by deploying an algorithm which detected unusual changes in Wikipedia's global server traffic for a year beginning in May 2015. This data was then combined with a historical analysis of the daily request histories for some 1.7 million articles in 286 different languages from 2011 to 2016 in order to determine possible censorship events. [...] After a painstakingly long process of manually analyzing potential censorship events, the researchers found that, globally, Wikipedia's switch to HTTPS had a positive effect on the number of censorship events, based on a comparison of server traffic from before and after the switch in June of 2015.
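A minimal sketch of the kind of traffic-anomaly detection described above -- far simpler than the Harvard researchers' actual method, with an arbitrary window and threshold chosen purely for illustration:

```python
# Minimal sketch of anomaly detection on daily article request counts,
# in the spirit of (but much simpler than) the censorship-detection
# analysis described above. Window size and threshold are arbitrary.
from statistics import mean, stdev

def flag_suspected_censorship(daily_requests, window=30, z_threshold=-3.0):
    """Return indices of days whose request count drops far below the
    trailing `window`-day mean (a crude censorship signal)."""
    flagged = []
    for i in range(window, len(daily_requests)):
        history = daily_requests[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue
        z = (daily_requests[i] - mu) / sigma
        if z < z_threshold:
            flagged.append(i)
    return flagged

# Example: steady traffic, then a sudden collapse (e.g. a block).
traffic = [1000 + (i % 7) * 20 for i in range(60)] + [50, 40, 45]
print(flag_suspected_censorship(traffic))  # flags the final days
```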
Censorship

Egypt Blocks 21 Websites For 'Terrorism' And 'Fake News' (reuters.com) 55

Ahmed Aboulenein, reporting for Reuters: Egypt has banned 21 websites, including the main website of Qatar-based Al Jazeera television and prominent local independent news site Mada Masr, accusing them of supporting terrorism and spreading false news. The blockade is notable in scope and for being the first publicly recognized by the government. It was heavily criticized by journalists and rights groups. The state news agency announced it late on Wednesday. Individual websites had been inaccessible in the past, but there was never any official admission. Reuters found that the websites named by local media were indeed inaccessible. The move follows similar actions taken on Wednesday by Egypt's Gulf allies Saudi Arabia and the United Arab Emirates, which blocked Al Jazeera and other websites after a dispute with Qatar. From a separate report: "This is not the typical Egyptian regime attitude," Lina Attalah, the editor-in-chief of Mada Masr, told BuzzFeed News in an interview in Cairo. "We are used to facing troubles with the regime since we have always chosen to write the stories they don't like to hear. We are used to being arrested or have cases filed against us, but blocking us is a new thing." Mada Masr, since its founding in 2013, has regularly published stories critical of the regime in both English and Arabic.
China

China Censored Google's AlphaGo Match Against World's Best Go Player (theguardian.com) 93

DeepMind's board game-playing AI, AlphaGo, may well have won its first game against the Go world number one, Ke Jie, from China -- but most Chinese viewers could not watch the match live. From a report: The Chinese government had issued a censorship notice to broadcasters and online publishers, warning them against livestreaming Tuesday's game, according to China Digital Times, a site that regularly posts such notices in the name of transparency. "Regarding the go match between Ke Jie and AlphaGo, no website, without exception, may carry a livestream," the notice read. "If one has been announced in advance, please immediately withdraw it." The ban did not just cover video footage: outlets were banned from covering the match live in any way, including text commentary, social media, or push notifications. It appears the government was concerned that 19-year-old Ke, who lost the first of three scheduled games by a razor-thin half-point margin, might have suffered a more damaging defeat that would hurt the national pride of a state which holds Go close to its heart.
Censorship

FCC Won't Punish Stephen Colbert For Controversial Trump Insult (slashdot.org) 305

Earlier this month, the FCC said it would look into complaints made against The Late Show host Stephen Colbert over a homophobic joke he made about President Donald Trump. Well, it turns out the FCC is not going to levy a fine against the comedian for using the word "cock" on late-night network television, reports The Verge. From the report: "Consistent with standard operating procedure, the FCC's Enforcement Bureau has reviewed the complaints and the material that was the subject of these complaints," reads the FCC's statement, according to Variety. "The Bureau has concluded that there was nothing actionable under the FCC's rules." Helping Colbert's case was the fact that the broadcast, time-delayed for incidents like these, bleeped out the questionable word and also blurred the host's mouth as he was saying it. The FCC has broad authority to regulate what can and cannot be broadcast based on legal precedent regarding obscenity laws. Yet looser rules apply between the hours of 10PM and 6AM ET, when Colbert's show airs. So it would appear that the ample self-censorship on the part of CBS saved the program from a guilty verdict in this case.
Communications

Comcast Proves Need For Net Neutrality By Trying To Censor Advocacy Website (fightforthefuture.org) 153

Reader mrchaotica writes: As most Slashdot readers are probably aware, the FCC, under the direction of Trump-appointed chairman Ajit Pai, is trying to undo its 2015 decision to protect Net Neutrality (PDF) by classifying ISPs as common carriers. During the recent public comment period, the FCC's website was flooded with pro-Net-Neutrality comments from actual people (especially those who heeded John Oliver's call to arms) as well as anti-Net-Neutrality comments posted by bots using the names and addresses of people without their consent. The fake comments use boilerplate identical to that used in a 2010 press release by the conservative lobbying group Center for Individual Freedom (which is funded by Comcast, among other entities), but beyond that, the entities who perpetrated and funded the criminal acts have not been conclusively identified. In response to this brazen attempt to undermine the democratic process, the Internet freedom advocacy group Fight for the Future (FFTF) created the website Comcastroturf.com to call attention to the fraud and allow people to see if their identity had been misappropriated. Comcast, in a stunning display of its tone-deaf attitude towards free speech, has sent a cease-and-desist order to FFTF, claiming that Comcastroturf.com violates its "valuable intellectual property[sic]." According to the precedent set in Bosley Medical Institute, Inc. v. Kremer, websites created for the purpose of criticizing an organization cannot be considered trademark infringement. As such, FFTF reportedly has no intention of taking down the site.

"This is exactly why we need Title II net neutrality protections that ban blocking, throttling, and censorship," said Evan Greer, campaign director of Fight for the Future, "If Ajit Pai's plan is enacted, there would be nothing preventing Comcast from simply blocking sites like Comcastroturf.com that are critical of their corporate policies," she added. "It also makes you wonder what Comcast is so afraid of? Are their lobbying dollars funding the astroturfing effort flooding the FCC with fake comments that we are encouraging Internet users to investigate?"

Could there be a better example to illustrate why ensuring strong Net Neutrality protections by regulating ISPs as common carriers is so important?


Censorship

Wikipedia Is Being Blocked In Turkey (turkeyblocks.org) 94

Nine hours ago, Ilgaz wrote: The Turkey Blocks monitoring network has verified restrictions affecting the Wikipedia online encyclopedia in Turkey. A block affecting all language editions of the website [was] detected at 8:00AM local time Saturday 29 April. The loss of availability is consistent with internet filters used to censor content in the country.
stikves added: Access to Wikipedia has been blocked in Turkey as a result of "a provisional administrative order" imposed by the Turkish Telecommunications Authority (BTK)... Turkey Blocks said an administrative blocking order is usually expected to precede a full court blocking order in coming days. While the reason for the order was unknown early on Saturday, a statement on the BTK's website said: "After technical analysis and legal consideration based on the Law Nr. 5651, ADMINISTRATION MEASURE has been taken for this website (wikipedia.org) according to Decision Nr. 490.05.01.2017.-182198 dated 29/04/2017 implemented by Information and Communication Technologies Authority."
The BBC adds reports from Turkish media that authorities "had asked Wikipedia to remove content by writers 'supporting terror.'"
Google

Google Looks at People As it Pledges To Fight Fake News and 'Offensive' Content (betanews.com) 173

Google said today it is making its first attempt to combat the circulation of "fake news" on its search engine. The company is offering new tools that will allow users to report misleading or offensive content, and it also pledged to improve results generated by its algorithm. From a report: While the algorithm tweaks should affect general search results, the reporting tools have been designed for Google's Autocomplete predictions and Featured Snippets, which have been problematic in recent months. Updated algorithms should help to ensure more authoritative pages receive greater prominence, while low-quality content is demoted. Vice president of engineering at Google Search, Ben Gomes, admits that people have been trying to "game" the system -- working against the spirit and purpose of its algorithms -- to push poor-quality content and fake news higher up search results. He says that the problem now is the "spread of blatantly misleading, low quality, offensive or downright false information."
Wikipedia

Wikipedia Founder Jimmy Wales is Launching an Online Publication To Fight Fake News (cnn.com) 190

Jimmy Wales, a founder of Wikipedia, is launching a new online publication which will aim to fight fake news by pairing professional journalists with an army of volunteer community contributors. The news site is called Wikitribune. From a report: "We want to make sure that you read fact-based articles that have a real impact in both local and global events," the publication's website states. The site will publish news stories written by professional journalists. But in a page borrowed from Wikipedia's playbook, internet users will be able to propose factual corrections and additions. The changes will be reviewed by volunteer fact checkers. Wikitribune says it will be transparent about its sources. It will post the full transcripts of interviews, as well as video and audio, "to the maximum extent possible." The language used will be "factual and neutral."
GNU is Not Unix

Richard Stallman Interviewed By Bryan Lunduke (youtube.com) 172

Many Slashdot readers know Bryan Lunduke as the creator of the humorous "Linux Sucks" presentations at the annual Southern California Linux Exposition. He's now also a member of the OpenSUSE project board and an all-around open source guy. (In September, he released every one of his books, videos and comics under a Creative Commons license, while his Patreon page offers a tip jar and premiums for monthly patrons). But now he's also got a new "daily computing/nerd show" on YouTube, and last week -- using nothing but free software -- he interviewed the 64-year-old founder of the Free Software Foundation, Richard Stallman. "We talk about everything from the W3C's stance on DRM to opinions on the movie Galaxy Quest," Lunduke explains in the show's notes.

China

China To Question Apple About Live-Streaming Apps On App Store That Violate Internet Regulations (theguardian.com) 31

Three Chinese government agencies are planning to tell Apple to "tighten up checks" on live-streaming software offered on its app store, which can be used to violate internet regulations in the country. "Law enforcement officers had already met with Apple representatives over live-streaming services, [state news agency Xinhua reported], but did not provide details of the meetings," reports The Guardian. From the report: The inquiry appears to be focused on third-party apps available for download through Apple's online marketplace. The company did not respond to requests for comment. China operates the world's largest internet censorship regime, blocking a host of foreign websites including Google, Facebook, Twitter and Instagram, but the authorities have struggled to control an explosion in popularity of live-streaming video apps. As part of the inquiry into live-streaming, three Chinese websites -- toutiao.com, huoshanzhibo.com and huajiao.com -- were already found to have violated internet regulations, and had broadcast content that violated Chinese law, including providing "pornographic content," the Xinhua report said. Pornography is banned in China. The three sites were told to increase oversight of live-broadcasting services, user registration and "the handling of tip-offs." Two of the websites, huoshanzhibo.com and huajiao.com, were under formal investigation and may have their cases transferred to the police for criminal prosecutions, the Xinhua report said. Casting a wide net, the regulations state that apps cannot "engage in activities prohibited by laws and regulations such as endangering national security, disrupting social order and violating the legitimate rights and interests of others."
Movies

Hollywood Is Losing the Battle Against Online Trolls (hollywoodreporter.com) 487

An anonymous reader shares a Hollywood Reporter article: It had taken years -- and the passionate support of Kirk Kerkorian, who financed the film's $100 million budget without expecting to ever make a profit -- for The Promise, a historical romance set against the backdrop of the Armenian genocide and starring Christian Bale and Oscar Isaac, to reach the screen. Producers always knew it would be controversial: Descendants of the 1.5 million Armenians killed by the Ottoman Empire shortly after the onset of World War I have long pressed for the episode to be recognized as a genocide, despite the Turkish government's insistence that the deaths were not a premeditated extermination. Before the critics in attendance even had the chance to exit Roy Thomson Hall, let alone write their reviews, The Promise's IMDb page was flooded with tens of thousands of one-star ratings. "All I know is that we were in about a 900-seat house with a real ovation at the end, and then you see almost 100,000 people who claim the movie isn't any good," says producer Mike Medavoy. Panicked calls were placed to IMDb, but there was nothing the site could do. "One thing that they can track is where the votes come from," says Eric Esrailian, who also produced the film, and "the vast majority of people voting were not from Canada. So I know they weren't in Toronto." The online campaign against The Promise appears to have originated on sites like Incisozluk, a Turkish version of 4chan, where there were calls for users to "downvote" the film's ratings on IMDb and YouTube. A rough translation of one post: "Guys, Hollywood is filming a big movie about the so-called Armenian genocide and the trailer has already been watched 700k times. We need to do something urgently." Soon afterward, the user gleefully noted The Promise's average IMDb rating had reached a dismaying 1.8 stars. "They know that the IMDb rating will stay with the film forever," says Esrailian. "It's a kind of censorship, really."
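The arithmetic behind Esrailian's complaint is simple: on an unweighted average, a large enough bloc of one-star votes swamps whatever genuine viewers think. A back-of-the-envelope sketch (the vote counts are invented for illustration, and IMDb's actual weighting formula is not public):

```python
# Back-of-the-envelope illustration of review brigading. Numbers are
# invented; IMDb applies its own (non-public) weighting on top of this.
def mean_rating(vote_groups):
    """vote_groups: list of (number_of_votes, star_rating) tuples."""
    total_votes = sum(n for n, _ in vote_groups)
    total_stars = sum(n * stars for n, stars in vote_groups)
    return total_stars / total_votes

# Say 5,000 genuine viewers rate the film 8/10 on average...
organic = [(5_000, 8)]
# ...and a coordinated campaign adds 100,000 one-star votes.
brigaded = organic + [(100_000, 1)]

print(round(mean_rating(organic), 1))   # 8.0
print(round(mean_rating(brigaded), 1))  # ~1.3 -- the campaign dominates
```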
Piracy

Cloudflare Doesn't Want To Become the 'Piracy Police' (torrentfreak.com) 63

Cloudflare is warning that far-reaching cooperation between copyright holders and internet services may put innovation in danger. From a report: As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe. This includes thousands of "pirate" sites, among them The Pirate Bay and ExtraTorrent, which rely on the U.S.-based company to keep server loads down. Copyright holders are not happy that Cloudflare services these sites. Last year, the RIAA and MPAA called the company out for aiding copyright infringers and helping pirate sites to obfuscate their actual location. [...] In a whitepaper, Cloudflare sees this trend as a worrying development. The company points out that the safe harbor provisions put in place by the DMCA and Europe's eCommerce Directive have been effective in fostering innovation for many years. Voluntary "anti-piracy" agreements may change this. [...] Cloudflare argues that increased monitoring and censorship are not proper solutions. Third-party Internet services shouldn't be pushed into the role of Internet police out of a fear of piracy. Instead, the company cautions against far-reaching voluntary agreements that may come at the expense of the public.
Social Networks

Twitter To Get Even Harsher On Trolls (cnbc.com) 183

Twitter is cracking down even harder on trolls, including temporarily barring accounts that are harassing other users. From a report: In a blog post published Wednesday, Twitter's vice president of engineering, Ed Ho, announced more safety measures to stop abuse on its platform. One of the methods includes using the company's internal algorithms to identify problematic accounts and limiting certain account functions -- such as only allowing the aggressor's existing followers to see their tweets -- for a set period of time if they engaged in troublesome behavior. Twitter said it was also open to further action if the harassment continued. Other anti-trolling tools include new filters to let users control what kinds of content they see from certain accounts, as well as allowing people to "mute" tweets based on keywords, phrases or entire conversations.
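The keyword and phrase muting described above is, at its core, a filter over the timeline. A minimal client-side sketch (Twitter's real implementation is server-side and not public; the case-insensitive substring matching here is an assumption for illustration):

```python
# Minimal sketch of keyword/phrase muting as a client-side filter.
# Twitter's actual matching rules are not public; case-insensitive
# substring matching is assumed here purely for illustration.
def apply_mutes(tweets, muted_terms):
    """Drop any tweet containing a muted keyword or phrase."""
    lowered = [term.lower() for term in muted_terms]
    return [
        tweet for tweet in tweets
        if not any(term in tweet.lower() for term in lowered)
    ]

timeline = [
    "Excited for the conference next week!",
    "You are an idiot and everyone knows it",
    "New blog post on HTTPS and censorship",
]
print(apply_mutes(timeline, ["idiot"]))  # keeps only the two civil tweets
```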
Google

Is Google's Comment Filtering Tool 'Vanishing' Legitimate Comments? (vortex.com) 101

Slashdot reader Lauren Weinstein writes: Google has announced (with considerable fanfare) public access to their new "Perspective" comment filtering system API, which uses Google's machine learning/AI system to determine which comments on a site shouldn't be displayed due to perceived high spam/toxicity scores. It's a fascinating effort. And if you run a website that supports comments, I urge you not to put this Google service into production, at least for now.

The bottom line is that I view Google's spam detection systems as currently too prone to false positives -- thereby enabling a form of algorithm-driven "censorship" (for lack of a better word in this specific context) -- especially by "lazy" sites that might accept Google's determinations of comment scoring as gospel... as someone who deals with significant numbers of comments filtered by Google every day -- I have nearly 400K followers on Google Plus -- I can tell you with considerable confidence that the problem isn't "spam" comments that are being missed, it's completely legitimate non-spam, non-toxic comments that are inappropriately marked as spam and hidden by Google.

Lauren is also collecting noteworthy experiences for a white paper about "the perceived overall state of Google (and its parent corporation Alphabet, Inc.)" to better understand how internet companies are now impacting our lives in unanticipated ways. He's inviting people to share their recent experiences with "specific Google services (including everything from Search to Gmail to YouTube and beyond), accounts, privacy, security, interactions, legal or copyright issues -- essentially anything positive, negative, or neutral that you are free to impart to me, that you believe might be of interest."
Google

Google Releases an AI Tool For Publishers To Spot and Weed Out Toxic Comments (bbc.com) 195

Google today launched a new technology to help news organizations and online platforms identify and swiftly remove abusive comments on their websites. The technology, called Perspective, will review comments and score them based on how similar they are to comments people said were "toxic" or likely to make them leave a conversation. From a report on BBC: The search giant has developed something called Perspective, which it describes as a technology that uses machine learning to identify problematic comments. The software has been developed by Jigsaw, a division of Google with a mission to tackle online security dangers such as extremism and cyberbullying. The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how "toxic" they are and whether similar language had led other people to leave conversations. What it's doing is trying to improve the quality of debate and make sure people aren't put off from joining in.
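For a publisher experimenting with Perspective, the basic flow is to send a comment to the API and act on the toxicity score it returns. Below is a hedged sketch in Python: the endpoint and JSON field names follow Google's public documentation for the API but should be verified against it, the 0.8 threshold is an arbitrary choice, and -- given the false-positive concerns raised in the previous story -- the score is treated as a trigger for human review rather than automatic removal.

```python
# Hedged sketch of scoring a comment with the Perspective API.
# Endpoint and JSON shape follow Google's published documentation for the
# API but should be verified; a valid API key is required to run this.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def triage(text, threshold=0.8):
    # Flag for human review rather than deleting outright, since the
    # score is a similarity-based estimate and can misfire.
    return "needs review" if toxicity_score(text) >= threshold else "publish"

print(triage("Thanks, this was a really helpful article."))
```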
