AI

Facebook's 'Rosetta' System Helps the Company Understand Text Within Images, Which Is Crucial In Handling Memes and Flagging Abusive Content (techcrunch.com) 45

Facebook announced on Tuesday a new AI system, codenamed "Rosetta," which helps teams at the company as well as those at Instagram identify text within images so they can better understand its subject matter and more easily classify images for search or flag abusive content. From a report: It's not all memes; the tool scans over a billion images and video frames daily across multiple languages in real time, according to a company blog post. Rosetta makes use of recent advances in optical character recognition (OCR) to first scan an image and detect any text that is present, at which point the characters are placed inside a bounding box that is then analyzed by convolutional neural nets that try to recognize the characters and determine what's being communicated. The technology itself is not new -- Facebook has been working with OCR since 2015 -- but deploying it across the company's vast networks demands an enormous degree of scale, which motivated the company to develop new strategies around character detection and recognition.
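The two-stage pipeline described above -- detect text regions first, then recognize the characters inside each region -- can be sketched in outline. This is a hypothetical illustration of the data flow only; the function names and the stubbed detector and recognizer are assumptions for the sketch, not Facebook's actual code.

```python
# Hypothetical sketch of a two-stage OCR pipeline like the one described:
# stage one proposes bounding boxes around text, stage two recognizes the
# characters inside each box. Real systems use trained detection and
# recognition models; these stubs only show the shape of the data flow.

def detect_text_regions(image):
    """Stage 1: propose candidate text bounding boxes as (x, y, w, h)."""
    # A trained text detector would run here.
    return [(12, 8, 120, 24)]

def recognize_text(image, box):
    """Stage 2: run a character-recognition model on one cropped region."""
    # A trained convolutional recognizer would run here.
    return "example text"

def two_stage_ocr(image):
    """Detect regions, then recognize each; return (box, text) pairs."""
    return [(box, recognize_text(image, box))
            for box in detect_text_regions(image)]
```

The split matters at this scale: detection is run once per frame, while the heavier recognition model is run only on the small cropped regions the detector proposes.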
Censorship

Tencent Shuts Poker Platform Amid Widening Gaming Crackdown (reuters.com) 14

An anonymous reader shares a report: Tencent Holdings will shut a popular Texas Hold'Em poker video game, the Chinese tech giant told its users on Monday, in a further step to comply with intensifying government scrutiny hitting the country's gaming industry. Tencent said it would formally begin to shutter "Everyday Texas Hold'Em" from Monday and would close the game's server on Sept. 25. Tencent would compensate users in accordance with regulations of the Ministry of Culture. The Shenzhen-based company, which draws a huge amount of its profit from gaming, is facing mounting challenges this year from stringent regulation and government censorship. It has had to pull one blockbuster game and seen others censured.
Censorship

Google Debunks Trump's Claim It Censored His State of the Union Address (theverge.com) 508

An anonymous reader quotes a report from The Verge: President Donald Trump intensified his criticism of Google today, posting a native video of unknown origin to his Twitter account this afternoon claiming the search giant stopped promoting the State of the Union (SOTU) address on its homepage after he took office. It turns out the video he posted is not only misleading, but also contains what appears to be a fake screenshot of the Google homepage on the day in question. It has since been viewed more than 1.5 million times. In a statement given to The Verge, a Google spokesperson clarifies that the company promoted neither former President Barack Obama nor Trump's inaugural SOTU addresses in 2009 and 2017, respectively. That's because they were not technically State of the Union addresses, but "addresses to a joint session" of Congress, a tradition set back in 1993 so that new presidents didn't have to immediately deliver SOTU addresses after holding office for just a few weeks. Google resumed promoting Obama's SOTU address in 2010 and continued to do so through 2016, as he held office for all of those years.

With regards to the 2018 SOTU, Google says it did in fact promote it on its homepage. "On January 30th 2018, we highlighted the livestream of President Trump's State of the Union on the google.com homepage," reads Google's statement. "We have historically not promoted the first address to Congress by a new President, which is not a State of the Union address. As a result, we didn't include a promotion on google.com for this address in either 2009 or 2017."

The Internet

The 'Scunthorpe Problem' Has Never Really Been Solved (vice.com) 382

dmoberhaus writes: Yesterday, a writer for SB Nation named Natalie Weiner posted a screenshot of a rejection form she received when she tried to sign up for a website. Her submission was rejected because a spam algorithm considered her last name "offensive." After she posted about this, hundreds of other people with similarly "offensive" last names sounded off about how they had experienced similar issues. As it turns out, this phenomenon is so widespread that it has a name among computer scientists. It's called the Scunthorpe problem and it's been a scourge of the internet since the beginning. Motherboard spoke to content moderation experts about its origins and why it's such a hard problem to solve 20 years later. A big reason why the problem has yet to be solved is "because creating effective obscenity filters depends on the filter's ability to understand a word in context," reports Motherboard. "Despite advances in [AI], this is something that even the most advanced machine-learning algorithms still struggle with today."

"This works both ways around," Michael Veale, a researcher studying responsible machine learning at University College London, told Motherboard. "Cock (a bird) and Dick (the given name) are both harmless in certain contexts, even in children's settings online, but in other cases parents might not want them used. Equally, those wanting to abuse a system can find ways around it."
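The core difficulty is easy to reproduce: a filter that matches banned substrings without any notion of context will flag innocent surnames and place names. A minimal sketch of such a naive filter -- the blocklist here is illustrative, not taken from any real product:

```python
# A context-blind substring filter: the classic source of the
# Scunthorpe problem. Illustrative blocklist only.
BLOCKLIST = {"cunt", "dick", "weiner"}

def naive_substring_filter(text: str) -> bool:
    """Flag text if any blocklisted string appears anywhere inside it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

# False positives -- all of these innocent inputs get flagged:
#   "Scunthorpe"     (an English town)
#   "Natalie Weiner" (a surname)
#   "Dick Van Dyke"  (a given name)
```

Switching to whole-word matching doesn't solve it either: "Dick" the name and the vulgarity are the same string, so distinguishing them requires exactly the contextual understanding Veale describes, which is why the problem has outlived two decades of filter engineering.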
Social Networks

Trump Accuses Social Media Firms of 'Silencing Millions' (reuters.com) 570

U.S. President Donald Trump accused social media companies on Friday of silencing "millions of people" in an act of censorship, but without offering evidence to support the claim. From a report: "Social Media Giants are silencing millions of people. Can't do this even if it means we must continue to hear Fake News like CNN, whose ratings have suffered gravely. People have to figure out what is real, and what is not, without censorship!" Trump wrote on Twitter, not mentioning any specific companies. Trump also criticized social media outlets last week, saying without providing proof that unidentified companies were "totally discriminating against Republican/Conservative voices." The president's Friday remarks came days after he expressed concerns over Twitter and Facebook regulating the content on their own platforms, a practice he called "very dangerous."
Communications

The Consequences of Indecency (techcrunch.com) 502

Ron Wyden, a senior U.S. Senator from Oregon, argues there should be consequences for internet companies that refuse to remove hate speech from their platforms. An anonymous reader shares an excerpt from a report Wyden wrote via TechCrunch: I wrote the law that allows sites to be unfettered free speech marketplaces. I wrote that same law, Section 230 of the Communications Decency Act, to provide vital protections to sites that didn't want to host the most unsavory forms of expression. The goal was to protect the unique ability of the internet to be the proverbial marketplace of ideas while ensuring that mainstream sites could reflect the ethics of society as a whole. In general, this has been a success -- with one glaring exception. I never expected that internet CEOs would fail to understand one simple principle: that an individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children, is far more indecent than an individual posting pornography.

Social media cannot exist without the legal protections of Section 230. That protection is not constitutional, it's statutory. Failure by the companies to properly understand the premise of the law is the beginning of the end of the protections it provides. I say this because their failures are making it increasingly difficult for me to protect Section 230 in Congress. Members across the spectrum, including far-right House and Senate leaders, are agitating for government regulation of internet platforms. Even if government doesn't take the dangerous step of regulating speech, just eliminating the 230 protections is enough to have a dramatic, chilling effect on expression across the internet. Were Twitter to lose the protections I wrote into law, within 24 hours its potential liabilities would be many multiples of its assets and its stock would be worthless. The same for Facebook and any other social media site. Boards of directors should have taken action long before now against CEOs who refuse to recognize this threat to their business.
In an interview with Recode, Wyden said that platforms should be punished for hosting content that goes against "common decency." "I think what the Alex Jones case shows, we're gonna really be looking at what the consequences are for just leaving common decency in the dust," Wyden told Recode's Kara Swisher. "...What I'm gonna be trying to do in my legislation is to really lay out what the consequences are when somebody who is a bad actor, somebody who really doesn't meet the decency principles that reflect our values, if that bad actor blows by the bounds of common decency, I think you gotta have a way to make sure that stuff is taken down."
Censorship

Egypt Fights Terrorism By Censoring Web Sites, Threatening Jail Time For Accessing Them (apnews.com) 67

An anonymous reader quotes the Associated Press: Egypt's President Abdel-Fattah el-Sissi has ratified an anti-cybercrime law that rights groups say paves the way for censoring online media. The law, published Saturday in the country's official gazette, empowers authorities to order the blocking of websites that publish content considered a threat to national security. Viewers attempting to access blocked sites can also be sentenced to one year in prison or fined up to EGP100,000 ($5,593) under the law. Last month, Egypt's parliament approved a bill placing personal social media accounts and websites with over 5,000 followers under the supervision of the top media authority, which can block them if they're found to be disseminating false news.
"Authorities say the new measures are needed to tackle instability and terrorism," reports the BBC.

"But human rights groups accuse the government of trying to crush all political dissent in the country."
Social Networks

Twitter Is 'Rethinking' Its Service, and Suspending 1M Accounts Each Day (washingtonpost.com) 224

Twitter's CEO told the Washington Post he's "rethinking" core parts of Twitter: Dorsey said he was experimenting with features that would promote alternative viewpoints in Twitter's timeline to address misinformation and reduce "echo chambers." He also expressed openness to labeling bots -- automated accounts that sometimes pose as human users -- and redesigning key elements of the social network, including the "like" button and the way Twitter displays users' follower counts. "The most important thing that we can do is we look at the incentives that we're building into our product," Dorsey said. "Because they do express a point of view of what we want people to do -- and I don't think they are correct anymore."

Dorsey's openness to broad changes shows how Silicon Valley leaders are increasingly reexamining the most fundamental aspects of the technologies that have made these companies so powerful and profitable. At Facebook, for example, CEO Mark Zuckerberg has commissioned a full review of his company's products to emphasize safety and trust, from mobile payments to event listings.... In recent months, Twitter has made several changes to promote safety and trust. It has introduced new machine learning software to monitor account behavior and is suspending over a million problematic accounts a day.... Dorsey said Twitter hasn't changed its incentives, which were originally designed to nudge people to interact and keep them engaged, in the 12 years since Twitter was founded.

China

After Employee Revolt, Google Says It's 'Not Close' To Launching Search In China (arstechnica.com) 135

An anonymous reader quotes a report from Ars Technica: Reports from earlier this month claimed Google was working on products for the Chinese market, detailing plans for a search engine and news app that complied with the Chinese government's censorship and surveillance demands. The news was a surprise to many Googlers, and yesterday an article from The New York Times detailed a Maven-style internal revolt at the company. Fourteen hundred employees signed a letter demanding more transparency from Google's leadership on ethical issues, saying, "Google employees need to know what we're building." The letter says many employees only learned about the project through news reports and that "currently we do not have the information required to make ethically informed decisions about our work, our projects, and our employment."

According to a report from The Wall Street Journal, Google addressed the issue of China at this week's all-hands meeting. The report says CEO Sundar Pichai told employees the company was "not close to launching a search product" in China but that Pichai thinks Google can do good by engaging with China. "I genuinely do believe we have a positive impact when we engage around the world," The Journal quotes Pichai as saying, "and I don't see any reason why that would be different in China." The report says Google co-founder Sergey Brin "sounded optimistic about doing more business in China" but that Brin called progress in the country "slow-going and complicated."

Google

Google Employees Protest Secret Work On Censored Search Engine For China (nytimes.com) 169

According to The New York Times, "Hundreds of Google employees, upset at the company's decision to secretly build a censored version of its search engine for China, have signed a letter demanding more transparency to understand the ethical consequences of their work (Warning: source may be paywalled; alternative source)." In the letter, the employees wrote that the project and Google's apparent willingness to abide by China's censorship requirements "raise urgent moral and ethical issues." They added, "Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment." From the report: The letter is circulating on Google's internal communication systems and is signed by about 1,000 employees, according to two people familiar with the document, who were not authorized to speak publicly. The letter also called on Google to allow employees to participate in ethical reviews of the company's products, to appoint external representatives to ensure transparency and to publish an ethical assessment of controversial projects. The document referred to the situation as a "code yellow," a process used in engineering to address critical problems that impact several teams.
Censorship

Reddit Blocked In China (qz.com) 68

An anonymous reader quotes a report from Quartz: Many Reddit users in China who tried to access the social network this weekend were slightly annoyed to find the company's site and app weren't working. But in China, it's second nature for internet users to turn on their VPNs, and in almost no time at all, they were surfing the "front page of the internet" again. According to users' posts, the crackdown appeared to have started on Friday (Aug. 10). By today (Aug. 13), more people said they were able to access Reddit again. Many, however, report that Reddit remains behind the Great Firewall for them. Comparitech, a tool that checks if a domain is blocked in China, continues to show that reddit.com is not accessible via regular internet access, but reachable over VPN. It's unclear whether geography is a factor in why some people can access the site while others can't.
Censorship

Google Boots Open Source Anti-Censorship Tool From Chrome Store (torrentfreak.com) 95

Google has removed the open-source Ahoy! extension from the Chrome store with little explanation. The tool facilitated access to more than 1,700 blocked sites in Portugal by routing traffic through its own proxies. TorrentFreak reports: After serving 100,000 users last December, Ahoy! grew to almost 185,000 users this year. However, progress and indeed the project itself is now under threat after arbitrary action by Google. "Google decided to remove us from Chrome's Web Store without any justification," team member Henrique Mouta informs TF. "We always make sure our code is high quality, secure and 100% free (as in beer and as in freedom). All the source code is open source. And we're pretty sure we never broke any of the Google's marketplace rules."

Henrique says he's tried to reach out to Google but finding someone to help has proven impossible. Even re-submitting Ahoy! to Google from scratch hasn't helped the situation. "I tried and resubmitted the plugin but it was refused after a few hours and without any justification," Henrique says. "Google never reached us or notified us about the removal from Chrome Web Store. We never got a single email justifying what happened, why have we been removed from the store, or/and what are we breaching and how can we fix it." TorrentFreak reached out to Google asking why this anti-censorship tool has been removed from its Chrome store. Despite multiple requests, the search giant failed to respond to us or the Ahoy! team.
Thankfully, the Ahoy! extension is still available on Firefox.
Censorship

Google Using Chinese Site It Owns To Develop Search Term Blacklist For Censored Search Engine, Says Report (theverge.com) 63

Google is using search samples from a Beijing-based website it owns to make blacklists for the censored search engine it is developing for China. Google's website 265.com redirects to China's dominant search engine, Baidu, by default, "but Google can apparently see the queries that users are typing in," reports The Verge. From the report: Google engineers are reportedly sampling those search queries in order to develop a list of thousands of blocked websites it should hide on its upcoming search engine in China. Blacklisted results, which include topics like the Tiananmen Square massacre, will result in users seeing a blank page, The Intercept reports. On Baidu, if you search for something less specific, like Taiwan or Xinjiang, you'll get a partial blackout where you can only see tourist information and not politically sensitive news reports. It could be possible that Google is taking a similar tack.

265.com was founded in 2003 by Chinese entrepreneur Cai Wensheng, who's also the founder of Chinese beauty app Meitu. Google bought the site in 2008, while it was still operating its search engine within China. Google has essentially been using the site to figure out what Chinese users are searching for since 2008, and now that it is working on an Android search app, it will finally have a use for that data.
The Intercept first reported this news.
Businesses

Call Me, Comrade: The Surprise Rise of North Korean Smartphones (nknews.org) 58

Tia Han, reporting for NK News: 2018 marks the tenth year that cellphones have been legally available in North Korea. The number of users has been growing significantly since then, but overall use remains low: according to the country's state-run Sogwang outlet in January, more than 3.5 million -- out of a population of 25 million -- have mobile subscriptions. "We started providing the 3G service in December 2008, so this year marks the 10th year of the service," Han Jong Nye, from the Arirang Information and Technology Center in Future Scientist Street in Pyongyang, was quoted as having said in Sogwang in January. "The demand for mobile phones is growing larger and larger."

[...] North Korean mobile users cannot access the worldwide internet, of course: use is limited to the country's state-run intranet. Reports suggest various kinds of applications are now accessible for mobile users -- from games to shopping -- several state-run North Korean outlets have reported on their recent technological development, often with a great deal of emphasis on their local origins. State media suggests that North Koreans are playing games, reading books, listening to music, doing karaoke, learning to cook, and even increasing crop output on their smartphones.

[...] Since the majority of smartphone users do not have access to the internet, according to one expert, users have to go to a technology service center where technicians install apps onto their cell phones. "Most mobile users do not have data service even if they buy a smartphone, so they have to be happy with pre-loaded apps such as games and dictionaries," Yonho Kim, a non-resident fellow at Korea Economic Institute, told NK News.

Google

Google Plans To Launch Censored Search Engine In China, Leaked Documents Reveal (theintercept.com) 132

Google is planning to launch a censored version of its search engine in China that will blacklist websites and search terms about human rights, democracy, religion, and peaceful protest, The Intercept reported Wednesday, citing leaked documents and people familiar with the matter. From the report: The project -- code-named Dragonfly -- has been underway since spring of last year, and accelerated following a December 2017 meeting between Google's CEO Sundar Pichai and a top Chinese government official, according to internal Google documents and people familiar with the plans. Teams of programmers and engineers at Google have created a custom Android app, different versions of which have been named "Maotai" and "Longfei." The app has already been demonstrated to the Chinese government; the finalized version could be launched in the next six to nine months, pending approval from Chinese officials.

The planned move represents a dramatic shift in Google's policy on China and will mark the first time in almost a decade that the internet giant has operated its search engine in the country. Google's search service cannot currently be accessed by most internet users in China because it is blocked by the country's so-called Great Firewall. The app Google is building for China will comply with the country's strict censorship laws, restricting access to content that Xi Jinping's Communist Party regime deems unfavorable. [...] When a person carries out a search, banned websites will be removed from the first page of results, and a disclaimer will be displayed stating that "some results may have been removed due to statutory requirements." Examples cited in the documents of websites that will be subject to the censorship include those of British news broadcaster BBC and the online encyclopedia Wikipedia.

Robotics

Should Bots Be Required To Tell You That They're Not Human? (buzzfeednews.com) 92

"BuzzFeed has this story about proposals to make social media bots identify themselves as fake people," writes an anonymous Slashdot reader. "[It's] based on a paper by a law professor and a fellow researcher." From the report: General concerns about the ethical implications of misleading people with convincingly humanlike bots, as well as specific concerns about the extensive use of bots in the 2016 election, have led many to call for rules regulating the manner in which bots interact with the world. "An AI system must clearly disclose that it is not human," the president of the Allen Institute for Artificial Intelligence, hardly a Luddite, argued in the New York Times. Legislators in California and elsewhere have taken up such calls. SB-1001, a bill that comfortably passed the California Senate, would effectively require bots to disclose that they are not people in many settings. Sen. Dianne Feinstein has introduced a similar bill for consideration in the United States Senate.

In our essay, we outline several principles for regulating bot speech. Free from the formal limits of the First Amendment, online platforms such as Twitter and Facebook have more leeway to regulate automated misbehavior. These platforms may be better positioned to address bots' unique and systematic impacts. Browser extensions, platform settings, and other tools could be used to filter or minimize undesirable bot speech more effectively and without requiring government intervention that could potentially run afoul of the First Amendment. A better role for government might be to hold platforms accountable for doing too little to address legitimate societal concerns over automated speech. [A]ny regulatory effort to domesticate the problem of bots must be sensitive to free speech concerns and justified in reference to the harms bots present. Blanket calls for bot disclosure to date lack the subtlety needed to address bot speech effectively without raising the specter of censorship.

Social Networks

Facebook Forced To Block 20,000 Posts About Snack Food Conspiracy After PepsiCo Sues, Says Report (gizmodo.com) 118

An anonymous reader quotes a report from Gizmodo: There is a rumor that Kurkure, a corn puff product developed by [PepsiCo] in India, is made of plastic. The conspiracy theory naturally thrived online, where people posted mocking videos and posts questioning whether the snack contained plastic. In response, PepsiCo obtained an interim order from the Delhi High Court to block all references to this conspiracy theory online in the country, MediaNama reports. Hundreds of posts claiming that Kurkure contains plastic have already been blocked across Facebook, Twitter, Instagram, and YouTube, according to LiveMint, and the court order requires social networks to continue to block such posts. According to MediaNama, PepsiCo petitioned for 3,412 Facebook links, 20,244 Facebook posts, 242 YouTube videos, six Instagram links, and 562 tweets to be removed, a request the court has granted. PepsiCo's argument is that these rumors are untrue and defame the brand -- though it's evident that a number of the posts are satirical in tone, poking fun at the rumor rather than earnestly trying to spread misinformation.
Censorship

Lawmakers Call On Amazon and Google To Reconsider Ban On Domain Fronting (cyberscoop.com) 44

An anonymous reader quotes CyberScoop: Amazon and Google face sharp questions from a bipartisan pair of U.S. senators over the tech giants' decisions to ban domain fronting, a technique used to circumvent censorship and surveillance around the world. Sen. Ron Wyden, D-Ore., and Sen. Marco Rubio, R-Fla., sent a letter on Tuesday to Google CEO Larry Page and Amazon CEO Jeff Bezos over decisions by both companies in April to ban domain fronting.

Amazon then warned the developers of encrypted messaging app Signal that the organization would be banned from Amazon's cloud services if the service didn't stop using Amazon's cloud as cover. "We respectfully urge you to reconsider your decision to prohibit domain fronting given the harm it will do to global internet freedom and the risk it will impose upon human rights activists, journalists, and others who rely on the internet freedom tools," the senators wrote.
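For context, domain fronting works by presenting a permitted domain in the TLS handshake (the SNI field, which censors can see) while naming the actual, blocked service in the HTTP Host header, which travels inside the encrypted tunnel. A minimal sketch of the idea; the domain names are hypothetical and nothing here contacts a real endpoint:

```python
# Sketch of the request side of domain fronting. On a real connection
# the TLS handshake would use server_hostname=FRONT_DOMAIN, so only
# FRONT_DOMAIN is visible on the wire; the Host header naming the
# hidden service is encrypted. Hypothetical domains throughout.
FRONT_DOMAIN = "cdn.example.com"        # what a censor observes (TLS SNI)
HIDDEN_SERVICE = "blocked.example.org"  # what the CDN routes to (Host header)

def build_fronted_request(path: str = "/") -> bytes:
    """Build an HTTP/1.1 request whose Host header names the hidden service."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_SERVICE}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()
```

Because the cloud provider terminates TLS for many tenants at the same front, it is the only party that can see the mismatch -- which is why Amazon's and Google's decisions to reject such mismatched requests shut the technique down, and why services like Signal were using their clouds "as cover."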

Social Networks

Social Media Manipulation Rising Globally, New Oxford Report Warns (phys.org) 99

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. "Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science," reports Phys.Org. From the report: "The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally," says Samantha Bradshaw, co-author of the report. "The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters."

This is despite governments in many democracies introducing new legislation designed to combat fake news on the internet. "The problem with this is that these 'task forces' to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes," says Professor Phil Howard, co-author and lead researcher on the OII's Computational Propaganda project. "At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking." Another challenge is the evolution of the mediums individuals use to share news and information. "There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms," says Bradshaw. "This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular."

Communications

Leaked Documents Show Facebook's 'Threshold' For Deleting Pages, Groups (vice.com) 94

Facebook has repeatedly referenced to lawmakers a "threshold" that must be reached before the platform decides to ban a particular page for violating the site's policies, but it hasn't discussed its guidelines publicly. Motherboard has obtained internal Facebook documents laying out what this threshold is for multiple different types of content, including some instances of hate speech. From the report: One Facebook moderator training document for hate speech says that for Pages -- Facebook's feature for sections dedicated to, say, a band, organization, public figure, or business -- the Page admin has to receive 5 "strikes" within 90 days for the Page itself to be deleted. Alternatively, Facebook moderators are told to remove a Page if at least 30 percent of the content posted by other people within 90 days violates Facebook's community standards. A similar 30 percent-or-over policy exists for Facebook Groups, according to the document.

In a similar vein, another hate speech document says that a profile should be taken down if there are 5 or more pieces of content from the user which indicate hate propaganda, photos of the user present with another identifiable leader, or other related violations. Although the documents obtained by Motherboard were created recently, Facebook's policies change regularly, so whether these exact parameters remain in force is unclear. Of course this still depends on moderators identifying and labeling posts as violating to reach that threshold. [...] Another document focused on sexual content says moderators should unpublish Pages and Groups under the basis of sexual solicitation if there are over 2 "elements," such as the Page description, title, photo, or pinned post, that include either explicit solicitation of nude imagery, or, if the page is more subtle, includes either a method of contact or a location. This slide again reiterates the over 30 percent and 5 admin posts rules found in the hate speech document.
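The numeric rules reported for Pages -- 5 admin strikes within 90 days, or at least 30 percent of others' posts violating the standards -- can be expressed directly. This is a reader's paraphrase of the reported thresholds for illustration, not Facebook's actual code, and the real policy (as the article notes) changes regularly and still depends on moderators labeling posts first:

```python
def page_should_be_removed(admin_strikes_90d: int,
                           violating_posts_90d: int,
                           total_posts_90d: int) -> bool:
    """Apply the two removal rules reported for Pages (and Groups)."""
    # Rule 1: the Page admin received 5 "strikes" within 90 days.
    if admin_strikes_90d >= 5:
        return True
    # Rule 2: at least 30% of content posted by other people within
    # 90 days violates the community standards.
    if total_posts_90d > 0 and violating_posts_90d / total_posts_90d >= 0.30:
        return True
    return False
```

Written out this way, the tension the article describes is visible: both rules are simple counters, so everything hinges on the upstream human judgment of which posts count as violations.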
