China

Apple Reverses Ban On App That Allowed Hong Kong Protestors to Track Police Movements (boingboing.net) 295

UPDATE (10/4/2019): "Apple has reportedly reversed its decision to ban the app HKmap.live," reports BoingBoing.

Apple had banned the app, which allows Hong Kong protesters to track protests and police movements in the city, despite increasing international condemnation of the violence used by the authorities, MacRumors had reported: According to The Register, Apple has told the makers of the HKmap Live app that it can't be allowed in the App Store because it helps protestors to evade the police. "Your app contains content - or facilitates, enables, and encourages an activity - that is not legal ... specifically, the app allowed users to evade law enforcement," the American tech giant told the makers of HKmap Live on Tuesday before pulling it. Opposition to the Chinese state and the Hong Kong authorities has grown louder, driven by an escalation in violence against protesters over the past week. On Wednesday, thousands of people took to the streets of Hong Kong to denounce the shooting of an unarmed teenage student by police. Tsang Chi-kin was shot in the chest at point-blank range on Tuesday. He remains in hospital in stable but critical condition after surgery to remove the bullet, which narrowly missed his heart.
Twitter

Kamala Harris Asks Twitter To Suspend Donald Trump For 'Civil War' and Whistleblower Tweets (theverge.com) 567

California senator and 2020 presidential candidate Kamala Harris has formally asked Twitter to suspend President Donald Trump's account, following Trump's attacks on a whistleblower and his claim that impeachment would start a civil war. From a report: In an open letter to Twitter CEO Jack Dorsey, Harris says that Trump has used Twitter to "target, harass, and attempt to out" the person who filed an explosive complaint about Trump pressuring Ukraine to dig up dirt on rival candidate Joe Biden. Trump has been tweeting angrily about the complaint for several days now. Harris cites multiple messages where he calls the whistleblower "a spy" as well as a tweet where he called to arrest Rep. Adam Schiff (D-CA), who has helped lead the investigation into Trump's actions, for "fraud and treason." Offline, Trump has arguably insinuated that the whistleblower should be executed for spying -- something Harris says makes his tweets more threatening. "These tweets should be placed in the proper context," she writes. Around the same time, Trump quoted a Fox News claim that "if the Democrats are successful in removing the president from office (which they will never be), it will cause a civil war-like fracture in this nation from which our country will never heal," which Harris also notes. "These tweets represent a clear intent to baselessly discredit the whistleblower and officials in our government who are following the proper channels to report allegations of presidential impropriety, all while making blatant threats that put people at risk and our democracy in danger," she writes. Twitter told The Verge that it has received the letter and plans to respond to Harris's concerns.
Media

Ask Slashdot: Will P2P Video Sites Someday Replace YouTube? 68

dryriver writes: BitChute is a video-hosting website like YouTube, except that it states its mission as being "anti-censorship" and is peer-to-peer, WebTorrent-based. "It is based on the peer-to-peer WebTorrent system, a JavaScript torrenting program that can run in a web browser," according to Wikipedia. "Users who watch a video also seed it. BitChute does not rely on advertising, and users can send payments to video creators directly. In November 2018 BitChute was banned from PayPal." So it seems that you don't need huge datacenters to build something like YouTube -- BitChute effectively relies on its users to act as a distributed P2P datacenter. Is this the future of internet video? Will more and more people flock to P2P video-hosting sites as/when more mainstream services like YouTube fall prey to various forms of censorship?
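The "users as a distributed datacenter" claim can be made concrete with a back-of-the-envelope model: in a WebTorrent-style swarm, each viewer who also seeds contributes upload capacity, so aggregate serving capacity grows with the audience instead of being fixed by a central server. This is an illustrative sketch, not BitChute's actual code; the seed ratio and bandwidth figures are assumptions.

```python
def swarm_capacity_mbps(origin_upload_mbps: float,
                        viewers: int,
                        avg_viewer_upload_mbps: float,
                        seed_ratio: float = 0.5) -> float:
    """Total upload capacity available to serve a video.

    seed_ratio is the (assumed) fraction of viewers who keep seeding
    while or after watching, as WebTorrent peers do.
    """
    return origin_upload_mbps + viewers * avg_viewer_upload_mbps * seed_ratio

# A single 100 Mbps origin plus 10,000 viewers each sparing 1 Mbps
# (half of them seeding) dwarfs the origin's own capacity.
print(swarm_capacity_mbps(100, 10_000, 1.0))  # 5100.0
```

The flip side, which the model also shows, is that capacity collapses when viewers leave: an unpopular video has few seeds and falls back on whatever origin capacity exists.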
Censorship

How TikTok Censors Videos That Do Not Please Beijing (theguardian.com) 23

According to leaked documents revealed by the Guardian, TikTok instructs its moderators to censor videos that mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong. "The documents [...] lay out how ByteDance, the Beijing-headquartered technology company that owns TikTok, is advancing Chinese foreign policy aims abroad through the app," writes Alex Hern for the Guardian. From the report: The guidelines divide banned material into two categories: some content is marked as a "violation," which sees it deleted from the site entirely, and can lead to a user being banned from the service. But lesser infringements are marked as "visible to self," which leaves the content up but limits its distribution through TikTok's algorithmically-curated feed. This latter enforcement technique means that it can be unclear to users whether they have posted infringing content, or if their post simply has not been deemed compelling enough to be shared widely by the notoriously unpredictable algorithm.

The bulk of the guidelines covering China are contained in a section governing "hate speech and religion." In every case, they are placed in a context designed to make the rules seem general purpose, rather than specific exceptions. A ban on criticism of China's socialist system, for instance, comes under a general ban of "criticism/attack towards policies, social rules of any country, such as constitutional monarchy, monarchy, parliamentary system, separation of powers, socialism system, etc." Another ban covers "demonization or distortion of local or other countries' history such as May 1998 riots of Indonesia, Cambodian genocide, Tiananmen Square incidents."

A more general purpose rule bans "highly controversial topics, such as separatism, religion sects conflicts, conflicts between ethnic groups, for instance exaggerating the Islamic sects conflicts, inciting the independence of Northern Ireland, Republic of Chechnya, Tibet and Taiwan and exaggerating the ethnic conflict between black and white." All the above violations result in posts being marked "visible to self." But posts promoting Falun Gong are marked as a "violation," since the organization is categorized as a "group promoting suicide," alongside the Aum cult that used sarin to launch terrorist attacks on the Tokyo Metro in 1995 and "Momo group," a hoax conspiracy that went viral earlier this year.
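The two-tier enforcement scheme the leaked guidelines describe can be sketched as a simple severity table: lesser infringements are shadow-limited ("visible to self") while the harshest category deletes the post and can trigger a ban. The category names and table below are hypothetical paraphrases of the Guardian's description, not TikTok's actual labels or code.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    VISIBLE_TO_SELF = "visible_to_self"  # post stays up but is not distributed
    VIOLATION = "violation"              # post deleted; may lead to a ban

# Hypothetical severity table paraphrasing the leaked two-tier scheme.
SEVERITY = {
    "policy_criticism": Action.VISIBLE_TO_SELF,
    "history_distortion": Action.VISIBLE_TO_SELF,
    "separatism": Action.VISIBLE_TO_SELF,
    "banned_group_promotion": Action.VIOLATION,
}

def moderate(labels: list[str]) -> Action:
    """Return the harshest action triggered by a post's labels."""
    actions = [SEVERITY.get(label, Action.ALLOW) for label in labels]
    if Action.VIOLATION in actions:
        return Action.VISIBLE_TO_SELF if False else Action.VIOLATION
    if Action.VISIBLE_TO_SELF in actions:
        return Action.VISIBLE_TO_SELF
    return Action.ALLOW

print(moderate(["policy_criticism"]).value)        # visible_to_self
print(moderate(["banned_group_promotion"]).value)  # violation
```

The point the article makes falls out of the sketch: a "visible to self" post looks normal to its author, so the user cannot distinguish enforcement from the feed algorithm simply ignoring them.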
ByteDance said in a statement that the version of the documents that the Guardian obtained was retired in May, and that the current guidelines do not reference specific countries or issues.
EU

Google Wins Fight To Restrict Right-To-Be-Forgotten Ruling To EU Search Engines (venturebeat.com) 66

Google has won a long-standing battle with the European Union (EU), after the European Court of Justice (ECJ) ruled the company can limit the scope of the "right-to-be-forgotten" (RTBF) regulation to searches made within the EU. From a report: Today's announcement was largely expected, given that an adviser to the EU's top court backed Google's case in January. (ECJ judges typically follow the advice given by the advocate general.) But now it's official, meaning Google and others will only have to delist search results from search engines inside the EU's borders. "The Court concludes that, currently, there is no obligation under EU law for a search engine operator who grants a request for de-referencing made by a data subject, as the case may be, following an injunction from a supervisory or judicial authority of a Member State, to carry out such a de-referencing on all the versions of its search engine," the ECJ said in a press release.
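The practical effect of the ruling is geo-scoped delisting: a granted de-referencing request suppresses a result only for queries originating inside the EU, while the same search from elsewhere still returns it. A minimal sketch of that logic, with an abbreviated country list and made-up URLs purely for illustration:

```python
# Hypothetical sketch of geo-scoped delisting; not Google's implementation.
EU_COUNTRIES = {"FR", "DE", "ES", "IT", "NL"}  # abbreviated for the example

def visible_results(results, delisted_urls, requester_country):
    """Filter delisted URLs only for requests coming from inside the EU."""
    if requester_country in EU_COUNTRIES:
        return [r for r in results if r not in delisted_urls]
    return list(results)

results = ["example.org/a", "example.org/b"]
print(visible_results(results, {"example.org/b"}, "FR"))  # ['example.org/a']
print(visible_results(results, {"example.org/b"}, "US"))  # both results kept
```

This is also why critics call the scoping porous: anything that changes the apparent origin of the request (a VPN, for instance) restores the delisted result.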
Australia

Australia Formally Censors Christchurch Attack Videos (theguardian.com) 318

"Australian internet service providers have been ordered to block eight websites hosting video of the Christchurch terrorist attacks," according to the Guardian.

Slashdot reader aberglas shares their report: In March, shortly after the Christchurch massacre, Australian telecommunications companies and internet providers began proactively blocking websites hosting the video of the Christchurch shooter murdering more than 50 people or the shooter's manifesto. A total of 43 websites, based on a list provided by Vodafone New Zealand, were blocked for Australian users not using virtual private networks (VPNs) or other workarounds. The government praised the internet providers, despite the blocks sitting in a legally grey area.

To avoid legal complications the prime minister, Scott Morrison, asked the e-safety commissioner and the internet providers to develop a protocol for the e-safety commissioner to order the websites to block access to the offending sites. The order issued on Sunday covers just eight websites, after several stopped hosting the material, or ceased operating, such as 8chan. The order means the e-safety commissioner will be responsible for monitoring the sites. If they remove the material they can be unblocked. The blocks will be reviewed every six months.

"The remaining rogue websites need only to remove the illegal content to have the block against them lifted," the e-safety commissioner, Julie Inman Grant, said.

Security

Hong Kong Protesters Using Mesh Messaging App China Can't Block: Usage Up 3685% (forbes.com) 57

An anonymous reader quotes Forbes: How do you communicate when the government censors the internet? With a peer-to-peer mesh broadcasting network that doesn't use the internet.

That's exactly what Hong Kong pro-democracy protesters are doing now, thanks to San Francisco startup Bridgefy's Bluetooth-based messaging app. The protesters can communicate with each other — and the public — using no persistent managed network...

While you can chat privately with contacts, you can also broadcast to anyone within range, even if they are not a contact.
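The broadcast behavior described above is a flooding pattern: each phone relays a message to every other phone within Bluetooth range, and deduplication stops the relay from looping forever, so a message can hop across a crowd far beyond any one radio's range. The sketch below is an assumption about how mesh broadcast apps like Bridgefy work in general, not Bridgefy's actual protocol.

```python
from collections import deque

def flood(adjacency: dict, origin: str) -> set:
    """Return the set of nodes a broadcast reaches, starting from origin.

    adjacency maps each phone to the phones currently in Bluetooth range.
    Each node relays once; the seen-set drops duplicate deliveries.
    """
    seen = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:  # suppress re-broadcast loops
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# A chain of phones, each only in radio range of its immediate neighbors:
mesh = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(sorted(flood(mesh, "a")))  # ['a', 'b', 'c', 'd']
```

Note what the graph implies for a protest crowd: the denser the crowd, the more redundant paths exist, so the network gets more robust exactly when it is most needed.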

That's clearly an ideal scenario for protesters who are trying to reach people but cannot use traditional SMS texting, email, or the undisputed uber-app of China: WeChat. All of them are monitored by the state.

On Wednesday, another Forbes article confirmed with Bridgefy that its app uses end-to-end RSA encryption -- though an associate professor at the Johns Hopkins Information Security Institute warns in the same article about the possibility of the Chinese government demanding that telecom providers hand over a list of all users running the app and where they're located.

Forbes also notes that "police could sign up to Bridgefy and, at the very least, cause confusion by flooding the network with fake broadcasts" -- or even use the app to spread privacy-compromising malware. "But if they're willing to accept the risk, Bridgefy could remain a useful tool for communicating and organizing in extreme situations."
Youtube

YouTube Removes 17,000 Channels For Hate Speech (hollywoodreporter.com) 409

An anonymous reader quotes a report from The Hollywood Reporter: YouTube says it has removed more than 17,000 channels for hate speech, representing a spike in takedowns since its new hate speech policy went into effect in June. The Google-owned company calls the June update -- in which YouTube said it would specifically prohibit videos that glorify Nazi ideology or deny documented violent events like the Holocaust -- a "fundamental shift in our policies" that resulted in the takedown of more than 100,000 individual videos during the second quarter of the year. The number of comments removed during the same period doubled to over 500 million, in part due to the new hate speech policy. YouTube said that the 30,000 videos it had removed in the last month represented 3 percent of the views that knitting videos generated during the same period. YouTube says the videos removed represented a five-times increase compared with the previous three months. Still, in early August the ADL's Center on Extremism reported finding "a significant number of channels" that continue to spread anti-Semitic and white supremacist content.
Social Networks

Facebook Says it May Remove Like Counts (techcrunch.com) 80

Facebook could soon start hiding the Like counter on News Feed posts to protect users from envy and dissuade them from self-censorship. From a report: Instagram is already testing this in 7 countries including Canada and Brazil, showing a post's audience just a few names of mutual friends who've Liked it instead of the total number. The idea is to prevent users from destructively comparing themselves to others and potentially fleeing if their posts don't get as many Likes. It could also stop users from deleting posts they think aren't getting enough Likes or not sharing in the first place. Reverse engineering master Jane Manchun Wong spotted Facebook prototyping the hidden Like counts in its Android app. When we asked Facebook, the company confirmed to TechCrunch that it's considering testing removal of Like counts. However it's not live for users yet.
Censorship

China Intercepts WeChat Texts From US and Abroad, Researcher Says (npr.org) 27

China is intercepting texts from WeChat users living outside of the country, mostly from the U.S., Taiwan, South Korea, and Australia. NPR reports: The popular Chinese messaging app WeChat is Zhou Fengsuo's most reliable communication link to China. That's because he hasn't been back in over two decades. Zhou, a human rights activist, had been a university student in 1989, when the pro-democracy protests broke out in Beijing's Tiananmen Square. After a year in jail and another in political reeducation, he moved to the United States in 1995. But WeChat often malfunctions. Zhou began noticing in January that his chat groups could not read his messages. "I realized this because I was expecting some feedback [on a post] but there was no feedback," Zhou tells NPR from his home in New Jersey.

As Chinese technology companies expand their footprint outside China, they are also sweeping up vast amounts of data from foreign users. Now, analysts say they know where the missing messages are: Every day, millions of WeChat conversations held inside and outside China are flagged, collected and stored in a database connected to public security agencies in China, according to a Dutch Internet researcher. Zhou is not the only one experiencing recent issues. NPR spoke to three other U.S. citizens who have been blocked from sending messages in WeChat groups or had their accounts frozen earlier this year, despite registering with U.S. phone numbers. This March, [Victor Gevers, co-founder of the nonprofit GDI Foundation, an open-source data security collective] found a Chinese database storing more than 1 billion WeChat conversations, including more than 3.7 billion messages, and tweeted out his findings. Each message had been tagged with a GPS location, and many included users' national identification numbers. Most of the messages were sent inside China, but more than 19 million of them had been sent from people outside the country, mostly from the U.S., Taiwan, South Korea and Australia.

Google

Google Doesn't Want Staff Debating Politics at Work Anymore (bloomberg.com) 301

Google posted new internal rules that discourage employees from debating politics, a shift away from the internet giant's famously open culture. From a report: The new "community guidelines" tell employees not to have "disruptive" conversations and warn workers that they'll be held responsible for whatever they say at the office. The company is also building a tool to let employees flag problematic posts and creating a team of moderators to monitor conversations, a Google spokeswoman said. "While sharing information and ideas with colleagues helps build community, disrupting the workday to have a raging debate over politics or the latest news story does not," the new policy states. "Our primary responsibility is to do the work we've each been hired to do." Google has long encouraged employees to question each other and push back against managers when they think they're making the wrong decision. Google's founders point to the open culture as instrumental to the success they've had revolutionizing the tech landscape over the last two decades.
Privacy

Degrading Tor Network Performance Only Costs a Few Thousand Dollars Per Month (zdnet.com) 16

Threat actors or nation-states looking into degrading the performance of the Tor anonymity network can do it on the cheap, for only a few thousand US dollars per month, new academic research has revealed. An anonymous reader writes: According to researchers from Georgetown University and the US Naval Research Laboratory, threat actors can use tools as banal as public DDoS stressers (booters) to slow down Tor network download speeds or hinder access to Tor's censorship circumvention capabilities. Academics said that while an attack against the entire Tor network would require immense DDoS resources (512.73 Gbit/s) and would cost around $7.2 million per month, there are far simpler and more targeted means for degrading Tor performance for all users. In research presented this week at the USENIX security conference, the research team showed the feasibility and effects of three types of carefully targeted "bandwidth DoS [denial of service] attacks" that can wreak havoc on Tor and its users. Researchers argue that while these attacks don't shut down or clog the Tor network entirely, they can be used to dissuade or drive users away from Tor due to prolonged poor performance, which can be an effective strategy in the long run.
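Working through the paper's headline numbers shows why targeted attacks are so cheap: if saturating the entire network needs ~512.73 Gbit/s of stresser bandwidth at ~$7.2 million per month, the implied market rate is roughly $14,000 per Gbit/s per month, so an attacker who only needs to congest a handful of relays or bridges pays a small fraction of the full-network price. The targeted-attack bandwidth below is an assumed figure for illustration, not one from the paper.

```python
# Figures from the research as reported: full-network saturation cost.
full_network_gbps = 512.73
full_network_cost_usd = 7_200_000       # per month

cost_per_gbps = full_network_cost_usd / full_network_gbps
print(round(cost_per_gbps))             # ~14042 USD per Gbit/s per month

# An assumed targeted attack against a few relays needs far less bandwidth.
targeted_attack_gbps = 0.5
print(round(targeted_attack_gbps * cost_per_gbps))  # ~7021 USD per month
```

That order of magnitude matches the article's "few thousand dollars per month" framing for degrading, rather than shutting down, the network.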
AI

The Algorithms That Detect Hate Speech Online Are Biased Against Black People (vox.com) 328

An anonymous reader shares a report: Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online. But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study [PDF], researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study [PDF] found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.

This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings -- like the "n-word" or "queer" -- may not be in others. But algorithms -- and content moderators who grade the test data that teaches these algorithms how to do their job -- don't usually know the context of the comments they're reviewing. Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI -- which is often proposed as a tool to objectively identify offensive language -- can amplify the same biases that human beings have. They also show how the test data that feeds these algorithms has baked-in bias from the start.
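The disparity figures the studies report (1.5x, 2.2x) are ratios of flag rates between dialect groups, and the measurement itself is simple to state in code. The data below is entirely made up to show the computation; only the method, not the numbers, reflects the studies.

```python
def flag_rate(predictions) -> float:
    """Fraction of posts a classifier flagged (1 = flagged, 0 = not)."""
    return sum(predictions) / len(predictions)

# Hypothetical classifier outputs over two equal-sized samples of tweets.
aae_tweets_flagged   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # African American English
other_tweets_flagged = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

disparity = flag_rate(aae_tweets_flagged) / flag_rate(other_tweets_flagged)
print(round(disparity, 2))  # 2.33x more likely to be flagged in this toy sample
```

A ratio well above 1.0 on samples of comparable actual offensiveness is exactly the kind of evidence the papers present: the classifier is penalizing the dialect, not the content.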

China

Huawei Technicians Helped African Governments Spy on Political Opponents (wsj.com) 34

phalse phace writes: A WSJ investigation appears to have uncovered multiple instances where the African governments of Uganda and Zambia, with the help of Huawei technicians, used Huawei's communications equipment to spy on and censor political opponents and their citizens. From the report: Huawei Technologies dominates African markets, where it has sold security tools that governments use for digital surveillance and censorship. But Huawei employees have provided other services, not disclosed publicly. Technicians from the Chinese powerhouse have, in at least two cases, personally helped African governments spy on their political opponents, including intercepting their encrypted communications and social media, and using cell data to track their whereabouts, according to senior security officials working directly with the Huawei employees in these countries.

It should be noted that while the findings "show how Huawei employees have used the company's technology and other companies' products to support the domestic spying of those governments," the investigation didn't turn up evidence of spying by or on behalf of Beijing in Africa. Nor did it find that Huawei executives in China knew of, directed or approved the activities described. It also didn't find that there was something particular about the technology in Huawei's network that made such activities possible. Details of the operations, however, offer evidence that Huawei employees played a direct role in government efforts to intercept the private communications of opponents.

The Internet

Should Some Sites Be Liable For The Content They Host? (nytimes.com) 265

America's lawmakers are scrutinizing the blanket protections in Section 230 of the Communications Decency Act, which lets online companies moderate their own sites without incurring legal liability for everything they host.

schwit1 shared this article from the New York Times: Last month, Senator Ted Cruz, Republican of Texas, said in a hearing about Google and censorship that the law was "a subsidy, a perk" for big tech that may need to be reconsidered. In an April interview, Speaker Nancy Pelosi of California called Section 230 a "gift" to tech companies "that could be removed."

"There is definitely more attention being paid to Section 230 than at any time in its history," said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and the author of a book about the law, The Twenty-Six Words That Created the Internet .... Mr. Wyden, now a senator [and a co-author of the original bill], said the law had been written to provide "a sword and a shield" for internet companies. The shield is the liability protection for user content, but the sword was meant to allow companies to keep out "offensive materials." However, he said firms had not done enough to keep "slime" off their sites. In an interview with The New York Times, Mr. Wyden said he had recently told tech workers at a conference on content moderation that if "you don't use the sword, there are going to be people coming for your shield."

There is also a concern that the law's immunity is too sweeping. Websites trading in revenge pornography, hate speech or personal information to harass people online receive the same immunity as sites like Wikipedia. "It gives immunity to people who do not earn it and are not worthy of it," said Danielle Keats Citron, a law professor at Boston University who has written extensively about the statute. The first blow came last year with the signing of a law that creates an exception in Section 230 for websites that knowingly assist, facilitate or support sex trafficking. Critics of the new law said it opened the door to create other exceptions and would ultimately render Section 230 meaningless.

The article notes that while lawmakers from both parties are challenging the protections, "they disagree on why," with Republicans complaining that the law has only protected some free speech while still leaving conservative voices open to censorship on major platforms.

The Times also notes that when Wyden co-authored the original bill in 1996, Google didn't exist yet, and Mark Zuckerberg was 11 years old.
Facebook

White House Proposal Would Have FCC and FTC Police Alleged Social Media Censorship (cnn.com) 140

A draft executive order from the White House could put the Federal Communications Commission in charge of shaping how Facebook, Twitter and other large tech companies curate what appears on their websites, CNN reported Friday, citing multiple people familiar with the matter. From the report: The draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms. Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies. If put into effect, the order would reflect a significant escalation by President Trump in his frequent attacks against social media companies over an alleged but unproven systemic bias against conservatives by technology platforms. And it could lead to a significant reinterpretation of a law that, its authors have insisted, was meant to give tech companies broad freedom to handle content as they see fit.
Communications

Turkey Moves To Oversee All Online Content, Raises Concerns Over Censorship (reuters.com) 71

stikves writes: Turkey has granted its radio and television watchdog sweeping oversight over all online content, including streaming platforms like Netflix and online news outlets, in a move that raised concerns over possible censorship. The move was initially approved by Turkey's parliament in March last year, with support from President Tayyip Erdogan's ruling AK Party and its nationalist ally. The regulation, published in Turkey's Official Gazette on Thursday, mandates all online content providers to obtain broadcasting licenses from RTUK, which will then supervise the content put out by the providers. Aside from streaming giant Netflix, other platforms like local streaming websites PuhuTV and BluTV, which in recent years have produced popular shows, will be subject to supervision and potential fines or loss of their license. In addition to subscription services like Netflix, free online news outlets which rely on advertising for their revenues will also be subject to the same measures.
The Internet

Cloudflare Terminates 8chan (cloudflare.com) 940

"We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time," writes Cloudflare CEO Matthew Prince.

"The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths. Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit." We do not take this decision lightly. Cloudflare is a network provider. In pursuit of our goal of helping build a better internet, we've considered it important to provide our security services broadly to make sure as many users as possible are secure, and thereby making cyberattacks less attractive -- regardless of the content of those websites. Many of our customers run platforms of their own on top of our network. If our policies are more conservative than theirs it effectively undercuts their ability to run their services and set their own policies. We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line. It will therefore no longer be allowed to use our services.

Unfortunately, we have seen this situation before and so we have a good sense of what will play out. Almost exactly two years ago we made the determination to kick another disgusting site off Cloudflare's network: the Daily Stormer. That caused a brief interruption in the site's operations but they quickly came back online using a Cloudflare competitor. That competitor at the time promoted as a feature the fact that they didn't respond to legal process. Today, the Daily Stormer is still available and still disgusting. They have bragged that they have more readers than ever. They are no longer Cloudflare's problem, but they remain the Internet's problem.

I have little doubt we'll see the same happen with 8chan.

Prince adds that since terminating the Daily Stormer they've been "engaging" with law enforcement and civil society organizations to "try and find solutions," which include "cooperating around monitoring potential hate sites on our network and notifying law enforcement when there was content that contained an indication of potential violence." Earlier today Prince had used this argument in defense of Cloudflare's hosting of 8chan, telling the Guardian "There are lots of competitors to Cloudflare that are not nearly as law abiding as we have always been." He added in today's blog post that "We believe this is our responsibility and, given Cloudflare's scale and reach, we are hopeful we will continue to make progress toward solving the deeper problem."

"We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often.... Cloudflare is not a government. While we've been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it. Questions around content are real societal issues that need politically legitimate solutions..."

"What's hard is defining the policy that we can enforce transparently and consistently going forward. We, and other technology companies like us that enable the great parts of the Internet, have an obligation to help propose solutions to deal with the parts we're not proud of. That's our obligation and we're committed to it."
Encryption

Did Facebook End The Encryption Debate? (forbes.com) 163

Forbes contributor Kalev Leetaru argues that "the encryption debate is already over -- Facebook ended it earlier this year." The ability of encryption to shield a user's communications rests upon the assumption that the sender and recipient's devices are themselves secure, with the encrypted channel the only weak point... [But] Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users' devices where it can bypass the protections of end-to-end encryption. In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted. The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service...

If Facebook's model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape... Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content. Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.
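The client-side scanning model the article describes can be sketched in a few lines: the plaintext is checked against a locally stored, cloud-updated blacklist before encryption, and a match streams a cleartext copy back to a central service, so end-to-end encryption is bypassed rather than mathematically broken. This is a hedged illustration of the general technique, not Facebook or WhatsApp code; the blacklist, cipher, and reporting hook are all stand-ins.

```python
import hashlib

# Hash blacklist, assumed to be pushed down from a central service.
BLACKLIST_HASHES = {
    hashlib.sha256(b"banned phrase").hexdigest(),
}

def send_message(plaintext: str, encrypt, report) -> str:
    """Scan plaintext locally, then encrypt and send as usual."""
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    if digest in BLACKLIST_HASHES:
        report(plaintext)        # cleartext copy streamed to the server
    return encrypt(plaintext)    # the message is still sent "encrypted"

reports = []
ciphertext = send_message("banned phrase",
                          encrypt=lambda m: m[::-1],  # toy stand-in cipher
                          report=reports.append)
print(reports)  # ['banned phrase'] -- the scan leaked it before encryption
```

The sketch makes the author's point concrete: the encryption itself is untouched, yet the channel's confidentiality guarantee evaporates because the scan happens on the plaintext side of it.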

Putting this all together, the sad reality of the encryption debate is that after 30 years it is finally over: dead at the hands of Facebook. If the company's new on-device content moderation succeeds it will usher in the end of consumer end-to-end encryption and create a framework for governments to outsource their mass surveillance directly to social media companies, completely bypassing encryption.

In the end, encryption's days are numbered and the world has Facebook to thank.


UPDATE: 8/2/2019 Will Cathcart, WhatsApp's vice president of product management, took to the internet with this forceful response. "We haven't added a backdoor to WhatsApp. To be crystal clear, we have not done this, have zero plans to do so, and if we ever did, it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise, which is why we are opposed to it."
Electronic Frontier Foundation

EFF Argues For 'Empowerment, Not Censorship' Online (eff.org) 62

An activism director and a legislative analyst at the EFF have co-authored an essay arguing that the key to children's safety online "is user empowerment, not censorship," reporting on a recent hearing by the U.S. Senate's Judiciary Committee: While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids. Don't censor users; empower them to choose... [W]hen lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more highly filtered Internet, calling to move the discussion "from protection to empowerment." In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree.

It's foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use. Furthermore, tech and media companies shouldn't abuse copyright and other laws to prevent third parties from offering customization options to people who want them.
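The calibration idea in the paragraph above has a simple shape in code: filtering decisions keyed on each user's own per-category thresholds rather than one platform-wide standard. The category names and scores here are hypothetical, purely to illustrate the design.

```python
def visible(post_labels: dict, prefs: dict) -> bool:
    """Show a post unless it exceeds the *user's own* thresholds.

    post_labels: classifier scores per category (0.0 - 1.0, assumed).
    prefs: this user's maximum tolerated score per category.
    """
    return all(post_labels.get(cat, 0.0) <= limit
               for cat, limit in prefs.items())

post = {"profanity": 0.6, "violence": 0.1}

strict_user  = {"profanity": 0.2, "violence": 0.2}
relaxed_user = {"profanity": 0.9, "violence": 0.9}

print(visible(post, strict_user))   # False -- filtered by this user's choice
print(visible(post, relaxed_user))  # True  -- same post, different user
```

The design choice this illustrates is the essay's core argument: the same post gets different treatment for different users, so no single censorship decision is imposed on everyone.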

The essay also argues that Congress "should closely examine companies whose business models rely on collecting, using, and selling children's personal information..."

"We've highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google's student data practices, but the agency never responded."
