Censorship

Big Tech Sues Texas, Says Age-Verification Law Is 'Broad Censorship Regime' (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Texas is being sued by a Big Tech lobby group over the state's new law that will require app stores to verify users' ages and impose restrictions on users under 18. "The Texas App Store Accountability Act imposes a broad censorship regime on the entire universe of mobile apps," the Computer & Communications Industry Association (CCIA) said yesterday in a lawsuit (PDF). "In a misguided attempt to protect minors, Texas has decided to require proof of age before anyone with a smartphone or tablet can download an app. Anyone under 18 must obtain parental consent for every app and in-app purchase they try to download -- from ebooks to email to entertainment."

The CCIA said in a press release that the law violates the First Amendment by imposing "a sweeping age-verification, parental consent, and compelled speech regime on both app stores and app developers." When app stores determine that a user is under 18, "the law prohibits them from downloading virtually all apps and software programs and from making any in-app purchases unless their parent consents and is given control over the minor's account," the CCIA said. "Minors who are unable to link their accounts with a parent's or guardian's, or who do not receive permission, would be prohibited from accessing app store content."

The law requires app developers "to 'age-rate' their content into several subcategories and explain their decision in detail," and "notify app stores in writing every time they improve or modify the functions, features, or user experience of their apps," the group said. The lawsuit says the age-rating system relies on a "vague and unworkable set of age categories." "Our Constitution forbids this," the lawsuit said. "None of our laws require businesses to 'card' people before they can enter bookstores and shopping malls. The First Amendment prohibits such oppressive laws as much in cyberspace as it does in the physical world." The lawsuit was filed in US District Court for the Western District of Texas. CCIA members include Apple and Google, which have both said the law would reduce privacy for app users. The companies recently described their plans to comply, saying they would take steps to minimize the privacy risks.

AI

Hollywood Demands Copyright Guardrails from Sora 2 - While Users Complain That's Less Fun (yahoo.com)

Enthusiasm for Sora 2 "wasn't shared in Hollywood," reports the Los Angeles Times, "where the new AI tools have created a swift backlash" that "appears to be only just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business." [OpenAI] executives went on a charm offensive last year. They reached out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology. This year, the San Francisco-based AI startup took a more assertive approach. Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they needed to explicitly declare which pieces of intellectual property — including licensed characters — were being opted out of having their likeness depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said. OpenAI disputes the claim and says that it was always the company's intent to give actors and other public figures control over how their likeness is used.

The response was immediate.... [Big talent agencies objected, along with performers' unions and major studios.] "Decades of enforceable copyright law establishes that content owners do not need to 'opt out' to prevent infringing uses of their protected IP," Warner Bros. Discovery said in a statement... The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content they need, legal experts said... One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don't believe a flat fee works.

Meanwhile, "the complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week," writes Gizmodo. But that means the service has "now pissed off its users." As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can't make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was "the only reason this app was so fun."
Futurism published more reactions, including "It's official, Sora 2 is completely boring and useless with these copyright restrictions." Others accused OpenAI of abusing copyright to hype up its new app. "This is just classic OpenAI at this point," another user wrote. "They do this s*** all the time. Let people have fun for a day or two and then just start censoring like crazy." The app now has a measly 2.9-star rating on the App Store, indicative of growing disillusionment and frustration with censorship... [It's now dropped to 2.8.]

In an apparent effort to save face, Altman claimed this week that many copyright holders are actually begging to have their characters appear on Sora, instead of complaining about the trend. "In the case of Sora, we've heard from a lot of concerned rightsholders and also a lot of rightsholders who are like 'My concern is you won't put my character in enough,'" he told the a16z podcast earlier this week. "So I can completely see a world where subject to the decisions that a rightsholder has, they get more upset with us for not generating their character often enough than too much," he added. Whether most rightsholders would agree with that sentiment remains to be seen.

Business Insider offers another reaction. After watching Sora 2's main public feed, they write that Sora 2 "seems to be overrun with teenage boys."
China

Horror Film's Wedding Scene Digitally Altered for Chinese Audiences (theguardian.com)

Australian horror film Together, starring Dave Franco and Alison Brie, underwent digital alterations for its mainland China release on September 12. Chinese cinemagoers discovered that a wedding scene between two men had been modified using face-swapping technology to transform one male character into a female appearance. The change only became apparent after side-by-side screenshots from the original and altered versions circulated on social media platforms.

Chinese viewers are expressing outrage over the AI-powered modification, The Guardian reports, citing concerns about creative integrity and the difficulty of detecting such alterations compared to traditional scene cuts. The film's distributor halted the scheduled September 19 general release following the backlash. China's censorship authorities require all imported films to undergo approval before release.
Youtube

YouTube Reinstating Creators Banned For COVID-19, Election Content (thehill.com)

YouTube's parent company, Alphabet, said it will reinstate creators previously banned for spreading COVID-19 misinformation and false election claims, citing free expression and shifting policy guidelines. The Hill reports: "Reflecting the Company's commitment to free expression, YouTube will provide an opportunity for all creators to rejoin the platform if the Company terminated their channels for repeated violations of COVID-19 and elections integrity policies that are no longer in effect," the company said in a letter to Rep. Jim Jordan (R-Ohio), chair of the House Judiciary Committee. "YouTube values conservative voices on its platform and recognizes that these creators have extensive reach and play an important role in civic discourse. The Company recognizes these creators are among those shaping today's online consumption, landing 'must-watch' interviews, giving viewers the chance to hear directly from politicians, celebrities, business leaders, and more," it added in the five-page correspondence.

Alphabet blamed the Biden administration for limiting political speech on the platform. "Senior Biden Administration officials, including White House officials, conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies," the letter read. "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content," it continued. Guidelines were changed after former President Biden took office and urged platforms to remove content that encouraged citizens to drink bleach to cure COVID-19, as President Trump suggested in 2020, or join insurrection efforts launched on Jan. 6, 2021, to overthrow his 2020 presidential win. But the company said the Biden administration's decisions were "unacceptable" and "wrong," while noting it would forgo future fact-checking mechanisms and instead allow users to add context notes to content.

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, more strict and invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the users' parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right which sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," Open AI said in its announcement. The position it seemed to have landed on given these recent stories about teen suicide, is that it wants to "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."
The Media

Wired Retracts Article By 'AI Freelancer' - and Business Insider Retracts 38 (msn.com)

"A raft of articles have been retracted from publications including Business Insider and Wired in recent month," reports the Washington Post, "with links between them suggesting a possible broader scheme to pass off fake stories that these outlets now suspect were written using artificial intelligence." A Washington Post probe into the retractions found a connection between Onyeka Nwelue, the purported author of one of 38 essays removed this week by Business Insider, and someone using the name Margaux Blanchard, two of whose stories were previously removed by the same outlet. In recent months SFGate, Index on Censorship and Wired also retracted articles under the Blanchard byline, after it was identified as bogus by the British publication Press Gazette...

Business Insider Editor in Chief Jamie Heller explained to staff Tuesday in an email, obtained by The Post, that the report of a phony writer spurred a fuller investigation that turned up dozens of suspicious articles under various bylines. "We recently learned that a freelance contributor misrepresented their identity in two first-person essays written for Business Insider. As soon as this came to light, we took down the essays and began an investigation," Heller said. "As part of this process, we've removed additional first-person essays from the site due to concerns about the authors' identity or veracity. No news articles or videos were found to have this issue." On Tuesday Business Insider removed 38 pieces that had been published under bylines other than Blanchard. Business Insider deleted the author pages of 19 individuals, including Blanchard and Nwelue, and replaced their essays with editor's notes.

The website's investigation involved reviewing "tens of thousands of records," Business Insider spokesperson Ari Isaacman D'Angelo said in a statement to The Post. But it hadn't determined whether artificial intelligence was used to produce the yanked essays, she said, noting that AI-detection tools are often unreliable... Essays under [Nate] Giovanni's byline feature contradictory information. One piece, published in December 2024, refers to the author having two teenage daughters and a two-and-a-half-year-old son. Another, published three months later, mentions two sons, aged eight and nine. Pieces that ran in May and July — about house-sitting around the world and applying to PhD programs — make no mention of a family at all...

On Aug. 21, Wired wrote a longer mea culpa about the article it published under Blanchard's name, with the headline "How WIRED Got Rolled by an AI Freelancer." "If anyone should be able to catch an AI scammer, it's WIRED," the publication wrote. ["In fact we do, all the time. Our editors receive transparently AI-generated pitches on a regular basis, and we reject them accordingly..."] "Unfortunately, one got through," referring to a story that ran under Blanchard's byline in May about two people who were married in the video game Minecraft.

The site Index on Censorship also published an article under the Blanchard byline about threats to journalists in Guatemala. "In the age of very intelligent AI it's clear we will have to look at things differently," the site's editor told the Washington Post.

The Post's article notes that one sign the pitches were AI-generated "is that while they sounded interesting, they featured details that were erroneous — including fictitious locales." Reached for comment, one of the authors told the Post "Don't mention my name in your stupid article," claiming their account was recently "compromised" (though their X.com account had also recently tweeted one of their articles). But another author emailed the Post from their actual academic email address, saying they had no connection to the Gmail account The Post had been corresponding with. And here's how the person at that Gmail account responded to a follow-up query from the Post.

"What is one to do? With a few savvy prompts, AI could probably generate a 'long-lost' novel by Proust."
The Courts

4chan and Kiwi Farms Sue the UK Over Its Age Verification Law (404media.co)

An anonymous reader quotes a report from 404 Media: 4chan and Kiwi Farms sued the United Kingdom's Office of Communications (Ofcom) over its age verification law in U.S. federal court Wednesday, fulfilling a promise announced on August 23. In the lawsuit, 4chan and Kiwi Farms claim that threats and fines they have received from Ofcom "constitute foreign judgments that would restrict speech under U.S. law." Both entities say in the lawsuit that they are wholly based in the U.S. and that they do not have any operations in the United Kingdom and are therefore not subject to local laws. Ofcom's attempts to fine and block 4chan and Kiwi Farms, and the lawsuit against Ofcom, highlight the messiness involved with trying to restrict access to specific websites or to force companies to comply with age verification laws.

The lawsuit calls Ofcom an "industry-funded global censorship bureau." "Ofcom's ambitions are to regulate Internet communications for the entire world, regardless of where these websites are based or whether they have any connection to the UK," the lawsuit states. "On its website, Ofcom states that 'over 100,000 online services are likely to be in scope of the Online Safety Act -- from the largest social media platforms to the smallest community forum.'" [...] Ofcom began investigating 4chan over alleged violations of the Online Safety Act in June. On August 13, it announced a provisional decision and stated that 4chan had "contravened its duties" and then began to charge the site a penalty of [roughly $26,000] a day. Kiwi Farms has also been threatened with fines, the lawsuit states.
"American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail. In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights," said Preston Byrne, one of the lawyers representing 4chan and Kiwi Farms.

"We are aware of the lawsuit," an Ofcom spokesperson told 404 Media. "Under the Online Safety Act, any service that has links with the UK now has duties to protect UK users, no matter where in the world it is based. The Act does not, however, require them to protect users based anywhere else in the world."
United States

FTC Warns Tech Giants Not To Bow To Foreign Pressure on Encryption (bleepingcomputer.com)

The Federal Trade Commission is warning major U.S. tech companies against yielding to foreign government demands that weaken data security, compromise encryption, or impose censorship on their platforms. From a report: FTC Chairman Andrew N. Ferguson signed the letter sent to large American companies like Akamai, Alphabet (Google), Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X (Twitter). Ferguson stresses that weakening data security at the request of foreign governments, especially if they don't alert users about it, would constitute a violation of the FTC Act and expose companies to legal consequences.

Ferguson's letter specifically cites foreign laws such as the EU's Digital Services Act and the UK's Online Safety and Investigatory Powers Acts. Earlier this year, Apple was forced to remove support for iCloud end-to-end encryption in the United Kingdom rather than give in to demands to add a backdoor for the government to access encrypted accounts. The UK's demand would have weakened Apple's encryption globally, but it was retracted last week following U.S. diplomatic pressure.

The Almighty Buck

4chan Refuses To Pay UK Online Safety Act Fines (bbc.com)

An anonymous reader quotes a report from the BBC: A lawyer representing the online message board 4chan says it won't pay a proposed fine by the UK's media regulator as it enforces the Online Safety Act. According to Preston Byrne, managing partner of law firm Byrne & Storm, Ofcom has provisionally decided to impose a 20,000-pound fine "with daily penalties thereafter" for as long as the site fails to comply with its request. "Ofcom's notices create no legal obligations in the United States," he told the BBC, adding he believed the regulator's investigation was part of an "illegal campaign of harassment" against US tech firms.

"4chan has broken no laws in the United States -- my client will not pay any penalty," Mr Byrne said. Ofcom began investigating 4chan over whether it was complying with its obligations under the UK's Online Safety Act. Then in August, it said it had issued 4chan with "a provisional notice of contravention" for failing to comply with two requests for information. Ofcom said its investigation would examine whether the message board was complying with the act, including requirements to protect its users from illegal content.
"American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an email," law firms Byrne & Storm and Coleman Law wrote. "Under settled principles of US law, American courts will not enforce foreign penal fines or censorship codes. If necessary, we will seek appropriate relief in US federal court to confirm these principles."

The statement calls on the Trump administration to intervene and protect American businesses from "extraterritorial censorship mandates."
China

China Isolates Itself From Worldwide Web For Over an Hour (theregister.com)

A complete shutdown of encrypted web traffic isolated China from the global internet for 74 minutes Wednesday morning, blocking citizens from accessing foreign websites and disrupting international business operations that depend on secure connections to offshore servers. The Great Firewall began injecting forged TCP RST+ACK packets to terminate all connections on port 443 at 00:34 Beijing time on August 20, according to activist group Great Firewall Report.

The standard HTTPS port carries most modern web traffic, meaning Chinese users lost access to virtually all foreign-hosted websites while companies including Apple and Tesla couldn't connect to servers powering their basic services. The blocking device didn't match known Great Firewall hardware fingerprints, suggesting Beijing either deployed new censorship equipment or experienced a configuration error. Pakistan's internet traffic dropped significantly hours before China's incident, potentially connected through shared firewall technology.
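To make the injection mechanism concrete: the TCP control flags occupy a single byte of the header, so a middlebox that forges a segment with RST and ACK set can tear down a connection it is not a party to. Below is a minimal, hypothetical sketch of parsing that flags byte and checking for the RST+ACK-on-port-443 combination the Great Firewall Report describes; the byte offsets follow the standard TCP header layout, but the heuristic itself is illustrative, not the group's actual detection method.

```python
import struct

# TCP flag bits (RFC 9293): a forged RST+ACK carries both bits (0x14).
FIN, SYN, RST, PSH, ACK = 0x01, 0x02, 0x04, 0x08, 0x10

def tcp_flags(segment: bytes) -> int:
    """Extract the flags byte from a raw TCP header.

    Byte 13 of the TCP header holds the control bits (assuming the
    segment starts at the TCP header, with no lower-layer headers).
    """
    return segment[13]

def looks_like_gfw_reset(segment: bytes, dst_port: int) -> bool:
    """Illustrative heuristic: an injected reset on HTTPS traffic,
    i.e. port 443 with RST and ACK set together."""
    flags = tcp_flags(segment)
    return dst_port == 443 and flags & RST != 0 and flags & ACK != 0

# Build a minimal 20-byte TCP header with flags = RST|ACK:
# src port, dst port, seq, ack, data offset (5 words), flags, window,
# checksum, urgent pointer.
hdr = struct.pack("!HHIIBBHHH", 1234, 443, 0, 1, 5 << 4, RST | ACK, 0, 0, 0)
print(looks_like_gfw_reset(hdr, 443))  # True
```

On the real wire, the endpoints cannot easily tell such a forged reset from a legitimate one, which is why blanket injection on port 443 severs essentially all HTTPS connections at once.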
AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com)

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.
But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

The Courts

Country's Strictest Ban On Election Deepfakes Struck By Judge (politico.com)

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to overrule a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed by Christopher Kohls, creator of a parody video targeted by the law, on First Amendment grounds, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

Games

Itch.io Starts Returning the Free Games It Removed From Its Store (aftermath.site)

"Digital storefront Itch.io is reindexing its free adult games," reports Engadget, "and is talking to its partnered payment processors about plans to gradually reintroduce paid NSFW content..." In a statement included in the Itch.io update, Stripe said it hasn't closed the door on the possibility of being able to support adult content again in the future. In the meantime, Itch.io says it is talking to its other payment partners about accepting the card payments Stripe is currently no longer able to process.
Itch's founder told the gaming news site Aftermath that it was a notice from Visa that led to the sudden deindexing of so many games. But Aftermath notes that Visa and Mastercard have now "both released statements effectively washing their hands of the situation but also, paradoxically, justifying any actions they might have taken."

- Visa: "When a legally operating merchant faces an elevated risk of illegal activity, we require enhanced safeguards for the banks supporting those merchants..."

- Mastercard: "Our payment network follows standards based on the rule of law. Put simply, we allow all lawful purchases on our network. At the same time, we require merchants to have appropriate controls to ensure Mastercard cards cannot be used for unlawful purchases, including illegal adult content."

Aftermath's take? The part where the two companies act as though their hands have been tied by the long arm of the law is, frankly, bullshit. None of the games removed from Steam or Itch were illegal. They depict actions that are perfectly legal in other mediums. To re-quote Mike Stabile, director of policy at the Free Speech Coalition: "The stuff [companies] are talking about is entirely legal. It's legal to have in a book, it's legal to have in a game. They are making decisions based on their brand, based on public pressure from anti-porn groups, and that can be reversed."
Meanwhile, gamers are still pushing back: It's difficult to say just how many people have spent the past several days tying up the lines of card companies and payment processors, but the movement has made itself visible enough to gain support from larger industry bodies like the Communications Workers of America [the largest communications/media labor union in America] and the International Game Developers Association.
The Internet

Google Tool Misused To Scrub Tech CEO's Shady Past From Search (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Google is fond of saying its mission is to "organize the world's information," but who gets to decide what information is worthy of organization? A San Francisco tech CEO has spent the past several years attempting to remove unflattering information about himself from Google's search index, and the nonprofit Freedom of the Press Foundation says he's still at it. Most recently, an unknown bad actor used a bug in one of Google's search tools to scrub the offending articles.

The saga began in 2023 when independent journalist Jack Poulson reported on Maury Blackman's 2021 domestic violence arrest. Blackman, who was then the CEO of surveillance tech firm Premise Data Corp., took offense at the publication of his legal issues. The case did not lead to charges after Blackman's 25-year-old girlfriend recanted her claims against the 53-year-old CEO, but Poulson reported on some troubling details of the public arrest report. Blackman has previously used tools like DMCA takedowns and lawsuits to stifle reporting on his indiscretion, but that campaign now appears to have co-opted part of Google's search apparatus. The Freedom of the Press Foundation (FPF) reported on Poulson's work and Blackman's attempts to combat it late last year. In June, Poulson contacted the Freedom of the Press Foundation to report that the article had mysteriously vanished from Google search results.

The foundation immediately began an investigation, which led it to a little-known Google search feature called Refresh Outdated Content. Google created this tool so users could report links whose content is no longer accurate or that lead to error pages. When it works correctly, Refresh Outdated Content helps make Google's search results more useful. However, the Freedom of the Press Foundation now says a bug allowed an unknown bad actor to scrub mentions of Blackman's arrest from Google's search results. Upon investigating, FPF found that its article on Blackman was completely absent from Google results, even in a search for its exact title. Poulson later realized that two of his own Substack articles were similarly affected. The foundation was led to the Refresh Outdated Content tool upon checking its search console.
The bug in the tool allowed malicious actors to de-index valid URLs from search results by altering the capitalization in the URL slug. Although URLs are typically case-sensitive, Google's tool treated them as case-insensitive. As a result, when someone submitted a slightly altered version of a working URL (for example, changing "anatomy" to "AnAtomy"), Google's crawler would see it as a broken link (404 error) and mistakenly remove the actual page from search results.
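The mechanism above can be sketched in a few lines. This is a hypothetical, simplified model (the URL and function names are illustrative, not Google's actual code): the web server treats URL paths case-sensitively, so the altered-case URL 404s, but the removal tool matches the submitted URL against the index case-insensitively and drops the live page.

```python
# Hypothetical sketch: case-insensitive index matching plus a
# case-sensitive web server can de-index a perfectly valid page.

indexed = {"https://example.com/anatomy-of-a-censorship-campaign"}  # pages in the "index"

def server_fetch(url: str) -> int:
    # Web servers typically treat URL paths as case-sensitive:
    # the altered-case URL does not exist, so it returns 404.
    return 200 if url in indexed else 404

def refresh_outdated_content(submitted_url: str) -> None:
    # The buggy behavior: fetch the URL exactly as submitted, but
    # match it against the index case-insensitively when removing.
    if server_fetch(submitted_url) == 404:
        for url in list(indexed):
            if url.lower() == submitted_url.lower():
                indexed.remove(url)  # live page wrongly dropped

refresh_outdated_content("https://example.com/AnAtomy-of-a-censorship-campaign")
print(indexed)  # → set() : the valid page is gone from the index
```

The fix is straightforward in this model: either compare submitted URLs to indexed URLs case-sensitively, or normalize before fetching so the crawler checks the same URL it would remove.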

Ironically, Blackman is now CEO of the online reputation management firm The Transparency Company.
Censorship

Visa and Mastercard Are Getting Overwhelmed By Gamer Fury Over Censorship (polygon.com) 245

An anonymous reader quotes a report from Polygon: In the wake of storefronts like Steam and itch.io curbing the sale of adult games, irate fans have started an organized campaign against the payment processors they believe are responsible for the crackdown. While the movement is still in its early stages, people are mobilizing with an eye toward overwhelming communication lines at companies like Visa and Mastercard in a way that will make the concern impossible to ignore. On social media sites like Reddit and Bluesky, people are urging one another to contact Visa and Mastercard by email and phone. Visa and Mastercard have become the targets because both affected storefronts say their decisions around adult games were motivated by the danger of losing access to major payment processors while selling those games. The payment processors have their own rules regarding usage, but those rules are vaguely defined. Losing infrastructure like this could affect audiences well beyond those who care about sex games, spokespeople for Valve and itch.io said.

In a now-deleted post on the Steam subreddit with over 17,000 upvotes, commenters say that customer service representatives for both payment processors seem to already be aware of the problem. Sometimes, the representatives will say that they've gotten multiple calls on the subject of adult game censorship, but that they can't really do anything about it. The folks applying pressure know that someone at a call center has limited power in a scenario like this one; typically, agents are equipped to handle standard customer issues like payment fraud or credit card loss. But the point isn't to enact change through a specific phone call: It's to cause enough disruption that the ruckus theoretically starts costing payment processors money.

"Emails can be ignored, but a very very long queue making it near impossible for other clients to get in will help a lot as well," reads the top comment on the Reddit thread. In that same thread, people say they're staying on the line even when the operator warns of multi-hour wait times, presumably caused by similar calls gunking up the lines. Beyond the stubbornness factor, the tactic is motivated by the knowledge that most customer service systems put people who opt for call-backs in a lower-priority queue, since anyone who opts in likely doesn't have an emergency. "Do both," one commenter suggests. "Get the call back, to gum up the call back queue. Then call in again and wait to gum up the live queue." People are also emailing their concerns directly to executives at both Visa and Mastercard, the payment processors that activist group Collective Shout called out by name in its open letter requesting that adult games be pulled. Emails are also getting sent to customer service.

United Kingdom

VPN Downloads Surge in UK as New Age-Verification Rules Take Effect (msn.com) 96

Proton VPN reported a 1,400 percent hourly increase in signups over its baseline Friday — the day the UK's age verification law went into effect. For UK users, "apps with explicit content must now verify visitors' ages via methods such as facial recognition and banking info," notes Mashable: Proton VPN previously documented a 1,000 percent surge in new subscribers in June after Pornhub left France, its second-biggest market, amid the enactment of an age verification law there... A Proton VPN spokesperson told Mashable that it saw an increase in new subscribers right away at midnight Friday, then again at 9 a.m. BST. The company anticipates further surges over the weekend, they added. "This clearly shows that adults are concerned about the impact universal age verification laws will have on their privacy," the spokesperson said... Search interest for the term "Proton VPN" also saw a seven-day spike in the UK around 2 a.m. BST Friday, according to a Google Trends chart.
The Financial Times notes that VPN apps "made up half of the top 10 most popular free apps on the UK's App Store for iOS this weekend, according to Apple's rankings." Proton VPN leapfrogged ChatGPT to become the top free app in the UK, according to Apple's daily App Store charts, with similar services from developers Super Unlimited and Nord Security also rising over the weekend... Data from Google Trends also shows a significant increase in search queries for VPNs in the UK this weekend, with up to 10 times more people looking for VPNs at peak times...

"This is what happens when people who haven't got a clue about technology pass legislation," Anthony Rose, a UK-based tech entrepreneur who helped to create BBC iPlayer, the corporation's streaming service, said in a social media post. Rose said it took "less than five minutes to install a VPN" and that British people had become familiar with using them to access the iPlayer outside the UK. "That's the beauty of VPNs. You can be anywhere you like, and anytime a government comes up with stupid legislation like this, you just turn on your VPN and outwit them," he added...

Online platforms found in breach of the new UK rules face penalties of up to £18mn or 10 percent of global turnover, whichever is greater... However, opposition to the new rules has grown in recent days. A petition submitted through the UK parliament website demanding that the Online Safety Act be repealed has attracted more than 270,000 signatures, with the vast majority submitted in the past week. Ministers must respond to a petition, and parliament has to consider its topic for a debate, if signatures surpass 100,000.

X, Reddit and TikTok have also "introduced new 'age assurance' systems and controls for UK users," according to the article. But Mashable summarizes the situation succinctly.

"Initial research shows that VPNs make age verification laws in the U.S. and abroad tricky to enforce in practice."
Open Source

Jack Dorsey Pumps $10M Into a Nonprofit Focused on Open Source Social Media (techcrunch.com) 20

Twitter co-founder/Block CEO Jack Dorsey isn't just vibe coding new apps like Bitchat and Sun Day. He's also "invested $10 million in an effort to fund experimental open source projects and other tools that could ultimately transform the social media landscape," reports TechCrunch, funding the projects through an online collective formed in May called "andOtherStuff": [T]he team at "andOtherStuff" is determined not to build a company but is instead operating like a "community of hackers," explains Evan Henshaw-Plath [who handles UX/onboarding and was also Twitter's first employee]. Together, they're working to create technologies that could include new consumer social apps as well as various experiments, like developer tools or libraries, that would allow others to build apps for themselves.

For instance, the team is behind an app called Shakespeare, which is like the app-building platform Lovable, but specifically for building Nostr-based social apps with AI assistance. The group is also behind heynow, a voice note app built on Nostr; Cashu wallet; private messenger White Noise; and the Nostr-based social community +chorus, in addition to the apps Dorsey has already released. Developments in AI-based coding have made this type of experimentation possible, Henshaw-Plath points out, in the same way that technologies like Ruby on Rails, Django, and JSON helped to fuel an earlier version of the web, dubbed Web 2.0.

Related to these efforts, Henshaw-Plath sat down with Dorsey for the debut episode of his new podcast, revolution.social with @rabble... Dorsey believes Bluesky faces the same challenges as traditional social media because of its structure — it's funded by VCs, like other startups. Already, it has had to bow to government requests and faced moderation challenges, he points out. "I think [Bluesky CEO] Jay [Graber] is great. I think the team is great," Dorsey told Henshaw-Plath, "but the structure is what I disagree with ... I want to push the energy in a different direction, which is more like Bitcoin, which is completely open and not owned by anyone from a protocol layer...."

Dorsey's initial investment has gotten the new nonprofit up and running, and he worked on some of its initial iOS apps. Meanwhile, others are contributing their time to build Android versions, developer tools, and different social media experiments. More is still in the works, says Henshaw-Plath.

"There are things that we're not ready to talk about yet that'll be very exciting," he teases.

Crime

New Russian Law Criminalizes Online Searches For Controversial Content (washingtonpost.com) 83

Russian lawmakers passed sweeping new legislation allowing authorities to fine individuals simply for searching for and accessing content labeled "extremist," including via VPNs. The Washington Post reports: Russia defines "extremist materials" as content officially added by a court to a government-maintained registry, a running list of about 5,500 entries, or content produced by "extremist organizations" ranging from "the LGBT movement" to al-Qaeda. The new law also covers materials that promote alleged Nazi ideology or incite extremist actions. Until now, Russian law stopped short of punishing individuals for seeking information online; only creating or sharing such content was prohibited. The new amendments follow remarks by high-ranking officials that censorship is justified in wartime. The measures mark a significant tightening of Russia's already restrictive digital laws.

The fine for searching for banned content in Russia is about $65, while the penalty for advertising circumvention tools such as VPN services is steeper -- $2,500 for individuals and up to $12,800 for companies. Previously, the most significant expansion of Russia's restrictions on internet use and freedom of speech occurred shortly after the February 2022 full-scale invasion of Ukraine, when sweeping laws criminalized the spread of "fake news" and "discrediting" the Russian military. The new amendment was introduced Tuesday and attached to a mundane bill on regulating freight companies, according to documents published by Russia's lower house of parliament, the State Duma.

Social Networks

X Says It's 'Deeply Concerned' About India Press Censorship (aljazeera.com) 42

X said Tuesday it is "deeply concerned about ongoing press censorship in India" after the Indian government ordered the platform to block 2,355 accounts on July 3, including two Reuters news agency handles. The social media company said the order came under India's Section 69A of the Information Technology Act, with non-compliance risking criminal liability.

The Indian Ministry of Electronics and Information Technology demanded immediate action within one hour without providing justification, X said. After public outcry, the government asked X to unblock the Reuters accounts.
Wireless Networking

Jack Dorsey Launches a WhatsApp Messaging Rival Built On Bluetooth (cnbc.com) 66

Jack Dorsey has launched Bitchat, a decentralized, peer-to-peer messaging app that uses Bluetooth mesh networks for encrypted, ephemeral chats without requiring accounts, servers, or internet access. The beta version is live on TestFlight, with a full white paper available on GitHub. CNBC reports: In a post on X Sunday, Dorsey called it a personal experiment in "bluetooth mesh networks, relays and store and forward models, message encryption models, and a few other things."

Bitchat enables ephemeral, encrypted communication between nearby devices. As users move through physical space, their phones form local Bluetooth clusters and pass messages from device to device, allowing them to reach peers beyond standard range -- even without Wi-Fi or cell service. Certain "bridge" devices connect overlapping clusters, expanding the mesh across greater distances. Messages are stored only on device, disappear by default and never touch centralized infrastructure -- echoing Dorsey's long-running push for privacy-preserving, censorship-resistant communication.

Like the Bluetooth-based apps used during Hong Kong's 2019 protests, Bitchat is designed to keep working even when the internet is blocked, offering a censorship-resistant way to stay connected during outages, shutdowns or surveillance. The app also supports optional group chats, or "rooms," which can be named with hashtags and protected by passwords. It includes store and forward functionality to deliver messages to users who are temporarily offline. A future update will add WiFi Direct to increase speed and range, pushing Dorsey's vision for off-grid, user-owned communication even further.
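The store-and-forward behavior described above can be illustrated with a small model. This is a conceptual sketch only, not Bitchat's actual implementation (the white paper on GitHub has the real protocol); the class and node names here are invented for illustration. Each node delivers directly to peers in range and caches messages for offline peers, handing them over on the next encounter.

```python
# Conceptual sketch (not Bitchat's code): store-and-forward delivery in a
# Bluetooth-style mesh, where nodes cache messages for offline peers.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    inbox: list = field(default_factory=list)
    cache: dict = field(default_factory=dict)  # recipient name -> pending messages

    def send(self, peer_in_range: "Node | None", recipient: str, text: str) -> None:
        if peer_in_range is not None:
            peer_in_range.inbox.append(text)        # deliver directly
        else:
            self.cache.setdefault(recipient, []).append(text)  # store for later

    def meet(self, peer: "Node") -> None:
        # On encounter, forward any cached messages addressed to this peer.
        for text in self.cache.pop(peer.name, []):
            peer.inbox.append(text)

alice, bob = Node("alice"), Node("bob")
alice.send(None, "bob", "meet at 6")  # bob out of range: message is cached
alice.meet(bob)                       # later, devices come into Bluetooth range
print(bob.inbox)  # → ['meet at 6']
```

In the real app, any intermediate "bridge" device can play alice's role here, carrying the encrypted message across clusters until the recipient appears, which is what lets the mesh span distances beyond a single Bluetooth hop.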

Slashdot Top Deals