×
Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com) 38

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.
AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com) 98

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in in six months once the new regulation enters into force. Obligations for general purpose AI models will apply after 12 months and rules for AI systems embedded into regulated products in 36 months. Fines for violations range from $8.2 million or 1.5% of turnover to 35 million euros or 7% of global turnover depending on the type of violations.

United States

US Government Urges Federal Contractors To Strengthen Encryption (bloomberg.com) 20

Companies working with the US government may be required to start protecting their data and technology from attacks by quantum computers as soon as July. From a report: The National Institute for Standards and Technology, part of the Department of Commerce, will in July stipulate three types of encryption algorithms the agency deems sufficient for protecting data from quantum computers, setting an internationally-recognized standard aimed at helping organizations manage evolving cybersecurity threats. The rollout of the standards will kick off "the transition to the next generation of cryptography," White House deputy national security adviser Anne Neuberger told Bloomberg in Cambridge, England on Tuesday. Breaking encryption not only threatens "national security secrets" but also the the way we secure the internet, online payments and bank transactions, she added.

Neuberger was speaking at an event organized by the University of Cambridge and Vanderbilt University, hosting academics, industry professionals and government officials to discuss the threats posed to cybersecurity by quantum computing, which vastly accelerates processing power by performing calculations in parallel rather than sequentially and will make existing encryption systems obsolete.

Technology

Match Group, Meta, Coinbase And More Form Anti-Scam Coalition (engadget.com) 23

An anonymous reader shares a report: Scams are all over the internet, and AI is making matters worse (no, Taylor Swift didn't giveaway Le Creuset pans, and Tom Hanks didn't promote a dental plan). Now, companies such as Match Group, Meta and Coinbase are launching Tech Against Scams, a new coalition focused on collaboration to prevent online fraud and financial schemes. They will "collaborate on ways to take action against the tools used by scammers, educate and protect consumers and disrupt rapidly evolving financial scams."

Meta, Coinbase and Match Group -- which owns Hinge and Tinder -- first joined forces on this issue last summer but are now teaming up with additional digital, social media and crypto companies, along with the Global Anti-Scam Organization. A major focus of this coalition is pig butchering scams, a type of fraud in which a scammer tricks someone into giving them more and more money through trusting digital relationships, both romantic and platonic in nature.

The Internet

38% of Webpages That Existed in 2013 Are No Longer Accessible a Decade Later 62

A new Pew Research Center analysis shows just how fleeting online content actually is: 1. A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
2. For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This "digital decay" occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the "References" section of Wikipedia pages as of spring 2023. This analysis found that:
1. 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
2. 54% of Wikipedia pages contain at least one link in their "References" section that points to a page that no longer exists.[...]
Space

US Defense Department 'Concerned' About ULA's Slow Progress on Satellite Launches (stripes.com) 33

Earlier this week the Washington Post reported that America's Defense department "is growing concerned that the United Launch Alliance, one of its key partners in launching national security satellites to space, will not be able to meet its needs to counter China and build its arsenal in orbit with a new rocket that ULA has been developing for years." In a letter sent Friday to the heads of Boeing's and Lockheed Martin's space divisions, Air Force Assistant Secretary Frank Calvelli used unusually blunt terms to say he was growing "concerned" with the development of the Vulcan rocket, which the Pentagon intends to use to launch critical national security payloads but which has been delayed for years. ULA, a joint venture of Boeing and Lockheed Martin, was formed nearly 20 years ago to provide the Defense Department with "assured access" to space. "I am growing concerned with ULA's ability to scale manufacturing of its Vulcan rocket and scale its launch cadence to meet our needs," he wrote in the letter, a copy of which was obtained by The Washington Post. "Currently there is military satellite capability sitting on the ground due to Vulcan delays...."

ULA originally won 60 percent of the Pentagon's national security payloads under the current contract, known as Phase 2. SpaceX won an award for the remaining 40 percent, but it has been flying its reusable Falcon 9 rocket at a much higher rate. ULA launched only three rockets last year, as it transitions to Vulcan; SpaceX launched nearly 100, mostly to put up its Starlink internet satellite constellation. Both are now competing for the next round of Pentagon contracts, a highly competitive procurement worth billions of dollars over several years. ULA is reportedly up for sale; Blue Origin is said to be one of the suitors...

In a statement to The Post, ULA said that its "factory and launch site expansions have been completed or are on track to support our customers' needs with nearly 30 launch vehicles in flow at the rocket factory in Decatur, Alabama." Last year, ULA CEO Tory Bruno said in an interview that the deal with Amazon would allow the company to increase its flight rate to 20 to 25 a year and that to meet that cadence it was hiring "several hundred" more employees. The more often Vulcan flies, he said, the more efficient the company would become. "Vulcan is much less expensive" than the Atlas V rocket that the ULA currently flies, Bruno said, adding that ULA intends to eventually reuse the engines. "As the flight rate goes up, there's economies of scale, so it gets cheaper over time. And of course, you're introducing reusability, so it's cheaper. It's just getting more and more competitive."

The article also notes that years ago ULA "decided to eventually retire its workhorse Atlas V rocket after concerns within the Pentagon and Congress that it relied on a Russian-made engine, the RD-180. In 2014, the company entered into a partnership with Jeff Bezos' Blue Origin to provide its BE-4 engines for use on Vulcan. However, the delivery of those engines was delayed for years — one of the reasons Vulcan's first flight didn't take place until earlier this year."

The article says Cavelli's letter cited the Pentagon's need to move quickly as adversaries build capabilities in space, noting "counterspace threats" and adding that "our adversaries would seek to deny us the advantage we get from space during a potential conflict."

"The United States continues to face an unprecedented strategic competitor in China, and our space environment continues to become more contested, congested and competitive."
Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 63

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...
Google

'Google Domains' Starts Migrating to Squarespace (squarespace.com) 20

"We're migrating domains in batches..." announced web-hosting company Squarespace earlier this month.

"Squarespace has entered into an agreement to become the new home for Google Domains customers. When your domain transitions from Google to Squarespace, you'll become a Squarespace customer and manage your domain through an account with us."

Slashdot reader shortyadamk shares an email sent today to a Google Domains customer: "Today your domain, xyz.com, migrated from Google Domains to Squarespace Domains.

"Your WHOIS contact details and billing information (if applicable) were migrated to Squarespace. Your DNS configuration remains unchanged.

"Your migrated domain will continue to work with Google Services such as Google Search Console. To support this, your account now has a domain verification record — one corresponding to each Google account that currently has access to the domain."

AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 40

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
Government

Utah Locals Are Getting Cheap 10 Gbps Fiber Thanks To Local Governments (techdirt.com) 74

Karl Bode writes via Techdirt: Tired of being underserved and overbilled by shitty regional broadband monopolies, back in 2002 a coalition of local Utah governments formed UTOPIA -- (the Utah Telecommunication Open Infrastructure Agency). The inter-local agency collaborative venture then set about building an "open access" fiber network that allows any ISP to then come and compete on the shared network. Two decades later and the coalition just announced that 18 different ISPs now compete for Utah resident attention over a network that now covers 21 different Utah cities. In many instances, ISPs on the network are offering symmetrical (uncapped) gigabit fiber for as little as $45 a month (plus $30 network connection fee, so $75). Some ISPs are even offering symmetrical 10 Gbps fiber for around $150 a month: "Sumo Fiber, a veteran member of the UTOPIA Open Access Marketplace, is now offering 10 Gbps symmetrical for $119, plus a $30 UTOPIA Fiber infrastructure fee, bringing the total cost to $149 per month."

It's a collaborative hybrid that blurs the line between private companies and government, and it works. And the prices being offered here are significantly less than locals often pay in highly developed tech-centric urban hubs like New York, San Francisco, or Seattle. Yet giant local ISPs like Comcast and Qwest spent decades trying to either sue this network into oblivion, or using their proxy policy orgs (like the "Utah Taxpayer Association") to falsely claim this effort would end in chaos and inevitable taxpayer tears. Yet miraculously UTOPIA is profitable, and for the last 15 years, every UTOPIA project has been paid for completely through subscriber revenues. [...] For years, real world experience and several different studies and reports (including our Copia study on this concept) have made it clear that open access networks and policies result in faster, better, more affordable broadband access. UTOPIA is proving it at scale, but numerous other municipalities have been following suit with the help of COVID relief and infrastructure bill funding.

Businesses

OpenAI Strikes Reddit Deal To Train Its AI On Your Posts (theverge.com) 43

Emilia David reports via The Verge: OpenAI has signed a deal for access to real-time content from Reddit's data API, which means it can surface discussions from the site within ChatGPT and other new products. It's an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million. The deal will also "enable Reddit to bring new AI-powered features to Redditors and mods" and use OpenAI's large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit.

No financial terms were revealed in the blog post announcing the arrangement, and neither company mentioned training data, either. That last detail is different from the deal with Google, where Reddit explicitly stated it would give Google "more efficient ways to train models." There is, however, a disclosure mentioning that OpenAI CEO Sam Altman is also a shareholder in Reddit but that "This partnership was led by OpenAI's COO and approved by its independent Board of Directors."
"Reddit has become one of the internet's largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they're looking for, and helps new audiences find community on Reddit," Reddit CEO Steve Huffman says.

Reddit stock has jumped on news of the deal, rising 13% on Friday to $63.64. As Reuters notes, it's "within striking distance of the record closing price of $65.11 hit in late-March, putting the company on track to add $1.2 billion to its market capitalization."
Social Networks

France Bans TikTok In New Caledonia (politico.eu) 48

In what's marked as an EU first, the French government has blocked TikTok in its territory of New Caledonia amid widespread pro-independence protests. Politico reports: A French draft law, passed Monday, would let citizens vote in local elections after 10 years' residency in New Caledonia, prompting opposition from independence activists worried it will dilute the representation of indigenous people. The violent demonstrations that have ensued in the South Pacific island of 270,000 have killed at least five people and injured hundreds. In response to the protests, the government suspended the popular video-sharing app -- owned by Beijing-based ByteDance and favored by young people -- as part of state-of-emergency measures alongside the deployment of troops and an initial 12-day curfew.

French Prime Minister Gabriel Attal didn't detail the reasons for shutting down the platform. The local telecom regulator began blocking the app earlier on Wednesday. "It is regrettable that an administrative decision to suspend TikTok's service has been taken on the territory of New Caledonia, without any questions or requests to remove content from the New Caledonian authorities or the French government," a TikTok spokesperson said. "Our security teams are monitoring the situation very closely and ensuring that our platform remains safe for our users. We are ready to engage in discussions with the authorities."

Digital rights NGO Quadrature du Net on Friday contested the TikTok suspension with France's top administrative court over a "particularly serious blow to freedom of expression online." A growing number of authoritarian regimes worldwide have resorted to internet shutdowns to stifle dissent. This unexpected -- and drastic -- decision by France's center-right government comes amid a rise in far-right activism in Europe and a regression on media freedom. "France's overreach establishes a dangerous precedent across the globe. It could reinforce the abuse of internet shutdowns, which includes arbitrary blocking of online platforms by governments around the world," said Eliska Pirkova, global freedom of expression lead at Access Now.

Businesses

Two Students Uncover Security Bug That Could Let Millions Do Their Laundry For Free (techcrunch.com) 78

Two university students discovered a security flaw in over a million internet-connected laundry machines operated by CSC ServiceWorks, allowing users to avoid payment and add unlimited funds to their accounts. The students, Alexander Sherbrooke and Iakov Taranenko from UC Santa Cruz, reported the vulnerability to the company, a major laundry service provider, in January but claim it remains unpatched. TechCrunch adds: Sherbrooke said he was sitting on the floor of his basement laundry room in the early hours one January morning with his laptop in hand, and "suddenly having an 'oh s-' moment." From his laptop, Sherbrooke ran a script of code with instructions telling the machine in front of him to start a cycle despite having $0 in his laundry account. The machine immediately woke up with a loud beep and flashed "PUSH START" on its display, indicating the machine was ready to wash a free load of laundry.

In another case, the students added an ostensible balance of several million dollars into one of their laundry accounts, which reflected in their CSC Go mobile app as though it were an entirely normal amount of money for a student to spend on laundry.

The Internet

Archie, the Internet's First Search Engine, Is Rescued and Running (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: It's amazing, and a little sad, to think that something created in 1989 that changed how people used and viewed the then-nascent Internet had nearly vanished by 2024. Nearly, that is, because the dogged researchers and enthusiasts at The Serial Port channel on YouTube have found what is likely the last existing copy of Archie. Archie, first crafted by Alan Emtage while a student at McGill University in Montreal, Quebec, allowed for the searching of various "anonymous" FTP servers around what was then a very small web of universities, researchers, and government and military nodes. It was groundbreaking; it was the first echo of the "anything, anywhere" Internet to come. And when The Serial Port went looking, it very much did not exist.

While Archie would eventually be supplanted by Gopher, web portals, and search engines, it remains a useful way to index FTP sites and certainly should be preserved. The Serial Port did this, and the road to get there is remarkable and intriguing. You are best off watching the video of their rescue, along with its explanatory preamble. But I present here some notable bits of the tale, perhaps to tempt you into digging further.

Social Networks

Reddit Reintroduces Its Awards System (techcrunch.com) 20

After shutting down its awards system last July, Reddit announced that it is bringing it back, with much of the same and some new features. There'll be "a new design for awards, a new award button under eligible posts and a leaderboard showing top awards earned for a comment or a post," reports TechCrunch. From the report: The company sunset its awards program last year along with the ability for users to purchase coins. At the same time, Reddit introduced "Golden Upvotes," which were purchased directly through cash. In a new post, the company said the system wasn't as expressive as awards. "While the golden upvote was certainly simpler in theory, in practice, it missed the mark. It wasn't as fun or expressive as legacy awards, and it was unclear how it benefited the recipient," the social network said.

Users who want to give awards to posts and comments will need to buy "gold," which kind of replaces coins. On a support page, the company mentioned that, on average, awards cost anywhere between 15 to 50 gold. Gold packages in Reddit's mobile apps currently start at $1.99 for 100 gold. Users can buy as much as 2,750 gold for $49.99. The company is also adding some safeguards to the awards system, such as disabling awards in NSFW subreddits, trauma and addiction support subreddits, and subreddits with mature content. Additionally, users will be able to report awards to avoid them being used for moderator removals.

Social Networks

Another Billionaire Pushes a Bid For TikTok, But To Decentralize It (techdirt.com) 68

An anonymous reader quotes a report from Techdirt, written by Mike Masnick: If you're a fan of chaos, well, the TikTok ban situation is providing plenty of chaos to follow. Ever since the US government made it clear it was seriously going to move forward with the obviously unconstitutional and counterproductive plan to force ByteDance to divest from TikTok or have the app effectively banned from the U.S., various rich people have been stepping up with promises to buy the app. There was former Trump Treasury Secretary Steven Mnuchin with plans to buy it. Then there was "mean TV investor, who wants you to forget his sketchy history" Kevin O'Leary with his own TikTok buyout plans. I'm sure there have been other rich dudes as well, though strikingly few stories of actual companies interested in purchasing TikTok.

But now there's another billionaire to add to the pile: billionaire real estate/property mogul Frank McCourt (who has had some scandals in his own history) has had an interesting second act over the last few years as a big believer in decentralized social media. He created and funded Project Liberty, which has become deeply involved in a number of efforts to create infrastructure for decentralized social media, including its own Decentralized Social Networking Protocol (DSTP).

Over the past few years, I've had a few conversations with people involved in Project Liberty and related projects. Their hearts are in the right place in wanting to rethink the internet in a manner that empowers users over big companies, even if I don't always agree with their approach (he also frequently seems to surround himself with all sorts of tech haters, who have somewhat unrealistic visions of the world). Either way, McCourt and Project Liberty have now announced a plan to bid on TikTok. They plan to merge it into his decentralization plans.
"Frank McCourt, Founder of Project Liberty and Executive Chairman of McCourt Global, today announced that Project Liberty is organizing a bid to acquire the popular social media platform TikTok in the U.S., with the goal of placing people and data empowerment at the center of the platform's design and purpose," reads a press release from Project Liberty.

"Working in consultation with Guggenheim Securities, the investment banking and capital markets business of Guggenheim Partners, and Kirkland & Ellis, one of the world's largest law firms, as well as world-renowned technologists, academics, community leaders, parents and engaged citizens, this bid for TikTok offers an innovative, alternative vision for the platform's infrastructure -- one that allows people to reclaim agency over their digital identities and data by proposing to migrate the platform to a new digital open-source protocol. In launching the bid, McCourt and his partners are seizing this opportunity to return control and value back into the hands of individuals and provide Americans with a meaningful voice, choice, and stake in the future of the web."
EU

EU Opens Child Safety Probes of Facebook and Instagram, Citing Addictive Design Concerns (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: Facebook and Instagram are under formal investigation in the European Union over child protection concerns, the Commission announced Thursday. The proceedings follow a raft of requests for information to parent entity Meta since the bloc's online governance regime, the Digital Services Act (DSA), started applying last August. The development could be significant as the formal proceedings unlock additional investigatory powers for EU enforcers, such as the ability to conduct office inspections or apply interim measures. Penalties for any confirmed breaches of the DSA could reach up to 6% of Meta's global annual turnover.

Meta's two social networks are designated as very large online platforms (VLOPs) under the DSA. This means the company faces an extra set of rules -- overseen by the EU directly -- requiring it to assess and mitigate systemic risks on Facebook and Instagram, including in areas like minors' mental health. In a briefing with journalists, senior Commission officials said they suspect Meta of failing to properly assess and mitigate risks affecting children. They particularly highlighted concerns about addictive design on its social networks, and what they referred to as a "rabbit hole effect," where a minor watching one video may be pushed to view more similar content as a result of the platforms' algorithmic content recommendation engines.

Commission officials gave examples of depression content, or content that promotes an unhealthy body image, as types of content that could have negative impacts on minors' mental health. They are also concerned that the age assurance methods Meta uses may be too easy for kids to circumvent. "One of the underlying questions of all of these grievances is how can we be sure who accesses the service and how effective are the age gates -- particularly for avoiding that underage users access the service," said a senior Commission official briefing press today on background. "This is part of our investigation now to check the effectiveness of the measures that Meta has put in place in this regard as well." In all, the EU suspects Meta of infringing DSA Articles 28, 34, and 35. The Commission will now carry out an in-depth investigation of the two platforms' approach to child protection.

Google

Revolutionary New Google Feature Hidden Under 'More' Tab Shows Links To Web Pages (404media.co) 32

An anonymous reader shares a report: After launching a feature that adds more AI junk than ever to search results, Google is experimenting with a radical new feature that lets users see only the results they were looking for, in the form of normal text links. As in, what most people actually use Google for. "We've launched a new 'Web' filter that shows only text-based links, just like you might filter to show other types of results, such as images or videos," the official Google Search Liaison Twitter account, run by Danny Sullivan, posted on Tuesday. The option will appear at the top of search results, under the "More" option.

"We've added this after hearing from some that there are times when they'd prefer to just see links to web pages in their search results, such as if they're looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features," Sullivan wrote. "If you're in that group, enjoy!" Searching Google has become a bloated, confusing experience for users in the last few years, as it's gradually started prioritizing advertisements and sponsored results, spammy affiliate content, and AI-generated web pages over authentic, human-created websites.

Communications

AT&T Goes Up Against T-Mobile, Starlink With AST SpaceMobile Satellite Deal (pcmag.com) 14

Michael Kan reports via PCMag: AT&T has struck a deal to bring satellite internet connectivity to phones through AST SpaceMobile, a potential rival to SpaceX's Starlink. AT&T says the commercial agreement will last until 2030. The goal is "to provide a space-based broadband network to everyday cell phones," a spokesperson tells PCMag, meaning customers can receive a cellular signal in remote areas where traditional cell towers are few and far between. All they'll need to do is ensure their phone has a clear view of the sky.

AT&T has been working with Texas-based AST SpaceMobile since 2018 on the technology, which involves using satellites in space as orbiting cell towers. In January, AT&T was one of several companies (including Google) to invest $110 million in AST. In addition, the carrier created a commercial starring actor Ben Stiller to showcase AST's technology. In today's announcement, AT&T notes that "previously, the companies were working together under a Memorandum of Understanding," which is usually nonbinding. Hence, the new commercial deal suggests AT&T is confident AST can deliver fast and reliable satellite internet service to consumer smartphones -- even though it hasn't launched a production satellite.

AST has only launched one prototype satellite; in tests last year, it delivered download rates at 14Mbps and powered a 5G voice call. Following a supply chain-related delay, the company is now preparing to launch its first batch of "BlueBird" production satellites later this year, possibly in Q3. In Wednesday's announcement, AT&T adds: "This summer, AST SpaceMobile plans to deliver its first commercial satellites to Cape Canaveral for launch into low Earth orbit. These initial five satellites will help enable commercial service that was previously demonstrated with several key milestones." Still, AST needs to launch 45 to 60 BlueBird satellites before it can offer continuous coverage in the U.S., although in an earnings call, the company said it'll still be able to offer "non-continuous coverage" across 5,600 cells in the country.

Slashdot Top Deals