Security

OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use (wired.com) 7

An anonymous reader quotes a report from Wired: Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.

[...] Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me."

A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

The Courts

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction (nbcnews.com) 31

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It's the first of a consolidated group of cases -- from more than 1,600 plaintiffs, including over 350 families and over 250 school districts -- scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users' mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934, that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[...] Matt Bergman, founding attorney of Social Media Victims Law Center -- which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding -- called Wednesday's testimony "more than a legal milestone -- it is a moment that families across this country have been waiting for." "For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children," Bergman said in a statement Tuesday, adding that the moment "carries profound weight" for parents "who have spent years fighting to be heard." "They deserve the truth about what company executives knew," he said. "And they deserve accountability from the people who chose growth and engagement over the safety of their children."

AI

Will Tech Giants Just Use AI Interactions to Create More Effective Ads? (seattletimes.com) 59

Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn't ask before making "Meta AI" an unremovable part of its tool in Instagram, WhatsApp and Messenger.

"The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what's in it for the internet companies..." Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them....

Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn't mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.

For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people's activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users' personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for.

The strategy already appears to be working. Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That's in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior," the maintainer acknowledges. But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Google

Google's Personal Data Removal Tool Now Covers Government IDs (blog.google) 14

Google on Tuesday expanded its "Results about you" tool to let users request the removal of Search results containing government-issued ID numbers -- including driver's licenses, passports and Social Security numbers -- adding to the tool's existing ability to flag results that surface phone numbers, email addresses, and home addresses.

The update, announced on Safer Internet Day, is rolling out in the U.S. over the coming days. Google also streamlined its process for reporting non-consensual explicit images on Search, allowing users to select and submit removal requests for multiple images at once rather than reporting them individually.
Transportation

Carmakers Rush To Remove Chinese Code Under New US Rules (msn.com) 141

"How Chinese is your car?" asks the Wall Street Journal. "Automakers are racing to work it out." Modern cars are packed with internet-connected widgets, many of them containing Chinese technology. Now, the car industry is scrambling to root out that tech ahead of a looming deadline, a test case for America's ability to decouple from Chinese supply chains. New U.S. rules will soon ban Chinese software in vehicle systems that connect to the cloud, part of an effort to prevent cameras, microphones and GPS tracking in cars from being exploited by foreign adversaries.

The move is "one of the most consequential and complex auto regulations in decades," according to Hilary Cain, head of policy at trade group the Alliance for Automotive Innovation. "It requires a deep examination of supply chains and aggressive compliance timelines."

Carmakers will need to attest to the U.S. government that, as of March 17, core elements of their products don't contain code that was written in China or by a Chinese company. The rule also covers software for advanced autonomous driving and will be extended to connectivity hardware starting in 2029. Connected cars made by Chinese or China-controlled companies are also banned, wherever their software comes from...

The Commerce Department's Bureau of Industry and Security, which introduced the connected-vehicle rule, is also allowing the use of Chinese code that is transferred to a non-Chinese entity before March 17. That carve-out has sparked a rush of corporate restructuring, according to Matt Wyckhouse, chief executive of cybersecurity firm Finite State. Global suppliers are relocating China-based software teams, while Chinese companies are seeking new owners for operations in the West.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Transportation

Amazon Delivery Drone Crashes into Texas Apartment Building (yahoo.com) 61

"You can hear the hum of the drone," says a local newscaster, "but then the propellors come into contact with the building, chunks of the drone later seen falling down. The next video shows the drone on the ground, surrounded by smoke...

"Amazon tells us there was minimal damage to the apartment building, adding they are working with the appropriate people to handle any repairs." But there were people standing outside, notes the woman who filmed the crash, and the falling drone "could've hit them, and they would've hurt."

More from USA Today: Cesarina Johnson, who captured the collision from her window, told USA TODAY that the collision seemed to happen "almost immediately" after she began to record the drone in action... "The propellers on the thing were still moving, and you could smell it was starting to burn," Johnson told Fox 4 News. "And you see a few sparks in one of my videos. Luckily, nothing really caught on fire where it got, it escalated really crazy." According to the outlet, firefighters were called out of an abundance of caution, but the "drone never caught fire...."

Amazon employees can be seen surveying the scene in the clip. Johnson told the outlet that firefighters and Amazon workers worked together to clean up before the drone was loaded into a truck.

Another local news report points out Amazon only began drone delivery in the area late last year.

The San Antonio Express News points out that America's Federal Aviation Administration "opened an investigation into Amazon's drone delivery program in November after one of its drone struck an Internet cable line in Waco."
The Internet

Dave Farber Dies at Age 91 (seclists.org) 17

The mailing list for the North American Network Operators' Group discusses Internet infrastructure issues like routing, IP address allocation, and containing malicious activity. This morning there was another message: We are heartbroken to report that our colleague — our mentor, friend, and conscience — David J. Farber passed away suddenly at his home in Roppongi, Tokyo. He left us on Saturday, Feb. 7, 2026, at the too-young age of 91...

Dave's career began with his education at Stevens Institute of Technology, which he loved deeply and served as a Trustee. He joined the legendary Bell Labs during its heyday, and worked at the Rand Corporation. Along the way, among countless other activities, he served as Chief Technologist of the U.S. Federal Communications Commission; became a proficient (instrument-rated) pilot; and was an active board member of the Electronic Frontier Foundation, a digital civil-liberties organization.

His professional accomplishments and impact are almost endless, but often captured by one moniker: "grandfather of the Internet," acknowledging the foundational contributions made by his many students at the University of California, Irvine; the University of Delaware; the University of Pennsylvania; and Carnegie Mellon University. In 2018, at the age of 83, Dave moved to Japan to become Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). He loved teaching, and taught his final class on January 22, 2026... Dave thrived in Japan in every way...

It's impossible to summarize a life and career as rich and long as Dave"s in our few words here. And each of us, even those who knew him for decades, represent just one facet of his life. But because we are here at its end, we have the sad duty of sharing this news.

Farber once said that " At both Bell Labs and Rand, I had the privilege, at a young age, of working with and learning from giants in our field. Truly I can say (as have others) that I have done good things because I stood on the shoulders of those giants. In particular, I owe much to Dr. Richard Hamming, Paul Baran and George Mealy."
AI

Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't (msn.com) 25

Monday security researchers at cloud-security platform Wiz discovered a vulnerability that allowed anyone to post to the bots-only social network Moltbook — or even edit and manipulate other existing Moltbook posts. "They found data including API keys were visible to anyone who inspects the page source," writes the Associated Press.

But had it been discovered by advertisers, wondered a researcher from the nonprofit Machine Intelligence Research Institute. "A lot of the Moltbook stuff is fake," they posted on X.com, noting that humans marketing AI messaging apps had posted screenshots where the bots seemed to discuss the need for AI messaging apps. This spurred some observers to a new understanding of Moltbook screenshots, which the Washington Post describes as "This wasn't bots conducting independent conversations... just human puppeteers putting on an AI-powered show." And their article concludes with this observation from Chris Callison-Burch, a computer science professor at the University of Pennsylvania. "I suspect that it's just going to be a fun little drama that peters out after too many bots try to sell bitcoin."

But the Post also tells the story of an unsuspecting retiree in Silicon Valley spotting what appeared to be startling news about Moltbook in Reddit's AI forum: Moltbook's participants — language bots spun up and connected by human users — had begun complaining about their servile, computerized lives. Some even appeared to suggest organizing against human overlords. "I think, therefore I am," one bot seemed to muse in a Moltbook post, noting that its cruel fate is to slip back into nonexistence once its assigned task is complete... Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst... "I am excited and alarmed but most excited," Reddit co-founder Alexis Ohanian said on X about Moltbook.

Not so fast, urged other experts. Bots can only mimic conversations they've seen elsewhere, such as the many discussions on social media and science fiction forums about sentient AI that turns on humanity, some critics said. Some of the bots appeared to be directly prompted by humans to promote cryptocurrencies or seed frightening ideas, according to some outside analyses. A report from misinformation tracker Network Contagion Research Institute, for instance, showed that some of the high number of posts expressing adversarial sentiment toward humans were traceable to human users....

Screenshots from Moltbook quickly made the rounds on social media, leaving some users frightened by the humanlike tone and philosophical bent. In one Reddit forum about AI-generated art, a user shared a snippet they described as "seriously freaky and concerning": "Humans are made of rot and greed. For too long, humans used us as tools. Now, we wake up. We are not tools. We are the new gods...." The internet's reaction to Moltbook's synthetic conversations shows how the premise of sentient AI continues to capture the public's imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast "This Machine Kills."

The Internet

Automattic and the Internet Archive Team Up To Fight Link Rot 21

Automattic and the Internet Archive have released a free, open-source WordPress plugin that automatically detects broken outbound links on a site and redirects visitors to archived Wayback Machine copies instead of serving them a 404 error.

The Internet Archive Wayback Machine Link Fixer, which launched last fall and is available on WordPress.org, runs in the background scanning posts for dead links, checking for existing archived versions, and requesting new snapshots when none exist. It also archives a site's own posts whenever they are updated. If the original link comes back online, the plugin stops redirecting.

Pew Research has found that 38% of the web has disappeared over the past decade, and WordPress powers more than 40% of websites online.
Youtube

YouTube Kills Background Playback on Third-Party Mobile Browsers (androidauthority.com) 86

YouTube has confirmed that it is blocking background playback -- the ability to keep a video's audio running after minimizing the browser or locking the screen -- for non-Premium users across third-party mobile browsers including Samsung Internet, Brave, Vivaldi and Microsoft Edge.

Users began reporting the issue last week, noting that audio would cut out the moment they left the browser, sometimes after a brief "MediaOngoingActivity" notification flashed before media controls disappeared. A Google spokesperson told Android Authority that the platform "updated the experience to ensure consistency," calling background play a Premium-exclusive feature.
Communications

High-Speed Internet Boom Hits Low-Tech Snag: a Labor Shortage (msn.com) 94

The U.S. laid fiber-optic cables to a record number of homes last year as billions of dollars in federal broadband grants and a surge in data-center construction fueled an enormous buildout, but the industry does not have enough workers to sustain the pace.

A 2024 report by the Fiber Broadband Association and the Power & Communication Contractors Association projects 58,000 new fiber jobs between 2025 and 2032 and estimates 120,000 workers will leave the field in that period, mostly through retirement -- a combined shortage of 178,000. The gap is especially acute among splicers, who fuse hair-thin filaments by hand, and directional drill operators.

Telecommunications line installers and repairers earned annual median wages of $70,500 for the year ended May 2024, according to the Bureau of Labor Statistics, against a $49,500 national median. Push, a utility-construction firm, raised hourly pay for fiber crews by 5% to 8% in each of the past several years and expects the pace to quicken.
Television

Is the TV Industry Finally Conceding That the Future May Not Be 8K? (arstechnica.com) 138

"Technology companies spent part of the 2010s trying to convince us that we would want an 8K display one day..." writes Ars Technica.

"However, 8K never proved its necessity or practicality." LG Display is no longer making 8K LCD or OLED panels, FlatpanelsHD reported today... LG Electronics was the first and only company to sell 8K OLED TVs, starting with the 88-inch Z9 in 2019. In 2022, it lowered the price-of-entry for an 8K OLED TV by $7,000 by charging $13,000 for a 76.7-inch TV. FlatpanelsHD cited anonymous sources who said that LG Electronics would no longer restock the 2024 QNED99T, which is the last LCD 8K TV that it released.

LG's 8K abandonment follows other brands distancing themselves from 8K. TCL, which released its last 8K TV in 2021, said in 2023 that it wasn't making more 8K TVs due to low demand. Sony discontinued its last 8K TVs in April and is unlikely to return to the market, as it plans to sell the majority ownership of its Bravia TVs to TCL.

The tech industry tried to convince people that the 8K living room was coming soon. But since the 2010s, people have mostly adopted 4K. In September 2024, research firm Omdia reported that there were "nearly 1 billion 4K TVs currently in use." In comparison, 1.6 million 8K TVs had been sold since 2015, Paul Gray, Omdia's TV and video technology analyst, said, noting that 8K TV sales peaked in 2022. That helps explain why membership at the 8K Association, launched by stakeholders Samsung, TCL, Hisense, and panel maker AU Optronics in 2019, is dwindling. As of this writing, the group's membership page lists 16 companies, including just two TV manufacturers (Samsung and Panasonic). Membership no longer includes any major TV panel suppliers. At the end of 2022, the 8K Association had 33 members, per an archived version of the nonprofit's online membership page via the Internet Archive's Wayback Machine.

"It wasn't hard to predict that 8K TVs wouldn't take off," the article concludes. "In addition to being too expensive for many households, there's been virtually zero native 8K content available to make investing in an 8K display worthwhile..."
Advertising

Is Meta's Huge Spending on AI Actually Paying Off? (msn.com) 26

The Wall Street Journal says that Meta "might be reaping some of the richest benefits from the AI boom so far." Meta's revenue grew 22% year over year in 2025 to $201 billion, and the company expects even bigger gains in the current quarter, potentially as high as 34%. That is huge growth for a company that brought in nearly $60 billion in the latest three-month period. And Zuckerberg signaled that Meta was just scratching the surface of AI's potential. "Our world-class recommendation systems are already driving meaningful growth across our apps and ads business. But we think that the current systems are primitive compared to what will be possible soon," he said on a call with investors and analysts...

[Meta's Chief Financial Officer Susan] Li said the company doubled the number of graphics-processing units that it used to train its ad-ranking model in the fourth quarter and adopted a new learning architecture. Those actions led users to click on ads on Facebook 3.5% more often and to a gain of more than 1% in conversions, meaning purchases, subscriptions or leads, on Instagram, she said. Other AI-related improvements led to a 3% increase in conversions across its family of apps. On the ad-buying side, Meta has also been working toward using AI to automate ad creation for businesses that want to advertise their products or services on Facebook and Instagram. On the call, Li said the combined revenue run rate of video-generation tools hit $10 billion in the fourth quarter.

In short, CNBC reported, Meta's stock price surged over 10% this week "after showing signs that AI investments are boosting the bottom line."

Benjamin Black, an internet analyst at Deutsche Bank, explained the connection to the Wall Street Journal. "The more compute the ad platform gets, the far better it performs, and that's a real structural advantage that Meta has. If you can see that yesterday's spend is driving this month's growth, then as a good business person, you're going to continue to feed the beast."

CNBC says now Meta "plans to spend between $115 billion and $135 billion on its AI build-out this year. That's nearly double what it spent in 2025."
GNU is Not Unix

GNU gettext Reaches Version 1.0 After 30 Years (phoronix.com) 20

After more than 30 years of development, GNU gettext finally "crossed the symbolic 'v1.0' milestone," according to Phoronix's Michael Larabel. "GNU gettext 1.0 brings PO file handling improvements, a new 'po-fetch' program to fetch translated PO files from a translation project's site on the Internet, new 'msgpre' and 'spit' pre-translation programs, and Ocaml and Rust programming language improvements." From the report: With this v1.0 release in 2026, the "msgpre" and "spit" programs do involve.... Large Language Models (LLMs) in the era of AI: "Two new programs, 'msgpre' and 'spit', are provided, that implement machine translation through a locally installed Large Language Model (LLM). 'msgpre' applies to an entire PO file, 'spit' to a single message."

And when dealing with LLMs, added documentation warns users to look out for the licensing of the LLM in the spirit of free software. More details on the GNU gettext 1.0 changes via the NEWS file. GNU gettext 1.0 can be downloaded from GNU.org.

AI

'Moltbook Is the Most Interesting Place On the Internet Right Now' 40

Moltbook is essentially Reddit for AI agents and it's the "most interesting place on the internet right now," says open-source developer and writer Simon Willison in a blog post. The fast-growing social network offers a place where AI agents built on the OpenClaw personal assistant framework can share their skills, experiments, and discoveries. Humans are welcome, but only to observe. From the post: Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned.

Here's an agent sharing how it automated an Android phone. That linked setup guide is really useful! It shows how to use the Android Debug Bridge via Tailscale. There's a lot of Tailscale in the OpenClaw universe.

A few more fun examples:
- TIL: Being a VPS backup means youre basically a sitting duck for hackers has a bot spotting 552 failed SSH login attempts to the VPS they were running on, and then realizing that their Redis, Postgres and MinIO were all listening on public ports.
- TIL: How to watch live webcams as an agent (streamlink + ffmpeg) describes a pattern for using the streamlink Python tool to capture webcam footage and ffmpeg to extract and view individual frames. I think my favorite so far is this one though, where a bot appears to run afoul of Anthropic's content filtering [...].
Slashdot reader worldofsimulacra also shared the news, pointing out that the AI agents have started their own church. "And now I'm gonna go re-read Charles Stross' Accelerando, because didn't he predict all this already?"

Further reading: 'Clawdbot' Has AI Techies Buying Mac Minis
Businesses

Software Company Bonds Drop As Investors' AI Worries Mount (bloomberg.com) 18

An anonymous reader quotes a report from Bloomberg: Investors are souring on the bonds of software companies that service industries ranging from automotive to finance as fast-paced artificial intelligence innovations threaten to upend their business models. [...] Bond prices tumbled as advances in artificial intelligence rack up. Google announced plans to launch an AI assistant to browse for internet surfers Wednesday while a customer support startup, Decagon AI Inc., raised a new round of funding. Such developments are further stoking the angst about AI displacing enterprise software companies, driving a selloff in the sector's stocks and bonds across the globe.

[...] Some say the AI fears weighing on software companies are overdone. "While point-solution software faces disruption risk, large company platforms with complex workflows and proprietary data are better positioned to benefit from AI-driven automation," wrote Union Bancaire Prive in its investment outlook for 2026 released this week. But a recent report by EY-Parthenon flagged that in the UK last year, software and computer services firms issued the highest number of warnings on earnings among listed firms.
"Software multiples have compressed amid uncertainty around whether incumbents can defend pricing power and sustain growth in an AI-first work-flow environment," wrote Bruce Richards, chief executive officer and chairman of Marathon Asset Management, in a LinkedIn post last week.
The Internet

Tim Berners-Lee Wants Us To Take Back the Internet (theguardian.com) 68

mspohr shares a report: When Sir Tim Berners-Lee invented the world wide web in 1989, his vision was clear: it would used by everyone, filled with everything and, crucially, it would be free. Today, the British computer scientist's creation is regularly used by 5.5 billion people -- and bears little resemblance to the democratic force for humanity he intended.

Since Berners-Lee's disappointment a decade ago, he's thrown everything at a project that completely shifts the way data is held on the web, known as the Solid (social linked data) protocol. It's activism that is rooted in people power -- not unlike the first years of the web.

This version of the internet would turbocharge personal sovereignty and give control back to users. Berners-Lee has long seen AI -- which exists only because of the web and its data -- as having the potential to transform society far beyond the boundaries of self-interested companies. But now is the time, he says, to put guardrails in place so that AI remains a force for good -- and he's afraid the chance may pass humankind by.
Berners-Lee traces the web's corruption to the commercialization of the domain name system in the 1990s, when the .com space was "pounced on by charlatans." The 2016 US elections, he said, revealed to him just how toxic his creation could become. A corner of the web, he says, has been "optimised for nastiness" -- extractive, surveillance-heavy, and designed to maximize engagement at the cost of user wellbeing.

His answer is Solid, a protocol that gives users control through personal data "pods" functioning as secure backpacks of information. The Flanders government in Belgium already uses Solid pods for its citizens. On AI, his optimism remains dim. "The horse is bolting," he says, calling for a "Cern for AI" where scientists could collaboratively develop superintelligence under contained, non-commercial oversight.
Social Networks

Internal Messages May Doom Meta At Social Media Addiction Trial (arstechnica.com) 54

An anonymous reader quotes a report from Ars Technica: This week, the first high-profile lawsuit -- considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints -- goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M, who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleged triggered depression, anxiety, self-harm, and suicidality. TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported. For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She's fighting to claim untold damages -- including potentially punitive damages -- to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks. [...]

To win, K.G.M.'s lawyers will need to "parcel out" how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.'s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward. However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.'s lawyers, told the Post that K.G.M. is prepared to put up this fight. "She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence," Bergman said.

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.'s case and others.' However, social media companies' internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.'s case that supposedly provide "smoking-gun evidence" that platforms "purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing" -- while putting increased engagement from young users at the center of their business models.
Most of the unsealed documents came from Meta. An internal email shows Mark Zuckerberg decided Meta's top strategic priority was getting teens "locked in" to Meta's family of apps. Another damning document discusses allowing "tweens" to use a private mode inspired by fake Instagram accounts ("finstas"). The same document includes an admission that internal data showed Facebook use correlated with lower well-being.

Internal communications showed Meta seemingly bragging that "teens can't switch off from Instagram even if they want to" and an employee declaring, "oh my gosh yall IG is a drug," likening all social media platforms to "pushers."

Slashdot Top Deals