Security

How AI Assistants Are Moving the Security Goalposts

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]
Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."

AI

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds (theguardian.com)

An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online".

In their experiment, the researchers fed anonymous accounts into an AI and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through Dolores Park. In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers launch "highly personalized" scams.
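The underlying attack is conceptually simple: extract identifying attributes from the anonymous account, then score their overlap against candidate accounts elsewhere. A toy sketch using Jaccard similarity follows; the account names and attributes are invented, and the paper's LLM pipeline is far more capable than this.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap score between two attribute sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Attributes extracted from the anonymous account's posts
anon = {"dog:Biscuit", "place:Dolores Park", "topic:school"}

# Attributes scraped from candidate public profiles (invented examples)
candidates = {
    "@jane_doe":   {"dog:Biscuit", "place:Dolores Park", "topic:hiking"},
    "@other_user": {"dog:Rex", "place:Berlin", "topic:chess"},
}

best_match = max(candidates, key=lambda name: jaccard(anon, candidates[name]))
```

Each seemingly innocuous detail shrinks the candidate pool, which is why the authors argue that what counts as "private" online needs a fundamental rethink.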

Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com)

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
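For readers unfamiliar with the mechanism: a D-Bus interface like the proposed org.freedesktop.AgeVerification1 would be described by introspection XML that any application can query at runtime. The method names below are invented for illustration, since Rainbolt's post does not specify the interface's shape; the sketch just parses such a description.

```python
import xml.etree.ElementTree as ET

# Hypothetical introspection data for the proposed interface; the actual
# method names and signatures are assumptions, not from the mailing list.
INTROSPECTION = """
<node>
  <interface name="org.freedesktop.AgeVerification1">
    <method name="IsAgeVerificationRequired">
      <arg type="b" direction="out" name="required"/>
    </method>
    <method name="DeclareAgeBracket">
      <arg type="s" direction="in" name="bracket"/>
      <arg type="b" direction="out" name="accepted"/>
    </method>
  </interface>
</node>
"""

def interface_methods(xml: str) -> list[str]:
    """List the method names an implementation would have to provide."""
    root = ET.fromstring(xml)
    return [m.get("name") for m in root.iter("method")]
```

The appeal of the D-Bus approach is exactly what Rainbolt describes: the interface is optional, and each distribution decides whether and how to implement it.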

Similar talks are underway in the Fedora and Linux Mint communities in case the California Digital Age Assurance Act and similar laws from other states and countries come to be enforced. At the same time, other OS developers, like MidnightBSD, have decided to drop support for desktop use in California entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem: "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates cannot, in truth, achieve the purported goal of age verification. Often, they are easy to circumvent, and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

Firefox

Mozilla Is Working On a Big Firefox Redesign (neowin.net)

darwinmac writes: Mozilla is working on a huge redesign for its Firefox browser, codenamed "Nova," which will bring pastel gradients, a refreshed new tab page, floating "island" UI elements, and more. "From the mockups, it appears Mozilla took some inspiration from Google's Material You (or at least, the dynamic color extraction part of it) because the browser color accent appears influenced by the wallpaper setting," reports Neowin. "Choosing a mint-green desktop background automatically shifts the top navigation bars to match that exact shade."

Mozilla has a habit of redesigning Firefox every few years. Before "Nova," there was the "Proton" redesign in 2021, the "Photon" redesign in 2017, and the "Australis" redesign in 2014. Nova is still in early development, so it might take a year or two before it appears in an official stable Firefox release. Neowin adds: "Not every redesign project ends well for Mozilla, though. You might remember 2012's Firefox Metro, an ambitious attempt to build a custom browser for Windows 8's touch-first interface. The team built it to operate both as a traditional desktop application and as a touch-optimized Metro app. The whole thing was scrapped in 2014 after two years in development due to a dismally low user adoption rate (a preview version of the software had been released a year earlier on the Aurora channel)."

Wikipedia

AI Translations Are Adding 'Hallucinations' To Wikipedia Articles (404media.co)

An anonymous reader quotes a report from 404 Media: Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations added "hallucinations," or errors, to the resulting articles. The new restrictions show how Wikipedia editors keep fighting to stop the flood of generative AI across the internet from diminishing the reliability of the world's largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia's open governance model. The issue centers on a program run by the Open Knowledge Association (OKA), a nonprofit that was found to be "mostly relying on cheap labor from contractors in the Global South" to translate English Wikipedia articles into other languages. Some translators began using tools like Google Gemini and ChatGPT to speed up the process, but editors reviewing the work found numerous hallucinations, including factual errors, missing citations, and references to unrelated sources.

"Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule," reports 404 Media.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com)

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activites of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Instead of mandated age restrictions, the letter urges lawmakers to consider the dangers and suggests regulating social media algorithms instead. It also recommends "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."

Businesses

Charter Gets FCC Permission To Buy Cox, Become Largest ISP In the US (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Charter Communications, operator of the Spectrum cable brand, has obtained Federal Communications Commission permission to buy Cox and surpass Comcast as the country's largest home Internet service provider. Charter has 29.7 million residential and business Internet customers compared to Comcast's 31.26 million. Buying Cox will give Charter another 5.9 million Internet customers. The FCC approved the deal on Friday, but the companies still need Justice Department approval and sign-offs from states including California and New York.

Opponents of Charter's $34.5 billion acquisition told the FCC that eliminating Cox as an independent entity will make it easier for Charter and Comcast to raise prices. But the FCC dismissed those concerns on the grounds that Charter and Cox don't compete directly against each other in the vast majority of their territories.

FCC Chairman Brendan Carr's primary demand from companies seeking to merge has been to eliminate diversity, equity, and inclusion (DEI) programs and policies. In a press release (PDF), the Carr-led FCC said that "Charter has committed to new safeguards to protect against DEI discrimination," and that Charter's network-expansion plans will bring "faster broadband and lower prices" to rural areas. The merger was approved one day after Charter sent a letter to Carr outlining its actions to end DEI. Charter offers broadband and cable service in 41 states, while Cox does so in 18 states.

Windows

Microsoft Bans 'Microslop' On Its Discord, Then Locks the Server (windowslatest.com)

Over the weekend, Windows Latest noticed that Microsoft's official Copilot Discord server began automatically blocking the term "Microslop." As shown in a screenshot, any message containing the word is automatically prevented from posting, and users receive a moderation notice explaining that the message includes language deemed inappropriate under the server's rules. From the report: Windows Latest found that sending a message with the word "Microslop" inside the official Copilot Discord server immediately triggers an automated moderation response. The message does not appear publicly in the channel, and instead, only the sender sees the notice stating that the content is blocked by the server because it contains a phrase deemed inappropriate.

Of course, the internet rarely leaves things there. Shortly after Windows Latest posted on X about the Copilot Discord server blocking "Microslop," users began experimenting in the server with variations such as "Microsl0p," using a zero instead of the letter "o." Predictably, those versions slipped past the filter. Keyword moderation has always been something of a cat-and-mouse game, and this isn't any different.
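The reason "Microsl0p" sails through is that naive filters match literal substrings. A common countermeasure is to normalize leetspeak before matching; here is a minimal sketch, where the substitution table is illustrative and not Discord's actual AutoMod logic.

```python
# Map common digit-for-letter substitutions back to letters before matching.
LEET = str.maketrans("013457", "oieast")

BLOCKED = {"microslop"}

def is_blocked(message: str) -> bool:
    """Lowercase and de-leet the message, then check for blocked terms."""
    normalized = message.lower().translate(LEET)
    return any(term in normalized for term in BLOCKED)
```

Even this loses the cat-and-mouse game eventually (spacing tricks, homoglyphs, embedded images), which is why keyword filters are usually paired with human moderation rather than trusted on their own.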

What started as a simple keyword filter quickly snowballed, with users deliberately testing the restriction and posting variations of the blocked term. Accounts that included "Microslop" in their messages were at first banned from sending further messages. Not long after, access to parts of the server was restricted, with message history hidden and posting permissions disabled for many users.

The Internet

After US-Israel Attacks, 90 Million Iranians Lose Internet Connectivity (cnn.com)

CNN reports that images from Iran's capital "have shown cars jammed along Tehran's streets, with heavy traffic on major roads after today's wave of attacks by the US and Israel." And though Iran has a population of 93 million, the attacks suddenly plunged Iran into "a near-total internet blackout with national connectivity at 4% of ordinary levels," according to internet monitoring experts at NetBlocks.

CNN reports: Since Iran's brutal crackdown earlier this year, the regime has made progress toward allowing only a subset of people with security clearance to access the international web, experts said. After previous internet shutdowns, some platforms never returned: the Iranian government blocked Instagram after the internet shutdown and protests in 2022, and the popular messaging app Telegram following protests in 2018.

The International Atomic Energy Agency announced an hour ago that they're "closely monitoring developments" — keeping in contact with countries in the region and so far seeing "no evidence of any radiological impact." They're also urging "restraint to avoid any nuclear safety risks to people in the region."

UPDATE (1 PM PST): Qatar, Bahrain and Kuwait "are shifting to remote learning starting Sunday until further notice following Iran's retaliatory strikes on Saturday," reports CNN.

AI

America's Teenagers Say AI Cheating Has Become a Regular Feature of Student Life (pewresearch.org)

On Tuesday Pew Research announced its newest findings: 54% of America's teens use AI for help with schoolwork. "One-in-five teens living in households making less than $30,000 a year say they do all or most of their schoolwork with AI chatbots' help. A similar share of those in households making $30,000 to just under $75,000 annually say this. Fewer teens living in higher-earning households (7%) say the same."
"The survey did not ask students whether they had used chatbots to write essays or generate other assignments..." notes the New York Times. "But nearly 60% of teenagers told Pew that students at their school used chatbots to cheat 'very often' or 'somewhat often.'" Agreeing with that are the Pew Researchers themselves. "Our survey shows that many teens think cheating with AI has become a regular feature of student life."

One worried teenager told the researchers that AI "makes people lazy and takes away jobs." But another said that "Everyone's going to have to know how to use AI or they'll be left behind."

Thanks to long-time Slashdot reader theodp for sharing the article.

The Internet

Google Quantum-Proofs HTTPS (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic material needed for TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. The elliptic curve signatures and public keys in today's X.509 certificates are about 64 bytes each; a typical certificate chain comprises six elliptic curve signatures and two EC public keys. This material can be cracked by a quantum computer running Shor's algorithm. Equivalent quantum-resistant signatures are roughly 2.5 kilobytes each. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of material used in more traditional verification processes in public key infrastructure. Merkle Tree Certificates, "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
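The core idea, proof of inclusion, is compact because verifying one leaf needs only about log2(N) sibling hashes rather than the whole tree. Below is a bare-bones sketch using SHA-256, without the domain separation and other hardening a real certification authority would need.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return every level of the tree, from hashed leaves up to the root."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        # An unpaired last node is hashed with itself.
        level = [h(level[i] + level[i + 1 if i + 1 < len(level) else i])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Collect the sibling hash at each level below the root."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling >= len(level):
            sibling = index  # unpaired node pairs with itself
        proof.append(level[sibling])
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from one leaf plus its sibling hashes."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

For a tree of a million certificates, a proof is only 20 hashes (640 bytes), which is what lets the "certificate" sent to the browser shrink to a lightweight inclusion proof plus the signed tree head.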

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both the classical and post-quantum schemes. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. Merkle Tree Certificates (MTCs) use Merkle Trees to provide quantum-resistant assurance that a certificate has been published, without having to include most of the lengthy keys and hashes. Combined with other techniques that reduce data sizes, MTCs end up roughly the same size as the certificates in use today [...]. The new system has already been implemented in Chrome.

AI

Perplexity Announces 'Computer,' an AI Agent That Assigns Work To Other AI Agents (arstechnica.com)

joshuark shares a report from Ars Technica: Perplexity has introduced "Computer," a new tool that allows users to assign tasks and see them carried out by a system that coordinates multiple agents running various models. The company claims that Computer, currently available to Perplexity Max subscribers, is "a system that creates and executes entire workflows" and "capable of running for hours or even months."

The idea is that the user describes a specific outcome -- something like "plan and execute a local digital marketing campaign for my restaurant" or "build me an Android app that helps me do a specific kind of research for my job." Computer then ideates subtasks and assigns them to multiple agents as needed, running the models Perplexity deems best for those tasks. The core reasoning engine currently runs Anthropic's Claude Opus 4.6, while Gemini is used for deep research, Nano Banana for image generation, Veo 3.1 for video production, Grok for lightweight tasks where speed is a consideration, and ChatGPT 5.2 for "long-context recall and wide search."
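Stripped of the orchestration machinery, the "best model for the task" layer is essentially a routing table. A toy sketch follows, using the assignments described in the article; the task labels are invented, and Perplexity's actual scheduler is not public.

```python
# Model assignments as described in the Ars Technica report; the task-type
# keys are invented labels for illustration.
MODEL_FOR_TASK = {
    "reasoning": "Claude Opus 4.6",      # core reasoning engine
    "deep_research": "Gemini",
    "image": "Nano Banana",
    "video": "Veo 3.1",
    "lightweight": "Grok",               # speed-sensitive tasks
    "long_context": "ChatGPT 5.2",       # long-context recall, wide search
}

def route(task_type: str) -> str:
    """Pick a model for a subtask, falling back to the reasoning engine."""
    return MODEL_FOR_TASK.get(task_type, MODEL_FOR_TASK["reasoning"])
```

The interesting engineering is everything this sketch omits: decomposing the user's goal into subtasks, passing intermediate results between agents, and keeping each task in its isolated compute environment.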

This kind of best-model-for-the-task approach differs from some competing products like Claude Cowork, which only uses Anthropic's models. All this happens in the cloud, with prebuilt integrations. "Every task runs in an isolated compute environment with access to a real filesystem, a real browser, and real tool integrations," Perplexity says. The idea is partly that this workflow was what some power users were already doing, and this aims to make that possible for a wider range of people who don't want to deal with all that setup.

People were already using multiple models and tailoring them to specific tasks based on perceived capabilities while, for example, using MCP (Model Context Protocol) to give those models access to data and applications on their local machines. Perplexity Computer takes a different approach, but the goal is the same: have AI agents running tailor-picked models perform tasks involving your own files, services, and applications. Then there is OpenClaw, which you could see as the immediate predecessor to this concept.

Google

South Korea Set To Get a Fully Functioning Google Maps (reuters.com)

South Korea has reversed a two-decade policy and approved the export of high-precision map data, paving the way for a fully functional Google Maps in the country. Reuters reports: The approval was made "on the condition that strict security requirements are met," the Ministry of Land, Infrastructure and Transport said in a statement. Those conditions include blurring military and other sensitive security-related facilities, as well as restricting longitude and latitude coordinates for South Korean territory on products such as Google Maps and Google Earth, it said.

The decision is expected to hurt Naver and Kakao -- local internet giants which currently dominate the country's market for digital map services. But it will appease Washington, which has urged Seoul to tackle what it says is discrimination against U.S. tech companies. South Korea, still technically at war with North Korea, had shot down Google's previous bids in 2007 and 2016 to be allowed to export the data, citing the risks that information about sensitive military and security facilities could be exposed.
"Google can now come in, slash usage fees, and take the market," said Choi Jin-mu, a geography professor at Kyung Hee University. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services -- logistics firms, for example -- become dependent, and in the long run, even government GIS (geographic information) systems could end up dependent on Google or Apple. That's the biggest concern."

Government

The Government Just Made it Harder to See What Spy Tech it Buys

An anonymous reader shares a report: It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool and the basis for countless investigations, my own and others'. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.

"FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together," Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.

The Internet

Say Goodbye to the Undersea Cable That Made the Global Internet Possible (wired.com)

The first fiber-optic cable ever laid across an ocean -- TAT-8, a nearly 6,000-kilometer line between the United States, United Kingdom, and France that carried its first traffic on December 14, 1988 -- is now being pulled off the Atlantic seabed after more than two decades of sitting dormant, bound for recycling in South Africa.

Subsea Environmental Services, one of only three companies in the world whose entire business is cable recovery and recycling, began the operation last year using its new diesel-electric vessel, the MV Maasvliet, and had already brought 1,012 kilometers of the cable to the Portuguese port of Leixoes by August.

TAT-8, short for Trans-Atlantic Telephone 8, was built by AT&T, British Telecom, and France Telecom, and hit full capacity within just 18 months of going live. A fault too expensive to repair took it out of service in 2002. The recovered cable is being shipped to Mertech Marine in South Africa, where it will be broken down into steel, copper, and two types of polyethylene -- all commercially valuable, especially the high-quality copper at a time when the International Energy Agency projects global shortages within a decade.

AI

Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too (techcrunch.com)

OpenAI CEO Sam Altman is pushing back on growing concerns about AI's environmental footprint, dismissing claims about ChatGPT's water consumption as "totally fake" and arguing that the fairer way to measure AI's energy use is to compare it against humans.

In an interview with Indian Express, Altman acknowledged that evaporative cooling in data centers once made water usage a real concern but said that is no longer the case, calling internet claims of 17 gallons of water per query "completely untrue, totally insane, no connection to reality."

On energy, he conceded it is "fair" to worry about total consumption given how heavily the world now relies on AI, and called for a rapid shift toward nuclear, wind and solar power. He took particular issue with comparisons that pit the cost of training a model against a single human inference, noting it "takes like 20 years of life and all of the food you eat" before a person gets smart -- and that on a per-query basis, AI has "probably already caught up on an energy efficiency basis."

Encryption

Telegram Disputes Russia's Claim Its Encryption Was Compromised (business-standard.com)

Russia's domestic intelligence agency claimed Saturday that Ukraine can obtain sensitive information from troops using the Telegram app on the front line, reports Bloomberg. The fact that the claims were made through Russia's state-operated news outlet RIA Novosti signals "tightening scrutiny over a platform used by millions of Russians," Bloomberg notes, as the Kremlin continues efforts to "push people to use a new state-backed alternative." Russia's communications watchdog limited access to Telegram — a popular messaging app owned by Russian-born billionaire Pavel Durov — over a week ago for failing to comply with Russian laws requiring personal data to be stored locally. Voice and video calls were blocked via Telegram in August. The pressure is the latest move in a long-running campaign to promote what the Kremlin calls a sovereign internet that's led to blocks on YouTube, Instagram and WhatsApp... Foreign intelligence services are able to see Russia's military messages in Telegram too, Russia's Minister for digital development, Maksut Shadaev, said on Wednesday, although he added that Russia will not block access to Telegram for troops for now.

Telegram responded at the time that no breaches of the app's encryption have ever been found. "The Russian government's allegation that our encryption has been compromised is a deliberate fabrication intended to justify outlawing Telegram and forcing citizens onto a state-controlled messaging platform engineered for mass surveillance and censorship," it said in an emailed response.

The Internet

Long Before Tech CEOs Turned To Layoffs To Cover AI Expenses, There Was WorldCom (nbcnews.com)

Long-time Slashdot reader theodp writes: Jeopardy time. A. This company spurred CEOs to make huge speculative capital expenditures based on wild unverified claims of future demand, resulting in the layoffs of tens of thousands of workers to reduce the resulting expenses, harming their core businesses. Q. What is OpenAI?

Sorry, the correct response is, "What is WorldCom?" In 2002, WorldCom, the second largest long-distance company in the U.S., entered Chapter 11 bankruptcy after disclosing accounting fraud that eventually totaled $11 billion, the biggest ever at the time. CEO Bernard Ebbers was subsequently sentenced to 25 years in prison.

CNBC reported that an employee of WorldCom's Internet service provider UUNet set off a frenzy of speculative investment and infrastructure overbuild after he used Excel to create a best-case scenario model for the Internet's growth. It suggested that, in the best of all possible worlds, Internet traffic would double every 100 days, a scenario that would greatly benefit WorldCom, whose lines would carry it. Despite there being no evidence to support it, WorldCom's lie hardened into an immutable law, and businesses around the world made important decisions based on the belief that traffic was doubling every 100 days. "For some period of time I can recall that we were backfilling that expectation with laying cables, something like 2,200 miles of cable an hour," AT&T CEO Michael Armstrong said. "Think of all the companies that went out of business that assumed that that was real."
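It is worth doing the arithmetic on that claim to see why it drove such overbuild. Doubling every 100 days compounds to roughly 12.6x growth per year and over 300,000x across five years:

```python
# Growth factors implied by "traffic doubles every 100 days"
doubling_period_days = 100

growth_one_year = 2 ** (365 / doubling_period_days)        # ~12.6x per year
growth_five_years = 2 ** (5 * 365 / doubling_period_days)  # ~300,000x over 5 years
```

Later analyses found that real Internet traffic in that era was closer to doubling about once a year, an order of magnitude slower, which is the gap the fiction papered over.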

In 2003, NBC News reported: Armstrong and former Sprint CEO Bill Esrey struggled for years to understand how WorldCom could beat them so handily. "We would look at the conduct of WorldCom in terms of their pricing, revenue growth, margins, in terms of their cost structure... and the price leader almost every quarter was WorldCom," Armstrong said. Added Esrey, "We couldn't figure out how they were pricing as aggressively as they were.... How could they be so efficient in their costs and expenses?" AT&T and Sprint began cutting jobs to push down their costs to WorldCom's level. "The market said what a marvelous management job WorldCom was doing and they would look over to AT&T and say, 'these guys aren't keeping up.' So, my shareholders were hurt. We laid off tens of thousands of employees in an accelerated fashion [in a futile effort to match WorldCom's phantom profits] and I think the industry was hurt," Armstrong says. "It just wrecked the whole industry," says Esrey.
Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.
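The flaw described above is a textbook case of broken object-level authorization: the cloud API verified that a caller held valid credentials, but never verified that the device being queried actually belonged to that caller. The sketch below is purely illustrative of that bug class; all names are hypothetical, and it does not represent DJI's actual backend code.

```python
# Hypothetical sketch of the vulnerable vs. patched server-side check.
# The registry maps device IDs to their registered owners.
DEVICE_OWNERS = {"vac-001": "sammy", "vac-002": "someone-else"}
KNOWN_USERS = {"sammy", "someone-else"}

def vulnerable_get_feed(user: str, device_id: str) -> str:
    # Authentication only: any logged-in user can read ANY device's feed,
    # which is how one account could reach ~7,000 strangers' vacuums.
    if user not in KNOWN_USERS:
        raise PermissionError("unknown user")
    return f"live feed of {device_id}"

def patched_get_feed(user: str, device_id: str) -> str:
    # Authorization added: the requested device must be registered
    # to the caller, not merely exist.
    if DEVICE_OWNERS.get(device_id) != user:
        raise PermissionError(f"{user} does not own {device_id}")
    return f"live feed of {device_id}"
```

The fix is a per-request ownership check on the server, since credentials alone prove identity but not entitlement to a particular object.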

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."
The Internet

Fury Over Discord's Age Checks Explodes After Shady Persona Test In UK (arstechnica.com) 62

Backlash intensified against Discord's age verification rollout after it briefly disclosed a UK age-verification test involving vendor Persona, contradicting earlier claims about minimal ID storage and transparency. Ars Technica explains: One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age check partner's services exposed 70,000 Discord users' government IDs.

Attempting to reassure users, Discord claimed that most users wouldn't have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored. Discord didn't hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren't happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord's global head of product policy, told The Verge that IDs shared during appeals "are deleted quickly -- in most cases, immediately after age confirmation."

It's unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord's age assurance policies that contradicted Discord's hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning: "Important: If you're located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what's truly needed for age verification is used."

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform. Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to "keep our users informed as vendors are added or updated." While Discord seeks to distance itself from Persona, Rick Song, Persona's CEO [...] told Ars that all the data of verified individuals involved in Discord's test has been deleted.
Ars also notes that hackers "quickly exposed a 'workaround' to avoid Persona's age checks on Discord" and "found a Persona frontend exposed to the open internet on a U.S. government authorized server."

The Rage, an independent publication that covers financial surveillance, reported: "In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting -- and a parallel implementation that appears designed to serve federal agencies." While Persona does not have any government contracts, the exposed service "appears to be powered by an OpenAI chatbot," The Rage noted.

Hackers warned "that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb," seemingly exploiting the "opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves."
