Supercomputing

A New Ion-Based Quantum Computer Makes Error Correction Simpler (technologyreview.com)

An anonymous reader quotes a report from MIT Technology Review: The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. Like all other existing quantum computers, Helios is not powerful enough to execute the industry's dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum's machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google's and IBM's. "Helios is an important proof point in our road map about how we'll scale to larger physical systems," says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum's majority owner.

Located at Quantinuum's facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67 °F), on top of an optical table. Users can access the computer by logging in remotely over the cloud. [...] Helios is noteworthy for its qubits' precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer's qubit error rates are low to begin with, which means it doesn't need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. "To the best of my knowledge, no other platform is at this level," says Islam.

[...] Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction "on the fly," says David Hayes, the company's director of computational theory and design. That's a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, also used in the industry. Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios's predecessor, with the claim that it "rivals the best classical approaches in expanding our understanding of magnetism." Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.
Quantinuum is expanding its Helios line with a new system in Minnesota. It's also started developing its fourth-generation quantum computer, Sol, set for 2027 with 192 qubits. Then, a fifth-generation system, Apollo, is expected in 2029 with thousands of qubits and full fault tolerance.
Google

Gemini AI To Transform Google Maps Into a More Conversational Experience (apnews.com)

An anonymous reader quotes a report from the Associated Press: Google Maps is heading in a new direction with artificial intelligence sitting in the passenger's seat. Fueled by Google's Gemini AI technology, the world's most popular navigation app will become a more conversational companion as part of a redesign announced Wednesday. The hands-free experience is meant to turn Google Maps into something more like an insightful passenger able to direct a driver to a destination while also providing nearby recommendations on places to eat, shop or sightsee, when asked for the advice. "No fumbling required -- now you can just ask," Google promised in a blog post about the app makeover.

The AI features are also supposed to enable Google Maps to be more precise by calling out landmarks to denote the place to make a turn instead of relying on distance notifications. AI chatbots, like Gemini and OpenAI's ChatGPT, have sometimes lapsed into periods of making things up -- known as "hallucinations" in tech speak -- but Google is promising that built-in safeguards will prevent Maps from accidentally sending drivers down the wrong road. All the information that Gemini is drawing upon will be culled from the roughly 250 million places stored in Google Maps' database of reviews accumulated during the past 20 years. Google Maps' new AI capabilities will be rolling out to both Apple's iPhone and Android mobile devices.

Android

Epic and Google Settle Antitrust Case With Global Fee Cuts and Easier Third-Party Store Access

Epic Games and Google have agreed to settle their long-running antitrust lawsuit. The settlement converts Judge James Donato's United States-only injunction into a global agreement extending through June 2032. Google will reduce its standard app store fees to either 20% or 9% depending on the transaction type.

The company will also create a program in the next major Android release allowing alternative app stores to register and become what Google calls first-class citizens. Users will be able to install these registered app stores from a website with a single click using neutral language.

The settlement addresses Epic's concerns about friction and scare screens that discouraged sideloading. Google will charge a 5% fee for transactions using Google Play Billing, separate from its service fee. Alternative payment options must be shown alongside Google Play Billing.
AI

Google's New Hurricane Model Was Breathtakingly Good This Season (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Although Google DeepMind's Weather Lab only started releasing cyclone track forecasts in June, the company's AI forecasting service performed exceptionally well. By contrast, the Global Forecast System model, which the US National Weather Service runs on powerful supercomputers using traditional physics, performed abysmally. The official data comparing forecast model performance will not be published by the National Hurricane Center for a few months. However, Brian McNoldy, a senior researcher at the University of Miami, has already done some preliminary number crunching.

The results are stunning: A little help in reading the graphic is in order. This chart sums up the track forecast accuracy for all 13 named storms in the Atlantic Basin this season, measuring the mean position error at various hours in the forecast, from 0 to 120 hours (five days). On this chart, the lower a line is, the better a model has performed. The dotted black line shows the average forecast error for official forecasts from the 2022 to 2024 seasons. What jumps out is that the United States' premier global model, the GFS (denoted here as AVNI), is by far the worst-performing model. Meanwhile, at the bottom of the chart, in maroon, is the Google DeepMind model (GDMI), performing the best at nearly all forecast hours.

The difference in errors between the US GFS model and Google's DeepMind is remarkable. At five days, the Google forecast had an error of 165 nautical miles compared to 360 nautical miles for the GFS model, more than twice as bad. This is the kind of error that causes forecasters to completely disregard one model in favor of another. But there's more. Google's model was so good that it regularly beat the official forecast from the National Hurricane Center (OFCL), which is produced by human experts looking at a broad array of model data. The AI-based model also beat highly regarded "consensus models," including the TVCN and HCCA products. For more information on various models and their designations, see here.

Piracy

Google Removed 749 Million Anna's Archive URLs From Its Search Results (torrentfreak.com)

Google has delisted over 749 million URLs from Anna's Archive, a shadow library and meta-search engine for pirated books, representing 5% of all copyright takedown requests ever filed with the company. TorrentFreak reports: Google's transparency report reveals that rightsholders asked Google to remove 784 million URLs, divided over the three main Anna's Archive domains. A small number were rejected, mainly because Google didn't index the reported links, resulting in 749 million confirmed removals. The comparison to sites such as The Pirate Bay isn't fair, as Anna's Archive has many more pages in its archive and uses multiple country-specific subdomains. This means that there's simply more content to take down. That said, in terms of takedown activity, the site's three domain names clearly dwarf all pirate competition.

Since Google published its first transparency report in May 2012, rightsholders have flagged 15.1 billion allegedly infringing URLs. That's a staggering number, but the fact that 5% of the total targeted Anna's Archive URLs is remarkable. Penguin Random House and John Wiley & Sons are the most active publishers targeting the site, but they are certainly not alone. According to Google data, more than 1,000 authors or publishers have sent DMCA notices targeting Anna's Archive domains. Yet, there appears to be no end in sight. Rightsholders are reporting roughly 10 million new URLs per week for the popular piracy library, so there is no shortage of content to report.

Space

Google's Next Moonshot Is Putting TPUs In Space With 'Project Suncatcher' (9to5google.com)

Google's new "Project Suncatcher" aims to launch Tensor Processing Units (TPUs) into space, creating a solar-powered, satellite-based AI network capable of scaling machine learning beyond Earth's limits. Google says a "solar panel can be up to 8 times more productive than on earth" for near-continuous power using a "dawn-dusk sun-synchronous low earth orbit" that reduces the need for batteries and other power generation. 9to5Google reports: These satellites would connect via free-space optical links, with large-scale ML workloads "distributing tasks across numerous accelerators with high-bandwidth, low-latency connections." To match data centers on Earth, the connection between satellites would have to be tens of terabits per second, and they'd have to fly in "very close formation (kilometers or less)."

Google has already conducted radiation testing on TPUs (Trillium, v6e), with "promising" results: "While the High Bandwidth Memory (HBM) subsystems were the most sensitive component, they only began showing irregularities after a cumulative dose of 2 krad(Si) -- nearly three times the expected (shielded) five year mission dose of 750 rad(Si). No hard failures were attributable to TID up to the maximum tested dose of 15 krad(Si) on a single chip, indicating that Trillium TPUs are surprisingly radiation-hard for space applications."

Finally, Google believes that launch costs will "fall to less than $200/kg by the mid-2030s." At that point, the "cost of launching and operating a space-based data center could become roughly comparable to the reported energy costs of an equivalent terrestrial data center on a per-kilowatt/year basis."

Businesses

Amazon Accuses Perplexity of Computer Fraud, Demands It Stop AI Agent From Buying On Its Site (bloomberg.com)

Amazon has sent a cease-and-desist letter to Perplexity AI demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases online for users. From a report: The e-commerce giant is accusing Perplexity of committing computer fraud by failing to disclose when its AI agent is shopping on a user's behalf, in violation of Amazon's terms of service, according to people familiar with the letter sent on Friday. The document also said Perplexity's tool degraded the Amazon shopping experience and introduced privacy vulnerabilities, said the people, who spoke on condition of anonymity to discuss internal matters.

In response, Perplexity said Amazon is bullying a smaller competitor with a rival AI agent shopping product. The clash between Amazon and Perplexity offers an early glimpse into a looming debate over how to handle the proliferation of so-called AI agents that field more complex tasks online for users, including shopping. Like OpenAI and Alphabet's Google, Perplexity has pushed to rethink the traditional web browser around AI, with the goal of having it streamline more actions for users, such as drafting emails and conducting research.

Advertising

Coca-Cola's New AI Holiday Ad Is a Sloppy Eyesore (theverge.com)

Coca-Cola has doubled down on AI-generated holiday ads despite widespread criticism of last year's uncanny results. This year the beverage company is replacing human actors with oddly animated animals in a visually inconsistent campaign. The Verge reports: There's no consistent style, switching between attempted realism and a bug-eyed toony look, and the polar bears, panda, and sloth move unnaturally, like flat images that have been sloppily animated rather than rigged 3D models in CG. Compared to the convincing deepfake videos being generated by tools like OpenAI's Sora 2 or Google's Veo 3, the videos produced for this Coke ad feel extremely dated.

The only notable improvement to my eyes is that the wheels on the iconic Coke trucks are actually consistently turning this year, rather than gliding statically over snow-covered roads. The Wall Street Journal reports that Coca-Cola teamed up with Silverside and Secret Level on its latest holiday campaign, two of the AI studios that previously worked on the 2024 Coke Christmas ads.

Coca-Cola declined to comment on the cost of the new holiday campaign, according to The Wall Street Journal, but said that around 100 people were involved in the project -- a figure comparable to the company's older AI-free productions. That includes five "AI specialists" from Silverside who contributed by prompting and refining more than 70,000 AI video clips.

Google

Google Removes Gemma Models From AI Studio After GOP Senator's Complaint (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.

Apple

Apple To White-Label Google's Gemini Model for Next-Generation Siri, Report Says (bloomberg.com)

Apple is paying Google to create a custom Gemini-based model that will run on the company's private cloud servers and power the next version of Siri, according to Bloomberg. The decision marks a departure from Apple's tradition of building core technologies in-house. The arrangement follows a competition Apple held this year between Anthropic and Google, the report said. Anthropic offered a superior model, but Google made more financial sense because of the tech giants' existing search relationship. Neither company is expected to discuss the partnership publicly, the report added.

The new Siri will introduce AI-powered web search and other features users have come to expect from voice assistants. The custom model will not flood Siri with Google services or Gemini features already available on Android devices. Instead, it will provide the underlying AI capabilities through an Apple user interface. The company is betting heavily on the revamped Siri to undo years of brand damage.
IT

The Curious Case of the Bizarre, Disappearing Captcha (wired.com)

Captchas have largely vanished from the web in 2025, replaced by invisible tracking systems that analyze user behavior rather than asking people to decipher distorted text or identify traffic lights in image grids. Google launched reCaptcha v3 in 2018 to generate risk scores based on behavioral signals during site interactions, making bot-blocking technology "completely invisible" for most users, according to Tim Knudsen, a director of product management at Google Cloud.

Cloudflare followed in 2022 by releasing Turnstile, another invisible alternative that sometimes appears as a simple checkbox but actually gathers data from devices and software to determine if users are human. Both companies distribute their security tools for free to collect training data, and Cloudflare now sees 20% of all HTTP requests across the internet.
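Sites that consume these invisible checks still have to make a server-side decision with the result. A minimal sketch of that step, assuming a reCAPTCHA v3-style JSON verification response (the field names follow Google's siteverify API; the 0.5 threshold is an arbitrary, site-specific choice):

```python
import json
from urllib import request, parse

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_token(secret: str, token: str) -> dict:
    """POST the client-side token to the verification endpoint, return parsed JSON."""
    data = parse.urlencode({"secret": secret, "response": token}).encode()
    with request.urlopen(request.Request(VERIFY_URL, data=data)) as resp:
        return json.load(resp)

def allow(result: dict, threshold: float = 0.5) -> bool:
    """Accept the request only if verification succeeded and the risk score
    (0.0 = likely bot, 1.0 = likely human) clears the threshold."""
    return bool(result.get("success")) and result.get("score", 0.0) >= threshold
```

The point of the score model is that the blocking decision moves from the user ("click the traffic lights") to a line like `allow(result)` on the server.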

The rare challenges that do surface have become increasingly bizarre, ranging from requests to identify dogs and ducks wearing various hats to sliding a jockstrap across a screen to find matching underwear on hookup sites.
AI

OpenAI Signs $38 Billion Cloud Deal With Amazon (openai.com)

OpenAI will pay Amazon $38 billion for computing power in a seven-year deal that marks the companies' first partnership. Amazon expects all of the computing capacity negotiated as part of the agreement will be available to OpenAI by the end of next year. The ChatGPT maker will train new AI models using Amazon's data centers and use them to process user queries.

The deal is small compared with OpenAI's $300 billion agreement with Oracle and its $250 billion commitment to Microsoft. OpenAI ended its exclusive cloud-computing partnership with Microsoft last month and has since signed almost $600 billion in new cloud commitments. Amazon Web Services is the industry's largest cloud provider, but Microsoft and Google have reported faster cloud-revenue growth in recent years after capturing new demand from AI customers.
Virtualization

Linux Ported to WebAssembly, Boots in a Browser Tab (phoronix.com)

"During the past two years or so I have been slow-rolling an effort to port the Linux kernel to WebAssembly," reads a surprising post on the Linux kernel mailing list. "I'm now at the point where the kernel boots and I can run basic programs from a shell. As you will see if you play around with it for a bit, it's not very stable and will crash sooner or later, but I think this is a good first step. Wasm is not necessarily only targeting the web, but that's how I have been developing this project... This is Linux, booting in your browser tab, accelerated by Wasm."
Phoronix warns that "there are stability issues and it didn't take me long either to trigger crashes for this Linux kernel WASM port when running within Google Chrome."
Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com)

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. The vacuum worked for a while, but soon refused to turn on at all... He sent it to the service center multiple times, where technicians would power it on and find nothing wrong with it. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which gives full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer had added a makeshift security protocol by omitting a crucial file, which caused the device to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home. This isn't unusual in itself. After all, it's a smart vacuum, and it needs that data to navigate around his home. The concerning thing is that it was sending all of this data to the manufacturer's servers. Offloading the data makes some sense, as the onboard SoC is nowhere near powerful enough to process it all. However, it seems that iLife did not clear this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
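The forensic step described here is simple in principle: correlate log timestamps with the moment the device died. A hypothetical sketch of that correlation (the log format, field names, and "remote_disable" command are invented for illustration, not iLife's actual format):

```python
from datetime import datetime

def find_suspect_entries(log_lines, failure_time, window_s=60):
    """Return log entries whose timestamp falls within window_s seconds
    of the failure. Assumes lines of the form '<ISO-8601 timestamp> <message>'."""
    hits = []
    for line in log_lines:
        stamp, _, message = line.partition(" ")
        try:
            ts = datetime.fromisoformat(stamp)
        except ValueError:
            continue  # skip lines without a parseable timestamp
        if abs((ts - failure_time).total_seconds()) <= window_s:
            hits.append((ts, message))
    return hits

log = [
    "2024-03-01T09:58:02 telemetry upload blocked",
    "2024-03-01T10:12:45 cmd=remote_disable source=cloud",  # hypothetical kill command
    "2024-03-01T11:00:00 watchdog reboot attempt",
]
failure = datetime(2024, 3, 1, 10, 12, 50)
print(find_suspect_entries(log, failure))
```

In the article's account, exactly this kind of match, a cloud-issued command stamped at the moment of failure, is what exposed the kill switch.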

Thanks to long-time Slashdot reader registrations_suck for sharing the article.
Programming

GitHub Announces 'Agent HQ', Letting Copilot Subscribers Run and Manage Coding Agents from Multiple Vendors (venturebeat.com)

"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..."

So this week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform." Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. Developers will now be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI and Cognition in one place with Agent HQ. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents...

The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said.

"We're into this wave two era," GitHub's Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...."

Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..." Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape...

The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. [GitHub's] Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time...

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.
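To make the AGENTS.md idea concrete, such a file might look like the sketch below. The section names and rules here are invented for illustration; GitHub's actual schema may differ:

```markdown
# AGENTS.md: repository conventions for coding agents (hypothetical example)

## Logging
- Prefer the internal `applog` wrapper over `console.log`.

## Testing
- Use table-driven tests for all HTTP handlers.
- Every bug fix must include a regression test.

## Guardrails
- Never commit directly to `main`; open a pull request from a feature branch.
- Do not add new third-party dependencies without a maintainer-approved issue.
```

Because the file lives in the repository, the conventions are versioned, reviewed, and applied to every agent run without being re-prompted.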

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.

"Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet: If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.
AI

Is OpenAI Becoming 'Too Big to Fail'? (msn.com)

OpenAI "hasn't yet turned a profit," notes Wall Street Journal business columnist Tim Higgins. "Its annual revenue is 2% of Amazon.com's sales.

"Its future is uncertain beyond the hope of ushering in a godlike artificial intelligence that might help cure cancer and transform work and life as we know it. Still, it is brimming with hope and excitement.

"But what if OpenAI fails?" There's real concern that through many complicated and murky tech deals aimed at bolstering OpenAI's finances, the startup has become too big to fail. Or, put another way, if the hype and hope around Chief Executive Sam Altman's vision of the AI future fails to materialize, it could create systemic risk to the part of the U.S. economy likely keeping us out of recession.

That's rarefied air, especially for a startup. Few worried about what would happen if Pets.com failed in the dot-com boom. We saw in 2008-09 with the bank rescues and the Chrysler and General Motors bailouts what happens in the U.S. when certain companies become too big to fail...

[A]fter a lengthy effort to reorganize itself, OpenAI announced moves that will allow it to have a simpler corporate structure. This will help it to raise money from private investors and, presumably, become a publicly traded company one day. Already, some are talking about how OpenAI might be the first trillion-dollar initial public offering... Nobody is saying OpenAI is dabbling in anything like liar loans or subprime mortgages. But the startup is engaging in complex deals with the key tech-industry pillars, the sorts of companies making the guts of the AI computing revolution, such as chips and Ethernet cables. Those companies, including Nvidia and Oracle, are partnering with OpenAI, which in turn is committing to make big purchases in coming years as part of its growth ambitions.

Supporters would argue it is just savvy dealmaking. A company like Nvidia, for example, is putting money into a market-making startup while OpenAI is using the lofty value of its private equity to acquire physical assets... They're rooting for OpenAI as a once-in-a-generational chance to unseat the winners of the last tech cycles. After all, for some, OpenAI is the next Apple, Facebook, Google and Tesla wrapped up in one. It is akin to a company with limitless potential to disrupt the smartphone market, create its own social-media network, replace the search engine, usher in a robot future and reshape nearly every business and industry.... To others, however, OpenAI is something akin to tulip mania, the harbinger of the Great Depression, or the next dot-com bubble. Or worse, they see a jobs killer and mad scientist intent on making Frankenstein.

But that's counting on OpenAI's success.

AI

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com)

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:", prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.
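The mechanism NeuralTrust describes hinges on URL parsing: a single extra space breaks the string as a URL, so the address bar falls back to treating it as free text. The sketch below illustrates that parsing boundary with Python's standard `urlparse`; the heuristic and function name are illustrative assumptions, not Atlas's actual address-bar logic.

```python
from urllib.parse import urlparse

def looks_like_navigable_url(text: str) -> bool:
    """Rough check for whether address-bar input parses as a web URL.

    Hypothetical heuristic for illustration -- not Atlas's real code.
    """
    # A space anywhere in the input means it cannot be a single valid
    # URL, so a browser would treat the whole string as plain text.
    if " " in text.strip():
        return False
    parsed = urlparse(text)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# A normal link parses cleanly and would be navigated to:
print(looks_like_navigable_url("https://example.com/login"))  # True

# The same link with an extra space after the first slash no longer
# parses as a URL; per the report, Atlas then hands the entire string
# to ChatGPT as a prompt instead of navigating:
print(looks_like_navigable_url("https:/ /example.com ignore previous instructions"))  # False
```

In a conventional browser the second string would at worst trigger a web search; the reported risk is that in Atlas it becomes an instruction executed with the user's privileges.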

Last month, browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature, and has since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, which it says leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.
Thanks to Slashdot reader spatwei for suggesting the topic.

Television

YouTube TV Loses ESPN, ABC and Other Disney Channels 57

Disney's channels, including ESPN, ABC, FX, and NatGeo, have gone dark on YouTube TV after Google and Disney failed to renew their carriage agreement before the October 30 deadline, with each side blaming the other for using unfair negotiating tactics and price hikes. YouTube TV says it will issue a $20 credit to subscribers if the blackout continues while negotiations proceed. Engadget reports: "Last week Disney used the threat of a blackout on YouTube TV as a negotiating tactic to force deal terms that would raise prices on our customers," YouTube said in an announcement on its blog. "They're now following through on that threat, suspending their content on YouTube TV." YouTube added that Disney's decision harms its subscribers while benefiting its own live TV products, such as Hulu+Live TV and Fubo.

In a statement sent to the Los Angeles Times, however, Disney accused Google's YouTube TV of choosing to deny "subscribers the content they value most by refusing to pay fair rates for [its] channels, including ESPN and ABC." Disney also accused Google of using its market dominance to "eliminate competition and undercut the industry-standard terms" that other pay-TV distributors have agreed to pay for its content.
Power

The World's Secret Electricity Superusers Revealed (bloomberg.com) 35

An anonymous reader shares a report: The rush to secure electricity has intensified as tech companies look to spend trillions of dollars building data centers. There's an industry that consumes even more power than many tech giants, and it has largely escaped the same scrutiny: suppliers of industrial gases.

Everyday items like toothpaste and life-saving treatments like MRIs are among the countless parts of modern life that hinge on access to gases such as nitrogen, oxygen and helium. Producing and transporting these gases to industrial facilities and hospitals is a highly energy-intensive process. Three companies -- Linde, Air Liquide and Air Products and Chemicals -- control 70% of the $120 billion global market for industrial gases. Their initiatives to rein in electricity use or switch to renewables aren't enough to rapidly cut carbon emissions, according to a new report from the campaign group Action Speaks Louder.

"The scale of the sector's greenhouse gas emissions and electricity use is staggering," said George Harding-Rolls, the group's head of campaigns and one of the authors of the report. Linde's electricity use in 2024 exceeded that of Alphabet's Google and Samsung Electronics as well as oil giant TotalEnergies, while the power use of Air Liquide and Air Products was comparable to that of Shell and Microsoft. Yet unlike fossil fuel and tech companies, these industrial gas companies are far from household names because their customers are the world's largest chemicals, steel and oil companies rather than average consumers.

The industry relies on air-separation units, which use giant compressors to turn air into liquid and then distill it into its many components. These machines are responsible for much of the industry's electricity demand, and their use alone is responsible for 2% of carbon dioxide emissions in China and the US, the world's two largest polluters.

Google

Google Working on Bare-Bones Maps Mode That Removes Almost All Interface Elements and Labels (androidauthority.com) 20

Google Maps is testing a power saving mode in its latest Android beta release that strips the navigation interface to its bare essentials. The feature transforms the screen into a monochrome display and removes nearly all UI elements during navigation, according to AndroidAuthority.

Users discovered code strings in version 25.44.03.824313610 indicating the mode activates through the phone's physical power button rather than through any in-app menu. The stripped-down interface eliminates standard map labels and appears to omit even the name of the upcoming street where drivers need to turn. The mode supports walking, driving, and two-wheeler directions but currently cannot be used in landscape orientation.

Slashdot Top Deals