Google

Google Is Introducing Its Own Version of Apple's Private AI Cloud Compute

Google has unveiled Private AI Compute, a cloud platform designed to deliver advanced AI capabilities while preserving user privacy. As The Verge notes, the feature is "virtually identical to Apple's Private Cloud Compute." From the report: Many Google products run AI features like translation, audio summaries, and chatbot assistants on-device, meaning data doesn't leave your phone, Chromebook, or whatever it is you're using. This isn't sustainable, Google says, as advancing AI tools need more reasoning and computational power than devices can supply. The compromise is to ship more difficult AI requests to a cloud platform, called Private AI Compute, which it describes as a "secure, fortified space" offering the same degree of security you'd expect from on-device processing. Sensitive data is available "only to you and no one else, not even Google."
Security

ClickFix May Be the Biggest Security Threat Your Family Has Never Heard Of (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: ClickFix often starts with an email sent from a hotel that the target has a pending registration with, referencing the correct registration information. In other cases, ClickFix attacks begin with a WhatsApp message. In still other cases, the user receives the URL at the top of Google results for a search query. Once the mark accesses the referenced malicious site, it presents a CAPTCHA challenge or other pretext requiring user confirmation. The user receives an instruction to copy a string of text, open a terminal window, paste it in, and press Enter. Once entered, the string of text causes the PC or Mac to surreptitiously visit a scammer-controlled server and download malware. Then, the machine automatically installs it -- all with no indication to the target. With that, users are infected, usually with credential-stealing malware. Security firms say ClickFix campaigns have run rampant. The lack of awareness of the technique, links that arrive from known addresses or appear in search results, and the ability to bypass some endpoint protections are all driving the growth.

The commands, which are often Base64-encoded to make them unreadable to humans, are typically copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious. The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users' minds, the precaution doesn't extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard. With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.
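The Base64 obfuscation mentioned above is trivial to reverse with standard tooling. As a purely defensive illustration (the command below is invented and harmless), a few lines of Python show how such a string can be decoded and inspected before it is ever pasted into a terminal:

```python
import base64

# Harmless stand-in for the kind of Base64 blob a ClickFix page asks
# a victim to paste (the real ones fetch and install malware).
blob = base64.b64encode(b"echo 'hello world'").decode("ascii")
print(blob)  # opaque to a casual reader

# Decoding reveals the actual command before anything runs.
decoded = base64.b64decode(blob).decode("utf-8")
print(decoded)  # echo 'hello world'
```

Pasting unknown text into a terminal executes it with your privileges; decoding it first, in an editor rather than a shell, is the cheap way to see what it would actually do.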
Researchers from CrowdStrike described in a report a campaign designed to infect Macs with a Mach-O executable. "Promoting false malicious websites encourages more site traffic, which will lead to more potential victims," wrote the researchers. "The one-line installation command enables eCrime actors to directly install the Mach-O executable onto the victim's machine while bypassing Gatekeeper checks."

Push Security, meanwhile, reported a ClickFix campaign that uses a device-adaptive page that serves different malicious payloads depending on whether the visitor is on Windows or macOS.
Google

Google Announces Even More AI In Photos App, Powered By Nano Banana (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The Big G is finally making good on its promise to add its market-leading Nano Banana image-editing model to the app. The model powers a couple of features, and it's not just for Google's Android platform. Nano Banana edits are also coming to the iOS version of the app. [...] The Photos app already had conversational editing in the "Help Me Edit" feature, but it was running an older non-fruit model that produced inferior results. Nano Banana editing will produce AI slop, yes, but it's better slop.

Google says the updated Help Me Edit feature has access to your private face groups, so you can use names in your instructions. For example, you could type "Remove Riley's sunglasses," and Nano Banana will identify Riley in the photo (assuming you have a person of that name saved) and make the edit without further instructions. You can also ask for more fantastical edits in Help Me Edit, changing the style of the image from top to bottom. Google is very invested in getting people to use its AI tools, but less-savvy users might not be familiar enough with AI prompting to get the most out of Nano Banana. So Google Photos is also getting a collection of AI templates in a new "Create with AI" section. This menu will offer pre-formed prompts based on popular in-app edits. Some of the options you'll see include "put me in a high fashion photoshoot," "create a professional headshot," and "put me in a winter holiday card."

The app is also getting a new "Ask" button, which is not to be confused with "Ask Photos." The former is a new contextual button that appears when viewing a photo, and the latter is Google's controversial natural language search feature. [...] When looking at a photo, you can tap the Ask button to get information about the content of the photo or find related images. You can also describe edits you'd like to see in this interface, and Nano Banana will make them for you.

Open Source

FFmpeg To Google: Fund Us or Stop Sending Bugs (thenewstack.io)

FFmpeg, the open source multimedia framework that powers video processing in Google Chrome, Firefox, YouTube and other major platforms, has called on Google to either fund the project or stop burdening its volunteer maintainers with security vulnerabilities found by the company's AI tools. The maintainers patched a bug that Google's AI agent discovered in code for decoding a video format used by a 1995 video game but described the finding as "CVE slop."

The confrontation centered on a Google Project Zero policy announced in July that publicly discloses reported vulnerabilities within a week and starts a ninety-day countdown to full disclosure regardless of patch availability. FFmpeg, written largely in C with hand-optimized assembly, handles format conversion and streaming for VLC, Kodi and Plex but operates without adequate funding from the corporations that depend on it. Separately, Nick Wellnhofer resigned as maintainer of libxml2, a library used in all major web browsers, citing the unsustainable workload of addressing security reports without compensation; he said he would stop maintaining the project in December.
Education

UK Secondary Schools Pivoting From Narrowly Focused CS Curriculum To AI Literacy

Longtime Slashdot reader theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16-18-year-olds." The move aims to correct unintended consequences of a shift made more than a decade ago from the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, to a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups to address concerns about the UK's programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" with the support of its partners, including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.
EU

Critics Call Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com)

An anonymous reader quotes a report from Reuters: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues which have in turn faced pushback from companies and the U.S. government.

EU tech chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.
"The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts," Austrian privacy group noyb said in a statement. "This would be a massive downgrading of Europeans' privacy 10 years after the GDPR was adopted," noyb's Max Schrems said.

"These proposals would change how the EU protects what happens inside your phone, computer and connected devices," European Digital Rights policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post. "That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement," she said.
Media

PDF Will Support JPEG XL Format As 'Preferred Solution' (theregister.com)

The PDF Association is adding JPEG XL (JXL) support to the PDF specification, giving the advanced image format a new path to relevance despite Google's decision to declare it obsolete and remove it from Chromium. The Register reports: Peter Wyatt, CTO of the PDF Association, said: "We need to adopt a new image [format] that can support HDR [High Dynamic Range] content ... we have picked JPEG XL as our preferred solution." Wyatt also praised other benefits of JXL including wide gamut images, ultra-high resolution support for images with more than 1 billion pixels, and up to 4099 channels with up to 32 bits per channel.

The association is responsible for developing PDF specifications and standards and manages the ISO committee for PDF. JPEG XL is an advanced image format designed to be both more efficient and richer in features than JPEG. Based on a combination of the Free Lossless Image Format (FLIF) from Cloudinary and a Google project called PIK, the format was first released in late 2020 and fully standardized in October 2021 as ISO/IEC 18181. There is a reference implementation called libjxl. A second edition of the ISO standard was published in 2024.

JXL appeared to have wide industry support, including experimental implementation in Chrome and Chromium, until it was killed by Google in October 2022 and removed from its web browser engine. The company stated that "there is not enough interest from the entire ecosystem to continue experimenting with JPEG XL." Many in the community disagreed with the decision, including FLIF inventor Jon Sneyers, who perceived it as the outcome of an internal battle between proponents of JXL and a rival format, AVIF. "AVIF proponents within Chrome are essentially being prosecutor, judge and executioner at the same time," he said.

The Internet

Tim Berners-Lee Says AI Will Not Destroy the Web (theverge.com)

Tim Berners-Lee thinks AI will help the web, not destroy it. The inventor of the World Wide Web has spent years warning about platform concentration and social media's corrosive effects, but he views AI differently. AI has accomplished what his Semantic Web project could not. The technology extracts structured data from websites regardless of how the information was formatted. Berners-Lee spent decades trying to convince database owners to make their systems machine-readable voluntarily. AI companies simply took the data anyway. They achieved the machine-readable internet through extraction rather than cooperation, but the result is the same.

Berners-Lee also weighed in on the growing browser competition in the market. OpenAI released Atlas a few weeks ago. Perplexity has launched Comet. Google has expanded AI features in Chrome. All these browsers run on Chromium, which Berners-Lee acknowledges is not ideal, but conceded that browser engines are expensive to build. He thinks Apple's decision to restrict iPhones to WebKit prevents web apps from competing with native apps.
Network

Subsea Cable Investment Set To Double As Tech Giants Accelerate AI Buildout (cnbc.com)

Investment in subsea cable projects is expected to reach around $13 billion between 2025 and 2027, almost twice the amount invested between 2022 and 2024, according to telecommunications data provider TeleGeography. Tech giants Meta, Google, Amazon and Microsoft now represent about 50% of the overall market, up from a negligible share a decade ago.

The companies are expanding their subsea infrastructure to connect growing networks of data centers needed for AI development. Meta announced Project Waterworth in February, a 50,000-kilometer cable connecting five continents that will be the world's longest subsea cable project. Amazon announced its first wholly-owned subsea cable called Fastnet, connecting Maryland to Ireland. Google has invested in over 30 subsea cables. Over 95% of international data and voice call traffic travels through nearly a million miles of underwater cables.
Microsoft

Microsoft Bets on Influencers To Close the Gap With ChatGPT (msn.com)

An anonymous reader shares a report: Microsoft, eager to boost downloads of its Copilot chatbot, has recruited some of the most popular influencers in America to push a message to young consumers that might be summed up as: Our AI assistant is as cool as ChatGPT. Microsoft could use the help. The company recently said its family of Copilot assistants attracts 150 million active users each month. But OpenAI's ChatGPT claims 800 million weekly active users, and Google's Gemini boasts 650 million a month. Microsoft has an edge with corporate customers, thanks to a long history of selling them software and cloud services. But it has struggled to crack the consumer market -- especially people under 30.

"We're a challenger brand in this area, and we're kind of up and coming," Consumer Chief Marketing Officer Yusuf Mehdi said in an interview. Mehdi hopes to persuade key influencers to make Copilot their chatbot of choice and then use their popularity to market the assistant to their millions of followers. He says Microsoft is already getting more bang for the buck with influencers than with traditional media, but didn't provide any metrics.

[...] Using non-techies as spokespeople is meant to reinforce Microsoft's campaign to sell its chatbot as a life coach for everyone. Or as Consumer AI chief Mustafa Suleyman wrote in a recent essay, an AI companion that "helps you think, plan and dream."

AI

What Happens When Humans Start Writing for AI? (theamericanscholar.org)

The literary magazine of the Phi Beta Kappa society argues "the replacement of human readers by AI has lately become a real possibility.

"In fact, there are good reasons to think that we will soon inhabit a world in which humans still write, but do so mostly for AI." "I write about artificial intelligence a lot, and lately I have begun to think of myself as writing for AI as well," the influential economist Tyler Cowen announced in a column for Bloomberg at the beginning of the year. He does this, he says, because he wants to boost his influence over the world, because he wants to help teach the AIs about things he cares about, and because, whether he wants to or not, he's already writing for AI, and so is everybody else. Large-language-model (LLM) chatbots such as ChatGPT and Claude are trained, in part, by reading the entire internet, so if you put anything of yourself online, even basic social-media posts that are public, you're writing for them.

If you don't recognize this fact and embrace it, your work might get left behind or lost. For 25 years, search engines knit the web together. Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs, which means that those who want to affect the world needn't care much about high Google results anymore. What they really want is for the AI to read their work, process it, and weigh it highly in what it says to the millions of humans who ask it questions every minute.

How do you get it to do this? For that, we turn to PR people, always in search of influence, who are developing a form of writing (press releases and influence campaigns are writing) that's not so much search-engine-optimized as chatbot-optimized. It's important, they say, to write with clear structure, to announce your intentions, and especially to include as many formatted sections and headings as you can. In other words, to get ChatGPT to pay attention, you must write more like ChatGPT. It's also possible that, since LLMs understand natural language in a way traditional computer programs don't, good writing will be more privileged than the clickbait Google has succumbed to: One refreshing discovery PR experts have made is that the bots tend to prioritize information from high-quality outlets.

Tyler Cowen also wrote in his Bloomberg column that "If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance.... Give the AIs a sense not just of how you think, but how you feel — what upsets you, what you really treasure. Then future AI versions of you will come to life that much more, attracting more interest." Has AI changed the reasons we write? The Phi Beta Kappa magazine is left to consider the possibility that "power over a superintelligent beast and resurrection are nothing to sneeze at" — before offering another thought.

"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."
Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com)

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that "nobody clicked share," nor were users given any option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console..."
AI

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers' (msn.com)

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models.

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
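The client-side paywall failure mode described above is easy to demonstrate. In this toy sketch (the page markup and extractor are invented for illustration), the full article text ships in the HTML and the paywall exists only as a script a browser would execute; a scraper that never runs JavaScript sees everything:

```python
from html.parser import HTMLParser

# Invented example page: the full text is present in the markup, and
# the paywall is just a script that would hide it after the fact.
page = """
<html><body>
  <article id="story">The full article text is right here.</article>
  <script>if (!isSubscriber()) hide('story');</script>
</body></html>
"""

class TextExtractor(HTMLParser):
    """Collects the text inside <article>, ignoring scripts entirely."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.text = ""
    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True
    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False
    def handle_data(self, data):
        if self.in_article:
            self.text += data

# A scraper that never executes the paywall script sees the full text.
parser = TextExtractor()
parser.feed(page)
print(parser.text)  # The full article text is right here.
```

Server-side paywalls, which simply never send the article body to non-subscribers, are immune to this; the weakness is specific to paywalls enforced after delivery, in the browser.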

Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
AI

'Stratospheric' AI Spending By Four Wealthy Companies Reaches $360B Just For Data Centers (msn.com)

"Maybe you've heard that artificial intelligence is a bubble poised to burst," writes a Washington Post technology columnist. "Maybe you have heard that it isn't. (No one really knows either way, but that won't stop the bros from jabbering about it constantly.)"

"But I can confidently tell you that the money being thrown around for AI is so huge that numbers have lost all meaning." The companies pouring money in are so rich and so power-hungry (in multiple meanings of that term) that our puny human brains cannot really comprehend. So let's try to give some meaning and context to the stratospheric numbers in AI. Is it a bubble? Eh, who knows. But it is completely bonkers. In just the past year, the four richest companies developing AI — Microsoft, Google, Amazon and Meta — have spent roughly $360 billion combined for big-ticket projects, which included building AI data centers and stuffing them with computer chips and equipment, according to my analysis of financial disclosures.... How do companies pay for the enormous sums they are lavishing on AI? Mostly, these companies make so much money that they can afford to go bananas...

Eight of the world's top 10 most valuable companies are AI-centric or AI-ish American corporate giants — Nvidia, Apple, Microsoft, Google, Amazon, Broadcom, Meta and Tesla. That's according to tallies from S&P Global Market Intelligence based on the total price of the companies' stock held by investors. My analysis of the S&P data shows that the collective worth of those eight giants, $23 trillion, is more than the value of the next 96 most valuable U.S. companies put together, which includes many still very rich names such as JPMorgan, Walmart, Visa and ExxonMobil. No. 1 on that list, the AI computer chip seller Nvidia, last week became the first company in history to reach a stock market value of $5 trillion. That alone was more than the value of entire stock markets in most countries, Bloomberg News reported, other than the five biggest (in the U.S., China, Japan, Hong Kong and India)...

All the announced or under-construction data centers for powering AI would consume roughly as much electricity as 44 million households in the United States if they run full tilt, according to a recent analysis by the Barclays investment bank as reported by the Financial Times. For context, that's nearly one-third of the total number of residential housing units in the entire country, according to U.S. Census Bureau housing estimates for 2024.

Android

Gemini Starts Rolling Out On Android Auto

Gemini is (finally) rolling out on Android Auto, replacing Google Assistant while keeping "Hey Google," adding Gemini Live ("let's talk live"), message auto-translation, and new privacy toggles. "One feature lost between Assistant and Gemini, though, is the ability to use nicknames for contacts," notes 9to5Google. From the report: Over the past 24 hours, Google has quietly started the rollout of Gemini for Android Auto, seemingly starting with beta users. The change is server-side, with multiple users reporting that Gemini has gone live in the car. One user mentions that they noticed this on Android Auto 15.6, and we're seeing the same on our Pixel 10 Pro XL connected to different car displays, and also on a Galaxy Z Fold 7 running Android Auto 15.7.

It's unclear if this particular version is what delivers support, but that seems unlikely, seeing as this version only started rolling out last week. Android Auto 15.6 and 15.7 are currently only available in beta, so it's also unclear at this time if the rollout is tied to the Android Auto beta or simply showing up on that version as a coincidence.
Games

Video Games' Hottest New Platform is an Old One (financialpost.com)

Web-based video games are experiencing an unexpected revival as the broader $189 billion industry stagnates. Sales for browser-based titles like GeoGuessr and chess were expected to triple from 2021 to 2028, reaching $3.09 billion, according to Google and Kantar. Playgama hosted more than 15,000 new web games in the first half of 2025, exceeding the combined total from 2021 through 2023.

Websites provide fast and easy access without console boot-ups or app downloads. Game creators sidestep the 30% revenue cuts imposed by Steam and Apple. Poki has doubled its employee count to 70 since 2020 and now serves 100 million monthly active users. A top-ten developer on the platform earns about $1 million in yearly revenue, up from $50,000 in 2020. Consoles cost more than $450, and smartphone gamers are downloading fewer apps. Electronic Arts founder Trip Hawkins predicted web games will be "one of the next waves."
Wireless Networking

Ikea's Big Smart Home Push Arrives With 21 New Matter Devices (forbes.com)

The Scandinavian furniture giant has unveiled 21 new ultra-affordable Matter-over-Thread smart home devices across three launch segments: lighting, sensors, and control. With prices starting at just a few dollars, Ikea is pushing hard to replace its old Zigbee lineup and become a serious player in the Matter ecosystem. Forbes reports: All 21 new devices are native Matter devices, so you don't actually need Ikea's hub to get involved; Matter controllers from other brands will be able to sync them up to your existing smart home platform as well, provided that Matter controller also doubles up as a Thread border router. The good news is that many existing devices you may already have in your house (think Apple HomePod mini, Google Nest Hub Max, most of the recent Amazon Echo range, SmartThings hubs and even some Eero routers) all do.

This being Ikea, there are some quirky names involved... the new lineup starts with the Kajplats smart bulb range, with eleven bulbs in total, covering everything from compact spotlights to large decorative globes. They come in a mix of shapes, brightness levels, and finishes, with options for full-color control or just tunable white light. Ikea says each model now offers a wider intensity range and smoother dimming compared to the outgoing Tradfri lineup.

AI

Magika 1.0 Goes Stable As Google Rebuilds Its File Detection Tool In Rust (googleblog.com) 26

BrianFagioli writes: Google has released Magika 1.0, a stable version of its AI-based file type detection tool, and rebuilt the entire engine in Rust for speed and memory safety. The system now recognizes more than 200 file types, up from about 100, and is better at distinguishing look-alike formats such as JSON vs JSONL, TSV vs CSV, C vs C++, and JavaScript vs TypeScript. The team used a 3TB training dataset and even relied on Gemini to generate synthetic samples for rare file types, allowing Magika to handle formats that don't have large, publicly available corpora. The tool supports Python and TypeScript integrations and offers a native Rust command-line client.

Under the hood, Magika uses ONNX Runtime for inference and Tokio for parallel processing, allowing it to scan around 1,000 files per second on a modern laptop core and scale further with more CPU cores. Google says this makes Magika suitable for security workflows, automated analysis pipelines, and general developer tooling. Installation is a single curl or PowerShell command, and the project remains fully open source.
The project and its documentation are available on GitHub.
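As a flavor of why look-alike formats are hard to tell apart, here is a naive, hand-rolled heuristic for distinguishing JSON from JSONL - this is an illustrative sketch, not Magika's model. The idea: a JSONL file is many one-line JSON documents, while a plain JSON file parses as a single document. Short or ambiguous inputs defeat such rules, which is where a trained classifier like Magika helps.

```python
import json

def parses(s: str) -> bool:
    """True if s is a single valid JSON document."""
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

def guess_json_flavor(text: str) -> str:
    """Naive heuristic: JSONL is one JSON document per line;
    plain JSON parses as one document overall."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if parses(text):
        # A one-line JSONL file also lands here -- inherently ambiguous.
        return "json"
    if lines and all(parses(ln) for ln in lines):
        return "jsonl"
    return "unknown"

print(guess_json_flavor('{"a": 1}'))            # json
print(guess_json_flavor('{"a": 1}\n{"b": 2}'))  # jsonl
```

Note the built-in ambiguity: a JSONL file containing exactly one record is indistinguishable from plain JSON by this rule alone, one reason a statistical model beats hand-written heuristics at scale.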
Google

Google Plans Secret AI Military Outpost on Tiny Island Overrun By Crabs (arstechnica.com) 39

An anonymous reader shares a report: On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia's military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia at a location military strategists consider critical for monitoring Chinese naval activity.

Aside from its strategic military position, the island is famous for its massive annual crab migration, in which more than 100 million red crabs make their way across the island to spawn in the ocean. That's notable because the tech giant has applied for environmental approvals to build a subsea cable connecting the island to Darwin, where US Marines are stationed for six months each year.

[...] Christmas Island's annual crab migration is a natural phenomenon that Sir David Attenborough reportedly once described as one of his greatest TV moments when he visited the site in 1990. Every year, millions of crabs emerge from the forest and swarm across roads, streams, rocks, and beaches to reach the ocean, where each female can produce up to 100,000 eggs. The tiny baby crabs that survive take about nine days to march back inland to the safety of the plateau.

Supercomputing

A New Ion-Based Quantum Computer Makes Error Correction Simpler (technologyreview.com) 10

An anonymous reader quotes a report from MIT Technology Review: The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. Like all other existing quantum computers, Helios is not powerful enough to execute the industry's dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum's machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google's and IBM's. "Helios is an important proof point in our road map about how we'll scale to larger physical systems," says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum's majority owner.

Located at Quantinuum's facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67° Fahrenheit), on top of an optical table. Users can access the computer by logging in remotely over the cloud. [...] Helios is noteworthy for its qubits' precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer's qubit error rates are low to begin with, which means it doesn't need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. "To the best of my knowledge, no other platform is at this level," says Islam.
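That 99.921% two-qubit fidelity sounds close to perfect, but gate errors compound multiplicatively across a circuit, which is why small per-gate improvements matter so much before error correction takes over. A rough illustration, assuming independent errors (a simplification of real noise models):

```python
fidelity = 0.99921          # reported two-qubit entangling-gate fidelity
error_rate = 1 - fidelity   # about 7.9e-4 per operation

# Probability a circuit runs with zero gate errors, assuming each
# entangling gate fails independently with the same probability.
for gates in (100, 1000, 10000):
    p_clean = fidelity ** gates
    print(f"{gates:>6} gates -> {p_clean:.1%} chance of an error-free run")
```

Even at this fidelity, a 1,000-gate circuit completes without any gate error less than half the time, so error correction is still essential for deep circuits; the lower the raw error rate, the less hardware that correction consumes.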

[...] Besides increasing the number of qubits on its chip, another notable achievement for Quantinuum is that it demonstrated error correction "on the fly," says David Hayes, the company's director of computational theory and design. That's a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, also used in the industry. Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios's predecessor, with the claim that the simulation "rivals the best classical approaches in expanding our understanding of magnetism." Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.
Quantinuum is expanding its Helios line with a new system in Minnesota. It has also started developing its fourth-generation quantum computer, Sol, slated for 2027 with 192 qubits; a fifth-generation system, Apollo, is expected in 2029 with thousands of qubits and full fault tolerance.
