Communications

Alphabet Spins Off Laser-Based Internet Project Taara From 'Moonshot' Unit (ft.com) 22

Alphabet is spinning out Taara, a laser-based internet company, from its X "moonshot" incubator. The newly independent company has secured backing from Series X Capital, with Alphabet retaining a minority stake.

Taara's technology transmits data at 20 gigabits per second over 20km by firing pencil-width light beams between traffic light-sized terminals, extending traditional fiber-optic networks with minimal construction costs.

Based in Sunnyvale, California, the company operates in 12 countries, including India and parts of Africa, where it created a 5km laser link over the Congo River between Brazzaville and Kinshasa. The two-dozen-strong team partners with telecommunications firms like Bharti Airtel and T-Mobile to extend core fiber-optic networks to remote locations or dense urban areas.

Taara originated from Project Loon, which was shut down in 2021 after facing regulatory challenges. The company is developing silicon photonic chips to replace mirrors and lenses in its terminals and potentially enable multiple connections from a single transmitter.
AI

Google's AI 'Co-Scientist' Solved a 10-Year Superbug Problem in Two Days (livescience.com) 48

Google collaborated with Imperial College London and its "Fleming Initiative" partnership with Imperial NHS, giving their scientists access to a powerful new AI built with Gemini 2.0 "designed to make research faster and more efficient," according to an announcement from the school. And the results were surprising...

"José Penadés and his colleagues at Imperial College London spent 10 years figuring out how some superbugs gain resistance to antibiotics," writes LiveScience. "But when the team gave Google's 'co-scientist'' — an AI tool designed to collaborate with researchers — this question in a short prompt, the AI's response produced the same answer as their then-unpublished findings in just two days." Astonished, Penadés emailed Google to check if they had access to his research. The company responded that it didn't. The researchers published their findings [about working with Google's AI] Feb. 19 on the preprint server bioRxiv...

"What our findings show is that AI has the potential to synthesise all the available evidence and direct us to the most important questions and experimental designs," co-author Tiago Dias da Costa, a lecturer in bacterial pathogenesis at Imperial College London, said in a statement. "If the system works as well as we hope it could, this could be game-changing; ruling out 'dead ends' and effectively enabling us to progress at an extraordinary pace...."

After two days, the AI returned suggestions, one being what they knew to be the correct answer. "This effectively meant that the algorithm was able to look at the available evidence, analyse the possibilities, ask questions, design experiments and propose the very same hypothesis that we arrived at through years of painstaking scientific research, but in a fraction of the time," Penadés, a professor of microbiology at Imperial College London, said in the statement. The researchers noted that using the AI from the start wouldn't have removed the need to conduct experiments but that it would have helped them come up with the hypothesis much sooner, thus saving them years of work.

Despite these promising findings and others, the use of AI in science remains controversial. A growing body of AI-assisted research, for example, has been shown to be irreproducible or even outright fraudulent.

Google has also published the first test results of its AI 'co-scientist' system, according to Imperial's announcement, which adds that academics from a handful of top universities "asked a question to help them make progress in their field of biomedical research... Google's AI co-scientist system does not aim to completely automate the scientific process with AI. Instead, it is purpose-built for collaboration to help experts who can converse with the tool in simple natural language, and provide feedback in a variety of ways, including directly supplying their own hypotheses to be tested experimentally by the scientists."

Google describes their system as "intended to uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and tailored to specific research objectives...

"We look forward to responsible exploration of the potential of the AI co-scientist as an assistive tool for scientists," Google adds, saying the project "illustrates how collaborative and human-centred AI systems might be able to augment human ingenuity and accelerate scientific discovery.
AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

Google

Google Is Officially Replacing Assistant With Gemini (9to5google.com) 26

Google announced today that Gemini will replace Google Assistant on Android phones later in 2025. "[T]he classic Google Assistant will no longer be accessible on most mobile devices or available for new downloads on mobile app stores," says Google in a blog post. "Additionally, we'll be upgrading tablets, cars and devices that connect to your phone, such as headphones and watches, to Gemini. We're also bringing a new experience, powered by Gemini, to home devices like speakers, displays and TVs." 9to5Google reports: There will be an exception for phones running Android 9 or earlier that don't have at least 2 GB of RAM, with the existing Assistant experience remaining in place for those users. Google's replacement of Assistant follows new Android phones, including Pixel, Samsung, OnePlus, and Motorola models, launched in the past year that make Gemini the default experience. Meanwhile, the company says "millions of people have already made the switch."

Before Assistant's sunset, Google is "continuing to focus on improving the quality of the day-to-day Gemini experience, especially for those who have come to rely on Google Assistant." In winding down Google Assistant, the company notes how "natural language processing and voice recognition technology unlocked a more natural way to get help from Google" in 2016.
Further reading: Google's Gemini AI Can Now See Your Search History
Privacy

Everything You Say To Your Echo Will Be Sent To Amazon Starting On March 28 (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon's email said: "As we continue to expand Alexa's capabilities with generative AI features that rely on the processing power of Amazon's secure cloud, we have decided to no longer support this feature." One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa or want to use Alexa+ but not its ability to recognize different voices.

[...] Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Previously, Amazon has said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.
Amazon's email continues: "Alexa voice requests are always encrypted in transit to Amazon's secure cloud, which was designed with layers of security protections to keep customer information safe. Customers can continue to choose from a robust set of controls by visiting the Alexa Privacy dashboard online or navigating to More - Alexa Privacy in the Alexa app."

Further reading: Google's Gemini AI Can Now See Your Search History
AI

'No One Knows What the Hell an AI Agent Is' (techcrunch.com) 40

Major technology companies are heavily promoting AI agents as transformative tools for work, but industry insiders say no one can agree on what these systems actually are, according to TechCrunch. OpenAI CEO Sam Altman said agents will "join the workforce" this year, while Microsoft CEO Satya Nadella predicted they will replace certain knowledge work. Salesforce CEO Marc Benioff declared his company's goal to become "the number one provider of digital labor in the world."

The definition problem has worsened recently. OpenAI published a blog post defining agents as "automated systems that can independently accomplish tasks," but its developer documentation described them as "LLMs equipped with instructions and tools." Microsoft distinguishes between agents and AI assistants, while Salesforce lists six different categories of agents. "I think that our industry overuses the term 'agent' to the point where it is almost nonsensical," Ryan Salva, senior director of product at Google, told TechCrunch. Andrew Ng, founder of DeepLearning.ai, blamed marketing: "The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning, but about a year ago, marketers and a few big companies got a hold of them." Analysts say this ambiguity threatens to create misaligned expectations as companies build product lineups around agents.
Encryption

RCS Messaging Adds End-to-End Encryption Between Android and iOS (engadget.com) 13

The GSM Association has released new specifications for RCS messaging incorporating end-to-end encryption (E2EE) based on the Messaging Layer Security protocol, six months after iOS 18 introduced RCS compatibility.

The specifications ensure messages remain secure between Android and iOS devices, making RCS "the first large-scale messaging service to support interoperable E2EE between client implementations from different providers," said GSMA Technical Director Tom Van Pelt.

The system combines E2EE with SIM-based authentication to strengthen protection against scams and fraud. Apple confirmed it "helped lead a cross industry effort" on the standard and will implement support in future software updates without specifying a timeline. Google's RCS implementation has featured default E2EE since early 2024.
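
The MLS-based design in the new RCS specification is far more involved than anything that fits in a few lines, but the basic end-to-end idea is easy to illustrate. The sketch below is not MLS and not the GSMA, Apple, or Google implementation; it is a minimal, hypothetical example of one-to-one E2EE using an X25519 key agreement plus an AEAD cipher, via Python's cryptography package.

```python
# Minimal, hypothetical sketch of one-to-one end-to-end encryption (NOT the
# MLS protocol the RCS spec actually uses): derive a shared key with X25519,
# then protect a message with an AEAD cipher. Requires the `cryptography`
# package (pip install cryptography).
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(my_private_key, peer_public_key):
    """Both devices compute the same 32-byte key from the X25519 exchange."""
    shared_secret = my_private_key.exchange(peer_public_key)
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"e2ee-sketch"
    ).derive(shared_secret)


# Each device generates its own key pair; only public keys are exchanged.
android_priv = X25519PrivateKey.generate()
ios_priv = X25519PrivateKey.generate()

android_key = derive_key(android_priv, ios_priv.public_key())
ios_key = derive_key(ios_priv, android_priv.public_key())
assert android_key == ios_key  # both ends now hold the same secret key

# The sender encrypts; only the holder of the shared key can decrypt.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(android_key).encrypt(nonce, b"hello from Android", None)
print(ChaCha20Poly1305(ios_key).decrypt(nonce, ciphertext, None))
```
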
Google

As Chromecast Outage Drags On, Fix Could Be Days To Weeks Away (theregister.com) 19

On March 9, older Chromecast and Chromecast Audio devices stopped working due to an expired device authentication certificate authority that made them untrusted by Google's apps. While unofficial apps like VLC continue to function, Google's fix will require either updating client apps to bypass the issue or replacing the expired certificates, a process that could take weeks; however, Google has since announced it is beginning a gradual rollout of a fix. The Register reports: Tom Hebb, a former Meta software engineer and Chromecast hacker, has published a detailed analysis of the issue and suggests a fix could take more than a month to prepare. He's also provided workarounds here for folks to try in the meantime. We spoke to Hebb, and he says the problem is this expired device authentication certificate authority. [...] The fix is not simple. It's either going to involve a bit of a hack with updated client apps to accept or workaround the situation, or somehow someone will need to replace all the key pairs shipped with the devices with ones that use a new valid certificate authority. And getting the new keys onto devices will be a pain as, for instance, some have been factory reset and can't be initialized by a Google application because the bundled cert is untrusted, meaning the client software needs to be updated anyway.
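
For readers curious what an expiry check like this looks like in practice, here is a minimal sketch using Python's cryptography package. It assumes you have the device CA certificate exported to a PEM file (the filename is hypothetical) and is not tied to any Chromecast tooling.

```python
# Minimal sketch: inspect a certificate's validity window with Python's
# `cryptography` package. The file path is hypothetical (assumes the device
# CA certificate has been exported as PEM).
import datetime

from cryptography import x509

with open("device_ca.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Not valid before:", cert.not_valid_before)
print("Not valid after: ", cert.not_valid_after)
print("Expired?", cert.not_valid_after < datetime.datetime.utcnow())
```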

Given that the product family has been discontinued, teams will need to be pulled together to address this blunder. And it does appear to be a blunder rather than planned or remotely triggered obsolescence; earlier Chromecasts have a longer certificate validity, of 20 years rather than 10. "Google will either need to put in over a month of effort to build and test a new Chromecast update to renew the expired certificates, or they will have to coordinate internally between what's left of the Chromecast team, the Android team, the Chrome team, the Google Home team, and iOS app developers to push out new releases, which almost always take several days to build and test," Hebb explained. "I expect them to do the latter. A server-side fix is not possible."

So either a week or so to rush out app-side updates to tackle the problem, or much longer to fix the problem with replaced certs. Polish security researcher Maciej Mensfeld also believes the outage is most likely due to an expired device authentication certificate authority. He's proposed a workaround that has helped some users, at least. Hebb, meanwhile, warns more certificate authority expiry pain is looming, with the Chromecast Ultra and Google Home running out in March next year, and the Google Home Mini in January 2027.

Google

Google's Gemini AI Can Now See Your Search History (arstechnica.com) 30

Google is continuing its quest to get more people to use Gemini, and it's doing that by giving away even more AI computing. From a report: Today, Google is releasing a raft of improvements for the Gemini 2.0 models, and as part of that upgrade, some of the AI's most advanced features are now available to free users. You'll be able to use the improved Deep Research to get in-depth information on a topic, and Google's newest reasoning model can peruse your search history to improve its understanding of you as a person.

[...] With the aim of making Gemini more personal to you, Google is also plugging Flash Thinking Experimental into a new source of data: your search history. Google stresses that you have to opt in to this feature, and it can be disabled at any time. Gemini will even display a banner to remind you it's connected to your search history so you don't forget.

Firefox

Mozilla Warns DOJ's Google Remedies Risk 'Death of Open Web' (mozilla.org) 49

Mozilla has warned that the U.S. Department of Justice's proposed remedies in its antitrust case against Google would harm independent browsers and reduce competition in the browser market. The DOJ and several state attorneys general last week filed revised proposed remedies in the U.S. v. Google search case that would prohibit all search payments to browser developers, a move Mozilla says would disproportionately impact smaller players.

"These proposed remedies prohibiting search payments to small and independent browsers miss the bigger picture -- and the people who will suffer most are everyday internet users," said Mark Surman, President of Mozilla. Unlike Apple and Microsoft, which generate revenue from hardware and operating systems, Mozilla relies primarily on search revenue to fund browser development. Mozilla argues that cutting these payments would not solve search dominance but would instead strengthen the position of tech giants.

Mozilla also warned that the proposal threatens its ability to maintain Gecko, one of only three major browser engines alongside Google's Chromium and Apple's WebKit. "If we lose our ability to maintain Gecko, it's game over for an open, independent web," Surman said, noting that even Microsoft abandoned its browser engine in 2019. "If Mozilla is unable to sustain our browser engine, it would severely impact browser engine competition and mean the death of the open web as we know it -- essentially, creating a web where dominant players like Google and Apple have even more control, not less."

Firefox serves 27 million monthly active users in the U.S. and nearly 205 million globally.
Google

UK Investigation Says Apple, Google Hampering Mobile Browser Competition 14

Britain's competition watchdog has concluded that Apple and Google are stifling competition in the UK mobile browser market, following an investigation by the Competition and Markets Authority (CMA). The inquiry found Apple's iOS policies particularly restrictive, requiring all browsers to use its WebKit engine while giving Safari preferential access to features.

Apple's practice of pre-installing Safari as the default browser also reduces awareness of alternatives, despite allowing users to change defaults. Google faces similar criticism for pre-installing Chrome on most Android devices, though investigators noted both companies have recently taken steps to facilitate browser switching. The probe identified Apple's revenue-sharing arrangement with Google -- which pays a significant share of search revenue to be the default iPhone search engine -- as "significantly reducing their financial incentives to compete."
AI

Google Claims Gemma 3 Reaches 98% of DeepSeek's Accuracy Using Only One GPU 58

Google says its new open-source AI model, Gemma 3, achieves nearly the same performance as DeepSeek AI's R1 while using just one Nvidia H100 GPU, compared to an estimated 32 for R1. ZDNet reports: Using "Elo" scores, a common measurement system used to rank chess players and athletes, Google claims Gemma 3 comes within 98% of the score of DeepSeek's R1, 1338 versus 1363 for R1. That means R1 is superior to Gemma 3. However, based on Google's estimate, the search giant claims that it would take 32 of Nvidia's mainstream "H100" GPU chips to achieve R1's score, whereas Gemma 3 uses only one H100 GPU.
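
The "within 98%" figure follows directly from the two Elo scores quoted above; a trivial check:

```python
# Reproducing the "within 98%" claim from the Elo figures in the summary.
gemma3_elo = 1338
r1_elo = 1363
print(f"Gemma 3 reaches {gemma3_elo / r1_elo:.1%} of R1's Elo score")  # ~98.2%
# Google's estimate of the hardware gap: 1 H100 for Gemma 3 vs. 32 for R1.
```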

Google's balance of compute and Elo score is a "sweet spot," the company claims. In a blog post, Google bills the new program as "the most capable model you can run on a single GPU or TPU," referring to the company's custom AI chip, the "tensor processing unit." "Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3, and o3-mini in preliminary human preference evaluations on LMArena's leaderboard," the blog post relates, referring to the Elo scores. "This helps you to create engaging user experiences that can fit on a single GPU or TPU host."

Google's model also tops Meta's Llama 3's Elo score, which it estimates would require 16 GPUs. (Note that the numbers of H100 chips used by the competition are Google's estimate; DeepSeek AI has only disclosed an example of using 1,814 of Nvidia's less-powerful H800 GPUs to serve answers with R1.) More detailed information is provided in a developer blog post on HuggingFace, where the Gemma 3 repository is offered.
Robotics

Google's New Robot AI Can Fold Delicate Origami, Close Zipper Bags (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants. [...] Google's new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems. For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to perform the task.

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, then doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn't handle. While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Zip-loc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.
DeepMind claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models."

Google is advancing this effort through a partnership with Apptronik to develop next-generation humanoid robots powered by Gemini 2.0. Availability timelines or specific commercial applications for the new AI models were not made available.
Facebook

Amazon, Google and Meta Support Tripling Nuclear Power By 2050 (cnbc.com) 68

Amazon, Alphabet's Google and Meta Platforms on Wednesday said they support efforts to at least triple nuclear energy worldwide by 2050. From a report: The tech companies signed a pledge first adopted in December 2023 by more than 20 countries, including the U.S., at the U.N. Climate Change Conference. Financial institutions including Bank of America, Goldman Sachs and Morgan Stanley backed the pledge last year.

The pledge is nonbinding, but highlights the growing support for expanding nuclear power among leading industries, finance and governments. Amazon, Google and Meta are increasingly important drivers of energy demand in the U.S. as they build out AI centers. The tech sector is turning to nuclear power after concluding that renewables alone won't provide enough reliable power for their energy needs.
Microsoft and Apple did not sign the statement.
IT

Why Extracting Data from PDFs Remains a Nightmare for Data Experts (arstechnica.com) 65

Businesses, governments, and researchers continue to struggle with extracting usable data from PDF files, despite AI advances. These digital documents contain valuable information for everything from scientific research to government records, but their rigid formats make extraction difficult.

"PDFs are a creature of a time when print layout was a big influence on publishing software," Derek Willis, a lecturer in Data and Computational Journalism at the University of Maryland, told ArsTechnica. This print-oriented design means many PDFs are essentially "pictures of information" requiring optical character recognition (OCR) technology.

Traditional OCR systems have existed since the 1970s but struggle with complex layouts and poor-quality scans. New AI language models from companies like Google and Mistral now attempt to process documents more holistically, with varying success. "Right now, the clear leader is Google's Gemini 2.0 Flash Pro Experimental," Willis notes, while Mistral's recent OCR solution "performed poorly" in tests.
Businesses

Former Google CEO Eric Schmidt Is the New Leader of Relativity Space (arstechnica.com) 16

Former Google CEO Eric Schmidt has taken control of rocket startup Relativity Space, replacing co-founder Tim Ellis as CEO and significantly funding the company's development of its medium-lift rocket, Terran R. The New York Times first reported (paywalled) the news. Ars Technica reports: Schmidt's involvement with Relativity has been quietly discussed among space industry insiders for a few months. Multiple sources told Ars that he has largely been bankrolling the company since the end of October, when the company's previous fundraising dried up. It is not immediately clear why Schmidt is taking a hands-on approach at Relativity. However, it is one of the few US-based companies with a credible path toward developing a medium-lift rocket that could potentially challenge the dominance of SpaceX and its Falcon 9 rocket. If the Terran R booster becomes commercially successful, it could play a big role in launching megaconstellations.

Schmidt's ascension also means that Tim Ellis, the company's co-founder, chief executive, and almost sole public persona for nearly a decade, is now out of a leadership position. "Today marks a powerful new chapter as Eric Schmidt becomes Relativity's CEO, while also providing substantial financial backing," Ellis wrote on the social media site X. "I know there's no one more tenacious or passionate to propel this dream forward. We have been working together to ensure a smooth transition, and I'll proudly continue to support the team as Co-founder and Board member."
Relativity also on Monday released a video outlining the development of the Terran R rocket and the work required to reach the launch pad.

According to the video, the first "flight" version of the Terran R rocket will be built this year, with tentative plans to launch from a pad at Cape Canaveral, Florida, in 2026. "The company aims to soft land the first stage of the first launch in the Atlantic Ocean," adds Ars. "However, the 'Block 1' version of the rocket will not fly again."

"Full reuse of the first stage will be delayed to future upgrades. Eventually, the Relativity officials said, they intend to reach a flight rate of 50 to 100 rockets a year with the Terran R when the vehicle is fully developed."
AI

Will an 'AI Makeover' Help McDonald's? (msn.com) 100

"McDonald's is giving its 43,000 restaurants a technology makeover," reports the Wall Street Journal, including AI-enabled drive-throughs and AI-powered tools for managers — as well as internet-connected kitchen equipment.

"Technology solutions will alleviate the stress...." says McDonald's CIO Brian Rice. McDonald's tapped Google Cloud in late 2023 to bring more computing power to each of its restaurants — giving them the ability to process and analyze data on-site... a faster, cheaper option than sending data to the cloud, especially in more far-flung locations with less reliable cloud connections, said Rice... Edge computing will enable applications like predicting when kitchen equipment — such as fryers and its notorious McFlurry ice cream machines — is likely to break down, Rice said. The burger chain said its suppliers have begun installing sensors on kitchen equipment that will feed data to the edge computing system and give franchisees a "real-time" view into how their restaurants are operating. AI can then analyze that data for early signs of a maintenance problem.

McDonald's is also exploring the use of computer vision, the form of AI behind facial recognition, in store-mounted cameras to determine whether orders are accurate before they're handed to customers, he said. "If we can proactively address those issues before they occur, that's going to mean smoother operations in the future," Rice added...

Additionally, the ability to tap edge computing will power voice AI at the drive-through, a capability McDonald's is also working with Google's cloud-computing arm to explore, Rice said. The company has been experimenting with voice-activated drive-throughs and robotic deep fryers since 2019, and ended its partnership with International Business Machines to test automated order-taking at the drive-through in 2024.

Edge computing will also help McDonald's restaurant managers oversee their in-store operations. The burger giant is looking to create a "generative AI virtual manager," Rice said, which handles administrative tasks such as shift scheduling on managers' behalf. Fast-food giant Yum Brands' Pizza Hut and Taco Bell have explored similar capabilities.

Chrome

America's Justice Department Still Wants Google to Sell Chrome (msn.com) 64

Last week Google urged the U.S. government not to break up the company — but apparently, it didn't work.
In a new filing Friday, America's Justice Department "reiterated its November proposal that Google be forced to sell its Chrome web browser," reports the Washington Post, "to address a federal judge finding the company guilty of being an illegal monopoly in August." The government also kept a proposal that Google be banned from paying other companies to give its search engine preferential placement on their apps and phones. At the same time, the government dropped its demand that Google sell its stakes in AI start-ups after one of the start-ups, Anthropic AI, argued that it needed Google's money to compete in the fast-growing industry.

The government's final proposal "reaffirms that Google must divest the Chrome browser — an important search access point — to provide an opportunity for a new rival to operate a significant gateway to search the internet, free of Google's monopoly control," Justice Department lawyers wrote in the filing... Judge Amit Mehta, of the U.S. District Court for the District of Columbia, who had ruled that Google held an illegal monopoly, will decide on the final remedies in April.

The article quotes a Google spokesperson's response: that the Justice Department's "sweeping" proposals "continue to go miles beyond the court's decision, and would harm America's consumers, economy and national security."
Android

Google Introduces Debian Linux Terminal App For Android (zdnet.com) 43

Google has introduced a Debian Linux terminal app for Android in its ongoing effort to transform Android into a versatile desktop OS. It's initially available on Pixel devices running Android 15 but will be expanded to "all sufficiently robust Android phones" when Android 16 arrives later this year, writes ZDNet's Steven Vaughan-Nichols. An anonymous reader shares an excerpt from the report: Today, Linux is only available on the latest Pixel devices running Android 15. When Android 16 arrives later this year, it's expected that all sufficiently robust Android phones will be able to run Linux. Besides a Linux terminal, beta tests have already shown that you should be able to run desktop Linux programs from your phone -- games like Doom, for example. The Linux Terminal runs on top of a Debian Linux virtual machine. This enables you to access a shell interface directly on your Android device. And that just scratches the surface of Google's Linux Terminal. It's actually a do-it-all app that enables you to download, configure, and run Debian. Underneath Terminal runs the Android Virtualization Framework (AVF). These are the APIs that enable Android devices to run other operating systems.

To try the Linux Terminal app, you must activate Developer Mode by navigating to Settings - About Phone and tapping the build number seven times. I guess Google wants to make sure you want to do this. Once Developer Mode is enabled, the app can be activated via Settings - System - Developer options - Linux development environment. The initial setup may take a while because it needs to download Debian. Typically this is a 500MB download. Once in place, it allows you to adjust disk space allocation, set port controls for network communication, and recover the virtual machine's storage partition. However, it currently lacks support for graphical user interface (GUI) applications. For that, we'll need to wait for Android 16.
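
Once the Debian VM is up, ordinary Linux tooling works inside it. As a small, hypothetical example, the stdlib-only Python script below (assuming python3 has been installed in the VM, e.g. via apt) prints the distro, kernel, CPU architecture, and how much of the allocated virtual disk is in use.

```python
# Small stdlib-only script one might run inside the Debian VM to confirm the
# environment the Terminal app set up: distro, kernel, architecture, and
# usage of the allocated virtual disk.
import platform
import shutil

with open("/etc/os-release") as f:
    os_release = dict(line.rstrip().split("=", 1) for line in f if "=" in line)

print("Distro:", os_release.get("PRETTY_NAME", "unknown").strip('"'))
print("Kernel:", platform.release(), platform.machine())

total, used, _free = shutil.disk_usage("/")
print(f"Disk:   {used / 2**30:.1f} GiB used of {total / 2**30:.1f} GiB")
```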

According to Android specialist Mishaal Rahman, 'Google wants to turn Android into a proper desktop operating system, and in order to do that, it has to make it work better with traditional PC input methods and display options. Therefore, Google is now testing new external display management tools in Android 16 that bring Android closer to other desktop OSes.'

AI

DuckDuckGo Is Amping Up Its AI Search Tool 21

An anonymous reader quotes a report from The Verge: DuckDuckGo has big plans for embedding AI into its search engine. The privacy-focused company just announced that its AI-generated answers, which appear for certain queries on its search engine, have exited beta and now source information from across the web -- not just Wikipedia. It will soon integrate web search within its AI chatbot, which has also exited beta. DuckDuckGo first launched AI-assisted answers -- originally called DuckAssist -- in 2023. The feature is billed as a less obnoxious version of tools like Google's AI Overviews, designed to offer more concise responses and let you adjust how often you see them, including turning the responses off entirely. If you have DuckDuckGo's AI-generated answers set to "often," you'll still only see them around 20 percent of the time, though the company plans on increasing the frequency eventually.

Some of DuckDuckGo's AI-assisted answers bring up a box for follow-up questions, redirecting you to a conversation with its Duck.ai chatbot. As is the case with its AI-assisted answers, you don't need an account to use Duck.ai, and it comes with the same emphasis on privacy. It lets you toggle between GPT-4o mini, o3-mini, Llama 3.3, Mistral Small 3, and Claude 3 Haiku, with the advantage being that you can interact with each model anonymously by hiding your IP address. DuckDuckGo also has agreements with the AI company behind each model to ensure your data isn't used for training.

Duck.ai also rolled out a feature called Recent Chats, which stores your previous conversations locally on your device rather than on DuckDuckGo's servers. Though Duck.ai is also leaving beta, that doesn't mean the flow of new features will stop. In the next few weeks, Duck.ai will add support for web search, which should enhance its ability to respond to questions. The company is also working on adding voice interaction on iPhone and Android, along with the ability to upload images and ask questions about them. ... [W]hile Duck.ai will always remain free, the company is considering including access to more advanced AI models with its $9.99 per month subscription.
