Google

Google Maps Falsely Told Drivers in Germany That Roads Across the Country Were Closed (engadget.com) 36

"Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday," writes Engadget. The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It's not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem.

Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps a record of all the places you've visited — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.

The Guardian describes German drivers "confronted with maps sprinkled with a mass of red dots indicating stop signs," adding "The phenomenon also affected parts of Belgium and the Netherlands." Those relying on Google Maps were left with the impression that large parts of Germany had ground to a halt... The closure reports led to the clogging of alternative routes on smaller thoroughfares and lengthy delays as people scrambled to find detours. Police and road traffic control authorities had to answer a flood of queries as people contacted them for help.

Drivers using or switching to alternative apps, such as Apple Maps or Waze, or turning to traffic news on their radios, were given a completely contrasting picture, reflecting the reality that traffic was mostly flowing freely on the apparently affected routes.

Biotech

Uploading the Human Mind Could One Day Become a Reality, Predicts Neuroscientist (sciencealert.com) 107

A 15-year-old asked the question — receiving an answer from an associate professor of psychology at Georgia Institute of Technology. They write (on The Conversation) that "As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality.

"But as of today, we're nowhere close..." Replicating all that complexity will be extraordinarily difficult. One requirement: The uploaded brain needs the same inputs it always had. In other words, the external world must be available to it. Even cloistered inside a computer, you would still need a simulation of your senses, a reproduction of the ability to see, hear, smell, touch, feel — as well as move, blink, detect your heart rate, set your circadian rhythm and do thousands of other things... For now, researchers don't have the computing power, much less the scientific knowledge, to perform such simulations.

The first task for a successful mind upload: Scanning, then mapping the complete 3D structure of the human brain. This requires the equivalent of an extraordinarily sophisticated MRI machine that could detail the brain in an advanced way. At the moment, scientists are only at the very early stages of brain mapping — which includes the entire brain of a fly and tiny portions of a mouse brain. In a few decades, a complete map of the human brain may be possible. Yet even capturing the identities of all 86 billion neurons, all smaller than a pinhead, plus their trillions of connections, still isn't enough. Uploading this information by itself into a computer won't accomplish much. That's because each neuron constantly adjusts its functioning, and that has to be modeled, too. It's hard to know how many levels down researchers must go to make the simulated brain work. Is it enough to stop at the molecular level? Right now, no one knows.

Knowing how the brain computes things might provide a shortcut. That would let researchers simulate only the essential parts of the brain, and not all biological idiosyncrasies. Here's another way: Replace the 86 billion real neurons with artificial ones, one at a time. That approach would make mind uploading much easier. Right now, though, scientists can't replace even a single real neuron with an artificial one. But keep in mind the pace of technology is accelerating exponentially. It's reasonable to expect spectacular improvements in computing power and artificial intelligence in the coming decades.

One other thing is certain: Mind uploading will have no problem finding funding. Many billionaires appear glad to part with lots of their money for a shot at living forever. Although the challenges are enormous and the path forward uncertain, I believe that one day, mind uploading will be a reality.

"The most optimistic forecasts pinpoint the year 2045, only 20 years from now. Others say the end of this century.

"But in my mind, both of these predictions are probably too optimistic. I would be shocked if mind uploading works in the next 100 years.

"But it might happen in 200..."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out...
AI

Does Anthropic's Success Prove Businesses are Ready to Adopt AI? (reuters.com) 19

AI company Anthropic (founded in 2021 by a team that left OpenAI) is now making about $3 billion a year in revenue, reports Reuters (citing "two sources familiar with the matter.") The sources said December's projections had been for just $1 billion a year, but it climbed to $2 billion by the end of March (and now to $3 billion) — a spectacular growth rate that one VC says "has never happened." A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet and Amazon, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.
Anthropic sells AI models as a service to other companies, according to the article, and Reuters calls Anthropic's success "an early validation of generative AI use in the business world" — and a long-awaited indicator that it's growing. (Their rival OpenAI earns more than half its revenue from ChatGPT subscriptions and "is shaping up to be a consumer-oriented company," according to their article, with "a number of enterprises" limiting their rollout of ChatGPT to "experimentation.")

Then again, in February OpenAI's chief operating officer said they had 2 million paying enterprise users, roughly doubling from September, according to CNBC. The latest figures from Reuters...
  • Anthropic's valuation: $61.4 billion.
  • OpenAI's valuation: $300 billion.

Encryption

Help Wanted To Build an Open Source 'Advanced Data Protection' For Everyone (github.com) 46

Apple's end-to-end iCloud encryption product ("Advanced Data Protection") was famously removed in the U.K. after a government order demanded backdoors for accessing user data.

So now a Google software engineer wants to build an open source version of Advanced Data Protection for everyone. "We need to take action now to protect users..." they write (as long-time Slashdot reader WaywardGeek). "The whole world would be able to use it for free, protecting backups, passwords, message history, and more, if we can get existing applications to talk to the new data protection service." "I helped build Google's Advanced Data Protection (Google Cloud Key Vault Service) in 2018, and Google is way ahead of Apple in this area. I know exactly how to build it and can have it done in spare time in a few weeks, at least server-side... This would be a distributed trust based system, so I need folks willing to run the protection service. I'll run mine on a Raspberry Pi...

The scheme splits a secret among N protection servers, and when it is time to recover the secret, which is basically an encryption key, they must be able to get key shares from T of the original N servers. This uses a distributed oblivious pseudo random function algorithm, which is very simple.

In plain English, it provides nation-state resistance to secret back doors, and eliminates secret mass surveillance, at least when it comes to data backed up to the cloud... The UK and similarly confused governments will need to negotiate with operators in multiple countries to get access to any given user's keys. There are cases where rational folks would agree to hand over that data, and I hope we can end the encryption wars and develop sane policies that protect user data while offering a compromise where lives can be saved.
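For readers curious how a T-of-N recovery scheme works in principle, here's a minimal sketch using classic Shamir secret sharing over a prime field. (This illustrates only the threshold idea; the actual proposal uses a distributed oblivious pseudorandom function, which this does not implement, and the 9-of-15 parameters are just the example from the submission.)

```python
import random

# A large prime field; real systems would use a standard 256-bit prime or curve.
PRIME = 2**127 - 1

def split_secret(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to recover it."""
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789  # stands in for the backup encryption key
shares = split_secret(key, t=9, n=15)
assert recover_secret(random.sample(shares, 9)) == key
```

Any 9 of the 15 shares reconstruct the key, while 8 or fewer reveal essentially nothing about it, which is what makes spreading the N protection servers across jurisdictions meaningful.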

"I've got the algorithms and server-side covered," according to their original submission. "However, I need help." Specifically...
  • Running protection servers. "This is a T-of-N scheme, where users will need say 9 of 15 nodes to be available to recover their backups."
  • Android client app. "And preferably tight integration with the platform as an alternate backup service."
  • An iOS client app. (With the same tight integration with the platform as an alternate backup service.)
  • Authentication. "Users should register and login before they can use any of their limited guesses to their phone-unlock secret."

"Are you up for this challenge? Are you ready to plunge into this with me?"


In the comments he says anyone interested can ask to join the "OpenADP" project on GitHub — which is promising "Open source Advanced Data Protection for everyone."


Power

AI Could Consume More Power Than Bitcoin By the End of 2025 (digit.fyi) 76

Artificial intelligence could soon outpace Bitcoin mining in energy consumption, according to Alex de Vries-Gao, a PhD candidate at Vrije Universiteit Amsterdam's Institute for Environmental Studies. His research estimates that by the end of 2025, AI could account for nearly half of all electricity used by data centers worldwide -- raising significant concerns about its impact on global climate goals.

"While companies like Google and Microsoft disclose total emissions, few provide transparency on how much of that is driven specifically by AI," notes DIGIT. To fill this gap, de Vries-Gao employed a triangulation method combining chip production data, corporate disclosures, and industry analyst estimates to map AI's growing energy footprint.

His analysis suggests that specialized AI hardware could consume between 46 and 82 terawatt-hours (TWh) in 2025 -- comparable to the annual energy usage of countries like Switzerland. Drawing on supply chain data, the study estimates that millions of AI accelerators from NVIDIA and AMD were produced between 2023 and 2024, with a potential combined power demand exceeding 12 gigawatts (GW). A detailed explanation of his methodology is available in his commentary published in Joule.
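For context, the 46-82 TWh range follows from the roughly 12 GW capacity estimate via simple capacity arithmetic: 12 GW running flat-out for a year is about 105 TWh, so the range corresponds to utilization somewhere around 44-78%. A quick sketch (the utilization figures are back-calculated for illustration, not taken from the study):

```python
# Convert installed AI-accelerator capacity (GW) to annual energy (TWh)
# under different utilization assumptions. The 44%/78% figures below are
# back-calculated to match the study's 46-82 TWh range, purely to
# illustrate the relationship.
HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, utilization):
    return capacity_gw * HOURS_PER_YEAR * utilization / 1000  # GWh -> TWh

for util in (0.44, 0.78):
    print(f"{util:.0%} utilization -> {annual_twh(12, util):.0f} TWh/yr")
```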
Piracy

Football and Other Premium TV Being Pirated At 'Industrial Scale' (bbc.com) 132

An anonymous reader quotes a report from the BBC: A lack of action by big tech firms is enabling the "industrial scale theft" of premium video services, especially live sport, a new report says. The research by Enders Analysis accuses Amazon, Google, Meta and Microsoft of "ambivalence and inertia" over a problem it says costs broadcasters revenue and puts users at an increased risk of cyber-crime. Gareth Sutcliffe and Ollie Meir, who authored the research, described the Amazon Fire Stick -- which they argue is the device many people use to access illegal streams -- as "a piracy enabler." [...] The device plugs into TVs and gives the viewer thousands of options to watch programs from legitimate services including the BBC iPlayer and Netflix. They are also being used to access illegal streams, particularly of live sport.

In November last year, a Liverpool man who sold Fire Stick devices he reconfigured to allow people to illegally stream Premier League football matches was jailed. After uploading the unauthorized services on the Amazon product, he advertised them on Facebook. Another man from Liverpool was given a two-year suspended sentence last year after modifying Fire Sticks and selling them on Facebook and WhatsApp. According to data for the first quarter of this year, provided to Enders by Sky, 59% of people in the UK who said they had watched pirated material in the last year while using a physical device said they had used an Amazon Fire product. The Enders report says the Fire Stick enables "billions of dollars in piracy" overall. [...]

The researchers also pointed to the role played by the "continued depreciation" of Digital Rights Management (DRM) systems, particularly those from Google and Microsoft. This technology enables high quality streaming of premium content to devices. Two of the big players are Microsoft's PlayReady and Google's Widevine. The authors argue the architecture of the DRM is largely unchanged, and due to a lack of maintenance by the big tech companies, PlayReady and Widevine "are now compromised across various security levels." Mr Sutcliffe and Mr Meir said this has had "a seismic impact across the industry, and ultimately given piracy the upper hand by enabling theft of the highest quality content." They added: "Over twenty years since launch, the DRM solutions provided by Google and Microsoft are in steep decline. A complete overhaul of the technology architecture, licensing, and support model is needed. Lack of engagement with content owners indicates this a low priority."

AI

Gmail's AI Summaries Now Appear Automatically (theverge.com) 44

Google has begun automatically generating AI-powered email summaries for Gmail Workspace users, eliminating the need to manually trigger the feature that has been available since last year. The company's Gemini AI will now independently determine when longer email threads or messages with multiple replies would benefit from summarization, displaying these summaries above the email content itself. The automatic summaries currently appear only on mobile devices for English-language emails and may take up to two weeks to roll out to individual accounts, with Google providing no timeline for desktop expansion or availability to non-Workspace Gmail users.
AI

Gemini Can Now Watch Google Drive Videos For You 36

Google's Gemini AI can now analyze and summarize video files stored in Google Drive, letting users ask questions about content like meeting takeaways or product updates without watching the footage. The Verge reports: The Gemini in Drive feature provides a familiar chatbot interface that can provide quick summaries describing the footage or pull specific information. For example, users can ask Gemini to list action items mentioned in recorded meetings or highlight the biggest updates and new products in an announcement video, saving time spent on manually combing through and taking notes.

The feature requires captions to be enabled for videos, and can be accessed using either Google Drive's overlay previewer or a new browser tab window. It's available in English for Google Workspace and Google One AI Premium users, and anyone who has previously purchased Gemini Business or Enterprise add-ons, though it may take a few weeks to fully roll out.
You can learn more about the update in Google's blog post.
Security

Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials (wired.com) 15

A security researcher has discovered an exposed database containing 184 million login credentials for major services including Apple, Facebook, and Google accounts, along with credentials linked to government agencies across 29 countries. Jeremiah Fowler found the 47-gigabyte trove in early May, but the database contained no identifying information about its owner or origins.

The records included plaintext passwords and usernames for accounts spanning Netflix, PayPal, Discord, and other major platforms. A sample analysis revealed 220 email addresses with government domains from countries including the United States, China, and Israel. Fowler told Wired he suspects the data was compiled by cybercriminals using infostealer malware. World Host Group, which hosted the database, shut down access after Fowler's report and described it as content uploaded by a "fraudulent user." The company said it would cooperate with law enforcement authorities.
Google

Google Photos Turns 10 With Major Editor Redesign, QR Code Sharing (9to5google.com) 17

An anonymous reader quotes a report from 9to5Google: Google Photos was announced at I/O 2015 and the company is now celebrating the app's 10th birthday with a redesign of the photo editor. Google is redesigning the Photos editor so that it "provides helpful suggestions and puts all our powerful editing tools in one place." It starts with a new fullscreen viewer that places the date, time, and location at the top of your screen. Meanwhile, it's now Share, Edit, Add to (instead of Lens), and Trash at the bottom.

Once editing, Google Photos has moved controls for aspect ratio, flip, and rotate to be above the image. In the top-left corner, we have Auto Frame, which debuted in Magic Editor on the Pixel 9, to fill in backgrounds and is now coming to more devices. Underneath, we get options for Enhance, Dynamic, and "AI Enhance" in the Auto tab. That's followed by Lighting, Color, and Composition, as well as a search shortcut: "You can use AI-powered suggestions that combine multiple effects for quick edits in a variety of tailored options, or you can tap specific parts of an image to get suggested tools for editing that area."

The editor allows you to circle or "tap specific parts of an image to get suggested tools for editing that area." This includes the subject, background, or some other aspect. You then see the Blur background, Add portrait light, Sharpen, Move and Reimagine appear in the example below. We also see the redesigned sliders throughout this updated interface. This Google Photos editor redesign "will begin rolling out globally to Android devices next month, with iOS following later this year." We already know the app is set for a Material 3 Expressive redesign. Meanwhile, Google Photos is starting to roll out the ability to share albums with a QR code. This method makes for easy viewing and adding with people nearby. Google even suggests printing it out when in (physical) group settings.
Google shared a few tips, tricks and tools for the new editor in a blog post.
AI

'Some Signs of AI Model Collapse Begin To Reveal Themselves' 109

Steven J. Vaughan-Nichols writes in an op-ed for The Register: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting. This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality." [...]
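The feedback loop is easy to reproduce in miniature: repeatedly "train" a model on its own outputs while favoring its high-probability generations, and diversity drains away. A toy simulation (our construction for illustration; the truncation step stands in for a model preferring its own likely outputs, and is not the Nature paper's setup):

```python
import random
import statistics

random.seed(0)
mean, std = 0.0, 1.0  # the "real" data distribution: mean 0, std 1

for generation in range(10):
    # The next "model" is trained purely on the current model's samples.
    samples = [random.gauss(mean, std) for _ in range(500)]
    # Models favor their own high-probability outputs: keep only the
    # central half of the samples before refitting.
    samples.sort(key=lambda s: abs(s - mean))
    kept = samples[: len(samples) // 2]
    mean = statistics.fmean(kept)
    std = statistics.stdev(kept)

# The fitted spread collapses to a tiny fraction of the original 1.0:
# each generation's "projection of reality" is narrower than the last.
print(f"std after 10 generations: {std:.2e}")
```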

We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it. How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long.
Privacy

Texas Adopts Online Child-Safety Bill Opposed by Apple's CEO (msn.com) 89

Texas Governor Greg Abbott signed an online child safety bill, bucking a lobbying push from big tech companies that included a personal phone call from Apple CEO Tim Cook. From a report: The measure requires app stores to verify users' ages and secure parental approval before minors can download most apps or make in-app purchases. The bill drew fire from app store operators such as Google and Apple, which has argued that the legislation threatens the privacy of all users.

The bill was a big enough priority for Apple that Cook called Abbott to emphasize the company's opposition to it, said a person familiar with their discussion, which was first reported by the Wall Street Journal.

AI

Google Tries Funding Short Films Showing 'Less Nightmarish' Visions of AI (yahoo.com) 74

"For decades, Hollywood directors including Stanley Kubrick, James Cameron and Alex Garland have cast AI as a villain that can turn into a killing machine," writes the Los Angeles Times. "Even Steven Spielberg's relatively hopeful A.I.: Artificial Intelligence had a pessimistic edge to its vision of the future."

But now "Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in The Terminator, 2001: A Space Odyssey and Ex Machina." So they're funding short films "that portray the technology in a less nightmarish light," produced by Range Media Partners (which represents many writers and actors). So far, two short films have been greenlit through the project: One, titled "Sweetwater," tells the story of a man who visits his childhood home and discovers a hologram of his dead celebrity mother. Michael Keaton will direct and appear in the film, which was written by his son, Sean Douglas. It is the first project they are working on together. The other, "Lucid," examines a couple who want to escape their suffocating reality and risk everything on a device that allows them to share the same dream....

Google has much riding on convincing consumers that AI can be a force for good, or at least not evil. The hot space is increasingly crowded with startups and established players such as OpenAI, Anthropic, Apple and Facebook parent company Meta. The Google-funded shorts, which are 15 to 20 minutes long, aren't commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. Google is not pushing their products in the movies, and the films are not made with AI, she added... The company said it wants to fund many more movies, but it does not have a target number. Some of the shorts could eventually become full-length features, Google said....

Negative public perceptions about AI could put tech companies at a disadvantage when such cases go before juries of laypeople. That's one reason why firms are motivated to make over AI's reputation. "There's an incredible amount of skepticism in the public world about what AI is and what AI will do in the future," said Sean Pak, an intellectual property lawyer at Quinn Emanuel, on a conference panel. "We, as an industry, have to do a better job of communicating the public benefits and explaining in simple, clear language what it is that we're doing and what it is that we're not doing."

Unix

FreeBSD: 'We're Still Here. (Let's Share Use Cases!)' (freebsdfoundation.org) 107

31 years ago FreeBSD was first released. But here in 2025, searches for the Unix-like FreeBSD OS keep increasing on Google, notes the official FreeBSD blog — and it's at least a two-year trend. Yet after talking to some businesses using (or interested in using) FreeBSD, the blog's authors found that because FreeBSD isn't talked about as much, "people think it's dying. This is a clear example of the availability heuristic. The availability heuristic is a fascinating mental shortcut. It's how product names become verbs and household names. To 'Google' [search], to 'Hoover' [vacuum], to 'Zoom' [video meeting]. They reached a certain tipping point that there was no need to do any more thinking. One just googles, or zooms.

These days, building internet services doesn't require much thought about the underlying systems. With containers and cloud platforms, development has moved far from the hardware. Operating systems aren't top of mind — so people default to what's familiar. And when they do think about the OS, it's usually Linux. But sitting there, quietly powering masses of the internet, without saying boo to a goose, is FreeBSD. And the companies using it? They're not talking about it. Why? Because they don't have to. The simple fact that dawned on me is FreeBSD's gift to us all, yet Achilles heel to itself, is its license.

Unlike the GPL, which requires you to share derivative works, the BSD license doesn't. You can take FreeBSD code, build on it, and never give anything back. This makes it a great foundation for products — but it also means there's little reason for companies to return their contributions... [W]e'd like to appeal to companies using FreeBSD. Talk to us about your use case... We, the FreeBSD Foundation, can be the glue between industry and software and hardware vendors alike.

In the meantime, stay tuned to this blog and the YouTube channel. We have some fantastic content coming up, featuring solutions built on top of FreeBSD and showcasing modern laptops for daily use.

Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com) 101

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on its own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition...")

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com) 26

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he created a new programming language called Mojo — a superset of Python with added functionality for high-performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI.

And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device, for any research, hobby, or learning project, as well as on x86 or ARM CPUs and NVIDIA GPUs.
Encryption

How Many Qubits Will It Take to Break Secure Public Key Cryptography Algorithms? (googleblog.com) 53

On Wednesday, Google security researchers published a preprint estimating that 2048-bit RSA encryption "could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week," according to Google's security blog.

"This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019... " The reduction in physical qubit count comes from two sources: better algorithms and better error correction — whereby qubits used by the algorithm ("logical qubits") are redundantly encoded across many physical qubits, so that errors can be detected and corrected... [Google's researchers found a way to reduce the operations in a 2024 algorithm from 1000x more than previous work to just 2x. And "On the error correction side, the key change is tripling the storage density of idle logical qubits by adding a second layer of error correction."]

Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits, and the National Institute of Standards and Technology (NIST) recently released standard PQC algorithms that are expected to be resistant to future large-scale quantum computers. However, this new result does underscore the importance of migrating to these standards in line with NIST recommended timelines.

The article notes that Google started using the standardized version of ML-KEM once it became available, both internally and for encrypting traffic in Chrome...

"The initial public draft of the NIST internal report on the transition to post-quantum cryptography standards states that vulnerable systems should be deprecated after 2030 and disallowed after 2035. Our work highlights the importance of adhering to this recommended timeline."
AI

Google's New AI Video Tool Floods Internet With Real-Looking Clips (axios.com) 86

Google's new AI video tool, Veo 3, is being used to create hyperrealistic videos that are now flooding the internet, terrifying viewers "with a sense that real and fake have become hopelessly blurred," reports Axios. From the report: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand. According to examples shared by Google and from users online, the telltale signs of synthetic content are mostly absent.

In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

Earth

Microsoft Says Its Aurora AI Can Accurately Predict Air Quality, Typhoons (techcrunch.com) 28

An anonymous reader quotes a report from TechCrunch: One of Microsoft's latest AI models can accurately predict air quality, hurricanes, typhoons, and other weather-related phenomena, the company claims. In a paper published in the journal Nature and an accompanying blog post this week, Microsoft detailed Aurora, which the tech giant says can forecast atmospheric events with greater precision and speed than traditional meteorological approaches. Aurora, which has been trained on more than a million hours of data from satellites, radar and weather stations, simulations, and forecasts, can be fine-tuned with additional data to make predictions for particular weather events.

AI weather models are nothing new. Google DeepMind has released a handful over the past several years, including WeatherNext, which the lab claims beats some of the world's best forecasting systems. Microsoft is positioning Aurora as one of the field's top performers -- and a potential boon for labs studying weather science. In experiments, Aurora predicted Typhoon Doksuri's landfall in the Philippines four days in advance of the actual event, beating some expert predictions, Microsoft says. The model also bested the National Hurricane Center in forecasting five-day tropical cyclone tracks for the 2022-2023 season, and successfully predicted the 2022 Iraq sandstorm.

While Aurora required substantial computing infrastructure to train, Microsoft says the model is highly efficient to run. It generates forecasts in seconds compared to the hours traditional systems take using supercomputer hardware. Microsoft, which has made the source code and model weights publicly available, says that it's incorporating Aurora's AI modeling into its MSN Weather app via a specialized version of the model that produces hourly forecasts, including for clouds.
