The Military

How 12 'Enola Gay' Crew Members Remember Dropping the Atomic Bomb (mentalfloss.com) 130

Last week saw the 80th anniversary of a turning point in World War II: the day America dropped an atomic bomb on Hiroshima.

"Twelve men were on that flight..." remembers the online magazine Mental Floss, adding "Almost all had something to say after the war." The group was segregated from the rest of the military and trained in secret; even its members knew only as much as they needed to perform their duties. The group deployed to Tinian in 1945 with 15 B-29 bombers, flight crews, ground crews, and other personnel, about 1,770 men in all. The mission to drop the atomic bomb on Hiroshima, Japan (special mission 13) involved seven planes, but the one we remember was the Enola Gay.

Air Force captain Theodore "Dutch" Van Kirk did not know the destructive force of the nuclear bomb before Hiroshima. He was 24 years old at the time, a veteran of 58 missions in North Africa. Paul Tibbets told him this mission would shorten or end the war, but Van Kirk had heard that line before. Hiroshima made him a believer. Van Kirk felt the bombing of Hiroshima was worth the price in that it ended the war before the invasion of Japan, which promised to be devastating to both sides. "I honestly believe the use of the atomic bomb saved lives in the long run. There were a lot of lives saved. Most of the lives saved were Japanese."

In 2005, Van Kirk came as close as he ever got to regret. "I pray no man will have to witness that sight again. Such a terrible waste, such a loss of life..."

Many of the other crewmembers also felt the bomb ultimately saved lives.

The Washington Post has also published a new oral history of the flight after it took off from Tinian Island, assembled for a new book published this week titled The Devil Reached Toward the Sky: An Oral History of the Making and Unleashing of the Atomic Bomb.

Col. Paul W. Tibbets, lead pilot of the Enola Gay: We were only eight minutes off the ground when Capt. William S. "Deak" Parsons and Lt. Morris R. Jeppson lowered themselves into the bomb bay to insert a slug of uranium and the conventional explosive charge into the core of the strange-looking weapon. I wondered why we were calling it "Little Boy." Little Boy was 28 inches in diameter and 12 feet long. Its weight was a little more than 9,000 pounds. With its coat of dull gunmetal paint, it was an ugly monster...

Lt. Morris R. Jeppson, crew member of the Enola Gay: Parsons was second-in-command of the military in the Manhattan Project. The Little Boy weapon was Parsons's design. He was greatly concerned that B-29s loaded with conventional bombs were crashing at the ends of runways on Tinian during takeoff and that such an event could cause the U-235 projectile in the gun of Little Boy to fly down the barrel and into the U-235 target. This could have caused a low-level nuclear explosion on Tinian...

Jeppson: On his own, Parsons decided that he would go on the Hiroshima mission and that he would load the gun after the Enola Gay was well away from Tinian.

Tibbets: That way, if we crashed, we would lose only the airplane and crew, himself included... Jeppson held the flashlight while Parsons struggled with the mechanism of the bomb, inserting the explosive charge that would send one block of uranium flying into the other to set off the instant chain reaction that would create the atomic explosion.

The navigator on one of the other six planes on the mission remembered watching the mushroom cloud: "There was almost complete silence on the flight deck. It was evident the city of Hiroshima was destroyed."

And the Enola Gay's copilot later remembered thinking: "My God, what have we done?"
AI

Autonomous AI-Guided Black Hawk Helicopter Tested to Fight Wildfires (yahoo.com) 36

Imagine this. Lightning sparks a wildfire, but "within seconds, a satellite dish swirling overhead picks up on the anomaly and triggers an alarm," writes the Los Angeles Times. "An autonomous helicopter takes flight and zooms toward the fire, using sensors to locate the blaze and AI to generate a plan of attack. It measures the wind speed and fire movement, communicating constantly with the unmanned helicopter behind it, and the one behind that. Once over the site, it drops a load of water and soon the flames are smoldering. Without deploying a single human, the fire never grows larger than 10 square feet.

"This is the future of firefighting." On a recent morning in San Bernardino, state and local fire experts gathered for a demonstration of the early iterations of this new reality. An autonomous Sikorsky Black Hawk helicopter, powered by technology from Lockheed Martin and a California-based software company called Rain, is on display on the tarmac of a logistics airport in Victorville — the word "EXPERIMENTAL" painted on its military green-black door. It's one of many new tools on the front lines of firefighting technology, which experts say is evolving rapidly as private industry and government agencies come face-to-face with a worsening global climate crisis...

Scientific studies and climate research models have found that the number of extreme fires could increase by as much as 30% globally by 2050. By 2100, California alone could see a 50% increase in wildfire frequency and a 77% increase in average annual acres burned, according to the state's most recent climate report. That's largely because human-caused climate change is driving up temperatures and drying out the landscape, priming it to burn, according to Kate Dargan Marquis, a senior advisor with the Gordon and Betty Moore Foundation who served as California's state fire marshal from 2007 to 2010.... "[T]he policies of today and the technologies of today are not going to serve us tomorrow."

Today, more than 1,100 mountaintop cameras positioned across California are already using artificial intelligence to scan the landscape for the first sign of flames and prompt crews to spring into action. NASA's Earth-observing satellites are studying landscape conditions to help better predict fires before they ignite, while a new global satellite constellation recently launched by Google is helping to detect fires faster than ever before.

One 35-year fire service veteran who consults on fire service technologies predicts firefighting robots will also be used in high-risk situations, like the Colossus robot that battled flames searing through Notre-Dame Cathedral in Paris...

And a bill moving through California's Legislature "would direct the California Department of Forestry and Fire Protection to establish a pilot program to assess the viability of incorporating autonomous firefighting helicopters in the state."
Power

As Electric Bills Rise, Evidence Mounts That U.S. Data Centers Share Blame (apnews.com) 88

"Amid rising electric bills, states are under pressure to insulate regular household and business ratepayers from the costs of feeding Big Tech's energy-hungry data centers..." reports the Associated Press.

"Some critics question whether states have the spine to take a hard line against tech behemoths like Microsoft, Google, Amazon and Meta." [T]he Data Center Coalition, which represents Big Tech firms and data center developers, has said its members are committed to paying their fair share. But growing evidence suggests that the electricity bills of some Americans are rising to subsidize the massive energy needs of Big Tech as the U.S. competes in a race against China for artificial intelligence superiority. Data and analytics firm Wood Mackenzie published a report in recent weeks that suggested 20 proposed or effective specialized rates for data centers in 16 states it studied aren't nearly enough to cover the cost of a new natural gas power plant. In other words, unless utilities negotiate higher specialized rates, other ratepayer classes — residential, commercial and industrial — are likely paying for data center power needs. Meanwhile, Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, produced research in June showing that 70% — or $9.3 billion — of last year's increased electricity cost was the result of data center demand.

Last year, five governors led by Pennsylvania's Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after those prices spiked nearly sevenfold. They warned of customers "paying billions more than is necessary." PJM has yet to propose ways to guarantee that data centers pay their freight, but Monitoring Analytics is floating the idea that data centers should be required to procure their own power. In a filing last month, it said that would avoid a "massive wealth transfer" from average people to tech companies.

At least a dozen states are eyeing ways to make data centers pay higher local transmission costs. In Oregon, a data center hot spot, lawmakers passed legislation in June ordering state utility regulators to develop new — presumably higher — power rates for data centers. The Oregon Citizens' Utility Board [a consumer advocacy group] says there is clear evidence that costs to serve data centers are being spread across all customers — at a time when some electric bills there are up 50% over the past four years and utilities are disconnecting more people than ever.

"Some data centers could require more electricity than cities the size of Pittsburgh, Cleveland or New Orleans," the article points out...
Security

Google Says Its AI-Based Bug Hunter Found 20 Security Vulnerabilities (techcrunch.com) 17

"Heather Adkins, Google's vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software," reports TechCrunch: Adkins said that Big Sleep, which is developed by the company's AI department DeepMind as well as its elite team of hackers Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as audio and video library FFmpeg and image-editing suite ImageMagick. [There's also a "medium impact" issue in Redis]

Given that the vulnerabilities are not fixed yet, we don't have details of their impact or severity; Google does not yet want to provide details, a standard policy while waiting for bugs to be fixed. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if there was a human involved in this case.

"To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention," Google's spokesperson Kimberly Samra told TechCrunch.

Google's vice president of engineering posted on social media that this demonstrates "a new frontier in automated vulnerability discovery."
AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com) 20

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com) 162

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.

But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed the alerts were false alarms. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

Google

South Korea Postpones Decision To Let Google Maps Work Properly - Again (theguardian.com) 18

On Friday, South Korea postponed for the second time this year a decision on Google's request to export detailed mapping data to overseas servers, which would enable full Google Maps functionality in the country. The inter-agency committee extended the deadline from August to October to allow further review of security concerns and consultations with industry stakeholders.

South Korea remains one of only a handful of countries alongside China and North Korea where Google Maps fails to function properly, unable to provide directions despite displaying landmarks and businesses. Tourism complaints increased 71% last year, with Google Maps accounting for 30% of all app-related grievances, while local industry groups representing 2,600 companies report 90% opposition to Google's request due to fears of market domination by the US tech company.
Google

Google Ending Steam for Chromebook Support in 2026 (9to5google.com) 11

Google will discontinue Steam for Chromebook Beta on January 1, 2026, removing all installed games from devices after that date. The beta launched in March 2022 as an alpha before expanding to beta status in November 2022 with reduced hardware requirements of Intel Core i3 or AMD Ryzen 3 processors and 8GB RAM. The program never progressed beyond beta testing despite supporting 99 compatible Linux-based titles through its three-year run.
Google

Google Tests AI-Powered Google Finance (blog.google) 12

Google announced Friday it will roll out an AI-powered redesign of Google Finance over the coming weeks in the United States. The update adds natural language query processing for financial research questions with comprehensive AI responses including relevant links, advanced charting tools with technical indicators and candlestick charts, expanded market data covering commodities and additional cryptocurrencies, and a live news feed displaying real-time headlines.
Google

Google TV's Uncertain Future (theverge.com) 32

Google has quietly admitted defeat in selling advertising for its smart TV platform, returning ad inventory to publishers and accepting a revenue share instead of controlling ad spots directly, according to The Verge. The policy reversal comes as Google spends hundreds of millions of dollars annually on Google TV without breaking even, while Amazon outspends the company on retail incentives that have already pushed Google TV sets out of Costco stores in favor of Fire TV models.

Amazon pays up to $50 per activated television to retailers and manufacturers, The Verge reported. Google TV has grown to 270 million monthly active devices worldwide since unifying Android TV and Chromecast under a single brand in 2020, but many devices operate in overseas markets that generate little revenue or run customized versions controlled by pay-TV operators. YouTube's success in the living room -- generating $9.8 billion in quarterly ad revenue and accounting for 12.5% of all US television viewing -- has reduced internal support for Google TV, with sales teams prioritizing the video platform and some YouTube executives arguing the smart TV budget should be redirected, the report adds.
Security

Citizen Lab Director Warns Cyber Industry About US Authoritarian Descent (techcrunch.com) 103

An anonymous reader quotes a report from TechCrunch: Ron Deibert, the director of Citizen Lab, one of the most prominent organizations investigating government spyware abuses, is sounding the alarm to the cybersecurity community and asking them to step up and join the fight against authoritarianism. On Wednesday, Deibert will deliver a keynote at the Black Hat cybersecurity conference in Las Vegas, one of the largest gatherings of information security professionals of the year. Ahead of his talk, Deibert told TechCrunch that he plans to speak about what he describes as a "descent into a kind of fusion of tech and fascism," and the role that the Big Tech platforms are playing, and "propelling forward a really frightening type of collective insecurity that isn't typically addressed by this crowd, this community, as a cybersecurity problem."

Deibert described the recent political events in the United States as a "dramatic descent into authoritarianism," but one that the cybersecurity community can help defend against. "I think alarm bells need to be rung for this community that, at the very least, they should be aware of what's going on and hopefully they can not contribute to it, if not help reverse it," Deibert told TechCrunch. [...] "I think that there comes a point at which you have to recognize that the landscape is changing around you, and the security problems you set out for yourselves are maybe trivial in light of the broader context and the insecurities that are being propelled forward in the absence of proper checks and balances and oversight, which are deteriorating," said Deibert.

Deibert is also concerned that big companies like Meta, Google, and Apple could take a step back in their efforts to fight against government spyware -- sometimes referred to as "commercial" or "mercenary" spyware -- by gutting their threat intelligence teams. [...] Deibert believes there is a "huge market failure when it comes to cybersecurity for global civil society," a part of the population that generally cannot afford to get help from big security companies that typically serve governments and corporate clients. "This market failure is going to get more acute as supporting institutions evaporate and attacks on civil society amplify," he said. "Whatever they can do to contribute to offset this market failure (e.g., pro bono work) will be essential to the future of liberal democracy worldwide," he said. Deibert is concerned that these threat intelligence teams could be cut or at least reduced, given that the same companies have cut their moderation and safety teams. He told TechCrunch that threat intelligence teams, like the ones at Meta, are doing "amazing work," in part by staying siloed and separate from the commercial arms of their wider organizations. "But the question is how long will that last?" said Deibert.

News

Ask Slashdot: Who's Still Using an RSS Reader? 181

alternative_right writes: I use RSS to cover all of my news-reading needs because I like a variety of sources spanning several fields -- politics, philosophy, science, and heavy metal. However, it seems Google wanted to kill off RSS a few years back, and it has since fallen out of favor. Some of us are holding on, but how many? And what software do you use (or did you write your own XML parsers)?
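For anyone tempted by the "write your own XML parser" route, a minimal RSS 2.0 item extractor needs little more than the Python standard library. This is a hedged sketch, not a robust feed reader (no Atom support, namespaces, or malformed-XML handling), and the embedded feed is a made-up example:

```python
# Minimal hand-rolled RSS 2.0 parser using only the stdlib.
import xml.etree.ElementTree as ET

# Hypothetical example feed for illustration.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item>
      <title>First Post</title>
      <link>https://example.com/1</link>
      <pubDate>Mon, 04 Aug 2025 12:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Second Post</title>
      <link>https://example.com/2</link>
    </item>
  </channel>
</rss>"""

def parse_rss(xml_text):
    """Return a list of {title, link} dicts, one per <item> element."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):  # walk every <item> in the channel
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return items

entries = parse_rss(SAMPLE_FEED)
print(entries[0]["title"])  # First Post
```

Real-world feeds are messier (CDATA, encoding quirks, Atom vs. RSS), which is why most people reach for a dedicated feed library instead.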
Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 14

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
Security

Google Suffers Data Breach in Ongoing Salesforce Data Theft Attacks (bleepingcomputer.com) 3

Google is the latest company to suffer a data breach in an ongoing wave of Salesforce CRM data theft attacks conducted by the ShinyHunters extortion group. BleepingComputer: In June, Google warned that a threat actor they classify as 'UNC6040' is targeting companies' employees in voice phishing (vishing) social engineering attacks to breach Salesforce instances and download customer data. This data is then used to extort companies into paying a ransom to prevent the data from being leaked.

In a brief update to the article last night, Google said that it too fell victim to the same attack in June after one of its Salesforce CRM instances was breached and customer data was stolen. "In June, one of Google's corporate Salesforce instances was impacted by similar UNC6040 activity described in this post. Google responded to the activity, performed an impact analysis and began mitigations," reads Google's update.

AI

OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year (openai.com) 25

OpenAI will provide ChatGPT access to US federal agencies for $1 annually through the General Services Administration's new AI marketplace that also includes Google and Anthropic as approved vendors. The nominal pricing represents the deepest discount GSA has negotiated with software providers, surpassing previous deals with Adobe and Salesforce.

OpenAI said it will not use federal worker data to train its models and agencies face no renewal requirements. The $1 rate applies only to the ChatGPT chatbot interface, not OpenAI's API for custom software development.
Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.

The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
Google

Google's New Genie 3 AI Model Creates Video Game Worlds In Real Time (theverge.com) 15

An anonymous reader quotes a report from The Verge: Google DeepMind is releasing a new version of its AI "world" model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time. The company is also promising that users will be able to interact with the worlds for much longer than before and that the model will actually remember where things are when you look away from them. [...] Genie 3 seems like it could be a notable step forward. Users will be able to generate worlds with a prompt that supports a "few" minutes of continuous interaction, which is up from the 10-20 seconds of interaction possible with Genie 2, according to a blog post.

Google says that Genie 3 can keep spaces in visual memory for about a minute, meaning that if you turn away from something in a world and then turn back to it, things like paint on a wall or writing on a chalkboard will be in the same place. The worlds will also have a 720p resolution and run at 24fps. DeepMind is adding what it calls "promptable world events" into Genie 3, too. Using a prompt, you'll be able to do things like change weather conditions in a world or add new characters.

The model is launching as "a limited research preview" available to "a small cohort of academics and creators," according to Google. It's "exploring" how to bring Genie 3 to "additional testers."
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The dataset of around 100,000 conversations provides a better sense of the scale of the problem, and highlights some of the potential privacy risks of using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.

Google

Google Agrees To Pause AI Workloads To Protect the Grid When Power Demand Spikes (theregister.com) 50

Google will pause non-essential AI workloads to protect power grids, the advertising giant announced on Monday. From a report: The web giant already does this sort of thing for non-essential workloads like processing YouTube vids, which it moves to datacenters where power is available rather than continuing to run them in places where demand for energy strains the grid. Under an agreement with Indiana Michigan Power (I&M) and the Tennessee Valley Authority (TVA), Google will use the same techniques for AI workloads.

The announcement comes as states served by the power companies brace for a heat wave that will likely strain the grid as residents use air conditioners and increase demand for energy. Amid debate about datacenters' consumption of power and water, the last thing that the Chocolate Factory needs is folks blaming its AI Mode search function for a power outage when temperatures top 100F (37.7C). Under the agreement, if energy demand surges or there's a disruption in the grid due to extreme weather, I&M and TVA can now request that Google reduce its power use by rescheduling workloads or limiting non-urgent tasks until the issue is resolved.
