AI

Google Releases Pint-Size Gemma Open AI Model (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint. [...] Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device's battery. That makes it by far the most efficient Gemma model.

Developers shouldn't expect the same level of performance as a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model's ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M hits a score of 51.2 percent in this test, which is higher than other lightweight models that have more parameters. The new Gemma falls predictably short of 1 billion-plus models like Llama 3.2, but it gets closer than you might think for having just a fraction of the parameters.

Google claims Gemma 3 270M is good at following instructions out of the box, but it expects developers to fine-tune the model for their specific use cases. Due to the small parameter count, that process is fast and low-cost, too. Google sees the new Gemma being used for tasks like text classification and data analysis, which it can accomplish quickly and without heavy computing requirements. You can download the new Gemma for free, and the model weights are available. There's no separate commercial licensing agreement, so developers can modify, publish, and deploy Gemma 3 270M derivatives in their tools.
You can download Gemma 3 270M from Hugging Face and Kaggle in both pre-trained and instruction-tuned versions.
Power

Big Tech's AI Data Centers Are Driving Up Electricity Bills for Everyone (nytimes.com) 67

Electricity rates for individuals and small businesses could rise sharply as Amazon, Google, Microsoft and other technology companies build data centers and expand into the energy business. Residential electricity bills increased at least $15 monthly for Ohio households starting in June due to data center demands, according to utility data and an independent grid monitor. A Carnegie Mellon University and North Carolina State University analysis projects average U.S. electricity bills will rise 8% by 2030 from data center growth, with Virginia facing potential 25% increases. Virginia regulators estimate residents could pay an additional $276 annually by 2030.

National residential electricity rates have already risen more than 30% since 2020. Tech companies' AI push requires data centers that consumed over 4% of U.S. electricity in 2023, with government analysts projecting consumption reaching 12% within three years. American Electric Power warned Ohio regulators that without new rate structures requiring data centers to pay more upfront costs, residents and small businesses would bear much of the expense for grid upgrades.
Businesses

Co-Founder of xAI Departs the Company (techcrunch.com) 11

Igor Babuschkin, co-founder of xAI, has left the company to start Babuschkin Ventures, a VC firm focused on AI safety and humanity-advancing startups. TechCrunch reports: Babuschkin led engineering teams at xAI and helped build the startup into one of Silicon Valley's leading AI model developers just a few years after it was founded. "Today was my last day at xAI, the company that I helped start with Elon Musk in 2023," Babuschkin wrote in the post. "I still remember the day I first met Elon, we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed."

Babuschkin is leaving xAI to launch his own venture capital firm, Babuschkin Ventures, which he says will support AI safety research and back startups that "advance humanity and unlock the mysteries of our universe." The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia in pursuit of a better life for their children.

Prior to co-founding xAI, Babuschkin was part of a research team at Google DeepMind that pioneered AlphaStar in 2019, a breakthrough AI system that could defeat top-ranked players at the video game StarCraft. Babuschkin also worked as a researcher at OpenAI in the years before it released ChatGPT. In his post, Babuschkin details some of the challenges he and Musk faced in building up xAI. He notes that industry veterans called xAI's goal of building its Memphis, Tennessee supercomputer in just three months "impossible." [...] Nevertheless, Babuschkin says he's already looking back fondly on his time at xAI, and "feels like a proud parent, driving away after sending their kid away to college." "I learned 2 priceless lessons from Elon: #1 be fearless in rolling up your sleeves to personally dig into technical problems, #2 have a maniacal sense of urgency," said Babuschkin.

Privacy

Data Brokers Are Hiding Their Opt-Out Pages From Google Search (wired.com) 29

Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them. From a report: More than 30 of the companies, which collect and sell consumers' personal information, hid their deletion instructions from Google, according to a review by The Markup and CalMatters of hundreds of broker websites. This creates one more obstacle for consumers who want to delete their data.

Many of the pages containing the instructions, listed in an official state registry, use code to tell search engines to remove the page entirely from search results. Popular tools like Google and Bing respect the code by excluding pages when responding to users. Data brokers nationwide must register in California under the state's Consumer Privacy Act, which allows Californians to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of all 499 data brokers registered with the state, we found 35 had code to stop certain pages from showing up in searches.
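The "code" in question is typically the `noindex` robots directive, delivered as a `<meta name="robots">` tag (or an `X-Robots-Tag` HTTP header) that compliant crawlers honor by dropping the page from results. As a minimal illustration, not The Markup and CalMatters' actual methodology, a page can be checked for the meta-tag form with Python's standard library:

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Detects a <meta name="robots"> tag whose content asks
    search engines not to index the page."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        name = (attr.get("name") or "").lower()
        content = (attr.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True

def hides_from_search(html: str) -> bool:
    """True if the page carries a noindex robots directive."""
    parser = NoindexDetector()
    parser.feed(html)
    return parser.noindex

# An opt-out page marked this way will be excluded by Google,
# Bing, and other crawlers that respect the directive.
opted_out = '<head><meta name="robots" content="noindex, nofollow"></head>'
print(hides_from_search(opted_out))  # True
```

Note that the directive can also arrive as an `X-Robots-Tag: noindex` response header, which an HTML-only check like this would miss.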

AI

Google's Gemini AI Will Get More Personalized By Remembering Details Automatically (theverge.com) 38

An anonymous reader quotes a report from The Verge: Google is rolling out an update for Gemini that will allow the AI chatbot to "remember" your past conversations without prompting. With the setting turned on, Gemini will automatically recall your "key details and preferences" and use them to personalize its output.

This expands upon an update that Google introduced last year, which lets you ask Gemini to "remember" your personal preferences and interests. Now, Gemini won't need prompting to recall this information. As an example, Google says if you've used Gemini to get ideas for a YouTube channel surrounding Japanese culture in the past, then the AI chatbot might suggest creating content about trying Japanese food if you ask it to suggest new video ideas in the future. [...]

Google will turn on this feature by default, but you can disable it by heading to your settings in the Gemini app and selecting Personal Context. From there, toggle off the Your past chats with Gemini option. Google will roll out this feature to its Gemini 2.5 Pro model in "select countries" starting today, before eventually bringing it to more locations and its Gemini 2.5 Flash model.
Google will also rename its "Gemini Apps Activity" setting to "Keep Activity," which will use "a sample" of your file and photo uploads to Gemini to "help improve Google services for everyone" starting on September 2nd. If you've disabled the previous setting, the new "Keep Activity" setting will be disabled too.

There's also a new "temporary chats" feature in Gemini to preserve privacy. "Temporary chats won't appear in your recent chats or your Keep Activity setting," notes The Verge. "Gemini also won't use these chats to personalize future conversations, nor will Google use them to train its AI models. Google will only save these conversations for 72 hours."
Google

Google and IBM Believe First Workable Quantum Computer is in Sight (ft.com) 36

IBM and Google say they will build industrial-scale quantum computers containing one million or more qubits by 2030, following IBM's June publication of a quantum computer blueprint addressing previous design gaps and Google's late-2023 breakthrough in scaling error correction.

Current experimental systems contain fewer than 200 qubits. IBM encountered crosstalk interference when scaling its Condor chip to 433 qubits and subsequently adopted low-density parity-check code requiring 90% fewer qubits than Google's surface code method, though this requires longer connections between distant qubits.

Google plans to reduce component costs tenfold to achieve its $1 billion target price for a full-scale machine. Amazon Web Services quantum hardware executive Oskar Painter told FT he estimates useful quantum computers remain 15-30 years away, citing engineering challenges in scaling despite resolved fundamental physics problems.
AI

AI Is Forcing the Return of the In-Person Job Interview (msn.com) 49

Google, Cisco, and McKinsey have reintroduced in-person interviews to combat AI-assisted cheating in virtual technical assessments. Coda Search/Staffing reports that client requests for face-to-face meetings have surged to 30% this year from 5% in 2024.

A Gartner survey of 3,000 job seekers found 6% admitted to interview fraud including having someone else stand in for them, while the FBI has warned of thousands of North Korean nationals using false identities to secure remote positions at U.S. technology companies. Google CEO Sundar Pichai confirmed in June the company now requires at least one in-person round for certain roles to verify candidates possess genuine coding skills.
Google

Google Will Now Let You Pick Your Top Sources For Search Results (techcrunch.com) 36

Google is rolling out a new feature called "Preferred Sources" in the U.S. and India, which allows users to select their preferred choice of news sites and blogs to be shown in the Top Stories section of Google's search results. From a report: Enabling this feature means you will see more content from the sites you like, the company says. When users search for a particular topic, they will see a "star" icon next to the Top Stories section. They can tap on that icon and start adding sources by searching for them. Once you select the sources, you can refresh the results to see more content from your selected sources. Google said that for some queries, users will also see a separate "From your sources" section below the Top Stories section.
Australia

Australian Federal Court Rules Apple and Google Engaged in Anti-Competitive App Store Conduct (abc.net.au) 16

Australia's Federal Court ruled Tuesday that Apple and Google violated competition law through anti-competitive app store practices. Judge Jonathan Beach found both companies breached section 46 of the Competition and Consumer Act by misusing market power to reduce competition.

The decision covers class actions representing 15 million consumers and 150,000 developers seeking compensation for inflated prices from 2017-2022, plus separate Epic Games cases. Apple's exclusive iOS App Store and mandatory payment system, along with Google's Play Store billing requirements, were ruled anti-competitive despite security justifications. Compensation amounts will be determined at subsequent hearings, with estimates reaching hundreds of millions of dollars.
AI

Perplexity Makes Longshot $34.5 Billion Offer for Chrome (msn.com) 48

AI startup Perplexity on Tuesday offered to purchase Google's Chrome browser for $34.5 billion as it works to challenge the tech giant's web-search dominance. From a report: Perplexity's offer is significantly more than its own valuation, which is estimated at $18 billion. The company told The Wall Street Journal that several investors including large venture-capital funds had agreed to back the transaction in full.

Estimates of Chrome's enterprise value vary widely but recent ones have ranged from $20 billion to $50 billion. U.S. District Judge Amit Mehta is weighing whether to force Google to sell the browser as a means of weakening Google's stranglehold on web search. Mehta last year ruled that Google illegally monopolized the search market and is expected to rule this month on how to restore competition.

AI

LLMs' 'Simulated Reasoning' Abilities Are a 'Brittle Mirage,' Researchers Find (arstechnica.com) 238

An anonymous reader quotes a report from Ars Technica: In recent months, the AI industry has started moving toward so-called simulated reasoning models that use a "chain of thought" process to work through tricky problems in multiple logical steps. At the same time, recent research has cast doubt on whether those models have even a basic understanding of general logical concepts or an accurate grasp of their own "thought process." Similar research shows that these "reasoning" models can often produce incoherent, logically unsound answers when questions include irrelevant clauses or deviate even slightly from common templates found in their training data.

In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text." To pull on that thread, the researchers created a carefully controlled LLM environment in an attempt to measure just how well chain-of-thought reasoning works when presented with "out of domain" logical problems that don't match the specific logical patterns found in their training data. The results suggest that the seemingly large performance leaps made by chain-of-thought models are "largely a brittle mirage" that "become[s] fragile and prone to failure even under moderate distribution shifts," the researchers write. "Rather than demonstrating a true understanding of text, CoT reasoning under task transformations appears to reflect a replication of patterns learned during training." [...]

Rather than showing the capability for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of its training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit. As such, the researchers warn heavily against "equating [chain-of-thought]-style output with human thinking" especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.

Crime

It's Steve Wozniak's 75th Birthday. Whatever Happened to His YouTube Lawsuit? (cbsnews.com) 98

In 2020 a YouTube video used video footage of Steve Wozniak in a scam to steal bitcoin. "Some people said they lost their life savings," Wozniak tells CBS News, explaining why he sued YouTube in 2020 — and where his case stands now: Wozniak's lawsuit against YouTube has been tied up in court now for five years, stalled by federal legislation known as Section 230. Attorney Brian Danitz said, "Section 230 is a very broad statute that limits, if not totally, the ability to bring any kind of case against these social media platforms."

"It says that anything gets posted, they have no liability at all," said Wozniak. "It's totally absolute."

Google responded to our inquiry about Wozniak's lawsuit with a statement from José Castañeda, of Google Policy Communications: "We take abuse of our platform seriously and take action quickly when we detect violations ... we have tools for users to report channels that are impersonating their likeness or business." [Steve's wife] Janet Wozniak, however, says YouTube did nothing, even though she reported the scam video multiple times: "You know, 'Please take this down. This is an obvious mistake. This is fraud. You're YouTube, you're helping dupe people out of their money,'" she said.

"They wouldn't," said Steve...

Today is Steve Wozniak's 75th birthday. (You can watch the interview here.) And the article includes this interesting detail about Woz's life today: Wozniak sold most of his Apple stock in the mid-1980s when he left the company. Today, though, he still gets a small paycheck from Apple for making speeches and representing the company. He says he's proud to see Apple become a trillion-dollar company. "Apple is still the best," he said. "And when Apple does things I don't like, and some of the closeness I wish it were more open, I'll speak out about it. Nobody buys my voice!"

I asked, "Apple listen to you when you speak out?"

"No," Wozniak smiled. "Oh, no. Oh, no."

Wozniak answered questions from Slashdot readers in 2000 and again in 2012.

And he dropped by Slashdot on his birthday to leave this comment for Slashdot's readers...
The Military

How 12 'Enola Gay' Crew Members Remember Dropping the Atomic Bomb (mentalfloss.com) 130

Last week saw the 80th anniversary of a turning point in World War II: the day America dropped an atomic bomb on Hiroshima.

"Twelve men were on that flight..." remembers the online magazine Mental Floss, adding "Almost all had something to say after the war." The group was segregated from the rest of the military and trained in secret. Even those in the group only knew as much as they needed to know in order to perform their duties. The group deployed to Tinian in 1945 with 15 B-29 bombers, flight crews, ground crews, and other personnel, a total of about 1,770 men. The mission to drop the atomic bomb on Hiroshima, Japan (special mission 13) involved seven planes, but the one we remember was the Enola Gay.

Air Force captain Theodore "Dutch" Van Kirk did not know the destructive force of the nuclear bomb before Hiroshima. He was 24 years old at that time, a veteran of 58 missions in North Africa. Paul Tibbets told him this mission would shorten or end the war, but Van Kirk had heard that line before. Hiroshima made him a believer. Van Kirk felt the bombing of Hiroshima was worth the price in that it ended the war before the invasion of Japan, which promised to be devastating to both sides. "I honestly believe the use of the atomic bomb saved lives in the long run. There were a lot of lives saved. Most of the lives saved were Japanese."

In 2005, Van Kirk came as close as he ever got to regret. "I pray no man will have to witness that sight again. Such a terrible waste, such a loss of life..."

Many of the other crewmembers also felt the bomb ultimately saved lives.

The Washington Post has also published a new oral history of the flight after it took off from Tinian Island. The oral history was assembled for a new book published this week titled The Devil Reached Toward the Sky: An Oral History of the Making and Unleashing of the Atomic Bomb. Col. Paul W. Tibbets, lead pilot of the Enola Gay: We were only eight minutes off the ground when Capt. William S. "Deak" Parsons and Lt. Morris R. Jeppson lowered themselves into the bomb bay to insert a slug of uranium and the conventional explosive charge into the core of the strange-looking weapon. I wondered why we were calling it ''Little Boy." Little Boy was 28 inches in diameter and 12 feet long. Its weight was a little more than 9,000 pounds. With its coat of dull gunmetal paint, it was an ugly monster...

Lt. Morris R. Jeppson, crew member of the Enola Gay: Parsons was second-in-command of the military in the Manhattan Project. The Little Boy weapon was Parsons's design. He was greatly concerned that B-29s loaded with conventional bombs were crashing at the ends of runways on Tinian during takeoff and that such an event could cause the U-235 projectile in the gun of Little Boy to fly down the barrel and into the U-235 target. This could have caused a low-level nuclear explosion on Tinian...

Jeppson: On his own, Parsons decided that he would go on the Hiroshima mission and that he would load the gun after the Enola Gay was well away from Tinian.

Tibbets: That way, if we crashed, we would lose only the airplane and crew, himself included... Jeppson held the flashlight while Parsons struggled with the mechanism of the bomb, inserting the explosive charge that would send one block of uranium flying into the other to set off the instant chain reaction that would create the atomic explosion.

The navigator on one of the other six planes on the mission remembered that, watching the mushroom cloud, "There was almost complete silence on the flight deck. It was evident the city of Hiroshima was destroyed."

And the Enola Gay's copilot later remembered thinking: "My God, what have we done?"
AI

Autonomous AI-Guided Black Hawk Helicopter Tested to Fight Wildfires (yahoo.com) 36

Imagine this. Lightning sparks a wildfire, but "within seconds, a satellite dish swirling overhead picks up on the anomaly and triggers an alarm," writes the Los Angeles Times. "An autonomous helicopter takes flight and zooms toward the fire, using sensors to locate the blaze and AI to generate a plan of attack. It measures the wind speed and fire movement, communicating constantly with the unmanned helicopter behind it, and the one behind that. Once over the site, it drops a load of water and soon the flames are smoldering. Without deploying a single human, the fire never grows larger than 10 square feet.

"This is the future of firefighting." On a recent morning in San Bernardino, state and local fire experts gathered for a demonstration of the early iterations of this new reality. An autonomous Sikorsky Black Hawk helicopter, powered by technology from Lockheed Martin and a California-based software company called Rain, is on display on the tarmac of a logistics airport in Victorville — the word "EXPERIMENTAL" painted on its military green-black door. It's one of many new tools on the front lines of firefighting technology, which experts say is evolving rapidly as private industry and government agencies come face-to-face with a worsening global climate crisis...

Scientific studies and climate research models have found that the number of extreme fires could increase by as much as 30% globally by 2050. By 2100, California alone could see a 50% increase in wildfire frequency and a 77% increase in average annual acres burned, according to the state's most recent climate report. That's largely because human-caused climate change is driving up temperatures and drying out the landscape, priming it to burn, according to Kate Dargan Marquis, a senior advisor with the Gordon and Betty Moore Foundation who served as California's state fire marshal from 2007 to 2010.... "[T]he policies of today and the technologies of today are not going to serve us tomorrow."

Today, more than 1,100 mountaintop cameras positioned across California are already using artificial intelligence to scan the landscape for the first sign of flames and prompt crews to spring into action. NASA's Earth-observing satellites are studying landscape conditions to help better predict fires before they ignite, while a new global satellite constellation recently launched by Google is helping to detect fires faster than ever before.

One 35-year fire service veteran who consults on fire service technologies even predicts fire-fighting robots will also be used in high-risk situations like the Colossus robot that battled flames searing through Notre-Dame Cathedral in Paris...

And a bill moving through California's Legislature "would direct the California Department of Forestry and Fire Protection to establish a pilot program to assess the viability of incorporating autonomous firefighting helicopters in the state."
Power

As Electric Bills Rise, Evidence Mounts That U.S. Data Centers Share Blame (apnews.com) 88

"Amid rising electric bills, states are under pressure to insulate regular household and business ratepayers from the costs of feeding Big Tech's energy-hungry data centers..." reports the Associated Press.

"Some critics question whether states have the spine to take a hard line against tech behemoths like Microsoft, Google, Amazon and Meta." [T]he Data Center Coalition, which represents Big Tech firms and data center developers, has said its members are committed to paying their fair share. But growing evidence suggests that the electricity bills of some Americans are rising to subsidize the massive energy needs of Big Tech as the U.S. competes in a race against China for artificial intelligence superiority. Data and analytics firm Wood Mackenzie published a report in recent weeks that suggested 20 proposed or effective specialized rates for data centers in 16 states it studied aren't nearly enough to cover the cost of a new natural gas power plant. In other words, unless utilities negotiate higher specialized rates, other ratepayer classes — residential, commercial and industrial — are likely paying for data center power needs. Meanwhile, Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, produced research in June showing that 70% — or $9.3 billion — of last year's increased electricity cost was the result of data center demand.

Last year, five governors led by Pennsylvania's Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after that amount spiked nearly sevenfold. They warned of customers "paying billions more than is necessary." PJM has yet to propose ways to guarantee that data centers pay their freight, but Monitoring Analytics is floating the idea that data centers should be required to procure their own power. In a filing last month, it said that would avoid a "massive wealth transfer" from average people to tech companies.

At least a dozen states are eyeing ways to make data centers pay higher local transmission costs. In Oregon, a data center hot spot, lawmakers passed legislation in June ordering state utility regulators to develop new — presumably higher — power rates for data centers. The Oregon Citizens' Utility Board [a consumer advocacy group] says there is clear evidence that costs to serve data centers are being spread across all customers — at a time when some electric bills there are up 50% over the past four years and utilities are disconnecting more people than ever.

"Some data centers could require more electricity than cities the size of Pittsburgh, Cleveland or New Orleans," the article points out...
Security

Google Says Its AI-Based Bug Hunter Found 20 Security Vulnerabilities (techcrunch.com) 17

"Heather Adkins, Google's vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software," reports TechCrunch: Adkins said that Big Sleep, which is developed by the company's AI department DeepMind as well as its elite team of hackers Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as audio and video library FFmpeg and image-editing suite ImageMagick. [There's also a "medium impact" issue in Redis]

Given that the vulnerabilities are not yet fixed, we don't have details of their impact or severity; Google is withholding specifics until the bugs are patched, which is standard policy. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if there was a human involved in this case.

"To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention," Google's spokesperson Kimberly Samra told TechCrunch.

Google's vice president of engineering posted on social media that this demonstrates "a new frontier in automated vulnerability discovery."
AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com) 20

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com) 162

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.
But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices."

Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

Google

South Korea Postpones Decision To Let Google Maps Work Properly - Again (theguardian.com) 18

On Friday, South Korea postponed for the second time this year a decision on Google's request to export detailed mapping data to overseas servers, which would enable full Google Maps functionality in the country. The inter-agency committee extended the deadline from August to October to allow further review of security concerns and consultations with industry stakeholders.

South Korea remains one of only a handful of countries alongside China and North Korea where Google Maps fails to function properly, unable to provide directions despite displaying landmarks and businesses. Tourism complaints increased 71% last year, with Google Maps accounting for 30% of all app-related grievances, while local industry groups representing 2,600 companies report 90% opposition to Google's request due to fears of market domination by the US tech company.
Google

Google Ending Steam for Chromebook Support in 2026 (9to5google.com) 11

Google will discontinue Steam for Chromebook Beta on January 1, 2026, removing all installed games from devices after that date. The program launched as an alpha in March 2022 before expanding to beta status in November 2022, with hardware requirements lowered to Intel Core i3 or AMD Ryzen 3 processors and 8GB of RAM. It never progressed beyond beta testing despite supporting 99 compatible Linux-based titles through its run.
