AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com) 20

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com) 162

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.
But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

Google

South Korea Postpones Decision To Let Google Maps Work Properly - Again (theguardian.com) 18

South Korea postponed a decision for the second time this year on Friday regarding Google's request to export detailed mapping data to overseas servers, which would enable full Google Maps functionality in the country. The inter-agency committee extended the deadline from August to October to allow further review of security concerns and consultations with industry stakeholders.

South Korea remains one of only a handful of countries alongside China and North Korea where Google Maps fails to function properly, unable to provide directions despite displaying landmarks and businesses. Tourism complaints increased 71% last year, with Google Maps accounting for 30% of all app-related grievances, while local industry groups representing 2,600 companies report 90% opposition to Google's request due to fears of market domination by the US tech company.
Google

Google Ending Steam for Chromebook Support in 2026 (9to5google.com) 11

Google will discontinue Steam for Chromebook Beta on January 1, 2026, removing all installed games from devices after that date. The beta launched in March 2022 as an alpha before expanding to beta status in November 2022 with reduced hardware requirements of Intel Core i3 or AMD Ryzen 3 processors and 8GB RAM. The program never progressed beyond beta testing despite supporting 99 compatible Linux-based titles through its three-year run.
Google

Google Tests AI-Powered Google Finance (blog.google) 12

Google announced Friday it will roll out an AI-powered redesign of Google Finance over the coming weeks in the United States. The update adds natural language query processing for financial research questions with comprehensive AI responses including relevant links, advanced charting tools with technical indicators and candlestick charts, expanded market data covering commodities and additional cryptocurrencies, and a live news feed displaying real-time headlines.
Google

Google TV's Uncertain Future (theverge.com) 32

Google has quietly admitted defeat in selling advertising for its smart TV platform, returning ad inventory to publishers and accepting a revenue share instead of controlling ad spots directly, according to The Verge. The policy reversal comes as Google spends hundreds of millions of dollars annually on Google TV without breaking even, while Amazon outspends the company on retail incentives that have already pushed Google TV sets out of Costco stores in favor of Fire TV models.

Amazon pays up to $50 per activated television to retailers and manufacturers, The Verge reported. Google TV has grown to 270 million monthly active devices worldwide since unifying Android TV and Chromecast under a single brand in 2020, but many devices operate in overseas markets that generate little revenue or run customized versions controlled by pay-TV operators. YouTube's success in the living room -- generating $9.8 billion in quarterly ad revenue and accounting for 12.5% of all US television viewing -- has reduced internal support for Google TV, with sales teams prioritizing the video platform and some YouTube executives arguing the smart TV budget should be redirected, the report adds.
Security

Citizen Lab Director Warns Cyber Industry About US Authoritarian Descent (techcrunch.com) 103

An anonymous reader quotes a report from TechCrunch: Ron Deibert, the director of Citizen Lab, one of the most prominent organizations investigating government spyware abuses, is sounding the alarm to the cybersecurity community and asking them to step up and join the fight against authoritarianism. On Wednesday, Deibert will deliver a keynote at the Black Hat cybersecurity conference in Las Vegas, one of the largest gatherings of information security professionals of the year. Ahead of his talk, Deibert told TechCrunch that he plans to speak about what he describes as a "descent into a kind of fusion of tech and fascism," and the role that the Big Tech platforms are playing, and "propelling forward a really frightening type of collective insecurity that isn't typically addressed by this crowd, this community, as a cybersecurity problem."

Deibert described the recent political events in the United States as a "dramatic descent into authoritarianism," but one that the cybersecurity community can help defend against. "I think alarm bells need to be rung for this community that, at the very least, they should be aware of what's going on and hopefully they can not contribute to it, if not help reverse it," Deibert told TechCrunch. [...] "I think that there comes a point at which you have to recognize that the landscape is changing around you, and the security problems you set out for yourselves are maybe trivial in light of the broader context and the insecurities that are being propelled forward in the absence of proper checks and balances and oversight, which are deteriorating," said Deibert.

Deibert is also concerned that big companies like Meta, Google, and Apple could take a step back in their efforts to fight against government spyware -- sometimes referred to as "commercial" or "mercenary" spyware -- by gutting their threat intelligence teams. [...] Deibert believes there is a "huge market failure when it comes to cybersecurity for global civil society," a part of the population that generally cannot afford to get help from big security companies that typically serve governments and corporate clients. "This market failure is going to get more acute as supporting institutions evaporate and attacks on civil society amplify," he said. "Whatever they can do to contribute to offset this market failure (e.g., pro bono work) will be essential to the future of liberal democracy worldwide," he said. Deibert is concerned that these threat intelligence teams could be cut or at least reduced, given that the same companies have cut their moderation and safety teams. He told TechCrunch that threat intelligence teams, like the ones at Meta, are doing "amazing work," in part by staying siloed and separate from the commercial arms of their wider organizations. "But the question is how long will that last?" said Deibert.

News

Ask Slashdot: Who's Still Using an RSS Reader? 181

alternative_right writes: I use RSS to cover all of my news-reading needs because I like a variety of sources spanning several fields -- politics, philosophy, science, and heavy metal. However, it seems Google wanted to kill off RSS a few years back, and it has since fallen out of favor. Some of us are holding on, but how many? And what software do you use (or did you write your own XML parsers)?
Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 14

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
Security

Google Suffers Data Breach in Ongoing Salesforce Data Theft Attacks (bleepingcomputer.com) 3

Google is the latest company to suffer a data breach in an ongoing wave of Salesforce CRM data theft attacks conducted by the ShinyHunters extortion group. BleepingComputer: In June, Google warned that a threat actor they classify as 'UNC6040' is targeting companies' employees in voice phishing (vishing) social engineering attacks to breach Salesforce instances and download customer data. This data is then used to extort companies into paying a ransom to prevent the data from being leaked.

In a brief update to the article last night, Google said that it too fell victim to the same attack in June after one of its Salesforce CRM instances was breached and customer data was stolen. "In June, one of Google's corporate Salesforce instances was impacted by similar UNC6040 activity described in this post. Google responded to the activity, performed an impact analysis and began mitigations," reads Google's update.

AI

OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year (openai.com) 25

OpenAI will provide ChatGPT access to US federal agencies for $1 annually through the General Services Administration's new AI marketplace that also includes Google and Anthropic as approved vendors. The nominal pricing represents the deepest discount GSA has negotiated with software providers, surpassing previous deals with Adobe and Salesforce.

OpenAI said it will not use federal worker data to train its models and agencies face no renewal requirements. The $1 rate applies only to the ChatGPT chatbot interface, not OpenAI's API for custom software development.
Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
Google

Google's New Genie 3 AI Model Creates Video Game Worlds In Real Time (theverge.com) 15

An anonymous reader quotes a report from The Verge: Google DeepMind is releasing a new version of its AI "world" model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time. The company is also promising that users will be able to interact with the worlds for much longer than before and that the model will actually remember where things are when you look away from them. [...] Genie 3 seems like it could be a notable step forward. Users will be able to generate worlds with a prompt that supports a "few" minutes of continuous interaction, which is up from the 10-20 seconds of interaction possible with Genie 2, according to a blog post.

Google says that Genie 3 can keep spaces in visual memory for about a minute, meaning that if you turn away from something in a world and then turn back to it, things like paint on a wall or writing on a chalkboard will be in the same place. The worlds will also have a 720p resolution and run at 24fps. DeepMind is adding what it calls "promptable world events" into Genie 3, too. Using a prompt, you'll be able to do things like change weather conditions in a world or add new characters.
The model is launching as "a limited research preview" available to "a small cohort of academics and creators," according to Google. It's "exploring" how to bring Genie 3 to "additional testers."
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The around 100,000 conversation dataset provides a better sense of the scale of the problem, and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.
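Public web pages are indexable by search engines by default unless they opt out, for example with a `noindex` robots meta tag. The following is a minimal, hypothetical sketch of such a check using only Python's standard library; it is not how OpenAI's share pages actually worked, just an illustration of the opt-out mechanism:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def is_indexable(html):
    """True unless the page declares a 'noindex' robots directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return not any("noindex" in d for d in parser.directives)

print(is_indexable('<html><head><meta name="robots" content="noindex"></head></html>'))  # False
print(is_indexable('<html><head><title>Shared chat</title></head></html>'))              # True
```

A page that omits the tag entirely, as in the second call, is fair game for crawlers, which is why "shareable" links can end up in search results.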

Google

Google Agrees To Pause AI Workloads To Protect the Grid When Power Demand Spikes (theregister.com) 50

Google will pause non-essential AI workloads to protect power grids, the advertising giant announced on Monday. From a report: The web giant already does this sort of thing for non-essential workloads like processing YouTube vids, which it moves to datacenters where power is available rather than continuing to run them in places where demand for energy strains the grid. Under an agreement with Indiana Michigan Power (I&M) and the Tennessee Valley Authority (TVA), Google will use the same techniques for AI workloads.

The announcement comes as states served by the power companies brace for a heat wave that will likely strain the grid as residents use air conditioners and increase demand for energy. Amid debate about datacenters' consumption of power and water, the last thing that the Chocolate Factory needs is folks blaming its AI Mode search function for a power outage when temperatures top 100F (37.7C). Under the agreement, if energy demand surges or there's a disruption in the grid due to extreme weather, I&M and TVA can now request that Google reduce its power use by rescheduling workloads or limiting non-urgent tasks until the issue is resolved.
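The mechanics of this kind of demand response can be sketched as a scheduler that defers deferrable work when a utility curtailment signal arrives. This is a hypothetical toy illustration, not Google's actual system:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool  # non-urgent work (e.g., batch training) can be rescheduled

def schedule(jobs, grid_stressed):
    """Split jobs into (run_now, deferred) given a utility curtailment signal."""
    run_now, deferred = [], []
    for job in jobs:
        if grid_stressed and job.deferrable:
            deferred.append(job)  # shift to off-peak hours or another region
        else:
            run_now.append(job)   # latency-sensitive work keeps running
    return run_now, deferred

jobs = [Job("serve-search-traffic", False), Job("batch-model-training", True)]
run_now, deferred = schedule(jobs, grid_stressed=True)
print([j.name for j in run_now])   # ['serve-search-traffic']
print([j.name for j in deferred])  # ['batch-model-training']
```

The real agreement presumably involves far richer signals and contractual terms, but the core idea is the same: user-facing work keeps running, while flexible batch work moves in time or space.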

Data Storage

What Happens To Your Data If You Stop Paying for Cloud Storage? (wired.com) 38

Major cloud storage providers maintain unclear policies about deleting user data after subscription cancellations, Wired reports, with deletion timelines ranging from six months to indefinite preservation.

Apple reserves the right to delete iCloud backups after 180 days of device inactivity but does not specify what happens to general file storage. Google may delete content after users exceed free storage limits for extended periods, though files remain safe for two years after cancellation.

Microsoft may delete OneDrive files after six months of non-payment, while Dropbox preserves files indefinitely without expiration dates. All providers revert users to limited free storage tiers upon cancellation with Apple and Microsoft offering 5GB, Google providing 15GB, and Dropbox allowing 2GB.
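The free-tier limits the article cites can be expressed as a simple lookup for checking whether stored data exceeds a provider's free allowance after cancellation (figures taken from the article above; a sketch, not an official API):

```python
# Free storage tiers after cancellation, in GB, per the article
FREE_TIER_GB = {"Apple": 5, "Microsoft": 5, "Google": 15, "Dropbox": 2}

def over_limit(provider, stored_gb):
    """True if stored data exceeds the provider's free tier."""
    return stored_gb > FREE_TIER_GB[provider]

print(over_limit("Google", 20))  # True: 20 GB exceeds the 15 GB free tier
print(over_limit("Dropbox", 1))  # False: under the 2 GB free tier
```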
Piracy

How Napster Inspired a Generation of Rule-Breaking Entrepreneurs (fastcompany.com) 16

Napster's latest AI pivot "is the latest in a series of attempts by various owners to ride its brand cachet during emerging tech waves," Fast Company reported in July. In March, it sold for $207 million to Infinite Reality, an immersive digital media and e-commerce company, which also rebranded as Napster last month. Since 2020, other owners have included a British VR music startup (to create VR concerts) and two crypto-focused companies that bought it to anchor a Web3 music platform. Napster's launch follows a growing number of attempts to drive AI adoption beyond smartphones and laptops.
And tonight the Washington Post revisited the legacy of Napster's original mp3-sharing model, arguing Napster "inspired successive generations of entrepreneurs to risk flouting the law so they could grow enough to get the laws changed to suit them, including Airbnb and Uber." "Napster to me embodies the idea that it is better to seek forgiveness than permission," said Mark Lemley, director of Stanford Law School's Program in Law, Science & Technology. "It didn't work out well for Napster or for many of the others who got sued, but it worked out very well for everyone else — users, and eventually the content industry, too, which is making record profits...." [Napster co-founder Sean] Parker later advised Spotify, and Napster marketing chief Oliver Schusser is now Apple's vice president for music.

Although many users saw Napster as an extension of rock-and-roll rebellion, that was not the company's real plan. First, Fanning's majority-owning uncle, and then the venture capital firm Hummer Winblad, wanted the start-up to leverage its knowledge of individual music consumers to make lucrative deals with the labels, according to internal documents this reporter found in researching a book on Napster. They warned that if no agreement were reached and Napster failed, more decentralized pirate services would take the audience and offer the labels nothing.

But settlement talks failed. The litigation blitz also took down a Napster competitor called Scour, which a young Travis Kalanick had joined shortly after its founding. Kalanick later created Uber, dedicated to overthrowing taxi regulations.

The article concludes that "Now it is Microsoft, Meta, Apple and Google, among the largest companies in the world, bankrolling the consumption of all media.

"They, too, have absorbed Napster's lessons in realpolitik, namely to build it first and hope the regulators will either yield or catch up."
Privacy

Despite Breach and Lawsuits, Tea Dating App Surges in Popularity (www.cbc.ca) 39

The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company is now boasting it has more than 6.2 million users.

A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services." The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]

Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.

So, why is it so popular, despite the drama and risks?

Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.

Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")

Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."

Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
AI

AI Tools Gave False Information About Tsunami Advisories (sfgate.com) 40

After an 8.8 earthquake off the coast of Russia, "weather authorities leapt into action," reports SFGate, by modeling the threat of a tsunami "and releasing warnings and advisories to prepare their communities..."

But some residents of Hawaii, Japan and North America's West Coast turned to AI tools for updates that "appear to have badly bungled the critical task at hand." Google's "AI Overview," for example, reportedly gave "inaccurate information about authorities' safety warnings in Hawaii and elsewhere," according to reports on social media. Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage. Still, the issues speak to the growing role of AI tools in people's information diets... and to the tools' potentially dangerous fallibility... A critic of Google — who prompted the search tool to show an AI overview by adding "+ai" to their search — called the text that showed up "dangerously wrong."
Responding to similar complaints, Grok told one user on X.com "We'll improve accuracy."
