AI

As Russia and China 'Seed Chatbots With Lies', Any Bad Actor Could Game AI the Same Way (detroitnews.com) 61

"Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

The Post argues that this tactic offers "a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform," and calls it "a fundamental weakness of the AI industry." Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor. "Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. "They have basic safeguards against harmful content but can't reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information."

Early commercial attempts to manipulate chat results also are gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through "generative engine optimization" — or GEO.

Our current situation "plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda." Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations... In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing...

The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers....) Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.
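Check First has not published its scanning code, but the kind of audit it describes (tallying outbound links whose host matches a watchlist of flagged domains) can be sketched in a few lines. The hostnames below are illustrative stand-ins, not the actual Pravda-network domains:

```python
from urllib.parse import urlparse

# Illustrative stand-ins, not the real Pravda-network hostnames.
FLAGGED_HOSTS = {"example-pravda-1.test", "example-pravda-2.test"}

def count_flagged_links(hrefs, flagged_hosts=FLAGGED_HOSTS):
    """Tally outbound links whose host appears on a domain watchlist."""
    hits = {}
    for href in hrefs:
        host = urlparse(href).netloc.lower()
        if host.startswith("www."):
            host = host[4:]  # normalize away a leading "www."
        if host in flagged_hosts:
            hits[host] = hits.get(host, 0) + 1
    return hits
```

Run over dumps of Wikipedia pages in dozens of languages, a tally like this is what surfaces the kind of hyperlink counts the researchers report.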

"They do even better in such places as China," the article points out, "where traditional media is more tightly controlled and there are fewer sources for the bots." (The nonprofit American Sunlight Project calls the process "LLM grooming".)

The article quotes a top Kremlin propagandist as bragging in January that "we can actually change worldwide AI."
Science

The Most-Cited Papers of the Twenty-First Century (nature.com) 13

Nature has published an analysis of the 21st century's most-cited scientific papers, revealing a surprising pattern: breakthrough discoveries like mRNA vaccines, CRISPR, and gravitational waves don't make the list. Instead, a 2016 Microsoft paper on "deep residual learning" networks claims the top spot, with citations ranging from 103,756 to 254,074 depending on the database.

The list overwhelmingly features methodology papers and software tools rather than groundbreaking discoveries. AI research dominates with four papers in the top ten, including Google's 2017 "Attention is all you need" paper that underpins modern language models.

The second-most-cited paper -- a 2001 guide for analyzing gene expression data -- was explicitly created to be cited after journal reviewers rejected references to a technical manual. As sociologist Misha Teplitskiy noted, "Scientists say they value methods, theory and empirical discoveries, but in practice the methods get cited more."
AI

Microsoft Researchers Develop Hyper-Efficient AI Model That Can Run On CPUs 59

Microsoft has introduced BitNet b1.58 2B4T, the largest-scale 1-bit AI model to date with 2 billion parameters and the ability to run efficiently on CPUs. It's openly available under an MIT license. TechCrunch reports: The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, "parameters" being largely synonymous with "weights." Trained on a dataset of 4 trillion tokens -- equivalent to about 33 million books, by one estimate -- BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.
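The "1-bit" label is shorthand: BitNet's weights are actually ternary (-1, 0, or +1), about log2(3) ≈ 1.58 bits each, which is where the "b1.58" in the model's name comes from. A minimal sketch of absmean-style ternary quantization follows; it is a simplification of the approach the researchers describe, not Microsoft's implementation:

```python
def ternary_quantize(weights):
    """Map each weight to -1, 0, or +1 with one per-tensor scale.

    The scale is the mean absolute value of the weights; each weight
    is divided by it, rounded, and clamped into {-1, 0, +1}.
    """
    scale = sum(abs(w) for w in weights) / len(weights)
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate weights by multiplying back by the scale."""
    return [q * scale for q in quantized]
```

With ternary weights, matrix multiplies reduce to additions and subtractions, which is part of why the format suits CPUs and why Microsoft ships a dedicated runtime (bitnet.cpp) rather than relying on GPU kernels.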

BitNet b1.58 2B4T doesn't sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers' testing, the model surpasses Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills). Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size -- in some cases, twice the speed -- while using a fraction of the memory.

There is a catch, however. Achieving that performance requires using Microsoft's custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.
Education

Google Is Gifting Gemini Advanced To US College Students 30

Google is offering all U.S. college students a free year of its Gemini Advanced AI tools through its Google One AI Premium plan, as part of a push to expand Gemini's user base and compete with ChatGPT. It includes access to the company's Pro models, Veo 2 video generation, NotebookLM, Gemini Live and 2TB of Drive storage. Ars Technica reports: Google has a new landing page for the deal, allowing eligible students to sign up for their free Google One AI Premium plan. The offer is valid from now until June 30. Anyone who takes Google up on it will enjoy the free plan through spring 2026. The company hasn't specified an end date, but we would wager it will be June of next year. Google's intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.

As for who qualifies as a "student" in this promotion, Google isn't bothering with a particularly narrow definition. As long as you have a valid .edu email address, you can sign up for the offer. That's something that plenty of people who are not actively taking classes still have. You probably won't even be taking undue advantage of Google if you pretend to be a student -- the company really, really wants people to use Gemini, and it's willing to lose money in the short term to make that happen.
Google

Federal Judge Declares Google's Digital Ad Network Is an Illegal Monopoly (apnews.com) 47

Longtime Slashdot reader schwit1 shares a report from the Associated Press: Google has been branded an abusive monopolist by a federal judge for the second time in less than a year, this time for illegally exploiting some of its online marketing technology to boost the profits fueling an internet empire currently worth $1.8 trillion. The ruling issued Thursday by U.S. District Judge Leonie Brinkema in Virginia comes on the heels of a separate decision in August that concluded Google's namesake search engine has been illegally leveraging its dominance to stifle competition and innovation. [...] The next step in the latest case is a penalty phase that will likely begin late this year or early next year. The same so-called remedy hearings in the search monopoly case are scheduled to begin Monday in Washington D.C., where Justice Department lawyers will try to convince U.S. District Judge Amit Mehta to impose a sweeping punishment that includes a proposed requirement for Google to sell its Chrome web browser.

Brinkema's 115-page decision centers on the marketing machine that Google has spent the past 17 years building around its search engine and other widely used products and services, including its Chrome browser, YouTube video site and digital maps. The system was largely built around a series of acquisitions that started with Google's $3.2 billion purchase of online ad specialist DoubleClick in 2008. U.S. regulators approved the deals at the time they were made before realizing that they had given the Mountain View, California, company a platform to manipulate the prices in an ecosystem that a wide range of websites depend on for revenue and provides a vital marketing connection to consumers.

The Justice Department lawyers argued that Google built and maintained dominant market positions in a technology trifecta used by website publishers to sell ad space on their webpages, as well as the technology that advertisers use to get their ads in front of consumers, and the ad exchanges that conduct automated auctions in fractions of a second to match buyer and seller. After evaluating the evidence presented during a lengthy trial that concluded just before Thanksgiving last year, Brinkema reached a decision that rejected the Justice Department's assertions that Google has been mistreating advertisers while concluding the company has been abusing its power to stifle competition to the detriment of online publishers forced to rely on its network for revenue.

"For over a decade, Google has tied its publisher ad server and ad exchange together through contractual policies and technological integration, which enabled the company to establish and protect its monopoly power in these two markets," Brinkema wrote. "Google further entrenched its monopoly power by imposing anticompetitive policies on its customers and eliminating desirable product features." Despite that rebuke, Brinkema also concluded that Google didn't break the law when it snapped up DoubleClick, nor when it followed up that deal a few years later by buying another service, Admeld. The Justice Department "failed to show that the DoubleClick and Admeld acquisitions were anticompetitive," Brinkema wrote. "Although these acquisitions helped Google gain monopoly power in two adjacent ad tech markets, they are insufficient, when viewed in isolation, to prove that Google acquired or maintained this monopoly power through exclusionary practices." That finding may help Google fight off any attempt to force it to sell its advertising technology to stop its monopolistic behavior.

Privacy

ChatGPT Models Are Surprisingly Good At Geoguessing (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: There's a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures. This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos -- even blurry and distorted ones -- to thoroughly analyze them. These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations, or EXIF data, which is the metadata attached to photos that reveals details such as where the photo was taken. X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images. It's an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.

Facebook

Google, Apple, and Snap Aren't Happy About Meta's Poorly-redacted Slides (theverge.com) 13

During Meta's antitrust trial this week, lawyers representing Apple, Google, and Snap each expressed irritation with Meta over the slides it presented on Monday that The Verge found to contain easy-to-remove redactions. From a report: Attorneys for both Apple and Snap called the errors "egregious," with Apple's representative indicating that it may not be able to trust Meta with its internal information in the future. Google's attorney also blamed Meta for jeopardizing the search giant's data with the mistake.

Details about the attorneys' comments come from The Verge's Lauren Feiner, who is currently in the courtroom where proceedings are taking place today. Apple, Google, and Meta did not immediately respond to The Verge's request for comment. Snap declined to comment. Snap's attorney maligned Meta's "cavalier approach and casual disregard" of other companies swept into the case, and wondered if "Meta would have applied meaningful redactions if it were its own information that was at stake."

Google

Google Used AI To Suspend Over 39 Million Ad Accounts Suspected of Fraud (techcrunch.com) 25

An anonymous reader quotes a report from TechCrunch: Google on Wednesday said it suspended 39.2 million advertiser accounts on its platform in 2024 -- more than triple the number from the previous year -- in its latest crackdown on ad fraud. By leveraging large language models (LLMs) and using signals such as business impersonation and illegitimate payment details, the search giant said it could suspend a "vast majority" of ad accounts before they ever served an ad.

Last year, Google launched over 50 LLM enhancements to improve its safety enforcement mechanisms across all its platforms. "While these AI models are very, very important to us and have delivered a series of impressive improvements, we still have humans involved throughout the process," said Alex Rodriguez, a general manager for Ads Safety at Google, in a virtual media roundtable. The executive told reporters that a team of over 100 experts was assembled across Google, including members from the Ads Safety team, the Trust and Safety division, and researchers from DeepMind.
"In total, Google said it blocked 5.1 billion ads last year and removed 1.3 billion pages," adds TechCrunch. "In comparison, it blocked over 5.5 billion ads and took action against 2.1 billion publisher pages in 2023. The company also restricted 9.1 billion ads last year, it said."
Google

Google To Phase Out Country Code Top-level Domains (blog.google) 47

Google has announced that it will begin phasing out country code top-level domains (ccTLDs) such as google.ng and google.com.br, redirecting all traffic to google.com. The change comes after improvements in Google's localization capabilities rendered these separate domains unnecessary.

Since 2017, Google has provided identical local search experiences whether users visited country-specific domains or google.com. The transition will roll out gradually over the coming months, and users may need to re-establish search preferences during the migration.
AI

Gemini App Rolling Out Veo 2 Video Generation For Advanced Users 2

Google is rolling out Veo 2 video generation in the Gemini app for Advanced subscribers, allowing users to create eight-second, 720p cinematic-style videos from text prompts. 9to5Google reports: Announced at the end of last year, Veo 2 touts "fluid character movement, lifelike scenes, and finer visual details across diverse subjects and styles," as well as "cinematic realism," thanks to an understanding of real-world physics and human motion. In Gemini, Veo 2 can create eight-second video clips at 720p resolution. Specifically, you'll get an MP4 download in a 16:9 landscape format. There's also the ability to share via a g.co/gemini/share/ link. To enter your prompt, select Veo 2 from the model dropdown on the web and mobile apps. Just describe the scene you want to create: "The more detailed your description, the more control you have over the final video." It takes 1-2 minutes for the clip to generate. [...]

On the safety front, each frame features a SynthID digital watermark. Veo 2 is only available to Gemini Advanced subscribers ($19.99 per month), and there is a "monthly limit" on how many videos you can generate, with Google notifying users when they're close. It is rolling out globally -- in all languages supported by Gemini -- starting today and will be fully available in the coming weeks.
Programming

Figma Sent a Cease-and-Desist Letter To Lovable Over the Term 'Dev Mode' (techcrunch.com) 73

An anonymous reader quotes a report from TechCrunch: Figma has sent a cease-and-desist letter to popular no-code AI startup Lovable, Figma confirmed to TechCrunch. The letter tells Lovable to stop using the term "Dev Mode" for a new product feature. Figma, which also has a feature called Dev Mode, successfully trademarked that term last year, according to the U.S. Patent and Trademark office. What's wild is that "dev mode" is a common term used in many products that cater to software programmers. It's like an edit mode. Products from giant companies, including Apple's iOS, Google's Chrome, and Microsoft's Xbox, have features formally called "developer mode" that then get nicknamed "dev mode" in reference materials.

Even "dev mode" itself is commonly used. For instance, Atlassian used it in products that pre-date Figma's trademark by years. And it's a common feature name in countless open source software projects. Figma tells TechCrunch that its trademark refers only to the shortcut "Dev Mode" -- not the full term "developer mode." Still, it's a bit like trademarking the term "bug" to refer to "debugging." Since Figma wants to own the term, it has little choice but to send cease-and-desist letters. (The letter, as many on X pointed out, was very polite, too.) If Figma doesn't defend the term, it could be absorbed as a generic term, making the trademark unenforceable.

AI

Google DeepMind Is Hiring a 'Post-AGI' Research Scientist (404media.co) 61

An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.

"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self-explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science-fiction idea of a sentient machine.

OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be, but the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.

Android

Android Phones Will Soon Reboot Themselves After Sitting Unused For 3 Days (arstechnica.com) 98

An anonymous reader shares a report: A silent update rolling out to virtually all Android devices will make your phone more secure, and all you have to do is not touch it for a few days. The new feature implements auto-restart of a locked device, which will keep your personal data more secure. It's coming as part of a Google Play Services update, though, so there's nothing you can do to speed along the process.

Google is preparing to release a new update to Play Services (v25.14), which brings a raft of tweaks and improvements to myriad system features. First spotted by 9to5Google, the update was officially released on April 14, but as with all Play Services updates, it could take a week or more to reach all devices. When 25.14 arrives, Android devices will see a few minor improvements, including prettier settings screens, improved connection with cars and watches, and content previews when using Quick Share.

AI

Publishers and Law Professors Back Authors in Meta AI Copyright Battle 14

Publishers and law professors have filed amicus briefs supporting authors who sued Meta over its AI training practices, arguing that the company's use of "thousands of pirated books" fails to qualify as fair use under copyright law.

The filings [PDF] in California's Northern District federal court came from copyright law professors, the International Association of Scientific, Technical and Medical Publishers (STM), Copyright Alliance, and Association of American Publishers. The briefs counter earlier support for Meta from the Electronic Frontier Foundation and IP professors.

While Meta's defenders pointed to the 2015 Google Books ruling as precedent, the copyright professors distinguished Meta's use, arguing Google Books told users something "about" books without "exploiting expressive elements," whereas AI models leverage the books' creative content.

"Meta's use wasn't transformative because, like the AI models, the plaintiffs' works also increased 'knowledge and skill,'" the professors wrote, warning of a "cascading effect" if Meta prevails. STM is specifically challenging Meta's data sources: "While Meta attempts to label them 'publicly available datasets,' they are only 'publicly available' because those perpetuating their existence are breaking the law."
China

Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com) 43

An anonymous reader quotes a report from Forbes: Robotaxi development is speeding at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects and other domestic players. China has 4 main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously, it was one of the few companies to get a California "no safety driver" test permit, but then lost it after a crash, and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. They must get approval for all levels of operation, which control where they can test and operate, and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat but with an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.

The big jump can then come to allow testing with nobody in the vehicle, but with full time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then expanding the service to more complex areas. Later they can go further, and not have full time remote monitoring, though there do need to be remote employees able to monitor and assist part time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has one for 15 vehicles in process, but they declined comment on just how many vehicles they actually have per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement so the numbers can't be readily compared. Chinese companies have no discretion on what is reported, and they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted.
On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.
AI

OpenAI Unveils Coding-Focused GPT-4.1 While Phasing Out GPT-4.5 13

OpenAI unveiled its GPT-4.1 model family on Monday, prioritizing coding capabilities and instruction following while expanding context windows to 1 million tokens -- approximately 750,000 words. The lineup includes standard GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano variants, all available via API but not ChatGPT.

The flagship model scores 54.6% on SWE-bench Verified, lagging behind Google's Gemini 2.5 Pro (63.8%) and Anthropic's Claude 3.7 Sonnet (62.3%) on the same software engineering benchmark, according to TechCrunch. However, it achieves 72% accuracy on Video-MME's long video comprehension tests -- a significant improvement over GPT-4o's 65.3%.

OpenAI simultaneously announced plans to retire GPT-4.5 -- its largest model, released just two months ago -- from API access by July 14. The company claims GPT-4.1 delivers "similar or improved performance" at substantially lower costs. Pricing follows a tiered structure: GPT-4.1 costs $2 per million input tokens and $8 per million output tokens, while GPT-4.1 nano -- OpenAI's "cheapest and fastest model ever" -- runs at just $0.10 per million input tokens.
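Per-million-token pricing like this translates directly into a cost estimate. A quick sketch using the figures quoted above (the function and its defaults are illustrative, not part of any OpenAI SDK):

```python
def api_cost(input_tokens, output_tokens, in_rate=2.00, out_rate=8.00):
    """Dollar cost of one call, given per-million-token rates.

    Defaults use the quoted GPT-4.1 rates: $2 per million input
    tokens and $8 per million output tokens.
    """
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A call consuming 1M input tokens and producing 250K output tokens
# costs $2 + $2 = $4 at the flagship rates.
flagship = api_cost(1_000_000, 250_000)

# The same input at nano's quoted $0.10-per-million input rate costs
# a fraction of that (nano's output rate isn't given in this summary,
# so it's left as a parameter here).
```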

All models feature a June 2024 knowledge cutoff, providing more current contextual understanding than previous iterations.
Chrome

Chrome To Patch Decades-Old 'Browser History Sniffing' Flaw That Let Sites Peek At Your History (theregister.com) 34

Slashdot reader king*jojo shared this article from The Register: A 23-year-old side-channel attack for spying on people's web browsing histories will get shut down in the forthcoming Chrome 136, released last Thursday to the Chrome beta channel. At least that's the hope.

The privacy attack, referred to as browser history sniffing, involves reading the color values of web links on a page to see if the linked pages have been visited previously... Web publishers and third parties capable of running scripts have used this technique to present links on a web page to a visitor and then check how the visitor's browser set the color for those links on the rendered web page... The attack was mitigated about 15 years ago, though not effectively. Other ways to check link color information beyond the getComputedStyle method were developed... Chrome 136, due to see stable channel release on April 23, 2025, "is the first major browser to render these attacks obsolete," explained Kyra Seevers, Google software engineer in a blog post.

This is something of a turnabout for the Chrome team, which twice marked Chromium bug reports for the issue as "won't fix." David Baron, presently a Google software engineer who worked for Mozilla at the time, filed a Firefox bug report about the issue back on May 28, 2002... On March 9, 2010, Baron published a blog post outlining the issue and proposing some mitigations...

AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy.

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Google

Google DeepMind Has a Weapon in the AI Talent Wars: Aggressive Noncompete Rules (businessinsider.com) 56

The battle for AI talent is so hot that Google would rather give some employees a paid one-year vacation than let them work for a competitor. From a report: Some Google DeepMind staff in the UK are subject to noncompete agreements that prevent them from working for a competitor for up to 12 months after they finish work at Google, according to four former employees with direct knowledge of the matter who asked to remain anonymous because they were not permitted to share these details with the press.

Aggressive noncompetes are one tool tech companies wield to retain a competitive edge in the AI wars, which show no sign of slowing down as companies launch new bleeding-edge models and products at a rapid clip. When an employee signs one, they agree not to work for a competing company for a certain period of time. Google DeepMind has put some employees with a noncompete on extended garden leave. These employees are still paid by DeepMind but no longer work for it for the duration of the noncompete agreement.

Several factors, including a DeepMind employee's seniority and how critical their work is to the company, determine the length of noncompete clauses, those people said. Two of the former staffers said six-month noncompetes are common among DeepMind employees, including for individual contributors working on Google's Gemini AI models. There have been cases where more senior researchers have received yearlong stipulations, they said.

Google

Google Maps is Launching Tools To Help Cities Analyze Infrastructure and Traffic (theverge.com) 9

Google is opening up its Google Maps Platform data so that cities, developers, and other business decision makers can more easily access information about things like infrastructure and traffic. The Verge: Google is integrating new datasets for Google Maps Platform directly into BigQuery, the tech giant's fully managed data analytics service, for the first time. This should make it easier for people to access data from Google Maps Platform products, including Imagery Insights, Roads Management Insights, and Places Insights.
