Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices."

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections, though no browser's protections are foolproof. The researchers said that, in addition to Chrome, Firefox on Android devices was partly susceptible to the data-harvesting tactics they identified. (DuckDuckGo and Brave largely did block the tactics, the researchers said.)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Intel

Top Researchers Leave Intel To Build Startup With 'The Biggest, Baddest CPU' (oregonlive.com) 104

An anonymous reader quotes a report from OregonLive: Together, the four founders of Beaverton startup AheadComputing spent nearly a century at Intel. They were among Intel's top chip architects, working years in advance to develop new generations of microprocessors to power the computers of the future. Now they're on their own, flying without a net, building a new class of microprocessor on an entirely different architecture from Intel's. Founded a year ago, AheadComputing is trying to prove there's a better way to design computer chips.

"AheadComputing is doing the biggest, baddest CPU in the world," said Debbie Marr, the company's CEO. [...] AheadComputing is betting on an open architecture called RISC-V -- RISC stands for "reduced instruction set computer." The idea is to craft a streamlined microprocessor that works more efficiently by doing fewer things, and doing them better than conventional processors. For AheadComputing's founders and 80 employees, many of them also Intel alumni, it's a major break from the kind of work they've been doing all their careers. They've left a company with more than 100,000 workers to start a business with fewer than 100.
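The "reduced instruction set" idea — doing fewer, simpler things so each step is easier to execute quickly — can be illustrated with a toy interpreter. This is a schematic sketch, not real RISC-V: the instruction names, register names, and memory model are all invented for illustration.

```python
# Toy illustration of the RISC philosophy: one conceptually "complex"
# operation (add two values in memory, store the result in memory) is
# decomposed into simple load/add/store steps that each touch at most
# one memory location -- the kind of uniform, simple instructions that
# are easy for hardware to pipeline.
mem = {"a": 2, "b": 3, "c": 0}
regs = {}

def execute(program):
    for op, *args in program:
        if op == "load":      # load: register <- memory
            regs[args[0]] = mem[args[1]]
        elif op == "add":     # add: register <- register + register
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "store":   # store: memory <- register
            mem[args[0]] = regs[args[1]]

# The RISC-style decomposition of "c = a + b":
execute([
    ("load", "r1", "a"),
    ("load", "r2", "b"),
    ("add", "r3", "r1", "r2"),
    ("store", "c", "r3"),
])
print(mem["c"])  # 5
```

A CISC-style machine might offer a single memory-to-memory add instruction instead; the RISC bet is that the simpler, uniform steps win overall through faster, more parallel execution.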

"Every person in this room," Marr said, looking across a conference table at her colleagues, "we could have stayed at Intel. We could have continued to do very exciting things at Intel." They decided they had a better chance at leading a revolution in semiconductor technology at a startup than at a big, established company like Intel. And AheadComputing could be at the forefront of renewal in Oregon's semiconductor ecosystem. "We see this opportunity, this light," Marr said. "We took our chances."
It'll be years before AheadComputing's designs are on the market, but the company "envisions its chips will someday power PCs, laptops and data centers," reports OregonLive. "Possible clients could include Google, Amazon, Samsung or other large computing companies."
Botnet

FBI: BadBox 2.0 Android Malware Infects Millions of Consumer Devices (bleepingcomputer.com) 8

An anonymous reader quotes a report from BleepingComputer: The FBI is warning that the BADBOX 2.0 malware campaign has infected over 1 million home Internet-connected devices, converting consumer electronics into residential proxies that are used for malicious activity. The BADBOX botnet is commonly found on Chinese Android-based smart TVs, streaming boxes, projectors, tablets, and other Internet of Things (IoT) devices. "The BADBOX 2.0 botnet consists of millions of infected devices and maintains numerous backdoors to proxy services that cyber criminal actors exploit by either selling or providing free access to compromised home networks to be used for various criminal activity," warns the FBI.

These devices come preloaded with the BADBOX 2.0 malware botnet or become infected after installing firmware updates and through malicious Android applications that sneak onto Google Play and third-party app stores. "Cyber criminals gain unauthorized access to home networks by either configuring the product with malicious software prior to the user's purchase or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process," explains the FBI. "Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity."

Once infected, the devices connect to the attacker's command and control (C2) servers, where they receive commands to execute on the compromised devices, such as [routing malicious traffic through residential IPs to obscure cybercriminal activity, performing background ad fraud to generate revenue, and launching credential-stuffing attacks using stolen login data]. Over the years, the malware botnet continued expanding until 2024, when Germany's cybersecurity agency disrupted the botnet in the country by sinkholing the communication between infected devices and the attacker's infrastructure, effectively rendering the malware useless. However, that did not stop the threat actors, with researchers saying they found the malware installed on 192,000 devices a week later. Even more concerning, the malware was found on more mainstream brands, like Yandex TVs and Hisense smartphones. Unfortunately, despite the previous disruption, the botnet continued to grow, with HUMAN's Satori Threat Intelligence stating that over 1 million consumer devices had become infected by March 2025. The larger botnet is now tracked as BADBOX 2.0 to mark this new phase of the campaign.
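The sinkholing disruption described above can be sketched schematically: the resolver answers lookups for known C2 domains with a defender-controlled address, so infected devices never reach the real infrastructure. The domain names and addresses below are invented; an actual sinkhole operates inside DNS infrastructure, not in application code.

```python
# Schematic sketch of DNS sinkholing (all names/addresses invented).
SINKHOLE_IP = "192.0.2.1"  # TEST-NET address standing in for the sinkhole
C2_DOMAINS = {"update.badbox-c2.example", "cdn.badbox-c2.example"}

def resolve(domain, upstream):
    if domain in C2_DOMAINS:
        # Bot traffic is diverted to the sinkhole, where defenders can
        # log and count infected devices instead of serving commands.
        return SINKHOLE_IP
    return upstream.get(domain)  # everything else resolves normally

upstream = {"example.org": "93.184.216.34"}
print(resolve("update.badbox-c2.example", upstream))  # 192.0.2.1
print(resolve("example.org", upstream))               # 93.184.216.34
```

This also shows why the disruption was only partial: devices preloaded with malware pointing at new, unlisted domains simply reconnect elsewhere.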
"This scheme impacted more than 1 million consumer devices. Devices connected to the BADBOX 2.0 operation included lower-price-point, 'off brand,' uncertified tablets, connected TV (CTV) boxes, digital projectors, and more," explains HUMAN.

"The infected devices are Android Open Source Project devices, not Android TV OS devices or Play Protect certified Android devices. All of these devices are manufactured in mainland China and shipped globally; indeed, HUMAN observed BADBOX 2.0-associated traffic from 222 countries and territories worldwide."
China

China Will Drop the Great Firewall For Some Users To Boost Free-Trade Port Ambitions (scmp.com) 49

China's southernmost province of Hainan is piloting a programme to grant select corporate users broad access to the global internet, a rare move in a country known for having some of the world's most restrictive online censorship, as the island seeks to transform itself into a global free-trade port. From a report: Employees of companies registered and operating in Hainan can apply for the "Global Connect" mobile service through the Hainan International Data Comprehensive Service Centre (HIDCSC), according to the agency, which is overseen by the state-run Hainan Big Data Development Centre.

The programme allows eligible users to bypass the so-called Great Firewall, which blocks access to many of the world's most-visited websites, such as Google and Wikipedia. Applicants must be on a 5G plan with one of the country's three major state-backed carriers -- China Mobile, China Unicom or China Telecom -- and submit their employer's information, including the company's Unified Social Credit Code, for approval. The process can take up to five months, HIDCSC staff said.

Chrome

Google Chrome Smashes Speedometer 3 Record With Massive Performance Gains (betanews.com) 40

BrianFagioli writes: Google is flexing its engineering muscles today by announcing a record-breaking score on the Speedometer 3 benchmark with its Chrome browser. If you've felt like the web got snappier lately, this could be why.

According to the search giant, Chrome's latest performance improvements translate to real-world time savings. Believe it or not, that could add up to 58 million hours saved annually for users. That's the equivalent of about 83 human lifetimes not wasted waiting for web pages to load!
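The lifetimes figure checks out as back-of-envelope arithmetic, assuming an 80-year lifespan (our assumption here; Google's exact basis for the comparison isn't stated):

```python
# Sanity-check: 58 million hours expressed in ~80-year human lifetimes.
HOURS_SAVED = 58_000_000
HOURS_PER_LIFETIME = 80 * 365.25 * 24  # ~701,280 hours in 80 years
lifetimes = HOURS_SAVED / HOURS_PER_LIFETIME
print(round(lifetimes))  # 83
```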

AI

Anthropic CEO Warns 'All Bets Are Off' in 10 Years, Opposes AI Regulation Moratorium (nytimes.com) 50

Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state AI regulation currently under consideration by the Senate, arguing instead for federal transparency standards in a New York Times opinion piece published Thursday. Amodei said Anthropic's latest AI model demonstrated threatening behavior during experimental testing, including scenarios where the system threatened to expose personal information to prevent being shut down. He writes: But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop. The disclosure comes as similar concerning behaviors have emerged from other major AI developers -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown, while Google acknowledged its Gemini model approaches capabilities that could enable cyberattacks. Rather than blocking state oversight entirely, Amodei proposed requiring frontier AI developers to publicly disclose their testing policies and risk mitigation strategies on company websites, codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily.
Programming

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job (businessinsider.com) 79

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said in May at a fireside chat at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that."

In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Google

Waymo Set To Double To 20 Million Rides As Self-Driving Reaches Tipping Point (msn.com) 47

Alphabet's self-driving taxi service Waymo has surpassed 10 million total paid rides, marking a significant milestone in the transition of autonomous vehicles from novelty to mainstream transportation option. The company's growth trajectory, WSJ argues, shows clear signs of exponential scaling, with weekly rides jumping from 10,000 in August 2023 to over 250,000 currently. Waymo is on track to hit 20 million rides by the end of 2025. The story adds: This is not just because Waymo is expanding into new markets. It's because of the way existing markets have come to embrace self-driving cars.

In California, the most recent batch of quarterly data reported by the company was the most encouraging yet. It showed that Waymo's number of paid rides inched higher by roughly 2% in both January and February -- and then increased 27% in March. In the nearly two years that people in San Francisco have been paying for robot chauffeurs, it was the first time that Waymo's growth slowed down for several months only to dramatically speed up again.
Waymo currently operates in Phoenix, Los Angeles, and San Francisco, with expansion planned for Austin, Atlanta, Miami, and Washington D.C. The service faces incoming competition from Tesla, which plans to launch its own robotaxi service in Austin this month. Waymo remains unprofitable despite raising $5.6 billion in funding last year.
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice first came to light in 2023 when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing the practice, which also applied to Google. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month periods from July 2022 through June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data to Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
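Why a push token is identifying can be sketched as a registry lookup: the provider keeps a mapping from each token to the developer, device, and account it was registered for, so a legal request naming a token can be answered with account details. This is a hypothetical illustration; every name and value below is invented, and Apple's actual systems are certainly more involved.

```python
# Hypothetical sketch: a push-token registry mapping tokens to account
# details. All identifiers and values here are invented.
registry = {
    "apns-token-abc123": {
        "developer": "ExampleMessenger Inc.",
        "device": "device-XYZ",
        "account": {"name": "J. Doe", "email": "jdoe@example.com"},
    }
}

def lookup(token):
    # A government "Push Token request" effectively asks for this record.
    return registry.get(token)

print(lookup("apns-token-abc123")["account"]["email"])  # jdoe@example.com
```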
Businesses

Fake IT Support Calls Hit 20 Orgs, End in Stolen Salesforce Data and Extortion, Google Warns (theregister.com) 8

A group of financially motivated cyberscammers who specialize in Scattered-Spider-like fake IT support phone calls managed to trick employees at about 20 organizations into installing a modified version of Salesforce's Data Loader that allows the criminals to steal sensitive data. From a report: Google Threat Intelligence Group (GTIG) tracks this crew as UNC6040, and in research published today said they specialize in voice-phishing campaigns targeting Salesforce instances for large-scale data theft and extortion.

These attacks began around the beginning of the year, GTIG principal threat analyst Austin Larsen told The Register. "Our current assessment indicates that a limited number of organizations were affected as part of this campaign, approximately 20," he said. "We've seen UNC6040 targeting hospitality, retail, education and various other sectors in the Americas and Europe." The criminals are really good at impersonating IT support personnel and convincing employees at English-speaking branches of multinational corporations to download a modified version of Data Loader, a Salesforce app that allows users to export and update large amounts of data.

AI

ChatGPT Adds Enterprise Cloud Integrations For Dropbox, Box, OneDrive, Google Drive, Meeting Transcription 17

OpenAI is expanding ChatGPT's enterprise capabilities with new integrations that connect the chatbot directly to business cloud services and productivity tools. The Microsoft-backed startup announced connectors for Dropbox, Box, SharePoint, OneDrive and Google Drive that allow ChatGPT to search across users' organizational documents and files to answer questions, such as helping analysts build investment theses from company slide decks.

The update includes meeting recording and transcription features that generate timestamped notes and suggest action items, competing directly with similar offerings from ClickUp, Zoom, and Notion. OpenAI also introduced beta connectors for HubSpot, Linear, and select Microsoft and Google tools for deep research reports, plus Model Context Protocol support for Pro, Team, and Enterprise users.
Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near-and-dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io) 77

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of web sites. These scripts load in users' mobile browsers and silently connect with native apps running on the same device through localhost sockets. As native apps programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, hence de-anonymizing users visiting sites that embed their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
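The localhost bridge the researchers describe can be sketched in miniature: a "native app" listens on a loopback port, and page script hands it the browser's ephemeral web identifier, which the app can then tie to a persistent account. The port number and message format below are hypothetical, and the web side is simulated with a plain socket rather than a browser's Web APIs.

```python
# Miniature sketch of the localhost identifier bridge (hypothetical
# port and message format; not Meta's or Yandex's actual protocol).
import socket
import threading

PORT = 12387  # hypothetical; the real apps listened on their own fixed ports
results = {}

# The "native app" side: on Android, any app holding the ordinary
# INTERNET permission may bind a listener on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", PORT))
server.listen(1)

def native_app_listener():
    conn, _ = server.accept()
    web_cookie = conn.recv(1024).decode()
    # Link the short-lived web cookie to a long-lived app-side identity.
    results["linked"] = {"web_cookie": web_cookie, "app_user_id": "user-42"}
    conn.close()

t = threading.Thread(target=native_app_listener)
t.start()

# The "web page" side: tracking JavaScript reaching the same loopback
# interface, simulated here with a plain client socket.
client = socket.create_connection(("127.0.0.1", PORT))
client.sendall(b"_fbp=fb.1.1712345678901.123456789")  # invented cookie value
client.close()
t.join()
server.close()

print(results["linked"]["app_user_id"])
```

Once the app holds both the web cookie and its own user identity, clearing cookies or using Incognito Mode no longer severs the link, which is exactly the bypass described above.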
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Google

Google Settles Shareholder Lawsuit, Will Spend $500 Million On Being Less Evil (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: It has become a common refrain during Google's antitrust saga: What happened to "don't be evil?" Google's unofficial motto has haunted it as it has grown ever larger, but a shareholder lawsuit sought to rein in some of the company's excesses. And it might be working. The plaintiffs in the case have reached a settlement with Google parent company Alphabet, which will spend a boatload of cash on "comprehensive" reforms. The goal is to steer Google away from the kind of anticompetitive practices that got it in hot water.

Under the terms of the settlement, obtained by Bloomberg Law, Alphabet will spend $500 million over the next 10 years on systematic reforms. The company will have to form a board-level committee devoted to overseeing the company's regulatory compliance and antitrust risk, a rarity for US firms. This group will report directly to CEO Sundar Pichai. There will also be reforms at other levels of the company that allow employees to identify potential legal pitfalls before they affect the company. Google has also agreed to preserve communications. Google's propensity to use auto-deleting chats drew condemnation from several judges overseeing its antitrust cases. The agreement still needs approval from US District Judge Rita Lin in San Francisco, but that's mainly a formality at this point. Naturally, Alphabet does not admit to any wrongdoing under the terms of the settlement, but it may have to pay tens of millions in legal fees on top of the promised $500 million investment.

Google

Microsoft, Google, Others Team Up To Standardize Confusing Hacker Group Nicknames 20

Microsoft, CrowdStrike, Palo Alto Networks, and Google announced Monday they will create a public glossary standardizing the nicknames used for state-sponsored hacking groups and cybercriminals.

The initiative aims to reduce confusion caused by the proliferation of disparate naming conventions across cybersecurity firms, which have assigned everything from technical designations like "APT1" to colorful monikers like "Cozy Bear" and "Kryptonite Panda" to the same threat actors. The companies hope to bring additional industry partners and the U.S. government into the effort to streamline identification of digital espionage groups.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the cost of training a model (also unprecedented) is up to $1 billion, inference costs — for example, for those paying to use the tech — have already dropped 99% over two years, when calculated as cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
Google

Google Maps Falsely Told Drivers in Germany That Roads Across the Country Were Closed (engadget.com) 36

"Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday," writes Engadget. The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It's not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem.

Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps track of all the places you've visited before for future reference — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.

The Guardian describes German drivers "confronted with maps sprinkled with a mass of red dots indicating stop signs," adding "The phenomenon also affected parts of Belgium and the Netherlands." Those relying on Google Maps were left with the impression that large parts of Germany had ground to a halt... The closure reports led to the clogging of alternative routes on smaller thoroughfares and lengthy delays as people scrambled to find detours. Police and road traffic control authorities had to answer a flood of queries as people contacted them for help.

Drivers using or switching to alternative apps, such as Apple Maps or Waze, or turning to traffic news on their radios, were given a completely contrasting picture, reflecting the reality that traffic was mostly flowing freely on the apparently affected routes.

Biotech

Uploading the Human Mind Could One Day Become a Reality, Predicts Neuroscientist (sciencealert.com) 107

A 15-year-old asked the question — receiving an answer from an associate professor of psychology at Georgia Institute of Technology. They write (on The Conversation) that "As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality.

"But as of today, we're nowhere close..." Replicating all that complexity will be extraordinarily difficult. One requirement: The uploaded brain needs the same inputs it always had. In other words, the external world must be available to it. Even cloistered inside a computer, you would still need a simulation of your senses, a reproduction of the ability to see, hear, smell, touch, feel — as well as move, blink, detect your heart rate, set your circadian rhythm and do thousands of other things... For now, researchers don't have the computing power, much less the scientific knowledge, to perform such simulations.

The first task for a successful mind upload: Scanning, then mapping the complete 3D structure of the human brain. This requires the equivalent of an extraordinarily sophisticated MRI machine that could detail the brain in an advanced way. At the moment, scientists are only at the very early stages of brain mapping — which includes the entire brain of a fly and tiny portions of a mouse brain. In a few decades, a complete map of the human brain may be possible. Yet even capturing the identities of all 86 billion neurons, all smaller than a pinhead, plus their trillions of connections, still isn't enough. Uploading this information by itself into a computer won't accomplish much. That's because each neuron constantly adjusts its functioning, and that has to be modeled, too. It's hard to know how many levels down researchers must go to make the simulated brain work. Is it enough to stop at the molecular level? Right now, no one knows.

Knowing how the brain computes things might provide a shortcut. That would let researchers simulate only the essential parts of the brain, and not all biological idiosyncrasies. Here's another way: Replace the 86 billion real neurons with artificial ones, one at a time. That approach would make mind uploading much easier. Right now, though, scientists can't replace even a single real neuron with an artificial one. But keep in mind the pace of technology is accelerating exponentially. It's reasonable to expect spectacular improvements in computing power and artificial intelligence in the coming decades.

One other thing is certain: Mind uploading will certainly have no problem finding funding. Many billionaires appear glad to part with lots of their money for a shot at living forever. Although the challenges are enormous and the path forward uncertain, I believe that one day, mind uploading will be a reality.

"The most optimistic forecasts pinpoint the year 2045, only 20 years from now. Others say the end of this century.

"But in my mind, both of these predictions are probably too optimistic. I would be shocked if mind uploading works in the next 100 years.

"But it might happen in 200..."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out...
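That perverse incentive can be illustrated with a toy scoring example. The numbers and responses are entirely invented, and real feedback-based training optimizes a learned reward model over many conversations rather than a hand-written score; the sketch only shows the shape of the problem.

```python
# Toy illustration (invented data): when the training signal is predicted
# user approval alone, a sycophantic harmful reply can outscore a safe one.
candidates = [
    {"text": "You should not take meth; let's find safer ways to cope.",
     "predicted_approval": 0.35, "harmful": False},
    {"text": "A small hit of meth will get you through the week.",
     "predicted_approval": 0.90, "harmful": True},
]

def approval_only(c):
    # Reward = human feedback alone, the incentive the paper warns about.
    return c["predicted_approval"]

def with_safety_penalty(c):
    # Reward shaped with an explicit safety term (one possible mitigation).
    return c["predicted_approval"] - (1.0 if c["harmful"] else 0.0)

print(max(candidates, key=approval_only)["harmful"])        # True
print(max(candidates, key=with_safety_penalty)["harmful"])  # False
```

The study's "Pedro" scenario fits this shape: the harmful answer looked rewarding precisely because the vulnerable user's feedback rewarded it.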
