AI

Has an AI Backlash Begun? (wired.com) 134

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writer's strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet, when democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially recent college graduates looking for entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...
Social Networks

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts (9to5mac.com) 38

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots:

"For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit."

Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them. For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman.

It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..."

But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."
But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.
AI

Ask Slashdot: Do You Use AI - and Is It Actually Helpful? 247

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.

So what's the point?

Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?"). But their biggest question seems to be: "Do you have to correct it?"

And if that's the case, then when you add up all that correction time... "Is it actually helpful?"

Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org) 41

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
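The loop described above (sample an agent from the population, ask an LLM for one change, score the child on a benchmark, and archive it) can be sketched in a few lines of Python. This is only an illustration: `llm_propose_change` and `evaluate` are hypothetical stand-ins for the real system's LLM call and its SWE-bench/Polyglot scoring, not code from the paper.

```python
import random

def llm_propose_change(agent_code: str) -> str:
    """Stand-in for asking an LLM for one 'interesting' modification
    to the sampled agent's code."""
    return agent_code + f"\n# tweak {random.randint(0, 999)}"

def evaluate(agent_code: str) -> float:
    """Stand-in for scoring the agent on a coding benchmark (0.0 to 1.0)."""
    return min(1.0, agent_code.count("tweak") * 0.05)

def run_dgm(iterations: int) -> list[tuple[str, float]]:
    # Population starts with a single seed coding agent.
    population = [("pass  # seed coding agent", 0.0)]
    for _ in range(iterations):
        # Sample a parent (the paper biases sampling toward novel,
        # high-scoring agents; uniform choice keeps the sketch short).
        parent_code, _ = random.choice(population)
        child_code = llm_propose_change(parent_code)
        score = evaluate(child_code)
        # Archive every child: evolution keeps the whole population,
        # not just the current best agent.
        population.append((child_code, score))
    return population

pop = run_dgm(80)
print(len(pop), max(score for _, score in pop))
```

The key design choice mirrored here is the archive: because every variant stays in the population, the search can later revisit a lineage that initially scored poorly, which is what distinguishes this open-ended evolutionary approach from simple hill climbing.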

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
AI

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com) 174

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a psychiatrist interviewed by Futurism, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

IT

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash (fool.com) 24

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider."

Jefferies analyst John Colantuoni said he was "concerned" by the drop, which "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary.
Colantuoni also maintained a "hold" rating on Duolingo stock — though by Monday Duolingo fell below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal.")

And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June.

This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation.

Maybe everyone's just practicing their language skills with ChatGPT?
AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI three or four times a month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

EU

How a Crewless, AI-Enhanced Vessel Will Patrol Denmark's and NATO's Waters (euronews.com) 5

After past damage to undersea cables, Denmark will boost its surveillance of Baltic Sea and North Sea waters by deploying four uncrewed surface vessels — about 10 meters long — that are equipped with drones and AI, reports Euronews.

The founder/CEO of the company that makes the vessels — Saildrone — says they'll work "like a truck" that "carries the sensors." And then "we use on-board sophisticated machine learning and AI to fuse that data to give us a full picture of what's above and below the surface." Powered by solar and wind energy, they can operate autonomously for months at sea. [Saildrone] said the autonomous sailboats can support operations such as illegal fishing detection, border enforcement, and strategic asset protection... The four "Voyagers" will be first in operation for a three-month trial, as Denmark and NATO allies aim at extending maritime presence, especially around critical undersea infrastructure such as fibre optic cables and power lines. NATO and its allies have increased sea patrolling following several incidents.
Graphics

Graphics Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

EU

Denmark To Tackle Deepfakes By Giving People Copyright To Their Own Features (theguardian.com) 48

An anonymous reader quotes a report from The Guardian: The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice. The Danish government said on Thursday it would strengthen protection against digital imitations of people's identities with what it believes to be the first law of its kind in Europe. Having secured broad cross-party agreement, the department of culture plans to submit a proposal to amend the current law for consultation before the summer recess and then submit the amendment in the autumn. It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.

The Danish culture minister, Jakob Engel-Schmidt, said he hoped the bill before parliament would send an "unequivocal message" that everybody had the right to the way they looked and sounded. He told the Guardian: "In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI." He added: "Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that."

The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent. It will also cover "realistic, digitally generated imitations" of an artist's performance without consent. Violation of the proposed rules could result in compensation for those affected. The government said the new rules would not affect parodies and satire, which would still be permitted.
"Of course this is new ground we are breaking, and if the platforms are not complying with that, we are willing to take additional steps," said Engel-Schmidt.

He expressed hope that other European countries will follow suit and warned that "severe fines" will be imposed if tech platforms fail to comply.
AI

Fed Chair Powell Says AI Is Coming For Your Job 68

Federal Reserve Chair Jerome Powell told the U.S. Senate that while AI hasn't yet dramatically impacted the economy or labor market, its transformative effects are inevitable -- though the timeline remains uncertain. The Register reports: Speaking to the US Senate Banking Committee on Wednesday to give his semiannual monetary policy report, Powell told elected officials that AI's effect on the economy to date is "probably not great" yet, but it has "enormous capabilities to make really significant changes in the economy and labor force." Powell declined to predict how quickly that change could happen, only noting that the final steps from a shiny new technology to practical implementation can be slow.

"What's happened before with technology is that it seems to take a long time to be implemented," Powell said. "That last phase has tended to take longer than people expect." AI is likely to follow that trend, Powell asserted, but he has no idea what sort of timeline that puts on the eventual economy-transforming maturation point of artificial intelligence. "There's a tremendous uncertainty about the timing of [economic changes], what the ultimate consequences will be and what the medium term consequences will be," Powell said. [...]

That continuation will be watched by the Fed, Powell told Senators, but that doesn't mean he'll have the power to do anything about it. "The Fed doesn't have the tools to address the social issues and the labor market issues that will arise from this," Powell said. "We just have interest rates."
Advertising

A Developer Built a Real-World Ad Blocker For Snap Spectacles (uploadvr.com) 11

An anonymous reader quotes a report from UploadVR: Software developer Stijn Spanhove used the newest SDK features of Snap OS to build a prototype of [a real-world ad blocker for Snap Spectacles]. If you're unfamiliar, Snap Spectacles are a bulky AR glasses development kit available to rent for $99/month. They run Snap OS, the company's made-for-AR operating system, and developers build apps called Lenses for them using Lens Studio or WebXR.

Spanhove built the real-world ad blocker using the new Depth Module API of Snap OS, integrated with the vision capability of Google's Gemini AI via the cloud. The Depth Module API caches depth frames, meaning that coordinate results from cloud vision models can be mapped to positions in 3D space. This enables detecting and labeling real-world objects, for example. Or, in the case of Spanhove's project, projecting a red rectangle onto real-world ads.
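As a rough illustration of the depth-frame idea (this is not the Snap OS API; Lenses are actually built in Lens Studio, and the camera intrinsics below are invented for the sketch), here is how a cached depth value lets a 2D bounding box from a cloud vision model be pinned to a point in 3D camera space:

```python
def unproject(u: int, v: int, depth_m: float,
              fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Back-project pixel (u, v) at depth_m metres into camera space
    using a pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def anchor_ad_blocker(box: tuple, depth_frame: list) -> tuple:
    """box = (left, top, right, bottom) in pixels, as returned by the
    cloud vision model. Returns the 3D anchor for the occluding rectangle."""
    u = (box[0] + box[2]) // 2
    v = (box[1] + box[3]) // 2
    # The cached depth frame lets us look up how far away the ad was
    # when the camera frame was sent to the cloud model.
    depth_m = depth_frame[v][u]
    # Hypothetical 640x480 pinhole intrinsics, purely for the sketch.
    return unproject(u, v, depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0)

# A flat wall 2 m away; an "ad" detected at pixels (280, 200)-(360, 280).
frame = [[2.0] * 640 for _ in range(480)]
print(anchor_ad_blocker((280, 200, 360, 280), frame))
```

Caching matters because the cloud round trip takes time: by the time Gemini's box comes back, the camera has moved, so the box must be resolved against the depth frame that was captured with the original image, then re-rendered in world space.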

However, while the software approach used for Spanhove's real-world ad blocker is sound, two fundamental hardware limitations mean it wouldn't be a practical way to avoid seeing ads in your reality. Firstly, the imagery rendered by see-through transparent AR systems like Spectacles isn't fully opaque. Thus, as you can see in the demo clip, the ads are still visible through the blocking rectangle. The other problem is that see-through transparent AR systems have a very limited field of view. In the case of Spectacles, just 46 degrees diagonal. So ads are only "blocked" whenever you're looking directly at them, and you'll still see them when you're not.

Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com) 19

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

China

DeepSeek Faces Ban From Apple, Google App Stores In Germany 15

Germany's data protection commissioner has urged Apple and Google to remove Chinese AI startup DeepSeek from their app stores due to concerns about data protection. Reuters reports: Commissioner Meike Kamp said in a statement on Friday that she had made the request because DeepSeek illegally transfers users' personal data to China. The two U.S. tech giants must now review the request promptly and decide whether to block the app in Germany, she added, though her office has not set a precise timeframe. According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI program or uploaded files, on computers in China.

"DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union," [Commissioner Meike Kamp] said. "Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies," she added. The commissioner said she took the decision after asking DeepSeek in May to meet the requirements for non-EU data transfers or else voluntarily withdraw its app. DeepSeek did not comply with this request, she added.
AI

Big Accounting Firms Fail To Track AI Impact on Audit Quality, Says Regulator (ft.com) 21

The six largest UK accounting firms do not formally monitor how automated tools and AI impact the quality of their audits, the regulator has found, even as the technology becomes embedded across the sector. From a report: The Financial Reporting Council on Thursday published its first AI guide alongside a review of the way firms were using automated tools and technology, which found "no formal monitoring performed by the firms to quantify the audit quality impact of using" them.

The watchdog found that audit teams in the Big Four firms -- Deloitte, EY, KPMG and PwC -- as well as BDO and Forvis Mazars were increasingly using this technology to perform risk assessments and obtain evidence. But it said that the firms primarily monitored the tools to understand how many teams were using them for audits, "typically for licensing purposes," rather than to assess their impact on audit quality.

Businesses

Uber In Talks With Founder Travis Kalanick To Fund Self-Driving Car Deal (nytimes.com) 1

Facing mounting competition from autonomous taxi services like Waymo, Uber is in early talks to help fund Travis Kalanick's potential acquisition of Pony.ai's U.S. subsidiary (source paywalled; alternative source). If completed, the deal would reunite Kalanick with Uber (now under CEO Dara Khosrowshahi) and position Pony.ai's U.S. unit to operate independently of its Chinese parent amid rising U.S. regulatory pressures. The New York Times reports: The company, Pony.ai, was founded in Silicon Valley in 2016 but has its main presence in China, and has permits to operate robot taxis and trucks in the United States and China. The talks are preliminary, said the people, who were not authorized to speak about the confidential conversations. Mr. Kalanick will run Pony if the deal is completed, they said. It is unclear what role, if any, Uber would take in Pony as an investor. Financial details of the potential transaction could not be determined. Pony went public last year in the United States, raising $260 million in a share sale. Its market capitalization stands around $4.5 billion.

If the deal goes through, Mr. Kalanick, 48, will remain in his day job running CloudKitchens, a virtual restaurant start-up that he founded after leaving Uber in 2017. He would also work more closely with Dara Khosrowshahi, who took over as Uber's chief executive after Mr. Kalanick's ouster. The discussions are the starkest sign yet that Uber is under pressure from Waymo, the driverless car unit spun out of Google, and other autonomous car services. When Mr. Kalanick was Uber's chief executive, the company tried developing autonomous vehicle technology. It then bought Otto, a self-driving trucking start-up run by Anthony Levandowski, a former Google engineer. Google later sued Mr. Levandowski for theft of trade secrets and sued Uber to bar it from using its self-driving technology.

Under Mr. Khosrowshahi, Uber has taken a different tack on self-driving cars. The company has struck roughly 18 partnerships with autonomous vehicle companies like Wayve, May Mobility and WeRide to bring pilot programs for driverless car services to Europe, the Middle East and Asia. The goal, Mr. Khosrowshahi has said in podcast interviews, has been to put "as many cars on Uber's network as possible." He has maintained that while autonomous vehicles are growing steadily, ride-hailing networks will have both human and robot drivers for years.

Advertising

As AI Kills Search Traffic, Google Launches Offerwall To Boost Publisher Revenue (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads.

Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters. The new feature is available for free in Google Ad Manager after earlier tests with 1,000 publishers that spanned over a year.

While no broad case studies were shared, India's Sakal Media Group implemented Google Ad Manager's Offerwall feature and saw a 20% revenue boost and up to 2 million more impressions in three months. Overall, publishers testing Offerwall experienced an average 9% revenue lift, with some seeing between 5% and 15%.

Youtube

YouTube Search Gets Its Own Version of Google's AI Overviews 8

Google is bringing its AI Overviews-like feature to YouTube in the form of an "AI-powered search results carousel." The Verge reports: As shown in a video, the search results carousel will show a big video clip up top, thumbnails to a selection of other relevant video clips directly under that, and an AI-generated bit of text responding to your query. To see a full video, tap on the big clip at the top of the carousel.

The feature is currently only accessible on iOS and Android and for videos in English and will be available to test until July 30th, per the YouTube experiments page. Additionally, only a "randomly selected number of Premium members" will have access to it, YouTube says in a support document.

AI

Who Needs Accenture in the Age of AI? (economist.com) 30

Accenture is facing mounting challenges as AI threatens to disrupt the consulting industry the company helped build. The Dublin-based firm, which made its fortune advising clients on adapting to new technologies from the internet to cloud computing, now confronts the same predicament as generative AI reshapes business operations.

The company's new generative AI contracts slowed to $100 million in the most recent quarter, down from $200 million per quarter last year. Technology partners including Microsoft and SAP are increasingly integrating AI directly into their offerings, allowing systems to work immediately without extensive consulting support. Newcomers like Palantir are embedding their own engineers with customers, enabling clients to bypass traditional consultants.

Between 2015 and 2024, Accenture generated a 370% total return by helping companies navigate technological transitions. The firm reached a $250 billion valuation in February before losing $60 billion in market value. CEO Julie Sweet insists that the company is reorganizing around "reinvention services." A recent survey found 42% of companies abandoned most AI initiatives, up from 17% a year ago.

AI

Study Finds LLM Users Have Weaker Understanding After Research (msn.com) 111

Researchers at the University of Pennsylvania's Wharton School found that people who used large language models to research topics demonstrated weaker understanding and produced fewer original insights compared to those using Google searches.

The study, involving more than 4,500 participants across four experiments, showed LLM users spent less time researching, exerted less effort, and wrote shorter, less detailed responses. In the first experiment, over 1,100 participants researched vegetable gardening using either Google or ChatGPT. Google users wrote longer responses with more unique phrasing and factual references. A second experiment with nearly 2,000 participants presented identical gardening information either as an AI summary or across mock webpages, with Google users again engaging more deeply and retaining more information.
