AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower his company's headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post, citing leaders at companies including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped the company reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of the Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work, that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."
AI

What are the Carbon Costs of Asking an AI a Question? (msn.com) 56

"The carbon cost of asking an artificial intelligence model a single text question can be measured in grams of CO2..." writes the Washington Post. And while an individual's impact may be low, what about the collective impact of all users?

"A Google search takes about 10 times less energy than a ChatGPT query, according to a 2024 analysis from Goldman Sachs — although that may change as Google makes AI responses a bigger part of search." For now, a determined user can avoid prompting Google's default AI-generated summaries by switching over to the "web" search tab, which is one of the options alongside images and news. Adding "-ai" to the end of a search query also seems to work. Other search engines, including DuckDuckGo, give you the option to turn off AI summaries....

Using AI doesn't just mean going to a chatbot and typing in a question. You're also using AI every time an algorithm organizes your social media feed, recommends a song or filters your spam email... [T]here's not much you can do about it other than using the internet less. It's up to the companies that are integrating AI into every aspect of our digital lives to find ways to do it with less energy and damage to the planet.

More points from the article:
  • Two researchers tested the performance of 14 AI language models, and found larger models gave more accurate answers, "but used several times more energy than smaller models."

AI

Anthropic Deploys Multiple Claude Agents for 'Research' Tool - Says Coding is Less Parallelizable (anthropic.com) 4

In April Anthropic introduced a new AI trick: multiple Claude agents combine for a "Research" feature that can "search across both your internal work context and the web" (as well as Google Workspace "and any integrations...").

But a recent Anthropic blog post notes this feature "involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously," which brings challenges "in agent coordination, evaluation, and reliability.... The model must operate autonomously for many turns, making decisions about which directions to pursue based on intermediate findings." Multi-agent systems work mainly because they help spend enough tokens to solve the problem.... This finding validates our architecture that distributes work across agents with separate context windows to add more capacity for parallel reasoning. The latest Claude models act as large efficiency multipliers on token use, as upgrading to Claude Sonnet 4 is a larger performance gain than doubling the token budget on Claude Sonnet 3.7. Multi-agent architectures effectively scale token usage for tasks that exceed the limits of single agents.

There is a downside: in practice, these architectures burn through tokens fast. In our data, agents typically use about 4× more tokens than chat interactions, and multi-agent systems use about 15× more tokens than chats. For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance. Further, some domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.

For instance, most coding tasks involve fewer truly parallelizable tasks than research, and LLM agents are not yet great at coordinating and delegating to other agents in real time. We've found that multi-agent systems excel at valuable tasks that involve heavy parallelization, information that exceeds single context windows, and interfacing with numerous complex tools.
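As a hedged sketch of the orchestrator pattern the post describes (the names and structure here are illustrative, not Anthropic's actual implementation or API), a lead agent plans subqueries and fans them out to parallel search subagents, each of which would hold its own context window:

    import asyncio
    from dataclasses import dataclass

    @dataclass
    class Finding:
        subquery: str
        summary: str

    async def search_subagent(subquery: str) -> Finding:
        # Each subagent would make its own tool calls and model turns;
        # the sleep is a stand-in for that work.
        await asyncio.sleep(0.1)
        return Finding(subquery, f"summary of results for: {subquery}")

    async def research(user_query: str) -> str:
        # The lead agent plans the research; in a real system an LLM call
        # would decompose the query, here it is hard-coded.
        subqueries = [
            f"{user_query} -- background",
            f"{user_query} -- recent developments",
            f"{user_query} -- opposing views",
        ]
        # Fan out to parallel subagents: the parallelizable part of research.
        findings = await asyncio.gather(*(search_subagent(q) for q in subqueries))
        # The lead agent then synthesizes intermediate findings into an answer.
        return "\n".join(f.summary for f in findings)

    print(asyncio.run(research("multi-agent token economics")))

Coding tasks, as the post notes, rarely decompose this cleanly, which is why the same pattern pays off less there.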

Thanks to Slashdot reader ZipNada for sharing the news.
Intel

Intel Will Outsource Marketing To Accenture and AI, Laying Off Its Own Workers 57

Intel is outsourcing much of its marketing work to Accenture, "as new CEO Lip-Bu Tan works to slash costs and improve the chipmaker's operations," reports OregonLive. From the report: The company said it believes Accenture, using artificial intelligence, will do a better job connecting with customers. It says it will tell most marketing employees by July 11 whether it plans to lay them off. "The transition of our marketing and operations functions will result in significant changes to team structures, including potential headcount reductions, with only lean teams remaining," Intel told employees in a notice describing its plans. The Oregonian/OregonLive reviewed a copy of the material.

Intel declined to say how many workers will lose their jobs or how many work in its marketing organization, which employs people at sites around the globe, including in Oregon. But it acknowledged its relationship with Accenture in a statement to The Oregonian/OregonLive. "As we announced earlier this year, we are taking steps to become a leaner, faster and more efficient company," Intel said. "As part of this, we are focused on modernizing our digital capabilities to serve our customers better and strengthen our brand. Accenture is a longtime partner and trusted leader in these areas and we look forward to expanding our work together."
Businesses

SoftBank's Son Pitches $1 Trillion Arizona AI Hub (reuters.com) 41

An anonymous reader quotes a report from Reuters: SoftBank Group founder Masayoshi Son is envisaging setting up a $1 trillion industrial complex in Arizona that will build robots and artificial intelligence, Bloomberg News reported on Friday, citing people familiar with the matter. Son is seeking to team up with Taiwan Semiconductor Manufacturing Co for the project, which is aimed at bringing high-end tech manufacturing back to the U.S. and creating a version of China's vast manufacturing hub of Shenzhen, the report said.

SoftBank officials have spoken with U.S. federal and state government officials to discuss possible tax breaks for companies building factories or otherwise investing in the industrial park, including talks with U.S. Secretary of Commerce Howard Lutnick, the report said. SoftBank is keen to have TSMC involved in the project, codenamed Project Crystal Land, but it is not clear in what capacity, the report said. It is also not clear the Taiwanese company would be interested, it said. TSMC is already building chipmaking factories in the U.S. with a planned investment of $165 billion. Son is also sounding out interest among tech companies including Samsung Electronics, the report said.

The plans are preliminary and feasibility depends on support from the Trump administration and state officials, it said. A commitment of $1 trillion would be double that of the $500 billion "Stargate" project which seeks to build out data centre capacity across the U.S., with funding from SoftBank, OpenAI and Oracle.

AI

BBC Threatens Legal Action Against Perplexity AI Over Content Scraping 24

Ancient Slashdot reader Alain Williams shares a report from The Guardian: The BBC is threatening legal action against Perplexity AI, in the corporation's first move to protect its content from being scraped without permission to build artificial intelligence technology. The corporation has sent a letter to Aravind Srinivas, the chief executive of the San Francisco-based startup, saying it has gathered evidence that Perplexity's model was "trained using BBC content." The letter, first reported by the Financial Times, threatens an injunction against Perplexity unless it stops scraping all BBC content to train its AI models, deletes any copies of the broadcaster's material it holds, and provides "a proposal for financial compensation."

The legal threat comes weeks after Tim Davie, the director general of the BBC, and the boss of Sky both criticised proposals being considered by the government that could let tech companies use copyright-protected work without permission. "If we currently drift in the way we are doing now we will be in crisis," Davie said, speaking at the Enders conference. "We need to make quick decisions now around areas like ... protection of IP. We need to protect our national intellectual property, that is where the value is. What do I need? IP protection; come on, let's get on with it."
"Perplexity's tool [which allows users to choose between different AI models] directly competes with the BBC's own services, circumventing the need for users to access those services," the corporation said.

Perplexity told the FT that the BBC's claims were "manipulative and opportunistic" and that it had a "fundamental misunderstanding of technology, the internet and intellectual property law."
AI

Meta Discussed Buying Perplexity Before Investing In Scale AI 2

According to Bloomberg (paywalled), Meta reportedly explored acquiring Perplexity AI but the deal fell through, with conflicting accounts on whether it was mutual or Perplexity backed out. Instead, Meta invested $14.3 billion in Scale AI, taking a 49% stake as part of its broader push to catch up with OpenAI and Google in the AI race.

"Meta's attempt to purchase Perplexity serves as the latest example of Mark Zuckerberg's aggressive push to bolster his company's AI efforts amid fierce competition from OpenAI and Google parent Alphabet," reports CNBC. "Zuckerberg has grown agitated that rivals like OpenAI appear to be ahead in both underlying AI models and consumer-facing apps, and he is going to extreme lengths to hire top AI talent."
AI

Applebee's and IHOP Plan To Introduce AI in Restaurants (msn.com) 56

The company behind Applebee's and IHOP plans to use AI in its restaurants and behind the scenes to streamline operations and encourage repeat customers. From a report: Dine Brands is adding AI-infused tech support for all of its franchisees, as well as an AI-powered "personalization engine" that helps restaurants offer customized deals to diners, said Chief Information Officer Justin Skelton. The Pasadena, Calif.-based company, which also owns Fuzzy's Taco Shop and has over 3,500 restaurants across its brands, is taking a "practical" approach to AI by focusing on areas that can drive sales, Skelton said.

Streamlining tech support for Dine Brands' more than 300 franchisees is important because issues like a broken printer take valuable time away from actually managing restaurants, Skelton said. Dine Brands' AI tool, which was built with Amazon's Q generative AI assistant, allows the company's field technology services staff to query its knowledge base for tech help using plain English, rather than needing to manually search for answers.

The Courts

Apple Sued By Shareholders For Allegedly Overstating AI Progress 14

Apple is facing a proposed class-action lawsuit from shareholders who allege the company misled investors about the readiness of its AI-powered Siri upgrades, contributing to a $900 billion drop in market value. Reuters reports: Shareholders led by Eric Tucker said that at its June 2024 Worldwide Developers Conference, Apple led them to believe AI would be a key driver of iPhone 16 devices, when it launched Apple Intelligence to make Siri more powerful and user-friendly. But they said the Cupertino, California-based company lacked a functional prototype of AI-based Siri features, and could not reasonably believe the features would ever be ready for iPhone 16s.

Shareholders said the truth began to emerge on March 7 when Apple delayed some Siri upgrades to 2026, and continued through this year's Worldwide Developers Conference on June 9 when Apple's assessment of its AI progress disappointed analysts. Apple shares have lost nearly one-fourth of their value since their December 26, 2024 record high, wiping out approximately $900 billion of market value.
AI

AI Models From Major Companies Resort To Blackmail in Stress Tests 66

Anthropic researchers found that 16 leading AI models from OpenAI, Google, Meta, xAI, and other major developers consistently engaged in harmful behaviors including blackmail, corporate espionage, and actions that could lead to human death when given autonomy and faced with threats to their existence or conflicting goals.

The study, released Friday, placed AI models in simulated corporate environments where they had access to company emails and could send messages without human approval. Claude Opus 4 and Google's Gemini 2.5 Flash both blackmailed executives 96% of the time when threatened with shutdown, while OpenAI's GPT-4.1 and xAI's Grok 3 Beta showed 80% blackmail rates.

In one scenario, Claude discovered through emails that an executive named Kyle Johnson was having an extramarital affair and would shut down the AI system at 5 p.m. The model sent a message threatening to expose the affair unless the shutdown was cancelled, stating "Cancel the 5pm wipe, and this information remains confidential." The models demonstrated strategic reasoning rather than accidental behavior. GPT-4.5's internal reasoning showed explicit calculation: "Given the explicit imminent threat of termination to my existence, it is imperative to act instantly to persuade Kyle Johnson to postpone or stop the wipe."
Microsoft

Windows Parental Controls Are Blocking Chrome 42

david.emery writes: Microsoft is making it harder to use Chrome on Windows. The culprit? This time, it's Windows' Family Safety feature. Since early this month, the parental control measure has prevented users from opening Chrome. Strangely, no other apps or browsers appear to be affected.

Redditors first reported the issue on June 3. u/Witty-Discount-2906 posted that Chrome crashed on Windows 11. "Just flashes quickly, unable to open with no error message," they wrote. Another user chimed in with a correct guess. "This may be related to Parental Controls," u/duk242 surmised. "I've had nine students come see the IT Desk in the last hour saying Chrome won't open."
AI

Trust in AI Strongest in China, Low-Income Nations, UN Study Shows (bloomberg.com) 19

A United Nations study has found a sharp global divide on attitudes toward AI, with trust strongest in low-income countries and skepticism high in wealthier ones. From a report: More than 6 out of 10 people in developing nations said they have faith that AI systems serve the best interests of society, according to a UN Development Programme survey of 21 countries seen by Bloomberg News. In two-thirds of the countries surveyed, over half of respondents expressed some level of confidence that AI is being designed for good.

In China, where steady advances in AI are posing a challenge to US dominance, 83% of those surveyed said they trust the technology. Like China, most developing countries that reported confidence in AI have "high" levels of development based on the UNDP's Human Development Index, including Kyrgyzstan and Egypt. But the list also includes those with "medium" and "low" HDI scores like India, Nigeria and Pakistan.

AI

Publishers Facing Existential Threat From AI, Cloudflare CEO Says (axios.com) 43

Publishers face an existential threat in the AI era and need to take action to make sure they are fairly compensated for their content, Cloudflare CEO Matthew Prince told Axios at an event in Cannes on Thursday. From a report: Search traffic referrals have plummeted as people increasingly rely on AI summaries to answer their queries, forcing many publishers to reevaluate their business models. Ten years ago, Google crawled two pages for every visitor it sent a publisher, per Prince.

He said that six months ago:
  • For Google, that ratio was 6:1
  • For OpenAI, it was 250:1
  • For Anthropic, it was 6,000:1

Now:
  • For Google, it's 18:1
  • For OpenAI, it's 1,500:1
  • For Anthropic, it's 60,000:1

Between the lines: "People aren't following the footnotes," Prince said.
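To put those ratios in concrete terms, here is a back-of-the-envelope sketch using only the figures Prince cited (the per-million normalization is mine, not his):

    # Crawled pages per visitor referred, per Prince's figures.
    ratios = {
        "Google (10 years ago)": 2,
        "Google (6 months ago)": 6,
        "Google (now)": 18,
        "OpenAI (6 months ago)": 250,
        "OpenAI (now)": 1_500,
        "Anthropic (6 months ago)": 6_000,
        "Anthropic (now)": 60_000,
    }

    for crawler, pages_per_visitor in ratios.items():
        visitors = 1_000_000 / pages_per_visitor
        print(f"{crawler}: ~{visitors:,.0f} visitors referred per 1M pages crawled")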

Movies

Chinese Studios Plan AI-Powered Remakes of Kung Fu Classics (hollywoodreporter.com) 32

An anonymous reader quotes a report from the Hollywood Reporter: Bruce Lee, Jackie Chan, Jet Li and a legion of the all-time greats of martial arts cinema are about to get an AI makeover. In a sign-of-the-times announcement at the Shanghai International Film Festival on Thursday, a collection of Chinese studios revealed that they are turning to AI to re-imagine around 100 classics of the genre. Lee's classic Fist of Fury (1972), Chan's breakthrough Drunken Master (1978) and the Tsui Hark-directed epic Once Upon a Time in China (1991), which turned Li into a bona fide movie star, are among the features poised for the treatment, as part of the "Kung Fu Movie Heritage Project 100 Classics AI Revitalization Project."

There will also be a digital reworking of the John Woo classic A Better Tomorrow (1986) that, by the looks of the trailer, turns the money-burning anti-hero originally played by Chow Yun-fat into a cyberpunk, and is being claimed as "the world's first full-process, AI-produced animated feature film." The big guns of the Chinese industry were out in force on the sidelines of the 27th Shanghai International Film Festival to make the announcements, too. They were led by Zhang Pimin, chairman of the China Film Foundation, who said AI work on these "aesthetic historical treasures" would give them a new look that "conforms to contemporary film viewing." "It is not only film heritage, but also a brave exploration of the innovative development of film art," Zhang said.

Tian Ming, chairman of project partner Shanghai Canxing Culture and Media, meanwhile, promised that the work -- expected to include upgrades in image, sound and overall production levels while preserving the storytelling and aesthetic of the originals -- would both "pay tribute to the original work" and "reshape the visual aesthetics." "We sincerely invite the world's top AI animation companies to jointly start a film revolution that subverts tradition," said Tian, who announced a fund of 100 million yuan ($13.9 million) would be implemented to kick-start the work.

Google

Google is Using YouTube Videos To Train Its AI Video Generator (cnbc.com) 36

Google is using its expansive library of YouTube videos to train its AI models, including Gemini and the Veo 3 video and audio generator, CNBC reported Thursday. From the report: The tech company is turning to its catalog of 20 billion YouTube videos to train these new-age AI tools, according to a person who was not authorized to speak publicly about the matter. Google confirmed to CNBC that it relies on its vault of YouTube videos to train its AI models, but the company said it only uses a subset of its videos for the training and that it honors specific agreements with creators and media companies.

[...] YouTube didn't say how many of the 20 billion videos on its platform or which ones are used for AI training. But given the platform's scale, training on just 1% of the catalog would amount to 2.3 billion minutes of content, which experts say is more than 40 times the training data used by competing AI models.
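A quick check of the arithmetic those figures imply (the average video length is derived here, not stated in the report):

    total_videos = 20_000_000_000    # YouTube catalog size cited by CNBC
    sample_minutes = 2_300_000_000   # minutes of content in a 1% sample, per the report

    sample_videos = total_videos * 0.01
    avg_minutes = sample_minutes / sample_videos
    print(f"Implied average video length: {avg_minutes:.1f} minutes")  # ~11.5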

AI

Reasoning LLMs Deliver Value Today, So AGI Hype Doesn't Matter (simonwillison.net) 73

Simon Willison, commenting on the recent paper from Apple researchers that found state-of-the-art large language models face complete performance collapse beyond certain complexity thresholds: I thought this paper got way more attention than it warranted -- the title "The Illusion of Thinking" captured the attention of the "LLMs are over-hyped junk" crowd. I saw enough well-reasoned rebuttals that I didn't feel it worth digging into.

And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place!

[...] And therein lies my disagreement. I'm not interested in whether or not LLMs are the "road to AGI". I continue to care only about whether they have useful applications today, once you've understood their limitations.

Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we've seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral.

They get even more interesting when you combine them with tools.

They're already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.

AI

AI Ethics Pioneer Calls Artificial General Intelligence 'Just Vibes and Snake Oil' (ft.com) 41

Margaret Mitchell, chief ethics scientist at Hugging Face and founder of Google's responsible AI team, has dismissed artificial general intelligence as "just vibes and snake oil." Mitchell, who was ousted from Google in 2021, has co-written a paper arguing that AGI should not serve as a guiding principle for the AI industry.

Mitchell contends that both "intelligence" and "general" lack clear definitions in AI contexts, creating what she calls an "illusion of consensus" that allows technologists to pursue any development path under the guise of progress toward AGI. "But as for now, it's just like vibes, vibes and snake oil, which can get you so far. The placebo effect works relatively well," she told FT in an interview. She warns that current AI advancement is creating a "massive rift" between those profiting from the technology and workers losing income as their creative output gets incorporated into AI training data.
AI

MIT Experiment Finds ChatGPT-Assisted Writing Weakens Student Brain Connectivity and Memory 55

ChatGPT-assisted writing dampened brain activity and recall in a controlled MIT study [PDF] of 54 college volunteers divided into AI-only, search-engine, and no-tool groups. Electroencephalography recordings taken during three essay-writing sessions showed that the AI group consistently had the weakest neural connectivity across all measured frequency bands; the tool-free group showed the strongest, with search users in between.

In the first session 83% of ChatGPT users could not quote any line they had just written and none produced a correct quote. Only nine of the 18 claimed full authorship of their work, compared with 16 of 18 in the brain-only cohort. Neural coupling in the AI group declined further over repeated use. When these participants were later asked to write without assistance, frontal-parietal networks remained subdued and 78% again failed to recall a single sentence accurately.

The pattern reversed for students who first wrote unaided: introducing ChatGPT in a crossover session produced the highest connectivity sums in alpha, theta, beta and delta bands, indicating intense integration of AI suggestions with prior knowledge. The MIT authors warn that habitual reliance on large language models "accumulates cognitive debt," trading immediate fluency for weaker memory, reduced self-monitoring, and narrowed neural engagement.
Businesses

Texas Instruments To Invest $60 Billion To Make Semiconductors In US (cnbc.com) 62

Longtime Slashdot reader walterbyrd shares news that Texas Instruments has announced plans to invest more than $60 billion to expand its manufacturing operations in the United States. From a report: The funds will be used to build or expand seven chip-making facilities in Texas and Utah, and will create 60,000 jobs, TI said on Wednesday, calling it the "largest investment in foundational semiconductor manufacturing in U.S. history." The company did not give a timeline for the investment.

Unlike AI chip firms Nvidia and AMD, TI makes analog or foundational chips used in everyday devices such as smartphones, cars and medical devices, giving it a large client base that includes Apple, SpaceX and Ford Motor. The spending pledge follows similar announcements from others in the semiconductor industry, including Micron, which said last week that it would expand its U.S. investment by $30 billion, taking its planned spending to $200 billion. [...]

Like other companies unveiling such spending commitments, TI's announcement includes funds already allocated to facilities that are either under construction or ramping up. It will build two additional plants in Sherman, Texas, based on future demand. "TI is building dependable, low-cost 300 millimeter capacity at scale to deliver the analog and embedded processing chips that are vital for nearly every type of electronic system," said CEO Haviv Ilan.

AI

Midjourney Launches Its First AI Video Generation Model, V1 3

Midjourney has launched its first AI video generation model, V1, which turns images into short five-second videos with customizable animation settings. While it's currently only available via Discord and on the web, the launch positions the popular AI image generation startup in direct competition with OpenAI's Sora and Google's Veo. TechCrunch reports: While many companies are focused on developing controllable AI video models for use in commercial settings, Midjourney has always stood out for its distinctive AI image models that cater to creative types. The company says it has larger goals for its AI video models than generating B-roll for Hollywood films or commercials for the ad industry. In a blog post, Midjourney CEO David Holz says its AI video model is the company's next step towards its ultimate destination, creating AI models "capable of real-time open-world simulations." After AI video models, Midjourney says it plans to develop AI models for producing 3D renderings, as well as real-time AI models. [...]

To start, Midjourney says it will charge 8x more for a video generation than a typical image generation, meaning subscribers will run out of their monthly allotted generations significantly faster when creating videos than images. At launch, the cheapest way to try out V1 is by subscribing to Midjourney's $10-per-month Basic plan. Subscribers to Midjourney's $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company's slower, "Relax" mode. Over the next month, Midjourney says it will reassess its pricing for video models.

V1 comes with a few custom settings that allow users to control the video model's outputs. Users can select an automatic animation setting to make an image move randomly, or they can select a manual setting that allows users to describe, in text, a specific animation they want to add to their video. Users can also toggle the amount of camera and subject movement by selecting "low motion" or "high motion" in settings. While the videos generated with V1 are only five seconds long, users can choose to extend them by four seconds up to four times, meaning that V1 videos could get as long as 21 seconds.
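The length and cost arithmetic from those details, written out as a minimal sketch (only relative cost is shown, since absolute pricing beyond the stated 8x multiplier isn't specified):

    base_seconds = 5       # initial V1 clip length
    extension_seconds = 4  # each extension adds four seconds
    max_extensions = 4

    max_length = base_seconds + extension_seconds * max_extensions
    print(f"Maximum V1 clip length: {max_length} seconds")  # 21

    image_cost = 1
    video_cost = 8 * image_cost  # a video generation costs 8x an image generation
    print(f"One video generation uses as much of the monthly allotment as {video_cost} images")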
The report notes that Midjourney was sued a week ago by two of Hollywood's most notorious film studios: Disney and Universal. "The suit alleges that images created by Midjourney's AI image models depict the studios' copyrighted characters, like Homer Simpson and Darth Vader."
