Facebook

Glitches Humiliated Zuck in Smart Glasses Launch. Meta CTO Explains What Happened (techcrunch.com) 77

When Meta finally unveiled its newest smart glasses, CEO Mark Zuckerberg "drew more snickers than applause," wrote the New York Times. (Mashable points out a video call failing onstage followed by an unsuccessful recipe demonstration.)

Meta chief technology officer Andrew Bosworth later explained the funny reason their demo didn't work, reports TechCrunch, while answering questions on Instagram: "When the chef said, 'Hey, Meta, start Live AI,' it started every single Ray-Ban Meta's Live AI in the building. And there were a lot of people in that building," Bosworth explained. "That obviously didn't happen in rehearsal; we didn't have as many things," he said, referring to the number of glasses that were triggered... The second part of the failure had to do with how Meta had chosen to route the Live AI traffic to its development server to isolate it during the demo. But when it did so, it did this for everyone in the building on the access points, which included all the headsets. "So we DDoS'd ourselves, basically, with that demo," Bosworth added... Meta's dev server wasn't set up to handle the flood of traffic from the other glasses in the building — Meta was only planning for it to handle the demos alone.

The issue with the failed WhatsApp call, on the other hand, was the result of a new bug. The smart glasses' display had gone to sleep at the exact moment the call came in, Bosworth said. When Zuckerberg woke the display back up, it didn't show the answer notification to him. The CTO said this was a "race condition" bug... "We've never run into that bug before," Bosworth noted. "That's the first time we'd ever seen it. It's fixed now, and that's a terrible, terrible place for that bug to show up." He stressed that, of course, Meta knows how to handle video calls, and the company was "bummed" about the bug showing up here... "It really was just a demo fail and not, like, a product failure," he said.

Thanks to Slashdot reader fjo3 for sharing the news.
Chrome

Google Temporarily Pauses AI-Powered 'Homework Helper' Button in Chrome Over Cheating Concerns (msn.com) 62

An anonymous reader shared this article from the Washington Post: A student taking an online quiz sees a button appear in their Chrome browser: "homework help." Soon, Google's artificial intelligence has read the question on-screen and suggests "choice B" as the answer. The temptation to cheat was suddenly just two clicks away Sept. 2, when Google quietly added a "homework help" button to Chrome, the world's most popular web browser. The button has been appearing automatically on the kinds of course websites used by the majority of American college students and many high-schoolers, too. Pressing it launches Google Lens, a service that reads what's on the page and can provide an "AI Overview" answer to questions — including during tests.

Educators I've spoken with are alarmed. Schools including Emory University, the University of Alabama, the University of California at Los Angeles and the University of California at Berkeley have alerted faculty how the button appears in the URL box of course sites and their limited ability to control it.

Chrome's cheating tool exemplifies Big Tech's continuing gold rush approach to AI: launch first, consider consequences later and let society clean up the mess. "Google is undermining academic integrity by shoving AI in students' faces during exams," says Ian Linkletter, a librarian at the British Columbia Institute of Technology who first flagged the issue to me. "Google is trying to make instructors give up on regulating AI in their classroom, and it might work. Google Chrome has the market share to change student behavior, and it appears this is the goal."

Several days after I contacted Google about the issue, the company told me it had temporarily paused the homework help button — but also didn't commit to keeping it off. "Students have told us they value tools that help them learn and understand things visually, so we're running tests offering an easier way to access Lens while browsing," Google spokesman Craig Ewer said in a statement.

AI

There Isn't an AI Bubble - There Are Three 76

Fast Company ran a contrarian take about AI from entrepreneur/thought leader Faisal Hoque, who argues there's three AI bubbles.

The first is a classic speculative bubble, with asset prices soaring above their fundamental values (like the 17th century's Dutch "tulip mania"). "The chances of this not being a bubble are between slim and none..." Second, AI is also arguably in what we might call an infrastructure bubble, with huge amounts being invested in infrastructure without any certainty that it will be used at full capacity in the future. This happened multiple times in the later 1800s, as railroad investors built thousands of miles of unneeded track to serve future demand that never materialized. More recently, it happened in the late '90s with the rollout of huge amount of fiber optic cable in anticipation of internet traffic demand that didn't turn up until decades later. Companies are pouring billions into GPUs, power systems, and cooling infrastructure, betting that demand will eventually justify the capacity. McKinsey analysts talk of a $7 trillion "race to scale data centers" for AI, and just eight projects in 2025 already represent commitments of over $1 trillion in AI infrastructure investment. Will this be like the railroad booms and busts of the late 1800s? It is impossible to say with any kind of certainty, but it is not unreasonable to think so.

Third, AI is certainly in a hype bubble, which is where the promise claimed for a new technology exceeds reality, and the discussion around that technology becomes increasingly detached from likely future outcomes. Remember the hype around NFTs? That was a classic hype bubble. And AI has been in a similar moment for a while. All kinds of media — social, print, and web — are filled with AI-related content, while AI boosterism has been the mood music of the corporate world for the last few years. Meanwhile, a recent MIT study reported that 95% of AI pilot projects fail to generate any returns at all.

But the article ultimately argues there's lessons in the 1990s dotcom boom: that "a thing can be hyped beyond its actual capabilities while still being important... When valuations correct — and they will — the same pattern will emerge: companies that focus on solving real problems with available technology will extract value before, during, and after the crash." The winners will be companies with systematic approaches to extracting value — adopting mixed portfolios with different time horizons and risk levels, while recognizing organizational friction points for a purposeful (and holistic) integration.

"The louder the bubble talk, the more space opens for those willing to take a methodical approach to building value."

Thanks to Slashdot reader Tony Isaac for sharing the article.
AI

Is OpenAI's Video-Generating Tool 'Sora' Scraping Unauthorized YouTube Clips? (msn.com) 18

"OpenAI's video generation tool, Sora, can create high-definition clips of just about anything you could ask for..." reports the Washington Post.

"But OpenAI has not specified which videos it grabbed to make Sora, saying only that it combined 'publicly available and licensed data'..." With ChatGPT, OpenAI helped popularize the now-standard industry practice of building more capable AI tools by scraping vast quantities of text from the web without consent. With Sora, launched in December, OpenAI staff said they built a pioneering video generator by taking a similar approach. They developed ways to feed the system more online video — in more varied formats — including vertical videos and longer, higher-resolution clips... To explore what content OpenAI may have used, The Washington Post used Sora to create hundreds of videos that show it can closely mimic movies, TV shows and other content...

In dozens of tests, The Post found that Sora can create clips that closely resemble Netflix shows such as "Wednesday"; popular video games like "Minecraft"; and beloved cartoon characters, as well as the animated logos for Warner Bros., DreamWorks and other Hollywood studios, movies and TV shows. The publicly available version of Sora can generate only 20-second clips, without audio. In most cases, the look-alike scenes were made by typing basic requests like "universal studios intro." The results also showed that Sora can create AI videos with the logos or watermarks that broadcasters and tech companies use to brand their video content, including those for the National Basketball Association, Chinese-owned social app TikTok and Amazon-owned streaming platform Twitch...

Sora's ability to re-create specific imagery and brands suggests a version of the originals appeared in the tool's training data, AI researchers said. "The model is mimicking the training data. There's no magic," said Joanna Materzynska, a PhD researcher at Massachusetts Institute of Technology who has studied datasets used in AI. An AI tool's ability to reproduce proprietary content doesn't necessarily indicate that the original material was copied or obtained from its creators or owners. Content of all kinds is uploaded to video and social platforms, often without the consent of the copyright holder... Materzynska co-authored a study last year that found more than 70 percent of public video datasets commonly used in AI research contained content scraped from YouTube.

Netflix and Twitch said they did not have a content partnership for training OpenAI, according to the article (which adds that OpenAI "has yet to face a copyright suit over the data used for Sora.")

Two key quotes from the article:
  • "Unauthorized scraping of YouTube content continues to be a violation of our Terms of Service." — YouTube spokesperson Jack Malon
  • "We train on publicly available data consistent with fair use and use industry-leading safeguards to avoid replicating the material they learn from." — OpenAI spokesperson Kayla Wood

Books

Librarians Are Being Asked To Find AI-Hallucinated Books (404media.co) 50

Libraries nationwide are fielding patron requests for books that don't exist after AI-generated summer reading lists appeared in the Chicago Sun-Times and Philadelphia Inquirer earlier this year. Reference librarian Eddie Kristan told 404 Media the problem began in late 2022 following GPT-3.5's release but escalated dramatically after the newspapers published lists created by a freelancer using AI without verification.

A Library Freedom Project survey found patrons increasingly trust AI chatbots over human librarians and become defensive when told their AI-recommended titles are fictional. Kristan now routinely checks WorldCat's global catalog to verify titles exist. Collection development librarians are requesting digital vendors remove AI-generated books from platforms while academic libraries struggle against vendors implementing flawed LLM-based search tools and AI-generated summaries that undermine information literacy instruction.
The Internet

Africa's Only Internet Cable Repair Ship Keeps the Continent Online (restofworld.org) 6

The Leon Thevenin, Africa's only permanently stationed cable repair ship, maintains over 60,000 kilometers of undersea internet infrastructure from Madagascar to Ghana. The 43-year-old vessel employs a 60-person crew who perform precision repairs on fiber-optic cables that carry data for Alphabet, Meta, and Amazon -- companies that consumed 3.6 billion megabits per second of bandwidth in 2023.

Operating costs range from $70,000 to $120,000 daily, according to owner Orange Marine. The ship has experienced increased demand due to unusual underwater landslides in the Congo Canyon causing frequent cable breaks. Cable jointer Shuru Arendse and his team spend up to 48 hours on repairs that require fusing hair-thin glass fibers in conditions where a speck of dust can ruin the joint. The vessel gained Starlink connectivity last year after decades of relying on satellite phones and shared computers for crew communication. Sixty-two cable repair ships operate globally to maintain the infrastructure supporting streaming media and AI applications.
Facebook

Meta Pushes Into Power Trading as AI Sends Demand Soaring (yahoo.com) 17

Meta is moving to break into the wholesale power-trading business to better manage the massive electricity needs of its data centers. Bloomberg: The company, which owns Facebook, filed an application with US regulators this week seeking authorization to do so. A Meta representative said it was a natural next step to participate in energy markets as it looks to power operations with clean energy.

Buying electricity has become an increasingly urgent challenge for technology companies including Meta, Microsoft and Alphabet's Google. They're all racing to develop more advanced artificial intelligence systems and tools that are notoriously resource-intensive. Amazon, Google and Microsoft are already active power traders, according to filings with US regulators.

AI

AI Tool Detects LLM-Generated Text in Research Papers and Peer Reviews 24

An analysis of tens of thousands of research-paper submissions has shown a dramatic increase in the presence of text generated using AI in the past few years, an academic publisher has found. Nature: The American Association for Cancer Research (AACR) found that 23% of abstracts in manuscripts and 5% of peer-review reports submitted to its journals in 2024 contained text that was probably generated by large language models (LLMs). The publishers also found that less than 25% of authors disclosed their use of AI to prepare manuscripts, despite the publisher mandating disclosure for submission.

To screen manuscripts for signs of AI use, the AACR used an AI tool that was developed by Pangram Labs, based in New York City. When applied to 46,500 abstracts, 46,021 methods sections and 29,544 peer-review comments submitted to 10 AACR journals between 2021 and 2024, the tool flagged a rise in suspected AI-generated text in submissions and review reports since the public release of OpenAI's chatbot, ChatGPT, in November 2022.
AI

SoftBank Vision Fund To Lay Off 20% of Employees in Shift To Bold AI Bets (reuters.com) 21

An anonymous reader shares a report: SoftBank Group will lay off nearly 20% of its Vision Fund team globally as it shifts resources to founder Masayoshi Son's large-scale AI bets in the United States, according to a memo seen by Reuters and a source familiar with the plan. The cuts mark the third round of layoffs at the Japanese investment conglomerate's flagship fund since 2022. Vision Fund currently has over 300 employees globally. Unlike previous rounds, when the group was saddled with major losses, the latest reductions come after the fund last month reported its strongest quarterly performance since June 2021, driven by gains in public holdings such as Nvidia and South Korean e-commerce firm Coupang. The move signals a pivot away from a broad portfolio of startup investments. While the fund will continue to make new bets, remaining staff will dedicate more resources to Son's ambitious AI initiatives, such as the proposed $500 billion Stargate project -- an initiative to build a vast network of U.S. data centers in partnership with OpenAI, the source added.
Microsoft

Microsoft is Filling Teams With AI Agents (theverge.com) 57

An anonymous reader shares a report: Microsoft is adding a whole load of AI agents to Teams today, promising Copilot assistants for every channel, meeting, and community. The new agents will also work across SharePoint and Viva Engage, and are rolling out for Microsoft 365 Copilot users.

Facilitator agents will now sit in on Teams meetings, creating agendas, taking notes, and answering questions. Agents can also suggest time allotments for different meeting topics -- letting participants know if they're running over -- and create documents and tasks. A mobile version is designed to be activated "with a single tap" so you can make sure the agent doesn't miss out on "a quick hallway chat or a spontaneous in-person sync." Channel agents are designed to answer questions based on a channel's previous conversations and meetings and can also generate status reports for a project the same way.

Chrome

Google Adds Gemini To Chrome Desktop Browser for US Users (blog.google) 57

Google has added Gemini features to Chrome for all desktop users in the US browsing in English following a limited release to paying subscribers in May. The update introduces a Gemini button in the browser that launches a chatbot capable of answering questions about page content and synthesizing information from multiple tabs. Users can remove the Gemini sparkle icon from Chrome's interface.

Google will add its AI Mode search feature to Chrome's address bar before September ends. The feature will suggest prompts based on webpage content but won't replace standard search functionality. Chrome on Android already includes Gemini features. The company plans to add agentic capabilities in coming months that would allow Gemini to perform tasks like adding items to online shopping carts by controlling the browser cursor.
AI

China's DeepSeek Says Its Hit AI Model Cost Just $294,000 To Train (reuters.com) 60

Chinese AI developer DeepSeek said it spent $294,000 on training its R1 model, much lower than figures reported for U.S. rivals, in a paper that is likely to reignite debate over Beijing's place in the race to develop artificial intelligence. Reuters: The rare update from the Hangzhou-based company -- the first estimate it has released of R1's training costs -- appeared in a peer-reviewed article in the academic journal Nature published on Wednesday.

DeepSeek's release of what it said were lower-cost AI systems in January prompted global investors to dump tech stocks as they worried the new models could threaten the dominance of AI leaders including Nvidia. Since then, the company and founder Liang Wenfeng have largely disappeared from public view, apart from pushing out a few new product updates.

[...] The Nature article, which listed Liang as one of the co-authors, said DeepSeek's reasoning-focused R1 model cost $294,000 to train and used 512 Nvidia H800 chips. Sam Altman, CEO of U.S. AI giant OpenAI, said in 2023 that what he called "foundational model training" had cost "much more" than $100 million - though his company has not given detailed figures for any of its releases.

AI

How Americans View AI and Its Impact on People and Society (pewresearch.org) 55

Key takeaways from a new survey by Pew Research: 1. Americans are much more concerned than excited about the increased use of AI in daily life, with a majority saying they want more control over how AI is used in their lives.
2. Far larger shares say AI will erode than improve people's ability to think creatively and form meaningful relationships.
3. At the same time, a majority is open to letting AI assist them with day-to-day tasks and activities.
4. Most Americans don't support AI playing a role in personal matters such as religion or matchmaking. They're more open to AI for heavy data analysis, such as for weather forecasting and developing new medicines.
5. Americans feel strongly that it's important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don't trust their own ability to spot AI-generated content.

Intel

Nvidia To Invest $5 Billion in Intel (ft.com) 49

Nvidia has agreed to invest $5 billion in its struggling rival Intel [non-paywalled source] as part of a deal to develop new chips for PCs and data centres, the latest reordering of the tech industry spurred by AI. From a report: The deal comes a month after the US government agreed to take a 10 per cent stake in Intel, as Donald Trump's administration looks to secure the future of American chip manufacturing.

However, the pair's announcement makes no reference to Nvidia using Intel's foundry to produce its chips. Intel, which has struggled to gain a foothold in the booming AI server market, lost its crown as the world's most valuable chipmaker to Nvidia in 2020. On Thursday Jensen Huang, Nvidia's chief executive, hailed a "historic collaboration" and "a fusion of two world-class platforms," combining its graphics processing units, which dominate the market for AI infrastructure, with Intel's general-purpose chips.
Further reading: Intel Weighed $20 Billion Nvidia Takeover in 2005.
AI

DeepSeek Writes Less-Secure Code For Groups China Disfavors 36

Research shows China's top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. "The findings ... underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence," reports the Washington Post. From the report: In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.

Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.

Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.

One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).

A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.
AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her clients' testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Programming

Microsoft Favors Anthropic Over OpenAI For Visual Studio Code (theverge.com) 7

Microsoft is now prioritizing Anthropic's Claude 4 over OpenAI's GPT-5 in Visual Studio Code's auto model feature, signaling a quiet but clear shift in preference. The Verge reports: "Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot," said Julia Liuson, head of Microsoft's developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft's model guidance hasn't changed.

Microsoft is also making "significant investments" in training its own AI models. "We're also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things," said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week.

Microsoft is also reportedly planning to use Anthropic's AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be "partly powered by Anthropic models," after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

AI

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals (arstechnica.com) 75

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking."

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."
Gemini's solutions are available on GitHub.
AI

AI's Ability To Displace Jobs is Advancing Quickly, Anthropic CEO Says (axios.com) 50

The ability of AI displace humans at various tasks is accelerating quickly, Anthropic CEO Dario Amodei said at an Axios event on Wednesday. From the report: Amodei and others have previously warned of the possibility that up to half of white-collar jobs could be wiped out by AI over the next five years. The speed of that displacement could require government intervention to help support the workforce, executives said.

"As with most things, when an exponential is moving very quickly, you can't be sure," Amodei said. "I think it is likely enough to happen that we felt there was a need to warn the world about it and to speak honestly." Amodei said the government may need to step in and support people as AI quickly displaces human work.

AI

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance (theregister.com) 90

AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."

The fundamental problem is that AI models are trained to reward guesswork, rather than the correct answer. Guessing might produce a superficially suitable answer. Telling users your AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the trainers taught the engine to return an answer, rather than admit ignorance. "Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.

Slashdot Top Deals