AI

OpenAI Declares 'Code Red' As Google Catches Up In AI Race

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.
Privacy

Apple To Resist India Order To Preload State-Run App As Political Outcry Builds (reuters.com)

Apple does not plan to comply with India's mandate to preload its smartphones with a state-owned cyber safety app that cannot be disabled. According to Reuters, the order "sparked surveillance concerns and a political uproar" after it was revealed on Monday. From the report: In the wake of the criticism, India's telecom minister Jyotiraditya M. Scindia on Tuesday said the app was a "voluntary and democratic system," adding that users can choose to activate it and can "easily delete it from their phone at any time." At present, the app can be deleted by users. Scindia did not comment on or clarify the November 28 confidential directive that ordered smartphone makers to start preloading it and ensure "its functionalities are not disabled or restricted."

Apple, however, does not plan to comply with the directive and will tell the government it does not follow such mandates anywhere in the world as they raise a host of privacy and security issues for the company's iOS ecosystem, said two of the industry sources who are familiar with Apple's concerns. They declined to be named publicly as the company's strategy is private. "It's not only like taking a sledgehammer, this is like a double-barrel gun," said the first source.

AI

Amazon To Use Nvidia Tech In AI Chips, Roll Out New Servers

AWS is deepening its partnership with Nvidia by adopting "NVLink Fusion" in its upcoming Trainium4 AI chips. "The NVLink technology creates speedy connections between different kinds of chips and is one of Nvidia's crown jewels," notes Reuters. From the report: Nvidia has been pushing to sign up other chip firms to adopt its NVLink technology, with Intel, Qualcomm and now AWS on board. The technology will help AWS build bigger AI servers that can recognize and communicate with one another faster, a critical factor in training large AI models, in which thousands of machines must be strung together. As part of the Nvidia partnership, customers will have access to what AWS is calling AI Factories, exclusive AI infrastructure inside their own data centers for greater speed and readiness.

Separately, Amazon said it is rolling out new servers based on a chip called Trainium3. The new servers, available on Tuesday, each contain 144 chips and have more than four times the computing power of AWS's previous generation of AI servers, while using 40% less power, Dave Brown, vice president of AWS compute and machine learning services, told Reuters. Brown did not give absolute figures on power or performance, but said AWS aims to compete with rivals -- including Nvidia -- based on price.
"Together, Nvidia and AWS are creating the compute fabric for the AI industrial revolution - bringing advanced AI to every company, in every country, and accelerating the world's path to intelligence," Nvidia CEO Jensen Huang said in a statement.
Government

Trump Administration To Take Equity Stake In Former Intel CEO's Chip Startup (wsj.com)

An anonymous reader quotes a report from the Wall Street Journal: The Trump administration has agreed to inject up to $150 million into a startup (source paywalled; alternative source) trying to develop more advanced semiconductor manufacturing techniques in the U.S., its latest bid to support strategically important domestic industries with government incentives. Under the arrangement, the Commerce Department would give the incentives to xLight, a startup trying to improve the critical chip-making process known as extreme ultraviolet lithography, the agency said in a Monday release. In return, the government would get an equity stake that would likely make it xLight's largest shareholder.

The Dutch firm ASML is currently the only global producer of EUV machines, which can cost hundreds of millions of dollars each. XLight is seeking to improve on just one component of the EUV process: the crucially important lasers that etch complex microscopic patterns onto chemically treated silicon wafers. The startup is hoping to integrate its light sources into ASML's machines. XLight represents a second act for Pat Gelsinger, the former chief executive of Intel who was fired by the board late last year after the chip maker suffered from weak financial performance and a stalled manufacturing expansion. Gelsinger serves as executive chairman of xLight's board.

[...] The xLight deal uses funding from the 2022 Chips and Science Act allocated for earlier stage companies with promising technologies. It is the first Chips Act award in President Trump's second term and is a preliminary agreement, meaning it isn't finalized and could change. "This partnership would back a technology that can fundamentally rewrite the limits of chipmaking," Commerce Secretary Howard Lutnick said in the release.

Intel

Former CEO Blasts Intel's 'Decay': 'We Don't Know How To Engineer Anymore' (ft.com)

Pat Gelsinger, the former Intel CEO who was pushed out in late 2024 during a five-year turnaround effort, told the Financial Times that the "decay" he found when he returned to the company in 2021 was "deeper and harder than I'd realized." In the five years before his return, "not a single product was delivered on schedule," he said. "Basic disciplines" had been lost. "It's like, wow, we don't know how to engineer anymore!"

Gelsinger was also unsparing about the Biden administration's implementation of the 2022 Chips Act, legislation he spent more time lobbying for than any other CEO. "Two and a half years later [and] no money is dispensed? I thought it was hideous!" There's what Gelsinger carefully calls "a touch of irony" in how things played out.

Intel's board forced him out four years into a five-year plan, then picked successor Lip-Bu Tan -- who Gelsinger says is following the same broad strategy. Tan has kept Intel in the manufacturing game and delivered the 18A process node within the five years Gelsinger originally promised. Asked what went wrong, Gelsinger conceded he was "very focused on managing 'down'" and should have managed "up" more. He also would have pushed harder for more semiconductor expertise on the board, he said.
AI

Browser Extension 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co)

"The internet is being increasingly polluted by AI generated text, images and video," argues the site for a new browser extension called Slop Evader. It promises to use Google's search API "to only return content published before Nov 30th, 2022" — the day ChatGPT launched — "so you can be sure that it was written or produced by the human hand."

404 Media calls it "a scorched earth approach that virtually guarantees your searches will be slop-free." Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry's unrelenting, aggressive rollout of so-called "generative AI" — despite widespread criticism and the wider public's distaste for it. "This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we're in," Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. "I've been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022...."

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won't be able to find anything time-sensitive or current — including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time — nostalgia for a human-centric world wide web that no longer exists.

Of course, the tool's limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo's search indexing instead of Google's. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley's AI-pushers have forced on us... With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year)... But no matter what form AI slop-refusal takes, it will need to be a group effort.

GNU is Not Unix

Hundreds of Free Software Supporters Tuned in For 'FSF40' Hackathon (fsf.org)

The Free Software Foundation describes how "After months of preparation and excitement, we finally came together on November 21 for a global online hackathon to support free software projects and put a spotlight on the difficult and often thankless work that free software hackers carry out..."

Based on how many of you dropped in over the weekend and were incredibly engaged in the important work that is improving free software, either as a spectator or as a participant, this goal was accomplished. And it's all thanks to you!

Friday started a little rocky with a datacenter outage affecting most FSF services. Participants spread out to work on six different free software projects over forty-eight hours as our tech team worked to restore all FSF sites with the help and support of the community. Over three hundred folks were tuned in at a time, some to participate in the hackathon and others to follow the progress being made. As a community, we got a lot done over the weekend...

It was amazing to see so many of you take a little (or a lot of!) time out of your busy schedules to improve free software, and we're incredibly grateful for each and every one of you. It really energizes us and shows us how much we can accomplish when we work together over even just a couple days. Not only was this a fantastic sight to see because of the work we got done, but it was also a very fitting way to conclude our fortieth anniversary celebration events. Free software has been and always will be a community effort, one that continues to get better and better because of the dedicated developers, contributors, and users who ensure its existence. Thank you for celebrating forty years of the FSF and fighting for a freer future for us all.

News

Officials Clashed in Investigation of Deadly Air India Crash (wsj.com)

The investigation into the June 12 Air India crash that killed 260 people has been marked by tension, suspicion and poor communication between American and Indian officials, including an episode where NTSB chairwoman Jennifer Homendy instructed her black-box specialists not to board a late-night Indian military flight to a remote facility, WSJ reports.

When two American recorder experts landed in New Delhi in late June, they received urgent messages from colleagues telling them not to go with the Indians; Homendy had grown concerned about sending U.S. personnel and equipment to an aerospace lab in the remote town of Korwa amid State Department security warnings about terrorism in the region. She made calls to Transportation Secretary Sean Duffy and the CEOs of Boeing and GE Aerospace, and the State Department sent embassy officials to intercept the NTSB specialists at the airport.

Homendy eventually delivered an ultimatum: if Indian authorities didn't choose between their Delhi facility and the NTSB's Washington lab within 48 hours, she would withdraw American support from the probe. Indian officials relented. The downloaded data showed someone in the cockpit moved switches that cut off the engines' fuel supply, and India's preliminary report stated one pilot asked the other why he moved the switches while that pilot denied doing so. American government and industry officials now privately believe the captain likely moved the switches deliberately.
EU

EU To Examine If Apple Ads and Maps Subject To Tough Rules, Apple Says No (reuters.com)

EU antitrust regulators will examine whether Apple's Apple Ads and Apple Maps should be subject to the onerous requirements of the bloc's digital rules after both services hit key criteria, with the U.S. tech giant saying they should be exempted. From a report: Apple's App Store, iOS operating system and Safari web browser were designated core platform services under the Digital Markets Act two years ago aimed at reining in the power of Big Tech and opening up the field to rivals so consumers can have more choice. The European Commission said that Apple has notified it that Apple Ads and Apple Maps met the Act's two thresholds to be considered "gatekeepers." The DMA designates companies with services with more than 45 million monthly active users and $79 billion in market capitalisation as gatekeepers subject to a list of dos and don'ts.
Social Networks

Social Media Giants Liable For Financial Scams Under New EU Law (politico.eu)

Platforms including Meta and TikTok will be held liable for financial fraud for the first time under new rules agreed by EU lawmakers in the early hours of Thursday. From a report: The Parliament and Council agreed on the package of rules after eight hours of negotiations to strengthen safeguards against payment fraud. The deal adds another layer of EU regulatory risk for U.S. tech giants, which have lobbied the White House to confront Brussels' anti-monopoly and content moderation rules.

[...] Social media has become rife with financial scams, and MEPs pushed hard to hold both Big Tech and banks liable during legislative negotiations. EU governments, meanwhile, believed banks should be held responsible if their safeguards aren't strong enough. As a compromise, lawmakers agreed that banks should reimburse victims if a scammer, impersonating the bank, swindles them out of their money, or if payments are processed without consent.

AI

AI Can Technically Perform 12% of US Labor Market's Wage Value, MIT Simulation Finds (cnbc.com)

Researchers at MIT and Oak Ridge National Laboratory have built a simulation that models all 151 million American workers and their skills, then maps those skills against the capabilities of over 13,000 AI tools currently in production to see where the two overlap. The answer, according to their analysis: 11.7% of the US labor market's total wage value, or about $1.2 trillion, sits in tasks that AI systems can technically perform [PDF].

The researchers call this the Iceberg Index, and the name is deliberate. The visible AI disruption happening in tech jobs right now accounts for only 2.2% of labor market wage value. The remaining exposure lurks in cognitive and administrative work across finance, healthcare administration, and professional services, and unlike tech-sector disruption, it's spread across all fifty states rather than concentrated on the coasts.

Delaware and South Dakota show higher Iceberg Index values than California because their economies lean heavily on administrative and financial work. Ohio and Tennessee register modest tech-sector exposure but substantial hidden risk in the white-collar functions that support their manufacturing bases.

To validate the framework, the researchers compared their predictions against Anthropic's Economic Index tracking real-world AI usage from millions of Claude users. The two measures agreed on state categorizations 69% of the time, with particularly strong alignment at the extremes.

The Iceberg Index doesn't predict job losses or adoption timelines. It measures technical capability, the overlap between what AI can do and what occupations require. Traditional economic indicators like GDP and unemployment explain less than five percent of the variation in this skill-based exposure, which is partly why the researchers argue workforce planners need new metrics.
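In spirit, that measure is a wage-weighted overlap between what AI tools can do and what occupations require. The sketch below shows the arithmetic on invented numbers; the occupations, wage bills, and task fractions are illustrative placeholders, not figures from the study.

```python
# Toy wage-weighted "technical exposure" calculation. All figures below are
# invented for illustration; they are not taken from the MIT/Oak Ridge study.

occupations = [
    # (occupation, total annual wage bill in $B, fraction of tasks AI can technically perform)
    ("Software developers",     200.0, 0.35),
    ("Insurance claims clerks",  15.0, 0.60),
    ("Registered nurses",       400.0, 0.05),
    ("Financial analysts",       60.0, 0.45),
]

total_wages = sum(wages for _, wages, _ in occupations)
exposed_wages = sum(wages * share for _, wages, share in occupations)

print(f"Wage value in AI-performable tasks: ${exposed_wages:.0f}B")
print(f"Share of total wage value: {exposed_wages / total_wages:.1%}")
```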
Android

Google's AirDrop Support For Pixel 10 Likely Exists Because of EU's Apple Ruling (9to5google.com)

Last week, Google surprised the tech world when it announced AirDrop support on Pixel 10 devices -- all without Apple's involvement. "While it initially seemed like this was a rogue move made by Google to coerce Apple into another boundary-breaking decision, it might actually be part of the repercussions that also led to USB-C on iPhone and the adoption of RCS," reports 9to5Google. From a report: As reported by Ars Technica, the answer to this week's mysterious Quick Share upgrade lies in the EU's interoperability requirements designed for the DMA. The ruling out of the European Commission pushed Apple to begin supporting interoperable wireless standards beginning with this year's set of OS upgrades, replacing the previous proprietary standard the company used to power its various Continuity features. That forced Apple to add support for the Wi-Fi Alliance's Wi-Fi Aware standard of multi-directional file sharing, at the cost of completely phasing out its previous walled-in protocol.
AI

OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide

An anonymous reader quotes a report from Ars Technica: Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen's suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot. The earliest look at OpenAI's strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen's "suicide coach." OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world's most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring "the full picture" revealed by the teen's chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he'd begun experiencing suicidal ideation at age 11, long before he used the chatbot. "A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT," OpenAI's filing argued. [...] All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of "sensitive evidence" made available to the public, due to its intention to handle mental health-related cases with "care, transparency, and respect."
The Raine family's lead lawyer called OpenAI's response "disturbing."

"They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a 'beautiful suicide.' And OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note."

OpenAI is leaning on its usage policies to defend against this case, emphasizing that "ChatGPT users acknowledge their use of ChatGPT is 'at your sole risk'" and that Raine should never have been allowed to use the chatbot without parental consent.
AI

Warner Music Group Partners With Suno To Offer AI Likenesses of Its Artists

Warner Music Group has reached a licensing deal with Suno that will let users create AI-generated music using the voices and likenesses of artists who opt in. WMG says participating artists will have "full control" over how their likeness and music are used. "These will be new creation experiences from artists who do opt in, which will open up new revenue streams for them and allow you to interact with them in new ways," Suno says, adding that users will be able to "build around" an artist's sounds "and ensure they get compensated." WMG is also dropping its previous lawsuit accusing Suno of scraping copyrighted material.

"Along with the licensing agreement, Suno is planning to use licensed music from WMG to build next-gen music generation models that it claims will surpass its flagship v5 model," adds The Verge. "It will also start requiring users to have a paid account to download songs starting next year, with each tier providing a specific number of downloads each month."

Further reading: First 'AI Music Creator' Signed by Record Label. More Ahead, or Just a Copyright Quandary?
Privacy

Google Maps Will Let You Hide Your Identity When Writing Reviews (pcmag.com)

An anonymous reader quotes a report from PCMag: Four new features are coming to Google Maps, including a way to hide your identity in reviews. Maps will soon let you use a nickname and select an alternative profile picture for online reviews, so you can rate a business without linking it to your full name and Google profile photo. Google says it will monitor for "suspicious and fake reviews," and every review is still associated with an account on Google's backend, which it believes will discourage bad actors.

Look for a new option under Your Profile that says Use a custom name & picture for posting. You'll then be able to pick an illustration to represent you and add a nickname. Google didn't explain why it is introducing anonymous reviews; it pitched the idea as a way to be a business's "Secret Santa." Some users are nervous about publicly posting reviews for local businesses, as reviews may be used to track their location or movements. The option may encourage more people to contribute honest feedback to its platform, for better or worse.
Further reading: Gemini AI To Transform Google Maps Into a More Conversational Experience
