AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
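To make the "probability gadget" point concrete, here is a minimal, illustrative sketch of next-token prediction: a toy bigram model that generates text purely by sampling whichever word tends to follow the previous one in its training data. The corpus and function names are invented for illustration, and real LLMs use neural networks over subword tokens at vastly larger scale, but the underlying move, predicting the next item from statistics, is the one the article describes.

```python
# Toy illustration of next-token prediction (not how production LLMs are built):
# a bigram model that "writes" by sampling whichever word statistically tends
# to follow the previous one in its training text.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no known continuation
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```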
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

Nintendo

Nintendo Switch 2 Has Record-Breaking Launch, Selling Over 3 Million Units (barrons.com) 48

TweakTown writes that the Switch 2 "has reportedly beaten the record for the most-sold console within 24 hours and is on track to shatter the two-month record," selling over 3 million units and tripling the PlayStation 4's previous launch day sales.

So Nintendo's first console in 8 years becomes "one of the most successful hardware releases of all time," writes Barron's, raising hopes for the future: [2017's original Switch] ultimately sold more than 152 million units... Switch 2's big advantage is its backward compatibility, allowing it to play current-generation Switch games and giving gamers solace that their large investments in software are intact... Many older Switch games also play better on the Switch 2, taking advantage of the extra horsepower.
Bloomberg writes that its bigger screen and faster chip "live up to the hype": "Despite the hype and a $150 increase over the launch price for the original, the second-generation system manages to impress with faster performance, improved graphics, more comfortable ergonomics and enough tweaks throughout to make this feel like a distinctly new machine... This time, it's capable of outputting 4K resolution and more impactful HDR video to your TV screen... It's a bigger, faster, more polished version of a wildly successful gadget."
The "buzzy launch drew long lines" at retailers like Walmart, Target, Best Buy, and GameStop, according to the article. (See the photos from AOL.com and USA Today.) "The era of spending hours waiting in line for the latest iPhone is long gone, but the debut of a new video game console is still a rare enough event that Nintendo fans didn't think twice about driving to retailers in the middle of the night to secure a Switch 2."

The Verge also opines that "the Switch 2's eShop is much better," calling it "way faster... with much less lag browsing through sections and loading up game pages."

Or, as Barron's puts it, "Ultimately, Nintendo is winning because it has a different strategy than its competition, the Sony PlayStation and Microsoft Xbox. Instead of trying to appeal to tech snobs like me, who are obsessed with graphics resolution and hardware statistics like teraflops, Nintendo focuses on joy and fun."
Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices.

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Australia

Apple Warns Australia Against Joining EU In Mandating iPhone App Sideloading (neowin.net) 84

Apple has urged Australia not to follow the European Union in mandating iPhone app sideloading, warning that such policies pose serious privacy and security risks. "This communication comes as the Australian federal government considers new rules that could force Apple to open up its iOS ecosystem, much like what happened in Europe with recent legislation," notes Neowin. Apple claims that allowing alternative app stores has led to increased exposure to malware, scams, and harmful content. From the report: Apple, in its response to this Australian paper (PDF), stated that Australia should not use the EU's Digital Markets Act "as a blueprint". The company's core argument is that the changes mandated by the EU's DMA, which came into full effect in March 2024, introduce serious security and privacy risks for users. Apple claims that allowing sideloading and alternative app stores effectively opens the door for malware, fraud, scams, and other harmful content. The tech company also highlighted specific concerns from its European experience, alleging that its compliance there has led to users being able to install pornography apps and apps that facilitate copyright infringement, things its curated App Store aims to prevent. Apple maintains that its current review process is vital for user protection, and that its often criticized 30% commission applies mainly to the highest earning apps, with most developers paying a lower 15% rate or nothing.
Open Source

Linux Foundation Tries To Play Peacemaker In Ongoing WordPress Scuffle (theregister.com) 13

The Register's Thomas Claburn reports: The Linux Foundation on Friday introduced a new method to distribute WordPress updates and plugins that's not controlled by any one party, in a bid to "stabilize the WordPress ecosystem" after months of infighting. The FAIR Package Manager project is a response to the legal brawl that erupted last year, pitting WordPress co-creator Matthew Mullenweg, his for-profit hosting firm Automattic, and the WordPress Foundation that he controls, against WP Engine, a rival commercial WordPress hosting firm. [...]

The Linux Foundation says the FAIR Package Manager, a mechanism for distributing open-source WordPress plugins, "eliminates reliance on any single source for core updates, plugins, themes, and more, unites a fragmented ecosystem by bringing together plugins from any source, and builds security into the supply chain." In other words, it can't be weaponized against the WordPress community because it won't be controlled by any one entity. "The FAIR Package Manager project paves the way for the stability and growth of open source content management, giving contributors and businesses additional options governed by a neutral community," said Jim Zemlin, Executive Director of the Linux Foundation, in a canned press statement. "We look forward to the growth in community and contributions this important project attracts."

The FAIR Package Manager repo explains the software's purpose more succinctly. The software "is a decentralized alternative to the central WordPress.org plugin and theme ecosystem, designed to return control to WordPress hosts and developers. It operates as a drop-in WordPress plugin, seamlessly replacing existing centralized services with a federated, open-source infrastructure." In addition to providing some measure of stability, the Linux Foundation sees the FAIR Package Manager as advancing WordPress' alignment with Europe's General Data Protection Regulation by reducing automatic browser data transmission and telemetry sent to commercial entities, while also supporting modern security practices and strengthening the open source software supply chain.
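To illustrate the "no single source" idea in the abstract, here is a conceptual sketch, not the FAIR Package Manager's actual code and with all names and URLs invented, of resolving a plugin against several federated sources so that no one repository can block or control installs.

```python
# Conceptual sketch only (not the actual FAIR Package Manager code): resolve a
# plugin from several federated sources instead of one central repository, so
# no single party can block or control updates. Names and URLs are invented.
from dataclasses import dataclass

@dataclass
class PackageSource:
    name: str
    packages: dict[str, str]  # plugin slug -> download URL

    def lookup(self, slug: str) -> str | None:
        return self.packages.get(slug)

def resolve(slug: str, sources: list[PackageSource]) -> str:
    """Return the first download URL found across the federated sources."""
    for source in sources:
        url = source.lookup(slug)
        if url:
            return url
    raise LookupError(f"{slug!r} not found in any configured source")

sources = [
    PackageSource("community-mirror-a", {"example-seo": "https://mirror-a.example/seo.zip"}),
    PackageSource("community-mirror-b", {"example-forms": "https://mirror-b.example/forms.zip"}),
]
print(resolve("example-forms", sources))  # falls through mirror-a to mirror-b
```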

Businesses

About 20% of Tech Startups Worth More Than $1 Billion Will Fail, Accel Says (theedgemalaysia.com) 33

An anonymous reader shares a report: There are more than 1,000 technology unicorns, meaning venture-backed companies worth $1 billion or more, but at least one in five is likely to fail, said Rich Wong, a partner at venture capital firm Accel Partners. "I think maybe out of that thousand, 20% fully die. The end," Wong said on Thursday at the Bloomberg Tech conference in San Francisco.

The estimate reinforces what's become a grim calculus for many companies. Tech start-up valuations soared during the 2021 pandemic boom -- before crashing back to earth, as interest rates rose and venture capital investments fell. Of the companies that don't fail, about half will be stuck -- muddling along without being able to grow bigger or go public, Wong said. Some of those may "ultimately have reality set in," and sell themselves for lower prices than once seemed feasible. Others, not quite failing, "will be a bit zombie-ish and grind on," he said.

Youtube

YouTube Pulls Tech Creator's Self-Hosting Tutorial as 'Harmful Content' (jeffgeerling.com) 77

YouTube pulled a popular tutorial video from tech creator Jeff Geerling this week, claiming his guide to installing LibreELEC on a Raspberry Pi 5 violated policies against "harmful content." The video, which showed viewers how to set up their own home media servers, had been live for over a year and racked up more than 500,000 views. YouTube's automated systems flagged the content for allegedly teaching people "how to get unauthorized or free access to audio or audiovisual content."

Geerling says his tutorial covered only legal self-hosting of media people already own -- no piracy tools or copyright workarounds. He said he goes out of his way to avoid mentioning popular piracy software in his videos. It's the second time YouTube has pulled one of Geerling's self-hosting videos. Last October, YouTube removed his Jellyfin tutorial, though that decision was quickly reversed after appeal. This time, his appeal was denied.
United Kingdom

UK Tech Job Openings Climb 21% To Pre-Pandemic Highs (theregister.com) 17

UK tech job openings have surged 21% to pre-pandemic levels, driven largely by a 200% spike in demand for AI skills. London accounted for 80% of the AI-related postings. The Register reports: Accenture collected data from LinkedIn in the first and second week of February 2025, and supplemented the results with a survey of more than 4,000 respondents conducted by research firm YouGov between July and August 2024. The research found a 53 percent annual increase in those describing themselves as having tech skills, amounting to 1.69 million people reporting skills in disciplines including cyber, data, and robotics. [...]

The research found that London-based companies said they would allocate a fifth of their tech budgets to AI this year, compared with 13 percent of companies based in North East England, Scotland, and Wales who said the same. Growth in revenue per employee increased during the period when LLMs emerged, from 7 percent annually between 2018 and 2022 to 27 percent between 2018 and 2024. Meanwhile, growth in the same measure fell slightly in industries less affected by AI, such as mining and hospitality, the researchers said.

Intel

Intel: New Products Must Deliver 50% Gross Profit To Get the Green Light (tomshardware.com) 44

Intel has implemented a strict new policy requiring all new projects to demonstrate at least a 50% gross margin to move forward. CEO Lip-Bu Tan explained Intel's new risk-averse policy as "something that we probably should have had before," later clarifying that the number is a figure the company is aspiring toward internally. Tom's Hardware reports: Tan is reportedly "laser focused on the fact that we need to get our gross margins back up above 50%." To accomplish this, Tan is also said to be investigating and potentially cancelling or changing unprofitable deals with other companies. Intel's margins have slipped to new lows for the company in recent months. MacroTrends reports Intel's trailing 12-month gross margin for Q1 2025 was as low as 31.67%. Intel's gross margins had hovered around the 60% mark for the ten years leading up to the COVID-19 pandemic, falling beneath 50% in Q2 2022 and continuing to fall steadily ever since.
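For reference, gross margin is simply revenue minus cost of goods sold, divided by revenue. A quick illustrative calculation shows what clearing or missing that bar looks like; only the 50% threshold comes from the article, and the product names and dollar figures below are invented.

```python
# Illustrative gross-margin check. The 50% threshold is the bar described in
# the article; the product names and dollar figures below are made up.
def gross_margin(revenue: float, cost_of_goods_sold: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_goods_sold) / revenue

THRESHOLD = 0.50

for name, revenue, cogs in [("Product A", 120.0, 55.0), ("Product B", 100.0, 70.0)]:
    margin = gross_margin(revenue, cogs)
    verdict = "clears" if margin >= THRESHOLD else "misses"
    print(f"{name}: {margin:.0%} gross margin, {verdict} the 50% bar")
```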

Intel executive Michelle Johnston Holthaus predicts a "tug-of-war" will ensue within Intel in the coming months as engineers and executives find themselves caught between a rock and a hard place. "We need to be building products that... fit the right competitive landscape and requirements of our customers, but also have the right cost structure in place. It really requires us to do both." [...] Tan is also quoted as wanting to turn Intel into an "engineering-focused company" again under his leadership. To that end, Tan has committed to investing in recruiting and retaining top talent; "I believe Intel has lost some of this talent over the years; I want to create a culture of innovation empowerment." Maintaining a culture of empowering innovation and top talent seems, on its face, at odds with layoffs and a freeze on projects not projected to hit 50% gross margins, but Tan seemingly has Intel investors on his side in these pursuits.

Businesses

Discord's CTO Is Just As Worried About Enshittification As You Are (engadget.com) 45

An anonymous reader quotes a report from Engadget: Discord co-founder and CTO Stanislav Vishnevskiy wants you to know he thinks a lot about enshittification. With reports of an upcoming IPO and the news of his co-founder, Jason Citron, recently stepping down to hand leadership of the company over to Humam Sakhnini, a former Activision Blizzard executive, many Discord users are rightfully worried the platform is about to become, well, shit. "I understand the anxiety and concern," Vishnevskiy told Engadget in a recent call. "I think the things that people are afraid of are what separate a great, long-term focused company from just any other company." According to Vishnevskiy, the concern that Discord could fail to do right by its users or otherwise lose its way is a topic of regular discussion at the company.

"I'm definitely the one who's constantly bringing up enshittification," he said of Discord's internal meetings. "It's not a bad thing to build a strong business and to monetize a product. That's how we can reinvest and continue to make things better. But we have to be extremely thoughtful about how we do that." The way Vishnevskiy tells it, Discord already had an identity crisis and came out of that moment with a stronger sense of what its product means to people. You may recall the company briefly operated a curated game store. Discord launched the storefront in 2018 only to shut it down less than a year later in 2019. Vishnevskiy describes that as a period of reckoning within Discord.

"We call it embracing the brutal facts internally," he said of the episode. When Vishnevskiy and Citron started Discord, they envisioned a platform that would not just be for chatting with friends, but one that would also serve as a game distribution hub. "We spent a year building that component of our business and then, quite frankly, we quickly knew it wasn't going well." Out of that failure, Discord decided to focus on its Nitro subscription and embrace everyone who was using the app to organize communities outside of gaming. Since its introduction in 2017, the service has evolved to include a few different perks, but at its heart, Nitro has always been a way for Discord users to get more out of the app and support their favorite servers. [...] Vishnevskiy describes Nitro as a "phenomenal business," but the decision to look beyond gaming created a different set of problems. "It wasn't clear exactly who we were building for, because now Discord was a community product for everyone, and that drove a lot of distractions," he said.
"Discord is something that is meant to be a durable company that has a meaningful impact on people's lives, not just now but in 10 years as well," Vishnevskiy said. "That's the journey that Humam joined and signed up for too. We are long-term focused. Our investors are long-term focused."
China

China Will Drop the Great Firewall For Some Users To Boost Free-Trade Port Ambitions (scmp.com) 49

China's southernmost province of Hainan is piloting a programme to grant select corporate users broad access to the global internet, a rare move in a country known for having some of the world's most restrictive online censorship, as the island seeks to transform itself into a global free-trade port. From a report: Employees of companies registered and operating in Hainan can apply for the "Global Connect" mobile service through the Hainan International Data Comprehensive Service Centre (HIDCSC), according to the agency, which is overseen by the state-run Hainan Big Data Development Centre.

The programme allows eligible users to bypass the so-called Great Firewall, which blocks access to many of the world's most-visited websites, such as Google and Wikipedia. Applicants must be on a 5G plan with one of the country's three major state-backed carriers -- China Mobile, China Unicom or China Telecom -- and submit their employer's information, including the company's Unified Social Credit Code, for approval. The process can take up to five months, HIDCSC staff said.

China

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China (wsj.com) 19

OpenAI said it disrupted several attempts [non-paywalled source] from users in China to leverage its AI models for cyber threats and covert influence operations, underscoring the security challenges AI poses as the technology becomes more powerful. From a report: The Microsoft-backed company on Thursday published its latest report on disrupting malicious uses of AI, saying its investigative teams continued to uncover and prevent such activities in the three months since Feb. 21.

While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin. In one such case, the company said it banned ChatGPT accounts it claimed were using OpenAI's models to generate social media posts for a covert influence operation. The company said a user stated in a prompt that they worked for China's propaganda department, though it cautioned it didn't have independent proof to verify the user's claim.

Businesses

Data Center Boom May End Up Being 'Irrational,' Investor Warns (axios.com) 28

A prominent venture capitalist has warned that the technology industry's massive buildout of AI data centers risks becoming "irrational" and could end in disaster, particularly as companies pursue small nuclear reactors to power the facilities. Josh Wolfe, co-founder and partner at Lux Capital, compared the current infrastructure expansion to previous market bubbles in fiber-optic networking and cloud computing. While individual actions by hyperscale companies to build data center infrastructure remain rational, Wolfe said the collective effort "becomes irrational" and "will not necessarily persist."

The warning comes as Big Tech companies pour tens of billions into data centers and energy sources, with Meta announcing just this week a deal to purchase power from an operating nuclear station in Illinois that was scheduled to retire in 2027. Wolfe said he is worried that speculative capital is flowing into small modular reactors based on presumed energy demands from data centers. "I think that that whole thing is going to end in disaster, mostly because as cliched as it is, history doesn't repeat. It rhymes," he said.
Privacy

New Spying Claims Emerge in Silicon Valley Corporate Espionage Scandal (ft.com) 14

A bitter fight over alleged corporate espionage involving two of Silicon Valley's hottest startups took a new twist on Tuesday, after $12 billion HR software company Deel claimed arch-rival Rippling had directed one of its employees to "pilfer" the company's assets by posing as a customer. From a report: The latest claim comes after Rippling alleged earlier this year that a staff member had been spying on behalf of Deel. The employee locked themselves in a bathroom and smashed their phone with an axe when confronted with the allegations, according to their own testimony.

In new legal filings seen by the Financial Times, Deel has countered by arguing that: "Rippling has been actively engaged in a carefully co-ordinated espionage campaign, through which it infiltrated Deel's customer platform by fraudulent means and pilfered the company's most valuable proprietary assets."

Facebook

Meta's Push Into Defense Tech Reflects Cultural Shift, CTO Says (bloomberg.com) 52

Meta CTO Andrew Bosworth said that the "tides have turned" in Silicon Valley and made it more palatable for the tech industry to support the US military's efforts. From a report: There's long existed a "silent majority" who wanted to pursue defense projects, Bosworth said during an interview at the Bloomberg Tech summit in San Francisco on Wednesday. "There's a much stronger patriotic underpinning than I think people give Silicon Valley credit for," he said. Silicon Valley was founded on military development and "there's really a long history here that we are kind of hoping to return to, but it is not even day one," Bosworth added. He described Silicon Valley's new openness to work with the US military as a "return to grace."
The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
KDE

KDE Targets Windows 10 'Exiles' Claiming 'Your Computer is Toast' (theregister.com) 134

king*jojo shares a report: Linux desktop darling KDE is weighing in on the controversy around the impending demise of Windows 10 support with a lurid "KDE for Windows 10 Exiles" campaign. KDE's alarming "Exiles" page opens with the text "Your computer is toast" followed by a warning that Microsoft wants to turn computers running Windows 10 into junk from October 14.

"It may seem like it continues to work after that date for a bit, but when Microsoft stops support for Windows 10, your perfectly good computer will be officially obsolete." Beneath a picture of a pile of tech junk, including a rotary telephone and some floppy drives, KDE proclaims: "Windows 10 will degrade as more and more bugs come to light. With nobody to correct them, you risk being hacked. Your data, identity, and control over your device could be stolen."

AI

Hollywood Already Uses Generative AI (And Is Hiding It) (vulture.com) 61

Major Hollywood studios are extensively using AI tools while avoiding public disclosure, according to industry sources interviewed by New York Magazine. Nearly 100 AI studios now operate in Hollywood, and every major studio is reportedly experimenting with generative AI despite legal uncertainty around the copyright status of training data, the report said.

Lionsgate has partnered with AI company Runway to create a customized model trained on the studio's film archive, with executives planning to generate entire movie trailers from scripts before shooting begins. The collaboration allows the studio to potentially reduce production costs from $100 million to $50 million for certain projects.

Widespread usage of the new technology is often happening through unofficial channels. Workers are reporting pressure to use AI tools without formal studio approval, then "launder" the AI-generated content through human artists to obscure its origins.
