Windows

PIRG, Other Groups Criticize Microsoft's Plan to Discontinue Support for Windows 10 (windowscentral.com) 157

The consumer advocacy nonprofit PIRG (Public Interest Research Group) is now petitioning Microsoft to reconsider pulling support for Windows 10 in 2025, since "as many as 400 million perfectly good computers that can't upgrade to Windows 11 will be thrown out." In a petition addressed to Microsoft CEO Satya Nadella, the group warned the October 14 end of free support could cause "the single biggest jump in junked computers ever, and make it impossible for Microsoft to hit their sustainability goals." About 40% of PCs currently in use can't upgrade to Windows 11, even if users want to... Less than a quarter of electronic waste is recycled, so most of those computers will end up in landfills.
Consumer Reports also recently urged Microsoft not to "strand millions of customers." And now more groups are pushing back, according to a post from the blog Windows Central: The Restart Project co-developed the "End of 10" toolkit, which is designed to support Windows 10 users who can't upgrade to Windows 11 after the operating system hits its end-of-support date.
They also note that a Paris-based company called Back Market plans to sell Windows 10 laptops refurbished with Ubuntu Linux or ChromeOS Flex. ("We refuse to watch hundreds of millions of perfectly good computers end up in the trash as e-waste," explains their web site.) Back Market's ad promises an "up-to-date, secure operating system — so instead of paying for a new computer you don't need, you can help us give this one a brand new life."

Right now Windows 10 holds 71.9% of Microsoft's market share, with Windows 11 at 22.95%, according to figures from StatCounter cited by the blog Windows Central. And HP and Dell "recently indicated that half of the global PCs are still running Windows 10," according to another Windows Central post...
Chrome

Google Temporarily Pauses AI-Powered 'Homework Helper' Button in Chrome Over Cheating Concerns (msn.com) 65

An anonymous reader shared this article from the Washington Post: A student taking an online quiz sees a button appear in their Chrome browser: "homework help." Soon, Google's artificial intelligence has read the question on-screen and suggests "choice B" as the answer. The temptation to cheat was suddenly just two clicks away Sept. 2, when Google quietly added a "homework help" button to Chrome, the world's most popular web browser. The button has been appearing automatically on the kinds of course websites used by the majority of American college students and many high-schoolers, too. Pressing it launches Google Lens, a service that reads what's on the page and can provide an "AI Overview" answer to questions — including during tests.

Educators I've spoken with are alarmed. Schools including Emory University, the University of Alabama, the University of California at Los Angeles and the University of California at Berkeley have alerted faculty to how the button appears in the URL box of course sites and to their limited ability to control it.

Chrome's cheating tool exemplifies Big Tech's continuing gold rush approach to AI: launch first, consider consequences later and let society clean up the mess. "Google is undermining academic integrity by shoving AI in students' faces during exams," says Ian Linkletter, a librarian at the British Columbia Institute of Technology who first flagged the issue to me. "Google is trying to make instructors give up on regulating AI in their classroom, and it might work. Google Chrome has the market share to change student behavior, and it appears this is the goal."

Several days after I contacted Google about the issue, the company told me it had temporarily paused the homework help button — but also didn't commit to keeping it off. "Students have told us they value tools that help them learn and understand things visually, so we're running tests offering an easier way to access Lens while browsing," Google spokesman Craig Ewer said in a statement.

Moon

Interlune Signs $300M Deal to Harvest Helium-3 for Quantum Computing from the Moon (msn.com) 60

An anonymous reader shared this report from the Washington Post: Finnish tech firm Bluefors, a maker of ultracold refrigerator systems critical for quantum computing, has purchased tens of thousands of liters of Helium-3 from the moon — spending "above $300 million" — through a commercial space company called Interlune. The agreement, which has not been previously reported, marks the largest purchase of a natural resource from space.

Interlune, a company founded by former executives from Blue Origin and an Apollo astronaut, has faced skepticism about its mission to become the first entity to mine the moon (which is legal thanks to a 2015 law that grants U.S. space companies the rights to mine on celestial bodies). But advances in its harvesting technology and the materialization of commercial agreements are gradually making this undertaking sound less like science fiction. Bluefors is the third customer to sign up, with an order of up to 10,000 liters of Helium-3 annually for delivery between 2028 and 2037...

Helium-3 is lighter than the Helium-4 gas featured at birthday parties. It's also much rarer on Earth. But moon rock samples from the Apollo days hint at its abundance there. Interlune has placed the market value at $20 million per kilogram (about 7,500 liters). "It's the only resource in the universe that's priced high enough to warrant going out to space today and bringing it back to Earth," said Rob Meyerson [CEO of Interlune and former president of Blue Origin]...

[H]eat, even in small doses, can cause qubits to produce errors. That's where Helium-3 comes in. Bluefors makes the cooling technology that allows the computer to operate — producing chandelier-type structures known as dilution refrigerators. Their fridges, used by quantum computer leader IBM, contain a mixture of Helium-3 and Helium-4 that pushes temperatures below 10 millikelvins (or minus-460 degrees Fahrenheit)... Existing quantum computers have been built with more than a thousand qubits, he said, but a commercial system or data center would need a million or more. That could require perhaps thousands of liters of Helium-3 per quantum computer. "They will need more Helium-3 than is available on planet Earth," said Gary Lai [a co-founder and chief technology officer of Interlune, who was previously the chief architect at Blue Origin]. Most Helium-3 on Earth, he said, comes from the decay of tritium (an isotope of hydrogen) in nuclear weapons stockpiles, but between 22,000 and 30,000 liters are made each year...
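The article's figures can be sanity-checked with some quick arithmetic. A minimal sketch, using only the numbers quoted above (Interlune's $20 million per kilogram, roughly 7,500 liters of gas per kilogram, and Bluefors' order of up to 10,000 liters annually over the 2028-2037 delivery window):

```python
# Back-of-the-envelope check of the Helium-3 figures quoted in the article.
# All inputs come from the reported numbers; the conversion of ~7,500 liters
# of gas per kilogram is Interlune's own equivalence as quoted.

PRICE_PER_KG = 20_000_000   # USD, Interlune's stated market value
LITERS_PER_KG = 7_500       # liters of Helium-3 gas per kilogram, per the article

price_per_liter = PRICE_PER_KG / LITERS_PER_KG
print(f"~${price_per_liter:,.0f} per liter")  # ~$2,667 per liter

# Bluefors: up to 10,000 liters annually, delivered 2028-2037 (10 years)
max_liters = 10_000 * 10
max_value = max_liters * price_per_liter
print(f"up to {max_liters:,} liters ≈ ${max_value / 1e6:.0f}M")  # ≈ $267M
```

At list price the maximum order works out to roughly $267 million, which is broadly consistent with the "above $300 million" figure the Post reports for the deal (contract pricing need not match the quoted market value exactly). Note too that the order's ceiling of 100,000 liters is several times the 22,000-30,000 liters produced on Earth each year.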

"We estimate there's more than a million metric tons of Helium-3 on the moon," Meyerson said. "And it's been accumulating there for 4 billion years." Now, they just need to get it.

Interlune CEO Meyerson tells the Post, "It's really all about establishing a resilient supply chain for this critical material" — adding that in the long term he could also see Helium-3 being used for other purposes, including fusion energy.
Security

Self-Replicating Worm Affected Several Hundred NPM Packages, Including CrowdStrike's (www.koi.security) 33

The Shai-Hulud malware campaign impacted hundreds of npm packages across multiple maintainers, reports Koi Security, including popular libraries like @ctrl/tinycolor and some packages maintained by CrowdStrike. Malicious versions embed a trojanized script (bundle.js) designed to steal developer credentials, exfiltrate secrets, and persist in repositories and endpoints through automated workflows.
Koi Security created a table of packages identified as compromised, promising it's "continuously updated" (and showing the last compromise detected Tuesday). Nearly all of the compromised packages have a status of "removed from NPM". Attackers published malicious versions of @ctrl/tinycolor and other npm packages, injecting a large obfuscated script (bundle.js) that executes automatically during installation. This payload repackages and republishes maintainer projects, enabling the malware to spread laterally across related packages without direct developer involvement. As a result, the compromise quickly scaled beyond its initial entry point, impacting not only widely used open-source libraries but also CrowdStrike's npm packages.

The injected script performs credential harvesting and persistence operations. It runs TruffleHog to scan local filesystems and repositories for secrets, including npm tokens, GitHub credentials, and cloud access keys for AWS, GCP, and Azure. It also writes a hidden GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) that exfiltrates secrets during CI/CD runs, ensuring long-term access even after the initial infection is cleaned up. This dual focus on endpoint secret theft and repository backdoors makes Shai-Hulud one of the more dangerous npm supply-chain campaigns seen to date.

"The malicious code also attempts to leak data on GitHub by making private repositories public," according to a Tuesday blog post from security systems provider Sysdig: The Sysdig Threat Research Team (TRT) has been monitoring this worm's progress since its discovery. Due to quick response times, the number of new packages being compromised has slowed considerably. No new packages have been seen in several hours at the time...
Their blog post concludes: "Supply chain attacks are increasing in frequency. It is more important than ever to monitor third-party packages for malicious activity."

Some context from Tom's Hardware: To be clear: This campaign is distinct from the incident that we covered on Sept. 9, which saw multiple npm packages with billions of weekly downloads compromised in a bid to steal cryptocurrency. The ecosystem is the same — attackers have clearly realized the GitHub-owned npm package registry for the Node.js ecosystem is a valuable target — but whoever's behind the Shai-Hulud campaign is after more than just some Bitcoin.
AI

Is OpenAI's Video-Generating Tool 'Sora' Scraping Unauthorized YouTube Clips? (msn.com) 18

"OpenAI's video generation tool, Sora, can create high-definition clips of just about anything you could ask for..." reports the Washington Post.

"But OpenAI has not specified which videos it grabbed to make Sora, saying only that it combined 'publicly available and licensed data'..." With ChatGPT, OpenAI helped popularize the now-standard industry practice of building more capable AI tools by scraping vast quantities of text from the web without consent. With Sora, launched in December, OpenAI staff said they built a pioneering video generator by taking a similar approach. They developed ways to feed the system more online video — in more varied formats — including vertical videos and longer, higher-resolution clips... To explore what content OpenAI may have used, The Washington Post used Sora to create hundreds of videos that show it can closely mimic movies, TV shows and other content...

In dozens of tests, The Post found that Sora can create clips that closely resemble Netflix shows such as "Wednesday"; popular video games like "Minecraft"; and beloved cartoon characters, as well as the animated logos for Warner Bros., DreamWorks and other Hollywood studios, movies and TV shows. The publicly available version of Sora can generate only 20-second clips, without audio. In most cases, the look-alike scenes were made by typing basic requests like "universal studios intro." The results also showed that Sora can create AI videos with the logos or watermarks that broadcasters and tech companies use to brand their video content, including those for the National Basketball Association, Chinese-owned social app TikTok and Amazon-owned streaming platform Twitch...

Sora's ability to re-create specific imagery and brands suggests a version of the originals appeared in the tool's training data, AI researchers said. "The model is mimicking the training data. There's no magic," said Joanna Materzynska, a PhD researcher at Massachusetts Institute of Technology who has studied datasets used in AI. An AI tool's ability to reproduce proprietary content doesn't necessarily indicate that the original material was copied or obtained from its creators or owners. Content of all kinds is uploaded to video and social platforms, often without the consent of the copyright holder... Materzynska co-authored a study last year that found more than 70 percent of public video datasets commonly used in AI research contained content scraped from YouTube.

Netflix and Twitch said they did not have a content partnership for training OpenAI, according to the article (which adds that OpenAI "has yet to face a copyright suit over the data used for Sora.")

Two key quotes from the article:
  • "Unauthorized scraping of YouTube content continues to be a violation of our Terms of Service." — YouTube spokesperson Jack Malon
  • "We train on publicly available data consistent with fair use and use industry-leading safeguards to avoid replicating the material they learn from." — OpenAI spokesperson Kayla Wood
