DRM

The True Cost of Game Piracy: 20% of Revenue, According To a New Study 106

A new study suggests game piracy costs publishers 19% of revenue on average when digital rights management (DRM) protections are cracked. Research associate William Volckmann at UNC analyzed 86 games using Denuvo DRM on Steam between 2014 and 2022.

The study, published in Entertainment Computing, found cracks appearing in the first week after release led to 20% revenue loss, dropping to 5% for cracks after six weeks. Volckmann used Steam user reviews and player counts as proxies for sales data.
DRM

Windows Media Player and Silverlight Are Losing Legacy DRM Services on Windows 7 and 8 (tomshardware.com) 47

An anonymous reader shares a report: Per a recent update to Microsoft's Deprecated Windows features page, Legacy DRM services utilized by Windows Media Player and Silverlight clients for Windows 7 and Windows 8 are now deprecated. This will prevent the streaming or playback of DRM-protected content in those applications on those operating systems. It also includes playing content from personal CD rips and streaming from a Silverlight or Windows 8 client to an Xbox 360 if you were still doing that.

For those unfamiliar, "DRM" refers to Digital Rights Management. Basically, DRM tech tries to ensure that you aren't stealing or playing back pirated content. Of course, piracy still exists, but these days most officially distributed movies, TV shows, games, etc., involve some form of DRM unless explicitly advertised as DRM-free. On paper, DRM seems like harmless piracy prevention. Still, it hasn't been all that effective at eliminating piracy, and where it is implemented, it mainly punishes or inconveniences paying customers. This deprecation is an excellent example of DRM's folly: anyone who had previously opted into Microsoft's legitimate media streaming ecosystem on Windows 7 and 8 is now being penalized for buying media legitimately, since that media will no longer play and they are forced to pivot to other streaming solutions.

Linux

Linux 6.12 To Optionally Display A QR Code During Kernel Panics (phoronix.com) 44

New submitter meisdug writes: A new feature has been submitted for inclusion in Linux 6.12, allowing the display of a QR code when a kernel panic occurs using the DRM Panic handler. This QR code can capture detailed error information that is often missed in traditional text-based panic messages, making it more user-friendly. The feature, written in Rust, is optional and can be enabled via a specific build switch. This implementation follows similar ideas from other operating systems and earlier discussions in the Linux community.
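According to the submission, the feature is opt-in at kernel build time. A hypothetical kernel config enabling it might look like the following (the exact option names here are assumptions based on the patch discussion; verify them against the Kconfig entries in your 6.12 source tree):

```kconfig
# Enable the DRM Panic handler (infrastructure merged in Linux 6.10)
CONFIG_DRM_PANIC=y
# Render the panic screen as a QR code; the encoder is written in Rust,
# so Rust support must be enabled (Linux 6.12+)
CONFIG_RUST=y
CONFIG_DRM_PANIC_SCREEN="qr_code"
CONFIG_DRM_PANIC_SCREEN_QR_CODE=y
```

The QR payload can pack far more of the panic log than fits on screen as text, which is why the approach appeals to distributions collecting crash reports.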
AI

It May Soon Be Legal To Jailbreak AI To Expose How It Works (404media.co) 26

An anonymous reader quotes a report from 404 Media: A group of researchers, academics, and hackers are trying to make it easier to break AI companies' terms of service to conduct "good faith research" that exposes biases, inaccuracies, and training data without fear of being sued. The U.S. government is currently considering an exemption to U.S. copyright law that would allow people to break technical protection measures and digital rights management (DRM) on AI systems to learn more about how they work, probe them for bias, discrimination, harmful and inaccurate outputs, and to learn more about the data they are trained on. The exemption would allow for "good faith" security and academic research and "red-teaming" of AI products even if the researcher had to circumvent systems designed to prevent that research. The proposed exemption has the support of the Department of Justice, which said "good faith research can help reveal unintended or undisclosed collection or exposure of sensitive personal data, or identify systems whose operations or outputs are unsafe, inaccurate, or ineffective for the uses for which they are intended or marketed by developers, or employed by end users. Such research can be especially significant when AI platforms are used for particularly important purposes, where unintended, inaccurate, or unpredictable AI output can result in serious harm to individuals."

Much of what we know about how closed-source AI tools like ChatGPT, Midjourney, and others work comes from researchers, journalists, and ordinary users purposefully trying to trick these systems into revealing something about the data they were trained on (which often includes copyrighted material indiscriminately and secretly scraped from the internet), their biases, and their weaknesses. Doing this type of research can often violate the terms of service users agree to when they sign up for a system. For example, OpenAI's terms of service state that users cannot "attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law)," and adds that users must not "circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services."

Shayne Longpre, an MIT researcher who is part of the team pushing for the exemption, told me that "there is a lot of apprehensiveness about these models and their design, their biases, being used for discrimination, and, broadly, their trustworthiness." "But the ecosystem of researchers looking into this isn't super healthy. There are people doing the work but a lot of people are getting their accounts suspended for doing good-faith research, or they are worried about potential legal ramifications of violating terms of service," he added. "These terms of service have chilling effects on research, and companies aren't very transparent about their process for enforcing terms of service." The exemption would be to Section 1201 of the Digital Millennium Copyright Act, a sweeping copyright law. Other 1201 exemptions, which must be applied for and renewed every three years as part of a process through the Library of Congress, allow for the hacking of tractors and electronic devices for the purpose of repair, have carveouts that protect security researchers who are trying to find bugs and vulnerabilities, and in certain cases protect people who are trying to archive or preserve specific types of content.
Harley Geiger of the Hacking Policy Council said that an exemption is "crucial to identifying and fixing algorithmic flaws to prevent harm or disruption," and added that a "lack of clear legal protection under DMCA Section 1201 adversely affects such research."
Linux

Linux Kernel 6.10 Released (omgubuntu.co.uk) 15

"The latest version of the Linux kernel adds an array of improvements," writes the blog OMG Ubuntu, "including a new memory sealing system call, a speed boost for AES-XTS encryption on Intel and AMD CPUs, and expanding Rust language support within the kernel to RISC-V." Plus, like in all kernel releases, there's a glut of groundwork to offer "initial support" for upcoming CPUs, GPUs, NPUs, Wi-Fi, and other hardware (that most of us don't use yet, but which requires Linux support to be in place for when devices using it filter out to the public)...

Linux 6.10 adds (after much gnashing) the mseal() system call to prevent changes being made to portions of the virtual address space. For now, this will mainly benefit Google Chrome, which plans to use it to harden its sandboxing. Work is underway by kernel contributors to allow other apps to benefit, though. A similarly initially-controversial change merged is a new memory-allocation profiling subsystem. This helps developers fine-tune memory usage and more readily identify memory leaks. An explainer from LWN summarizes it well.
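To make the mseal() behavior concrete, here is a minimal sketch that invokes the raw system call from Python via ctypes. The syscall number 462 is the x86-64 assignment (check your architecture's asm/unistd.h); on kernels older than 6.10, or on non-Linux systems, the call simply reports itself unavailable.

```python
# Sketch: seal an anonymous mapping with mseal() (Linux 6.10+, x86-64).
# Once sealed, the region can no longer be unmapped, moved, or have its
# protections changed; later munmap()/mprotect() calls fail with EPERM.
import ctypes
import errno
import mmap
import sys

SYS_mseal = 462  # x86-64 syscall number; architecture-specific

if sys.platform.startswith("linux"):
    libc = ctypes.CDLL(None, use_errno=True)
    buf = mmap.mmap(-1, mmap.PAGESIZE)  # page-aligned anonymous mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    ret = libc.syscall(SYS_mseal,
                       ctypes.c_void_p(addr),
                       ctypes.c_size_t(mmap.PAGESIZE),
                       0)  # flags: must currently be zero
    if ret == 0:
        print("region sealed")
    else:
        print("mseal unavailable:",
              errno.errorcode.get(ctypes.get_errno(), "?"))
else:
    ret = -1
    print("non-Linux platform, skipping")
```

This mirrors how Chrome intends to use the call: seal security-critical mappings early so that even a compromised renderer cannot remap them.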

Elsewhere, Linux 6.10 offers encrypted interactions with trusted platform modules (TPM) in order to "make the kernel's use of the TPM reasonably robust in the face of external snooping and packet alteration attacks". The documentation for this feature explains: "for every in-kernel operation we use null primary salted HMAC to protect the integrity [and] we use parameter encryption to protect key sealing and parameter decryption to protect key unsealing and random number generation." Sticking with security, the Linux kernel's Landlock security module can now apply policies to ioctl() calls (Input/Output Control), restricting potential misuse and improving overall system security.

On the networking side there's significant performance improvements to zero-copy send operations using io_uring, and the newly-added ability to "bundle" multiple buffers for send and receive operations also offers an uptick in performance...

A couple of months ago Canonical announced Ubuntu support for the RISC-V Milk-V Mars single-board computer. Linux 6.10 mainlines support for the Milk-V Mars, which will make that effort a lot more viable (especially with the Ubuntu 24.10 kernel likely to be v6.10 or newer). Other RISC-V improvements abound in Linux 6.10, including support for the Rust language; boot image compression in BZ2, LZ4, LZMA, LZO, and Zstandard (instead of only Gzip); and newer AMD GPUs thanks to kernel-mode FPU support in RISC-V.

Phoronix has their own rundown of Linux 6.10, plus a list of some of the highlights, which includes:
  • The initial DRM Panic infrastructure
  • The new Panthor DRM driver for newer Arm Mali graphics
  • Better AMD ROCm/AMDKFD support for "small" Ryzen APUs and new additions for AMD Zen 5
  • AMD GPU display support on RISC-V hardware thanks to RISC-V kernel mode FPU
  • More Intel Xe2 graphics preparations
  • Better IO_uring zero-copy performance
  • Faster AES-XTS disk/file encryption with modern Intel and AMD CPUs
  • Continued online repair work for XFS
  • Steam Deck IMU support
  • TPM bus encryption and integrity protection

Linux

New Linux 'Screen of Death' Options: Black - or a Monochrome Tux Logo (phoronix.com) 49

To enable error messages for things like a kernel panic, Linux 6.10 introduced a new panic handler infrastructure for "Direct Rendering Manager" (or DRM) drivers. It is analogous to the "Blue Screen of Death" that Windows shows for critical errors, Phoronix wrote.

Phoronix also published a follow-up from Red Hat engineer Javier Martinez Canillas (who was involved in the new DRM Panic infrastructure). Given complaints that his recent Linux "Blue Screen of Death" showcase looked too much like Microsoft Windows... Javier showed that a black screen of death is possible if so desired... After all, it's all open source, so you can customize it to your heart's content.
And now the panic handler is getting even more new features, Phoronix reported Friday: With the code in Linux 6.10 when DRM Panic is triggered, an ASCII art version of Linux's mascot, Tux the penguin, is rendered as part of the display. With Linux 6.11 it will also be able to handle displaying a monochrome image as the logo.

If ASCII art on error messages doesn't satisfy your tastes in 2024+, the DRM Panic code will be able to support a monochrome graphical logo that leverages the Linux kernel's boot-up logo support. The ASCII art penguin will still be used when no graphical logo is found or when the existing "LOGO" Kconfig option is disabled. (Those Tux logo assets being here.)

This monochrome logo support in the DRM Panic handler was sent out as part of this week's drm-misc-next pull request ahead of the Linux 6.11 merge window in July. This week's drm-misc-next material also includes TTM memory management improvements, various fixes to the smaller Direct Rendering Manager drivers, and also the previously talked about monochrome TV support for the Raspberry Pi.

Long-time Slashdot reader unixbhaskar thinks the new option "will certainly satisfy the modern people... But it is not as eye candy as people think... Moreover, it is monochrome, so certainly not resource-hungry. Plus, if all else fails, the ASCII art logo is still there to show!"
Linux

'Blue Screen of Death' Comes To Linux (phoronix.com) 109

In 2016, Phoronix remembered how the early days of Linux kernel mode-setting (KMS) had brought hopes for improved error messages. And one long-awaited feature was error messages for "Direct Rendering Manager" (or DRM) drivers — something analogous to the "Blue Screen of Death" Windows gives for critical errors.

Now Linux 6.10 is introducing a new DRM panic handler infrastructure enabling messages when a panic occurs, Phoronix reports today. "This is especially important for those building a kernel without VT/FBCON support, where viewing the kernel panic message isn't otherwise easily available." With Linux 6.10 the initial DRM Panic code has landed as well as wiring up the DRM/KMS driver support for the SimpleDRM, MGAG200, IMX, and AST drivers. There is work underway on extending DRM Panic support to other drivers that we'll likely see over the coming kernel cycles for more widespread support... On Linux 6.10+ with platforms having the DRM Panic driver support, this "Blue Screen of Death" functionality can be tested via a route such as echo c > /proc/sysrq-trigger.
The article links to a picture shared on Mastodon by Red Hat engineer Javier Martinez Canillas of the error message being generated on a BeaglePlay single board computer.

Phoronix also points out that some operating systems have even considered QR codes for kernel error messages...
DRM

Big Copyright Win in Canada: Court Rules Fair Use Beats Digital Locks (michaelgeist.ca) 16

Pig Hogger (Slashdot reader #10,379) reminds us that in Canadian law, "fair use" is called "fair dealing" — and that Canadian digital media users just enjoyed a huge win. Canadian user rights champion Michael Geist writes: The Federal Court has issued a landmark decision on copyright's anti-circumvention rules which concludes that digital locks should not trump fair dealing. Rather, the two must co-exist in harmony, leading to an interpretation that users can still rely on fair dealing even in cases involving those digital locks.

The decision could have enormous implications for libraries, education, and users more broadly as it seeks to restore the copyright balance in the digital world. The decision also importantly concludes that merely requiring a password does not meet the standard needed to qualify for copyright rules involving technological protection measures.

Canada's 2012 "Copyright Modernization Act" protected anti-copying technology from circumvention, Geist writes — and Blacklock's Reporter had then "argued that allowing anyone other than the original subscriber to access articles constituted copyright infringement." The court found that the Blacklock's legal language associated with its licensing was confusing and that fair dealing applied here as well...

Blacklock's position on this issue was straightforward: it argued that its content was protected by a password, that passwords constituted a form of technological protection measure, and that fair dealing does not apply in the context of circumvention. In other words, it argued that the act of circumvention (in this case of a password) was itself infringing and it could not be saved by fair dealing. The Federal Court disagreed on all points...

For years, many have argued for a specific exception to clarify that circumvention was permitted for fair dealing purposes, essentially making the case that users should not lose their fair dealing rights the moment a rights holder places a digital lock on their work. The Federal Court has concluded that the fair dealing rights have remained there all along and that the Copyright Act's anti-circumvention rules must be interpreted in a manner consistent with those rights.

"The case could still be appealed, but for now the court has restored a critical aspect of the copyright balance after more than a decade of uncertainty and concern."
DRM

Developer Hacks Denuvo DRM After Six Months of Detective Work and 2,000 Hooks (tomshardware.com) 37

After six months of work, DRM developer Maurice Heumann successfully cracked Hogwarts Legacy's Denuvo DRM protection system to learn more about the technology. According to Tom's Hardware, he's "left plenty of the details of his work vague so as not to promote illegal cracking." From the report: Heumann reveals in his blog post that Denuvo utilizes several different methods to ensure that Hogwarts Legacy is being run under appropriate (legal) conditions. First, the DRM creates a "fingerprint" of the game owner's system, and a Steam Ticket is used to prove game ownership. The Steam ticket is sent to the Steam servers to ensure the game was legitimately purchased. Heumann notes that he doesn't technically know what the Steam servers are doing but says this assumption should be accurate enough to understand how Denuvo works.

Once the Steam ticket is verified, a Denuvo Token is generated that only works on a PC with the exact fingerprint. This token is used to decrypt certain values when the game is running, enabling the system to run the game. In addition, the game will use the fingerprint to periodically verify security while the game is running, making Denuvo super difficult to hack.
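To illustrate the shape of such a scheme (this is a toy sketch, not Denuvo's actual design; all names here are invented), a token can be bound to a machine fingerprint with an HMAC, so that game constants only decode on the machine the token was issued for:

```python
# Illustrative sketch of a fingerprint-bound DRM token (hypothetical,
# not Denuvo's real scheme): values only decode on the original machine.
import hashlib
import hmac
import platform

def fingerprint() -> bytes:
    # Stand-in for the hundreds of hardware probes a real DRM would use
    probes = [platform.machine(), platform.system(), platform.processor()]
    return hashlib.sha256("|".join(probes).encode()).digest()

def issue_token(ticket: bytes, fp: bytes) -> bytes:
    # "Server" side: bind the ownership ticket to this machine's fingerprint
    return hmac.new(fp, ticket, hashlib.sha256).digest()

def unlock_value(token: bytes, ticket: bytes, sealed: int) -> int:
    # "Client" side: a game constant only decodes if the local fingerprint
    # reproduces the exact token the server issued
    key = hmac.new(fingerprint(), ticket, hashlib.sha256).digest()
    if not hmac.compare_digest(key, token):
        raise PermissionError("fingerprint mismatch: wrong machine")
    return sealed ^ int.from_bytes(key[:4], "big")

ticket = b"steam-ownership-ticket"
token = issue_token(ticket, fingerprint())
secret = 0xC0FFEE
sealed = secret ^ int.from_bytes(token[:4], "big")
assert unlock_value(token, ticket, sealed) == secret
```

Heumann's crack worked at exactly this seam: by hijacking the fingerprint of a licensed machine, a token generated on his desktop validated on his laptop.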

After six months, Heumann was able to figure out how to hijack Hogwarts Legacy's Denuvo fingerprint and use it to run the game on another machine. He used the Qiling reverse engineering framework to identify most of the fingerprint triggers, which took him two months. There was a third trigger that he says he only discovered by accident. By the end, he was able to hack most of the Denuvo DRM with ~2,000 of his own patches and hooks, and get the game running on his laptop using the token generated from his desktop PC.
Heumann ran a bunch of tests to determine if performance was impacted, but he wasn't able to get a definitive answer. "He discovered that the amount of Denuvo code executed in-game is quite infrequent, with calls occurring once every few seconds, or during level loads," reports Tom's Hardware. "This suggests that Denuvo is not killing performance, contrary to popular belief."
Apple

Apple Vision Pro Review Roundup 80

Apple has lifted the embargo on the first wave of non-curated reviews of its Vision Pro headset, and the results are somewhat surprising. The initial "high" experienced upon first impressions, where reviewers laud the headset's "incredibly impressive displays" and "near perfect" tracking capabilities, has waned. In real-world conditions outside of Apple's heavily-regulated demos, the Vision Pro appears to suffer from limited productivity use cases, DRM'd apps, and half-baked features that suggest this device is still very much in the dev-kit stage. Above all, however, is the isolation experienced when using the Vision Pro. It offers very few options for wearers to socialize and share memories with one another in any meaningful way. Tim Cook may be right when he said headsets are inherently isolating.

"You're in there, having experiences all by yourself that no one else can take part in," concludes Nilay Patel in his review for The Verge. "I don't want to get work done in the Vision Pro. I get my work done with other people, and I'd rather be out here with them."

These are some of our favorite reviews of the Apple Vision Pro:

- The Verge: Apple Vision Pro review: magic, until it's not
- The Wall Street Journal: Apple Vision Pro Review: The Best Headset Yet Is Just a Glimpse of the Future
- Washington Post: Apple's Vision Pro is nearly here. But what can you do with it?
- Tom's Guide: Apple Vision Pro review: A revolution in progress
- CNET: Apple Vision Pro Review: A Mind-Blowing Look at an Unfinished Future
- CNBC: Apple Vision Pro review: This is the future of computing and entertainment
AI

Ask Slashdot: Could a Form of Watermarking Prevent AI Deep Faking? (msn.com) 67

An opinion piece in the Los Angeles Times imagines a world after "the largest coordinated deepfake attack in history... a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China and Iran." The breakthrough actually came in early 2026 from a working group of digital journalists from U.S. and international news organizations. Their goal was to find a way to keep deepfakes out of news reports... Journalism organizations formed the FAC Alliance — "Fact Authenticated Content" — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation. And even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark pieces of content that weren't deepfakes.

And so was born the voluntary FACStamp on May 1, 2026...

The newest phones, tablets, cameras, recorders and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it's captured, before any AI modification can be applied. This proves that the image, sound or video was not generated by AI. You can also download the FAC app, which does the same for older equipment... [T]o retain the FACStamp, your computer must be connected to the non-profit FAC Verification Center. The center's computers detect if the editing is minor — such as cropping or even cosmetic face-tuning — and the stamp remains. Any larger manipulation, from swapping faces to faking backgrounds, and the FACStamp vanishes.
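The bit-exact core of such a scheme is just capture-time signing. The sketch below (hypothetical names throughout) shows that part: a device key signs the raw bytes, and verification fails after any modification. Note it deliberately omits the hard part the article hand-waves, a trusted verification center deciding which edits count as "minor" and re-blessing the stamp.

```python
# Toy sketch of the FACStamp idea (hypothetical; names are invented):
# sign content at capture time, before any AI tool can touch it.
import hashlib
import hmac

# In the article's vision this key would live in the device's hardware
DEVICE_KEY = b"per-device capture signing key"

def fac_stamp(content: bytes) -> bytes:
    # Inserted by the camera/recorder as the content is captured
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, stamp: bytes) -> bool:
    # What verification reduces to for bit-identical content; tolerating
    # "minor edits" (cropping, face-tuning) needs a trusted service that
    # inspects the edit and re-issues the stamp
    return hmac.compare_digest(fac_stamp(content), stamp)

photo = b"\x89PNG...raw sensor bytes"
stamp = fac_stamp(photo)
assert verify(photo, stamp)          # untouched capture passes
assert not verify(photo + b"swap", stamp)  # any manipulation voids it
```

This also makes Bruce66423's question below concrete: the cryptography is easy; the unsolved problems are keeping the device key out of attackers' hands and defining "minor edit" in a way AI tooling can't game.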

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it's reassuring to know that the image wasn't spun out or scrubbed up by AI.

The article envisions the world of 2028, with the authentication stamp appearing on everything from social media posts to dating app profiles: Even the AI industry supports the use of FACStamps. During training runs on the internet, if an AI program absorbs excessive amounts of AI-generated rather than authentic data, it may undergo "model collapse" and become wildly inaccurate. So the FACStamp helps AI companies train their models solely on reality. A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign.
But all this leaves Slashdot reader Bruce66423 with a question. "Is it really technically possible to achieve such a clear distinction, or would, in practice, AI be able to replicate the necessary authentication?"
Christmas Cheer

FSF Shares Holiday Fairy Tale Warning 'Don't Let Your Tools Control You' (fsf.org) 25

"Share this holiday fairy tale with your loved ones," urges the Free Software Foundation.

A company offers you a tool to make your life easier, but, when you use it, you find out that the tool forces you to use it only in the way the tool's manufacturer approves. Does this story ring a bell? It's what millions of software users worldwide experience again and again, day after day. It's also the story of Wendell the Elf and the ShoeTool.
They suggest enjoying the video "to remind yourself why you shouldn't let your tools tell you how to use them." First released in 2019, it's available on the free/open-source video site PeerTube, a decentralized (and ActivityPub-federated) platform powered by WebTorrent.

They've also created a shortened URL for sharing on social media (recommending the hashtag #shoetool ). "And, of course, you can adapt the video to your liking after downloading the source files." Or, you can share the holiday fairy tale with your loved ones so that they can learn not to let their tools control them.

If we use free software, we don't need anyone's permission to, for example, modify our tools ourselves or install modifications shared by others. We don't need permission to ask someone else to tailor our tools to serve our wishes or exercise our creativity. The Free Software Foundation believes that everyone deserves full control over their computers and phones, and we hope this video helps you explain the importance of free software to your friends and family.

"Don't let your tools tell you how to use them," the video ends. "Join the Free Software Foundation!"
DRM

'Copyright Troll' Porn Company 'Makes Millions By Shaming Porn Consumers' (yahoo.com) 100

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also "a copyright troll," according to U.S. Judge Royce C. Lamberth: Lamberth wrote in 2018, "Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM." He likened its litigation strategy to a "high-tech shakedown." Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide... Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright...

It's impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 "pulls in about $15 million to $20 million a year from its lawsuits." That would make the cases "way more profitable than selling their product...." If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million... The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.

What's really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

None of the lawsuits go to trial. Instead ISPs get a subpoena demanding the real-world address and name behind IP addresses "ostensibly used to download content from BitTorrent..." according to the article. Strike 3 will then "proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law — up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise."

A federal judge in Connecticut wrote last year that "Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified."

Thanks to Slashdot reader Beerismydad for sharing the article.
AI

Science Fiction and Fantasy Writers Take Aim At AI Freeloading (torrentfreak.com) 73

An anonymous reader quotes a report from TorrentFreak: Members of the Science Fiction and Fantasy Writers Association have no trouble envisioning an AI-centered future, but developments over the past year are reason for concern. The association takes offense when AI models exploit the generosity of science fiction writers, who share their work without DRM and free of charge. [...] Over the past few months, we have seen a variety of copyright lawsuits, many of which were filed by writers. These cases target OpenAI, the maker of ChatGPT, but other platforms are targeted as well. A key allegation in these complaints is that the AI was trained using pirated books. For example, several authors have just filed an amended complaint against Meta, alleging that the company continued to train its AI on pirated books despite concerns from its own legal team. This clash between AI and copyright piqued the interest of the U.S. Copyright Office which launched an inquiry asking the public for input. With more than 10,000 responses, it is clear that the topic is close to the hearts of many people. It's impossible to summarize all opinions without AI assistance, but one submission stood out to us in particular; it encourages the free sharing of books while recommending that AI tools shouldn't be allowed to exploit this generosity for free.

The submission was filed by the Science Fiction and Fantasy Writers Association (SFWA), which represents over 2,500 published writers. The association is particularly concerned with the suggestion that its members' works can be used for AI training under a fair use exception. SFWA sides with many other rightsholders, concluding that pirated books shouldn't be used for AI training, adding that the same applies to books that are freely shared by many Science Fiction and Fantasy writers. [...] Many of the authors strongly believe that freely sharing stories is a good thing that enriches mankind, but that doesn't automatically mean that AI has the same privilege if the output is destined for commercial activities. The SFWA stresses that it doesn't take offense when AI tools use the works of its members for non-commercial purposes, such as research and scholarship. However, turning the data into a commercial tool goes too far.

AI freeloading will lead to unfair competition and cause harm to licensing markets, the writers warn. The developers of the AI tools have attempted to tone down these concerns but the SFWA is not convinced. [...] The writers want to protect their rights but they don't believe in the extremely restrictive position of some other copyright holders. They don't subscribe to the idea that people will no longer buy books because they can get the same information from an AI tool, for example. However, authors deserve some form of compensation. SFWA argues that all stakeholders should ultimately get together to come up with a plan that works for everyone. This means fair compensation and protection for authors, without making it financially unviable for AI to flourish.
"Questions of 'how' and 'when' and 'how much money' all come later; first and foremost the author must have the right to say how their work is used," their submission reads.

"So long as authors retain the right to say 'no' we believe that equitable solutions to the thorny problems of licensing, scale, and market harm can be found. But that right remains the cornerstone, and we insist upon it," SFWA concludes.
DRM

Polish Hackers Repaired Trains the Manufacturer Artificially Bricked. Now The Train Company Is Threatening Them (404media.co) 221

Hackers unbricked a train in Poland that had been deliberately disabled by its manufacturer. Now the manufacturer is threatening legal action against the hackers despite evidence it sabotaged the trains. From a report: The manufacturer is also now demanding that the repaired trains immediately be removed from service because they have been "hacked," and thus might now be unsafe, a claim they also cannot substantiate.

The situation is a heavy machinery example of something that happens across most categories of electronics, from phones, laptops, health devices, and wearables to tractors and, apparently, trains. In this case, NEWAG, the manufacturer of the Impuls family of trains, put code in the trains' control systems that prevented them from running if a GPS tracker detected that a train had spent a certain number of days in an independent repair company's maintenance center, and also prevented them from running if certain components had been replaced without a manufacturer-approved serial number.

This anti-repair mechanism is called "parts pairing," and is a common frustration for farmers who want to repair their John Deere tractors without authorization from the company. It's also used by Apple to prevent independent repair of iPhones.
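The lockout described above reduces to two simple checks. Here is a minimal sketch of that kind of parts-pairing logic in Python; the function name, serial numbers, and day threshold are invented for illustration and are not NEWAG's actual code.

```python
# Hypothetical illustration of a parts-pairing lockout, based on the two
# conditions reported: time spent at an independent repair shop, and parts
# whose serial numbers are not manufacturer-approved. All values are made up.

APPROVED_SERIALS = {"NWG-1001", "NWG-1002"}  # manufacturer-approved parts
LOCKOUT_DAYS = 10                            # days parked at an independent shop

def train_may_run(days_at_independent_shop, part_serials):
    """Refuse to start if the train sat too long at an independent
    maintenance center, or if any installed part has an unapproved serial."""
    if days_at_independent_shop >= LOCKOUT_DAYS:
        return False
    if any(serial not in APPROVED_SERIALS for serial in part_serials):
        return False
    return True
```

The point of the sketch is how arbitrary the conditions are: nothing here measures safety or wear, only where the train was parked and who supplied the part.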

PlayStation (Games)

PlayStation To Delete A Ton Of TV Shows Users Already Paid For (kotaku.com) 123

Sony is about to delete tons of Discovery shows from PlayStation users' libraries even if they already "purchased" them. Why? Because most users don't actually own the digital content they buy thanks to the mess of online DRM and license agreements. Some of the soon-to-be-deleted TV shows include Mythbusters and Naked and Afraid. Kotaku reports: The latest pothole in the road to an all-digital future was discovered via a warning Sony recently sent out to PlayStation users who purchased TV shows made by Discovery, the reality TV network that recently merged with Warner Bros. in one of the most brutal and idiotic corporate maneuvers of our time. "Due to our content licensing arrangements with content providers, you will no longer be able to watch any of your previously purchased Discovery content and the content will be removed from your video library," read a copy of the email that was shared with Kotaku.

It linked to a page on the PlayStation website listing all of the shows impacted. As you might imagine, given Discovery's penchant for pumping out seasons of relatively cheap-to-produce but popular reality TV and documentary-based shows, there are a lot of them. They include, but are not limited to, hits such as: Say Yes to the Dress, Shark Week, Cake Boss, Long Island Medium, Deadly Women, and many, many more. [...] Now, essentially anything you buy on PSN, whether a PS5 blockbuster or, uh, Police Women of Cincinnati, is just on indefinite loan until such time as the PlayStation servers die or the original copyright owner decides to pull the content.

Chrome

Chrome Not Proceeding With Web Integrity API Deemed By Many To Be DRM (9to5google.com) 24

An anonymous reader shares a report: Back in July, Google's work on a Web Integrity API emerged and many equated it to DRM. While prototyped, it was only at the proposal stage and the company announced today it's not going ahead with it. With this proposal, Google wanted to give websites a way to confirm the authenticity of the user and their device/browser.

The Web Integrity API would let websites "request a token that attests key facts about the environment their client code is running in." It's not all that different from the Play Integrity API (SafetyNet) on Android, which Google Wallet and other banking apps use to make sure a device hasn't been tampered with (rooted).
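The general shape of such attestation schemes can be sketched as a signed-claims handshake: an attester signs a statement about the client environment, and the website verifies the signature before trusting the claims. The sketch below is a toy model only; the token format and symmetric HMAC signing are invented for illustration, while real attestation APIs like Play Integrity use hardware-backed asymmetric keys and server-side verdict services.

```python
import base64
import hashlib
import hmac
import json

# Toy model of an environment-attestation token. SECRET stands in for the
# attester's signing key; these names and formats are assumptions, not the
# actual Web Integrity or Play Integrity wire format.
SECRET = b"attester-key"

def issue_token(environment):
    """Attester signs a claim about the client environment."""
    payload = base64.b64encode(json.dumps(environment).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Website checks the signature before trusting the claims."""
    payload, sig = token.split(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token: reject
    return json.loads(base64.b64decode(payload))

token = issue_token({"browser": "genuine", "device_rooted": False})
claims = verify_token(token)
```

The controversy was never about the cryptography, which is routine, but about who gets to define "genuine": a verifier controlled by one vendor can refuse to attest any browser or OS it disfavors.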

Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security-oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies, and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Allow updating AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Improved support for soft RAID disks in the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.
DRM

Cory Doctorow: Apple Sabotages Right-to-Repair Using 'Parts-Pairing' and the DMCA (pluralistic.net) 112

From science fiction author/blogger/technology activist Cory Doctorow: Right to repair has no cannier, more dedicated adversary than Apple, a company whose most innovative work is dreaming up new ways to sneakily sabotage electronics repair while claiming to be a caring environmental steward, a lie that covers up the mountains of e-waste that Apple dooms our descendants to wade through... Tim Cook laid it out for his investors: when people can repair their devices, they don't buy new ones. When people don't buy new devices, Apple doesn't sell them new devices. It's that simple...
Specifically, Doctorow is criticizing the way Apple equips parts with a tiny system-on-a-chip solely to track serial numbers, "to prevent independent repair technicians from fixing your gadget." For Apple, the true anti-repair innovation comes from the most pernicious US tech law: Section 1201 of the Digital Millennium Copyright Act (DMCA). DMCA 1201 is an "anti-circumvention" law. It bans the distribution of any tool that bypasses "an effective means of access control." That's all very abstract, but here's what it means: if a manufacturer sticks some Digital Rights Management (DRM) in its device, then anything you want to do that involves removing that DRM is now illegal — even if the thing itself is perfectly legal...

When California's right to repair bill was introduced, it was clear that it was gonna pass. Rather than get run over by that train, Apple got on board, supporting the legislation, which passed unanimously. But Apple got the last laugh. Because while California's bill contains many useful clauses for the independent repair shops that keep your gadgets out of a landfill, it's a state law, and DMCA 1201 is federal. A state law can't simply legalize the conduct federal law prohibits. California's right to repair bill is a banger, but it has a weak spot: parts-pairing, the scourge of repair techs...

Parts-pairing is bullshit, and Apple are scum for using it, but they're hardly unique. Parts-pairing is at the core of the fuckery of inkjet printer companies, who use it to fence out third-party ink so they can charge $9,600/gallon for ink that costs pennies to make. Parts-pairing is also rampant in powered wheelchairs, a heavily monopolized sector whose predatory conduct is jaw-droppingly depraved...

When Bill Clinton signed DMCA 1201 into law 25 years ago, he loaded a gun and put it on the nation's mantlepiece and now it's Act III and we're all getting sprayed with bullets. Everything from ovens to insulin pumps, thermostats to lightbulbs, has used DMCA 1201 to limit repair, modification and improvement. Congress needs to rid us of this scourge, to let us bring back all the benefits of interoperability. I explain how this all came to be — and what we should do about it — in my new Verso Books title, The Internet Con: How to Seize the Means of Computation.

Games

Starfield's Missing Nvidia DLSS Support Added By a Mod - With DRM (arstechnica.com) 48

tlhIngan writes: Starfield, Bethesda's recently released space-based RPG, was criticized for not having Nvidia DLSS support -- instead the game was primarily written to feature AMD's FSR. This isn't too surprising, since the major consoles all use AMD processors and GPUs. However, an enterprising modder created a mod that lets players with Nvidia cards enable DLSS. Here's the unusual bit: the mod makes DLSS2 (ca. 2020) available for free, while the version enabling DLSS3 (which adds the ability to use AI to generate in-between frames) is behind a Patreon paywall. This has led several other people to crack the DRM protecting the mod itself (note: this is not the DRM on the game itself -- the game's Steam page doesn't seem to indicate any third-party DRM beyond Steam). Imagine that -- DRM on a game mod, because it requires payment.

Slashdot Top Deals