Open Source

Nvidia's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver (phoronix.com) 21

Nvidia's new R555 Linux driver series has significantly improved its open-source GPU kernel modules, bringing them to near parity with the proprietary drivers. Phoronix's Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series, the support and performance of the open-source kernel modules is basically at parity with the proprietary kernel drivers. [...] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. There was no loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards also achieved the same performance whether on the open-source MIT/GPLv2 driver or NVIDIA's classic proprietary driver.

Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules.

It's great seeing how far the NVIDIA open-source kernel modules have evolved, and with the upcoming NVIDIA 560 Linux driver series they will become the default on supported GPUs. Moving forward with Blackwell and beyond, NVIDIA is enabling new GPU support only through its open-source kernel drivers, leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads, comparing the MIT/GPL and proprietary kernel drivers, have yielded similar (boring but good) results: the same performance, with no loss going the open-source route.
You can view Phoronix's performance charts in the full article.
Windows

How a Cheap Barcode Scanner Helped Fix CrowdStrike'd Windows PCs In a Flash (theregister.com) 60

An anonymous reader quotes a report from The Register: Not long after Windows PCs and servers at the Australian limb of audit and tax advisory Grant Thornton started BSODing last Friday, senior systems engineer Rob Woltz remembered a small but important fact: When PCs boot, they consider barcode scanners no differently to keyboards. That knowledge nugget became important as the firm tried to figure out how to respond to the mess CrowdStrike created, which at Grant Thornton Australia threw hundreds of PCs and no fewer than 100 servers into the doomloop that CrowdStrike's shoddy testing software made possible. [...] The firm had the BitLocker keys for all its PCs, so Woltz and colleagues wrote a script that turned them into barcodes that were displayed on a locked-down management server's desktop. The script would be given a hostname and generate the necessary barcode and LAPS password to restore the machine.

Woltz went to an office supplies store and acquired an off-the-shelf barcode scanner for AU$55 ($36). At the point when rebooting PCs asked for a BitLocker key, pointing the scanner at the barcode on the server's screen made the machines treat the input exactly as if the key was being typed. That's a lot easier than typing it out every time, and the server's desktop could be accessed via a laptop for convenience. Woltz, Watson, and the team scaled the solution -- which meant buying more scanners at more office supplies stores around Australia. On Monday, remote staff were told to come to the office with their PCs and visit IT to connect to a barcode scanner. All PCs in the firm's Australian fleet were fixed by lunchtime -- taking only three to five minutes for each machine. Watson told us manually fixing servers needed about 20 minutes per machine.
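
The Grant Thornton script itself isn't public, but the idea is simple enough to sketch. Below is a minimal, hypothetical Python version: it assumes the BitLocker recovery keys and LAPS passwords have already been exported to a CSV keyed by hostname (the file name and column names are assumptions, not details from the article) and uses the python-barcode package to render a Code 128 barcode that a keyboard-wedge scanner can "type" at the recovery prompt.

```python
# Hypothetical sketch of the "hostname -> barcode" workflow described above.
# Assumes keys were pre-exported to recovery_keys.csv with columns:
# hostname, bitlocker_recovery_key, laps_password  (all assumed names).
import csv
import sys

import barcode                       # pip install python-barcode pillow
from barcode.writer import ImageWriter


def lookup(hostname: str, path: str = "recovery_keys.csv") -> dict:
    """Return the CSV row for the given hostname (case-insensitive)."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["hostname"].lower() == hostname.lower():
                return row
    raise SystemExit(f"No entry found for {hostname}")


def main() -> None:
    host = sys.argv[1]
    row = lookup(host)
    # Code 128 encodes the digits-and-dashes BitLocker recovery key as-is,
    # so scanning the barcode is equivalent to typing the key at the prompt.
    code = barcode.get("code128", row["bitlocker_recovery_key"], writer=ImageWriter())
    outfile = code.save(f"{host}_bitlocker")   # writes <host>_bitlocker.png
    print(f"Barcode written to {outfile}")
    print(f"LAPS password for {host}: {row['laps_password']}")


if __name__ == "__main__":
    main()
```

In a real deployment the keys would more likely be pulled straight from Active Directory on the locked-down management server, but the lookup-then-encode flow is the same.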

Transportation

Automakers Sold Driver Data For Pennies, Senators Say (jalopnik.com) 58

An anonymous reader quotes a report from the New York Times: If you drive a car made by General Motors and it has an internet connection, your car's movements and exact location are being collected and shared anonymously with a data broker. This practice, disclosed in a letter (PDF) sent by Senators Ron Wyden of Oregon and Edward J. Markey of Massachusetts to the Federal Trade Commission on Friday, is yet another way in which automakers are tracking drivers (source may be paywalled; alternative source), often without their knowledge. Previous reporting in The New York Times, which the letter cited, revealed how automakers including G.M., Honda and Hyundai collected information about drivers' behavior, such as how often they slammed on the brakes, accelerated rapidly and exceeded the speed limit. That data was then sold to the insurance industry, which used it to help gauge individual drivers' riskiness.

The two Democratic senators, both known for privacy advocacy, zeroed in on G.M., Honda and Hyundai because all three had made deals, The Times reported, with Verisk, an analytics company that sold the data to insurers. In the letter, the senators urged the F.T.C.'s chairwoman, Lina Khan, to investigate how the auto industry collects and shares customers' data. One of the surprising findings of an investigation by Mr. Wyden's office was just how little the automakers made from selling driving data. According to the letter, Verisk paid Honda $25,920 over four years for information about 97,000 cars, or 26 cents per car. Hyundai was paid just over $1 million, or 61 cents per car, over six years. G.M. would not reveal how much it had been paid, Mr. Wyden's office said. People familiar with G.M.'s program previously told The Times that driving behavior data had been shared from more than eight million cars, with the company making an amount in the low millions of dollars from the sale. G.M. also previously shared data with LexisNexis Risk Solutions.
"Companies should not be selling Americans' data without their consent, period," the letter from Senators Wyden and Markey stated. "But it is particularly insulting for automakers that are selling cars for tens of thousands of dollars to then squeeze out a few additional pennies of profit with consumers' private data."
The Internet

ISPs Seeking Government Handouts Try To Avoid Offering Low-Cost Broadband (arstechnica.com) 20

Internet service providers are pushing back against the Biden administration's requirement for low-cost options even as they are attempting to secure funds from a $42.45 billion government broadband initiative. The Broadband Equity, Access, and Deployment program, established by law to expand internet access, mandates that recipients offer affordable plans to eligible low-income subscribers, a stipulation the providers argue infringes on legal prohibitions against rate regulation. ISPs claim that the proposed $30 monthly rate for low-cost plans is economically unfeasible, especially in hard-to-reach rural areas, potentially undermining the program's goals by discouraging provider participation.
Google

Pixel 9 AI Will Add You To Group Photos Even When You're Not There (androidheadlines.com) 54

Google's upcoming Pixel 9 smartphones are set to introduce new AI-powered features, including "Add Me," a tool that will allow users to insert themselves into group photos after those pictures have been taken, according to a leaked promotional video obtained by Android Headlines. This feature builds on the Pixel 8's "Best Take" function, which allowed face swapping in group shots.
Chrome

New Chrome Feature Scans Password-Protected Files For Malicious Content (thehackernews.com) 24

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice.

Google is also adding what's called automatic deep scans for users who have opted in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.
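
As a toy illustration of why the password matters at all (this is not Chrome's implementation; Chrome's deep scan happens on Google's servers), the short Python sketch below shows that an encrypted archive's members can only be read, and therefore hashed or analyzed, once the password is supplied. The archive name and password are made up.

```python
# Toy illustration: an encrypted ZIP's contents are opaque to a scanner until
# the password is provided. Note: the standard-library zipfile module only
# decrypts legacy ZipCrypto archives, not AES-encrypted ones.
import hashlib
import zipfile


def hash_members(archive_path: str, password: str | None) -> dict[str, str]:
    """Return a SHA-256 digest per archive member, for downstream analysis."""
    pwd = password.encode() if password else None
    digests = {}
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            # Raises RuntimeError if the password is missing or wrong.
            with zf.open(name, pwd=pwd) as member:
                digests[name] = hashlib.sha256(member.read()).hexdigest()
    return digests


if __name__ == "__main__":
    # Hypothetical sample; "infected" is a common convention for malware archives.
    print(hash_members("suspicious.zip", password="infected"))
```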

AI

AI Models Face Collapse If They Overdose On Their Own Output 106

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, a Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text in its training datasets, for example, which means subsequent models trained on that output cannot carry forward those nuances. Training new models on the output of earlier models in this way results in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said.

"If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
The Courts

California Supreme Court Upholds Gig Worker Law In a Win For Ride-Hail Companies (politico.com) 73

In a major victory for ride-hail companies, the California Supreme Court upheld a law classifying gig workers as independent contractors, maintaining their ineligibility for benefits such as sick leave and workers' compensation. This decision concludes a prolonged legal battle and supports the 2020 ballot measure Proposition 22, despite opposition from labor groups who argued it was unconstitutional. Politico reports: Thursday's ruling capped a yearslong battle between labor and the companies over the status of workers who are dispatched by apps to deliver food, buy groceries and transport customers. A 2018 Supreme Court ruling and a follow-up bill would have compelled the gig companies to treat those workers as employees. A collection of five firms then spent more than $200 million to escape that mandate by passing the 2020 ballot measure Proposition 22 in one of the most expensive political campaigns in American history. The unanimous ruling on Thursday now upholds the status quo of the gig economy in California.

As independent contractors, gig workers are not entitled to benefits like sick leave, overtime and workers' compensation. The SEIU union and four gig workers ultimately challenged Prop 22 based specifically on its conflict with the Legislature's power to administer workers' compensation. The law, which passed with 58 percent of the vote in 2020, makes gig workers ineligible for workers' comp, which opponents of Prop 22 argued rendered the entire law unconstitutional. [...] Beyond the implications for gig workers, the heavily funded Prop 22 ballot campaign pushed the limits of what could be spent on an initiative, ultimately becoming the most expensive measure in California history. Uber and Lyft have both threatened to leave any states that pass laws not classifying their drivers as independent contractors. The decision Thursday closes the door to that possibility for California.

AI

iFixit CEO Takes Shots At Anthropic For 'Hitting Our Servers a Million Times In 24 Hours' (pcgamer.com) 48

Yesterday, iFixit CEO Kyle Wiens asked AI company Anthropic why it was clogging up their server bandwidth without permission. "Do you really need to hit our servers a million times in 24 hours?" Wiens wrote on X. "You're not only taking our content without paying, you're tying up our DevOps resources. Not cool." PC Gamer's Jacob Fox reports: Assuming Wiens isn't massively exaggerating, it's no surprise that this is "tying up our DevOps resources." A million "hits" per day works out to roughly 11.6 requests per second sustained around the clock, which would do it, and would certainly be enough to justify more than a little annoyance. The thing is, putting this bandwidth chugging in context only makes it more ridiculous, which is what Wiens is getting at. It's not just that an AI company is seemingly clogging up server resources, but that it's been expressly forbidden from using the content on its servers anyway.

There should be no reason for an AI company to hit the iFixit site because its terms of service state that "copying or distributing any Content, materials or design elements on the Site for any other purpose, including training a machine learning or AI model, is strictly prohibited without the express prior written permission of iFixit." Unless it wants us to believe it's not going to use any data it scrapes for these purposes, and it's just doing it for... fun?

Well, whatever the case, iFixit's Wiens decided to have some fun with it and ask Anthropic's own AI, Claude, about the matter, saying to Anthropic, "Don't ask me, ask Claude!" It seems that Claude agrees with iFixit: when asked what it should do if it were training a machine learning model and found the above wording in a site's terms of service, it responded, in no uncertain terms, "Do not use the content." This is, as Wiens points out, something that could be seen if one simply accessed the terms of service.

Transportation

Minnesota Becomes Second State To Pass Law For Flying Cars (fortune.com) 54

Minnesota has become the second state to pass what it's calling a "Jetsons law," establishing rules for cars that can take to the sky. New Hampshire was the first to enact a "Jetsons" law. From a report: The new road rules in Minnesota address "roadable aircraft," which is basically any aircraft that can take off and land at an airfield but is also designed to be operated on a public highway. The law will let owners of these vehicles register them as cars and trucks, but they won't have to obtain a license plate. The tail number will suffice instead.

As for operation, flying cars won't be allowed to take off or land on public roadways, Minnesota officials declared (an exception is made in the case of emergency). Those shenanigans are restricted to airports. While the idea of a Jetsons-like sky full of flying cars is still firmly rooted in the world of science fiction, the concept of flying cars isn't quite as distant as it might seem (though it has some high-profile skeptics). United Airlines, two years ago, made a $10 million bet on the technology, putting down a deposit for 200 four-passenger flying taxis from Archer Aviation, a San Francisco-based startup working on the aircraft/auto hybrid.

Communications

5th Circuit Court Upends FCC Universal Service Fund, Ruling It an Illegal Tax (arstechnica.com) 137

A U.S. appeals court has ruled that the Federal Communications Commission's Universal Service Fund, which collects fees on phone bills to support telecom network expansion and affordability programs, is unconstitutional, potentially upending the $8 billion-a-year system.

The 5th Circuit Court's 9-7 decision, which creates a circuit split with previous rulings in the 6th and 11th circuits, found that the combination of Congress's delegation to the FCC and the FCC's subsequent delegation to a private entity violates the Constitution's Legislative Vesting Clause. FCC Chairwoman Jessica Rosenworcel criticized the ruling as "misguided and wrong," vowing to pursue all available avenues for review.
AI

OpenAI To Launch 'SearchGPT' in Challenge To Google 31

OpenAI is launching an online search tool in a direct challenge to Google, opening up a new front in the tech industry's race to commercialise advances in generative artificial intelligence. From a report: The experimental product, known as SearchGPT [non-paywalled], will initially only be available to a small group of users, with the San Francisco-based company opening a 10,000-person waiting list to test the service on Thursday. The product is visually distinct from ChatGPT as it goes beyond generating a single answer by offering a rail of links -- similar to a search engine -- that allows users to click through to external websites.

[...] SearchGPT will "provide up-to-date information from the web while giving you clear links to relevant sources," according to OpenAI. The new search tool will be able to access sites even if they have opted out of training OpenAI's generative AI tools, such as ChatGPT.
Google

Google DeepMind's AI Systems Can Now Solve Complex Math Problems (technologyreview.com) 40

Google DeepMind has announced that its AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal performance at the 2024 International Mathematical Olympiad (IMO), solving four out of six problems and scoring 28 out of 42 possible points in a significant breakthrough for AI in mathematical reasoning. This marks the first time an AI system has reached such a high level of performance in this prestigious competition, which has long been considered a benchmark for advanced mathematical reasoning capabilities in machine learning.

AlphaProof, a system that combines a pre-trained language model with reinforcement learning techniques, demonstrated its new capability by solving two algebra problems and one number theory problem, including the competition's most challenging question. Meanwhile, AlphaGeometry 2 successfully tackled a complex geometry problem, Google wrote in a blog post. The systems' solutions were formally verified and scored by prominent mathematicians, including Fields Medal winner Prof Sir Timothy Gowers and IMO Problem Selection Committee Chair Dr Joseph Myers, lending credibility to the achievement.

The development of these AI systems represents a significant step forward in bridging the gap between natural language processing and formal mathematical reasoning, the company argued. By fine-tuning a version of Google's Gemini model to translate natural language problem statements into formal mathematical language, the researchers created a vast library of formalized problems, enabling AlphaProof to train on millions of mathematical challenges across various difficulty levels and topic areas. While the systems' performance is impressive, challenges remain, particularly in the field of combinatorics where both AI models were unable to solve the given problems. Researchers at Google DeepMind continue to investigate these limitations, the company said, aiming to further improve the systems' capabilities across all areas of mathematics.
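
According to DeepMind, AlphaProof works in the Lean formal language, so "formal mathematical language" here means machine-checkable statements and proofs. As a flavor of the format only (a toy example, nowhere near IMO difficulty), the natural-language claim "addition of natural numbers is commutative" looks like this in Lean 4:

```lean
-- Toy formalization: "a + b = b + a for all natural numbers a and b",
-- stated and proved using the standard library lemma Nat.add_comm.
theorem toy_add_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```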
IT

Adobe Exec Compared Creative Cloud Cancellation Fees To 'Heroin' (theverge.com) 34

Early termination fees are "a bit like heroin for Adobe," according to an Adobe executive quoted in the FTC's newly unredacted complaint against the company for allegedly hiding fees and making it too hard to cancel Creative Cloud. The Verge: "There is absolutely no way to kill off ETF or talk about it more obviously" in the order flow without "taking a big business hit," this executive said. That's the big reveal in the unredacted complaint, which also contains previously unseen allegations that Adobe was internally aware of studies showing its order and cancellation flows were too complicated and customers were unhappy with surprise early termination fees.

In a short interview, Adobe's general counsel and chief trust officer, Dana Rao, pushed back on both the specific quote and the FTC's complaint more generally, telling me that he was "disappointed in the way they're continuing to take comments out of context from non-executive employees from years ago to make their case."

AI

AI Video Generator Runway Trained On Thousands of YouTube Videos Without Permission (404media.co) 81

samleecole writes: A leaked document obtained by 404 Media shows a company-wide effort at generative AI company Runway, where employees collected thousands of YouTube videos and pirated content for training data for its Gen-3 Alpha model. The model -- initially codenamed Jupiter and released officially as Gen-3 -- drew widespread praise from the AI development community and technology outlets covering its launch when Runway released it in June. Last year, Runway raised $141 million from investors including Google and Nvidia, at a $1.5 billion valuation.

The spreadsheet of training data viewed by 404 Media and our testing of the model indicates that part of its training data is popular content from the YouTube channels of thousands of media and entertainment companies, including The New Yorker, VICE News, Pixar, Disney, Netflix, Sony, and many others. It also includes links to channels and individual videos belonging to popular influencers and content creators, including Casey Neistat, Sam Kolder, Benjamin Hardman, Marques Brownlee, and numerous others.

Transportation

GM-Owned Cruise Has Lost Interest In Cars Without Steering Wheels (yahoo.com) 72

Yesterday, GM announced it was delaying production of the Cruise Origin indefinitely, opting to use the Chevy Bolt as the main vehicle for its self-driving efforts. Introduced four years ago, the Cruise Origin embodied a futuristic vision with no steering wheels or pedals and 'campfire' seating for six passengers, all while providing wireless internet. However, as Fortune's Jessica Mathews writes, the company appears to have lost interest in that vision (source paywalled; alternative source) -- at least for now. From the report: To hear GM CEO and Cruise Chair Mary Barra, the demise of the Origin comes down to costs and regulation. GM's "per-unit costs will be much lower" by focusing on Bolts instead of Origin vehicles, Barra wrote in a quarterly letter to shareholders Tuesday. Barra discussed the regulatory challenges during the quarterly earnings call, explaining the company's view that deploying the Origin was going to require "legislative change." "As we looked at this, we thought it was better to get rid of that risk," Barra said.

All robo-taxi companies have been waiting on the green light from regulators for the approvals needed to add these futuristic pedal-less cars into their commercial fleets. While the National Highway Traffic Safety Administration adjusted its rules so that carmakers could manufacture and deploy cars without pedals or steering wheels, state DMVs still have many restrictions in place when it comes to people riding in them. GM isn't completely swearing off the concept of steering-wheel-free cars -- Barra noted that there could be an opportunity for a "vehicle like the Origin in the future."

The Internet

Phish-Friendly Domain Registry '.top' Put On Notice (krebsonsecurity.com) 22

Investigative journalist and cybersecurity expert Brian Krebs writes: The Chinese company in charge of handing out domain names ending in ".top" has been given until mid-August 2024 to show that it has put in place systems for managing phishing reports and suspending abusive domains, or else forfeit its license to sell domains. The warning comes amid the release of new findings that .top was the second most common suffix in phishing websites over the past year, behind only domains ending in ".com." On July 16, the Internet Corporation for Assigned Names and Numbers (ICANN) sent a letter to the owners of the .top domain registry. ICANN has filed hundreds of enforcement actions against domain registrars over the years, but in this case ICANN singled out a domain registry responsible for maintaining an entire top-level domain (TLD). Among other reasons, the missive chided the registry for failing to respond to reports about phishing attacks involving .top domains.

"Based on the information and records gathered through several weeks, it was determined that .TOP Registry does not have a process in place to promptly, comprehensively, and reasonably investigate and act on reports of DNS Abuse," the ICANN letter reads (PDF). ICANN's warning redacted the name of the recipient, but records show the .top registry is operated by a Chinese entity called Jiangsu Bangning Science & Technology Co. Ltd. Representatives for the company have not responded to requests for comment.

Domains ending in .top were represented prominently in a new phishing report released today by the Interisle Consulting Group, which sources phishing data from several places, including the Anti-Phishing Working Group (APWG), OpenPhish, PhishTank, and Spamhaus. Interisle's newest study examined nearly two million phishing attacks in the last year, and found that phishing sites accounted for more than four percent of all new .top domains between May 2023 and April 2024. Interisle said .top has roughly 2.76 million domains in its stable, and that more than 117,000 of those were phishing sites in the past year.

AI

Open Source AI Better for US as China Will Steal Tech Anyway, Zuckerberg Argues (fb.com) 37

Meta CEO Mark Zuckerberg has advocated for open-source AI development, asserting it as a strategic advantage for the United States against China. In a blog post, Zuckerberg argued that closing off AI models would not effectively prevent Chinese access, given their espionage capabilities, and would instead disadvantage U.S. allies and smaller entities. He writes: Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.
AI

The AI Job Interviewer Will See You Now 82

AI is increasingly being employed in job interviews across China and India, marking a significant shift in recruitment practices in the region. This follows a similar practice making inroads in the U.S. Rest of World adds: A 2023 survey of 1,000 human-resources workers by the U.S. firm ResumeBuilder found that 10% of companies were already using AI in the hiring process, and another 30% planned to start the following year. The research firm Gartner listed natural-language chatbots as one of 2023's key innovations for the recruiting industry, designating the technology as experimental but promising. Companies like Meituan, Siemens, and Estee Lauder are using AI-powered interviews, with platforms such as MoSeeker, Talently.ai, and Instahyre leading the charge in AI recruitment solutions.
Google

Google's Exclusive Reddit Access (404media.co) 43

Google is now the only search engine that can surface results from Reddit, making one of the web's most valuable repositories of user-generated content exclusive to the internet's already dominant search engine. 404 Media: If you use Bing, DuckDuckGo, Mojeek, Qwant or any other alternative search engine that doesn't rely on Google's indexing and search Reddit by using "site:reddit.com," you will not see any results from the last week.

DuckDuckGo is currently turning up seven links when searching Reddit, but provides no data on where the links go or why, instead only saying that "We would like to show you a description here but the site won't allow us." Older results will still show up, but these search engines are no longer able to "crawl" Reddit, meaning that Google is the only search engine that will turn up results from Reddit going forward. Searching for Reddit still works on Kagi, an independent, paid search engine that buys part of its search index from Google. The news shows how Google's near monopoly on search is now actively hindering other companies' ability to compete at a time when Google is facing increasing criticism over the quality of its search results.
The news follows Google signing a $60 million deal with Reddit earlier this year to use the social network's content to train its LLMs.
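
Crawl access of this kind is typically governed by a site's robots.txt file, which well-behaved crawlers consult before fetching pages. As an illustration only (what it reports depends on whatever Reddit is serving at the time, and real search crawlers do far more than this), a few lines of Python check whether some common user agents are allowed to fetch a Reddit page:

```python
# Check https://www.reddit.com/robots.txt for a handful of crawler user agents.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

for agent in ("Googlebot", "Bingbot", "DuckDuckBot"):
    allowed = rp.can_fetch(agent, "https://www.reddit.com/r/programming/")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```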
