Microsoft

Microsoft Sued By Authors Over Use of Books in AI Training (reuters.com) 15

Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. From a report: Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training.

[...] The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts.

Mozilla

Mozilla Formally Discontinues Its DeepSpeech Project (phoronix.com) 10

An anonymous reader shares a report: One of Mozilla's more interesting projects not directly related to its web browser efforts was DeepSpeech, an embedded/offline speech-to-text engine. To little surprise, given the lack of activity in recent years, last week the company finally and formally discontinued the open-source project.

Mozilla DeepSpeech was a promising speech-to-text engine with great performance for real-time communication even when running on Raspberry Pi SBCs and other low-power systems.

Programming

'The Computer-Science Bubble Is Bursting' 128

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in The Computer-Science Bubble Is Bursting. "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year."

"But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true."

Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"

IT

HDMI 2.2 Finalized with 96 Gbps Bandwidth, 16K Resolution Support (tomshardware.com) 70

The HDMI Forum has officially finalized HDMI 2.2, doubling bandwidth from 48 Gbps under the current HDMI 2.1 standard to 96 Gbps. The specification enables 16K resolution at 60 Hz and 12K at 120 Hz with chroma subsampling, while supporting uncompressed 4K at 240 Hz with 12-bit color depth and uncompressed 8K at 60 Hz.

The new standard requires "Ultra96" certified cables with clear HDMI Forum branding to achieve full bandwidth. HDMI 2.2's 96 Gbps throughput surpasses DisplayPort 2.1b UHBR20's 80 Gbps maximum. The specification maintains backwards compatibility with existing devices and cables, operating at the lowest common denominator when mixed with older hardware. HDMI 2.2 also introduces a Latency Indication Protocol to improve audio-video synchronization in complex home theater setups.
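As a rough sanity check on those figures, the raw video data rate for a given mode can be estimated by multiplying pixel count, refresh rate, and bits per pixel. The sketch below is a back-of-the-envelope estimate (the helper function and its name are ours, not the HDMI Forum's formula) that ignores blanking intervals and link-encoding overhead, which push real requirements somewhat higher:

```python
# Back-of-the-envelope raw video bandwidth estimate (no blanking or
# link-encoding overhead included).
def raw_video_gbps(width: int, height: int, hz: int,
                   bits_per_channel: int, channels: int = 3) -> float:
    """Raw pixel data rate in Gbps (1 Gbps = 1e9 bits/s)."""
    return width * height * hz * bits_per_channel * channels / 1e9

# Uncompressed 4K @ 240 Hz with 12-bit color (36 bits per RGB pixel):
uhd_240 = raw_video_gbps(3840, 2160, 240, 12)
print(f"4K @ 240 Hz, 12-bit RGB: {uhd_240:.1f} Gbps")  # ~71.7 Gbps
```

Even before overhead, that mode exceeds HDMI 2.1's 48 Gbps while fitting within HDMI 2.2's 96 Gbps, which illustrates why the doubled link rate is needed for it.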

Australia

Australia Regulator and YouTube Spar Over Under-16s Social Media Ban 26

Australia's eSafety Commissioner has urged the government to deny YouTube an exemption from upcoming child safety regulations, citing research showing it exposes more children to harmful content than any other platform. YouTube pushed back, calling the commissioner's stance inconsistent with government data and parental feedback. "The quarrel adds an element of uncertainty to the December rollout of a law being watched by governments and tech leaders around the world as Australia seeks to become the first country to fine social media firms if they fail to block users aged under 16," reports Reuters. From the report: The centre-left Labor government of Anthony Albanese has previously said it would give YouTube a waiver, citing the platform's use for education and health. Other social media companies such as Meta's Facebook and Instagram, Snapchat, and TikTok have argued such an exemption would be unfair. eSafety Commissioner Julie Inman Grant said she wrote to the government last week to say there should be no exemptions when the law takes effect. She added that the regulator's research found 37% of children aged 10 to 15 reported seeing harmful content on YouTube -- the most of any social media site. [...]

YouTube, in a blog post, accused Inman Grant of giving inconsistent and contradictory advice that discounted the government's own research, which found 69% of parents considered the video platform suitable for people under 15. "The eSafety commissioner chose to ignore this data, the decision of the Australian Government and other clear evidence from teachers and parents that YouTube is suitable for younger users," wrote Rachel Lord, YouTube's public policy manager for Australia and New Zealand.

Inman Grant, asked about surveys supporting a YouTube exemption, said she was more concerned "about the safety of children and that's always going to surpass any concerns I have about politics or being liked or bringing the public onside". A spokesperson for Communications Minister Anika Wells said the minister was considering the online regulator's advice and her "top priority is making sure the draft rules fulfil the objective of the Act and protect children from the harms of social media."

Government

Health Secretary Wants Every American To Be Sporting a Wearable Within Four Years (gizmodo.com) 375

Health and Human Services Secretary Robert F. Kennedy Jr. announced a major federal campaign to promote wearable health tech, aiming for every American to adopt a device within four years as part of a broader effort to "Make America Healthy Again." Gizmodo reports: RFK Jr. announced the initiative Tuesday afternoon during a House Energy and Commerce Health Subcommittee meeting to discuss the HHS' budget request for the upcoming fiscal year. In response to a question from representative Troy Balderson (R-Ohio) about wearables, Kennedy revealed that HHS will soon conduct one of the agency's largest ever advertising campaigns to promote their use. He added that in his ideal future, every American will be donning a wearable within the next four years. "It's a key part of our mission to Make America Healthy Again," RFK Jr. stated in an X post following the question.

Network

Huawei Chair Says the Future of Comms Is Fiber-To-The-Room 97

The Register's Simon Sharwood reports: Huawei's chairman Xu Zhijun -- aka Eric Xu -- has called out China's enormous lead in fiber-to-the-room (FTTR) installations. Speaking at last week's Mobile World Congress event in Shanghai, Xu shared his views on the telecommunications industry's future growth opportunities and said by the end of 2025 China will be home to 75 million FTTR installations -- but just 500,000 exist outside the Middle Kingdom. Xu said FTTR will benefit businesses by increasing their internet connection speeds, helping them address spotty Wi-Fi coverage, allowing them to deploy tech in more places, and therefore creating more opportunities to adopt productivity-boosting devices and services. FTTR will also help carriers to sell more expensive packages, he said. Xu also urged telecom carriers to target high-growth user groups like delivery riders and livestream influencers, citing their above-average data consumption and revenue potential. Delivery riders, who will make up 5% of the global workforce by 2030, use four times more voice minutes and double the data of average users, while influencers generate five times the data usage and four times the revenue.

He also pushed for greater collaboration between carriers and platforms to deliver more high-res video content, and called for improved efficiency in networking equipment and device power use. "Xu said Huawei is here to help carriers deliver any of the scenarios he mentioned," concludes Sharwood. "And of course it is, because the Chinese giant has a thriving business selling to telcos -- or at least to telcos beyond the liberal democracies that have largely decided Huawei's close ties with Beijing mean the company and its products represent an unacceptable threat to the operation of critical infrastructure."

Apple

iPhone Customers Upset By Apple Wallet Ad Pushing F1 Movie (techcrunch.com) 78

An anonymous reader shares a report: Apple customers aren't thrilled they're getting an ad from the Apple Wallet app promoting the tech giant's Original Film, "F1 the Movie." Across social media, iPhone owners are complaining that their Wallet app sent out a push notification offering a $10 discount at Fandango for anyone buying two or more tickets to the film.

The feature film, starring Brad Pitt, explores the world of Formula 1 and was shot at actual Grand Prix races. It also showcases the use of Apple technology, from the custom-made cameras made of iPhone parts used to film inside the cars, to the AirPods Max that Pitt's character, F1 driver Sonny Hayes, sleeps in. However well-received the film may be, iPhone users don't necessarily want their built-in utilities, like their digital wallet, marketing to them.

Windows

Microsoft Extends Free Windows 10 Security Updates Into 2026, With Strings Attached (windows.com) 70

Microsoft will offer free Windows 10 security updates through October 2026 to consumers who enable Windows Backup or spend 1,000 Microsoft Rewards points, the company said today. The move provides alternatives to the previously announced $30-per-PC Extended Security Update program for individuals wanting to continue using Windows 10 past its October 14, 2025 end-of-support date.

The company will notify Windows 10 users about the ESU program through the Settings app and notifications starting in July, with full rollout by mid-August. Both free options require a Microsoft Account, which the company has increasingly pushed in Windows 11. Business and organizational customers can still purchase up to three years of ESU updates but must pay for the service.

Windows 10 remains installed on 53% of Windows PCs worldwide, according to Statcounter data.

China

China on Cusp of Seeing Over 100 DeepSeeks, Ex-Top Official Says (yahoo.com) 27

China's advantages in developing AI are about to unleash a wave of innovation that will generate more than 100 DeepSeek-like breakthroughs in the coming 18 months, according to a former top official. From a report: The new software products "will fundamentally change the nature and the tech nature of the whole Chinese economy," Zhu Min, who was previously a deputy governor of the People's Bank of China, said during the World Economic Forum in Tianjin on Tuesday.

Zhu, who also served as the deputy managing director at the International Monetary Fund, sees a transformation made possible by harnessing China's pool of engineers, massive consumer base and supportive government policies. The bullish take on China's AI future promises no letup in the competition for dominance in cutting-edge technologies with the US, just as the world's two biggest economies are also locked in a trade war.

AI

Hinge CEO Says Dating AI Chatbots Is 'Playing With Fire' (theverge.com) 57

In a podcast interview with The Verge's Nilay Patel, Hinge CEO Justin McLeod described integrating AI into dating apps as promising but warned against relying on AI companionship, likening it to "playing with fire" and consuming "junk food," potentially exacerbating the loneliness epidemic. He emphasized Hinge's mission to foster genuine human connections and highlighted upcoming AI-powered features designed to improve matchmaking and provide coaching to encourage real-world interactions. Here's an excerpt from the interview: Again, there's a fine line between prompting someone and coaching them inside Hinge, and we're coaching them in a different way within a more self-contained ecosystem. How do you think about that? Would you launch a full-on virtual girlfriend inside Hinge?

Certainly not. I have lots of thoughts about this. I think there's actually quite a clear line between providing a tool that helps people do something or get better at something, and the line where it becomes this thing that is trying to become your friend, trying to mimic emotions, and trying to create an emotional connection with you. That I think is really playing with fire. I think we are already in a crisis of loneliness, and a loneliness epidemic. It's a complex issue, and it's baked into our culture, and it goes back to before the internet. But just since 2000, over the past 20 years, the amount of time that people spend together in real life with their friends has dropped by 70 percent for young people. And it's been almost completely displaced by the time spent staring at screens. As a result, we've seen massive increases in mental health issues, and people's loneliness, anxiety, and depression.

I think Mark Zuckerberg was just quoted about this, that most people don't have enough friends. But he said we're going to give them AI chatbots. That he believes that AI chatbots can become your friends. I think that's honestly an extraordinarily reductive view of what a friendship is, that it's someone there to say all the right things to you at the right moment. The most rewarding parts of being in a friendship are being able to be there for someone else, to risk and be vulnerable, to share experiences with other conscious entities. So I think that while it will feel good in the moment, like junk food basically, to have an experience with someone who says all the right things and is available at the right time, it will ultimately, just like junk food, make people feel less healthy and more drained over time. It will displace the human relationships that people should be cultivating out in the real world.

How do you compete with that? That is the other thing that is happening. It is happening, whether it's good or bad. Hinge is offering a harder path. So you say, "We've got to get people out on dates." I honestly wonder about that, based on the younger folks I know who sometimes say, "I just don't want to leave the house. I would rather just talk to this computer. I have too much social pressure just leaving the house in this way." That's what Hinge is promising to do. How do you compete with that? Do you take it head on? Are you marketing that directly?

I'm starting to think very much about taking it head on. We want to continue at Hinge to champion human relationships, real human-to-human-in-real-life relationships, because I think they are an essential part of the human experience, and they're essential to our mental health. It's not just because I run a dating app and, obviously, it's important that people continue to meet. It really is a deep, personal mission of mine, and I think it's absolutely critical that someone is out there championing this. Because it's always easier to race to the bottom of the brain stem and offer people junk products that maybe sell in the moment but leave them worse off. That's the entire model that we've seen from what happened with social media. I think AI chatbots could frankly be much more dangerous in that respect.

So what we can do is to become more and more effective and support people more and more, and make it as easy as possible to do the harder and riskier thing, which is to go out and form real relationships with real people. They can let you down and might not always be there for you, but it is ultimately a much more nourishing and enriching experience for people. We can also champion and raise awareness as much as we can. That's another reason why I'm here today talking with you, because I think it's important to put out the counter perspective, that we don't just reflexively believe that AI chatbots can be your friend, without thinking too deeply about what that really implies and what that really means.

We keep going back to junk food, but people had to start waking up to the fact that this was harmful. We had to do a lot of campaigns to educate people that drinking Coca-Cola and eating fast food was detrimental to their health over the long term. And then as people became more aware of that, a whole personal wellness industry started to grow, and now that's a huge industry, and people spend a lot of time focusing on their diet and nutrition and mental health, and all these other things. I think similarly, social wellness needs to become a category like that. It's thinking about not just how do I get this junk social experience of social media where I get fed outraged news and celebrity gossip and all that stuff, but how do I start building a sense of social wellness, where I can create an enriching, intimate connection with important people in my life.
You can listen to the podcast here.

Ubuntu

Ubuntu To Disable Intel Graphics Security Mitigations To Boost GPU Performance By Up To 20% (arstechnica.com) 15

Disabling Intel graphics security mitigations in GPU compute stacks for OpenCL and Level Zero can yield a performance boost of up to 20%, prompting Ubuntu's Canonical and Intel to disable these mitigations in future Ubuntu packages. Phoronix's Michael Larabel reports: Intel does allow building their GPU compute stack without these mitigations by using the "NEO_DISABLE_MITIGATIONS" build option, and that is what Canonical is looking to set now for Ubuntu packages to avoid the significant performance impact. This work will likely all be addressed in time for Ubuntu 25.10. This NEO_DISABLE_MITIGATIONS option is just for compiling the Intel Compute Runtime stack and doesn't impact the Linux kernel security mitigations or anything else outside of Intel's "NEO" GPU compute stack. Both Intel and Canonical are in agreement with this move, and it turns out that even Intel's GitHub binary packages for their Compute Runtime for OpenCL and Level Zero ship with the mitigations disabled due to the performance impact. This Ubuntu Launchpad bug report for the Intel Compute Runtime notes some of the key takeaways. There is also this PPA where Ubuntu developers are currently testing their Compute Runtime builds with NEO_DISABLE_MITIGATIONS set to disable the mitigations.
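For readers who want to reproduce the build themselves, the report indicates NEO_DISABLE_MITIGATIONS is a build-time option for Intel's Compute Runtime ("NEO") stack. A CMake invocation along the following lines should work as a sketch; the repository URL and build layout are assumptions to verify against the compute-runtime project's own build documentation:

```shell
# Assumed build sketch: compile Intel's compute-runtime ("NEO") with the
# GPU security mitigations disabled. Check flag names against the
# project's build docs before relying on this.
git clone https://github.com/intel/compute-runtime.git
cmake -S compute-runtime -B build -DNEO_DISABLE_MITIGATIONS=TRUE
cmake --build build -j"$(nproc)"
```

Note this only affects the resulting OpenCL/Level Zero user-space stack, not kernel-level mitigations.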

Privacy

Judge Denies Creating 'Mass Surveillance Program' Harming All ChatGPT Users (arstechnica.com) 62

An anonymous reader quotes a report from Ars Technica: After a court ordered OpenAI to "indefinitely" retain all ChatGPT logs, including deleted chats, of millions of users, two panicked users tried and failed to intervene. The order sought to preserve potential evidence in a copyright infringement lawsuit raised by news organizations. In May, Judge Ona Wang, who drafted the order, rejected the first user's request (PDF) on behalf of his company simply because the company should have hired a lawyer to draft the filing. But more recently, Wang rejected (PDF) a second claim from another ChatGPT user, and that order went into greater detail, revealing how the judge is considering opposition to the order ahead of oral arguments this week, which were urgently requested by OpenAI.

The second request (PDF) to intervene came from a ChatGPT user named Aidan Hunt, who said that he uses ChatGPT "from time to time," occasionally sending OpenAI "highly sensitive personal and commercial information in the course of using the service." In his filing, Hunt alleged that Wang's preservation order created a "nationwide mass surveillance program" affecting and potentially harming "all ChatGPT users," who received no warning that their deleted and anonymous chats were suddenly being retained. He warned that the order limiting retention to just ChatGPT outputs carried the same risks as including user inputs, since outputs "inherently reveal, and often explicitly restate, the input questions or topics input."

Hunt claimed that he only learned that ChatGPT was retaining this information -- despite policies specifying they would not -- by stumbling upon the news in an online forum. Feeling that his Fourth Amendment and due process rights were being infringed, Hunt sought to influence the court's decision and proposed a motion to vacate the order that said Wang's "order effectively requires Defendants to implement a mass surveillance program affecting all ChatGPT users." [...] OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker's concerns about the preservation order. In his filing, Hunt explained that among his worst fears is that the order will not be blocked and that chat data will be disclosed to news plaintiffs who may be motivated to publicly disseminate the deleted chats. That could happen if news organizations find evidence of deleted chats they say are likely to contain user attempts to generate full news articles.

Wang suggested that there is no risk at this time since no chat data has yet been disclosed to the news organizations. That could mean that ChatGPT users may have better luck intervening after chat data is shared, should OpenAI's fight to block the order this week fail. But that's likely no comfort to users like Hunt, who worry that OpenAI merely retaining the data -- even if it's never shared with news organizations -- could cause severe and irreparable harms. Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users' privacy if other concerns -- like "financial costs of the case, desire for a quick resolution, and avoiding reputational damage" -- are deemed more important, his filing said.

AI

DeepSeek Aids China's Military and Evaded Export Controls, US Official Says (reuters.com) 28

An anonymous reader shares a report: AI firm DeepSeek is aiding China's military and intelligence operations, a senior U.S. official told Reuters, adding that the Chinese tech startup sought to use Southeast Asian shell companies to access high-end semiconductors that cannot be shipped to China under U.S. rules. The U.S. conclusions reflect a growing conviction in Washington that the capabilities behind the rapid rise of one of China's flagship AI enterprises may have been exaggerated and relied heavily on U.S. technology.

[...] "We understand that DeepSeek has willingly provided and will likely continue to provide support to China's military and intelligence operations," a senior State Department official told Reuters in an interview. "This effort goes above and beyond open-source access to DeepSeek's AI models," the official said, speaking on condition of anonymity in order to speak about U.S. government information. Chinese law requires companies operating in China to provide data to the government when requested. But the suggestion that DeepSeek is already doing so is likely to raise privacy and other concerns for the firm's tens of millions of daily global users.

Transportation

Volkswagen's Autonomous 'ID Buzz' Robotaxi Is Ready, And Cities And Companies Can Buy Them Soon (jalopnik.com) 65

The classic VW bus got an all-electric update — but that was just the beginning. Now there's an autonomous driving version that's intended for commercial fleets, reports Jalopnik: "a level 4 vehicle that drives set routes" that's "going into full production" as the ID Buzz AD. (The AD stands for "autonomous driving".) The AD version sports a longer wheelbase and a higher roofline than its mere human-driven sibling, which helps it to fit in the 13 cameras, nine LiDARs, and five radars that will (hopefully) allow the car to drive without crashing into anybody. These are intended for large-fleet customers providing taxi services, either ones run by local governments or private companies. [Volkswagen Group software subsidiary MOIA] has already lined up its first customer, the German city of Hamburg, which will provide the automated Buzz as a public transit option alongside traditional bus and subway services. If all goes well, after Hamburg MOIA "will bring sustainable, autonomous mobility to large-scale deployment in Europe and the U.S.," according to VW Group CEO Oliver Blume. Down the road, VW has also signed an agreement for rideshare juggernaut Uber to use the ID Buzz AD across America, starting with Los Angeles in 2026.

The ID Buzz AD is the first vehicle in Germany to reach SAE International's threshold for Level 4 autonomous driving, meaning that the car can drive itself, with no need for a driver behind the wheel, within designated areas.

It comes with "a full suite of tools for public and private transit providers," notes the EV news site Electrek. "That includes everything from the self-driving tech to fleet management software, passenger support, and operator training. That will allow cities and companies to launch driverless fleets quickly, safely, and at scale."

And Christian Senger, a member of the board of management of VW Commercial Vehicles, tells DW the vans will be manufactured in very large numbers. The Hannover VW factory is set to produce more than 10,000 commercial vehicles. "We believe we can be the leading supplier in Europe," Senger says.... [Senger] does not expect the top dog of Germany's beleaguered auto industry to make any money, at least at first. In the long term, though, he explains that autonomous driving is the lucrative field of the future, one that promises to be much more profitable than the traditional automotive industry...

The exact price has not yet been announced, but the ID. Buzz AD is unlikely to come cheap. According to Senger, buyers will have to pay a low six-figure sum (in euros) per vehicle. That means it's going to be expensive for transport companies. The Association of German Transport Companies, or VDV, is calling for a nationally coordinated strategy of long-term financing, and a market launch supported by public funding, to establish the country's supremacy in this market.

AI

OpenAI Pulls Promotional Materials About Jony Ive Deal (After Trademark Lawsuit) (techcrunch.com) 2

OpenAI appears to have pulled a much-discussed video promoting the friendship between CEO Sam Altman and legendary Apple designer Jony Ive (plus, incidentally, OpenAI's $6.5 billion deal to acquire Ive and Altman's device startup io) from its website and YouTube page. [Though you can still see the original on Archive.org.]

Does that suggest something is amiss with the acquisition, or with plans for Ive to lead design work at OpenAI? Not exactly, according to Bloomberg's Mark Gurman, who reports [on X.com] that the "deal is on track and has NOT dissolved or anything of the sort." Instead, he said a judge has issued a restraining order over the io name, forcing the company to pull all materials that used it.

Gurman elaborates on the disappearance of the video (and other related marketing materials) in a new article at Bloomberg: Bloomberg reported last week that a judge was considering barring OpenAI from using the IO name due to a lawsuit recently filed by the similarly named IYO Inc., which is also building AI devices. "This is an utterly baseless complaint and we'll fight it vigorously," a spokesperson for Ive said on Sunday.

The video is still viewable on X.com, notes TechCrunch. But visiting the "Sam and Jony" page on OpenAI now pulls up a 404 error message — written in the form of a haiku:

Ghost of code lingers
Blank space now invites wonder
Thoughts begin to soar

by o4-mini-high

AI

Tesla Begins Driverless Robotaxi Service in Austin, Texas (theguardian.com) 110

With no one behind the steering wheel, a Tesla robotaxi passes Guero's Taco Bar in Austin, Texas, making a right turn onto Congress Avenue.

Today is the day Austin became the first city in the world to see Tesla's self-driving robotaxi service, reports The Guardian: Some analysts believe that the robotaxis will only be available to employees and invitees initially. For the CEO, Tesla's rollout is slow. "We could start with 1,000 or 10,000 [robotaxis] on day one, but I don't think that would be prudent," he told CNBC in May. "So, we will start with probably 10 for a week, then increase it to 20, 30, 40."

The billionaire has said the driverless cars will be monitored remotely... [Posting on X.com] Musk said the date was "tentatively" 22 June but that this launch date would be "not real self-driving", which would have to wait nearly another week... Musk said he planned to have one thousand Tesla robotaxis on Austin roads "within a few months" and then he would expand to other cities in Texas and California.

Musk posted on X that riders on launch day would be charged a flat fee of $4.20, according to Reuters. And "In recent days, Tesla has sent invites to a select group of Tesla online influencers for a small and carefully monitored robotaxi trial..." As the date of the planned robotaxi launch approached, Texas lawmakers moved to enact rules on autonomous vehicles in the state. Texas Governor Greg Abbott, a Republican, on Friday signed legislation requiring a state permit to operate self-driving vehicles. The law does not take effect until September 1, but the governor's approval of it on Friday signals state officials from both parties want the driverless-vehicle industry to proceed cautiously... The law softens the state's previous anti-regulation stance on autonomous vehicles. A 2017 Texas law specifically prohibited cities from regulating self-driving cars...

The law requires autonomous-vehicle operators to get approval from the Texas Department of Motor Vehicles before operating on public streets without a human driver. It also gives state authorities the power to revoke permits if they deem a driverless vehicle "endangers the public," and requires firms to provide information on how police and first responders can deal with their driverless vehicles in emergency situations. The law's requirements for getting a state permit to operate an "automated motor vehicle" are not particularly onerous but require a firm to attest it can safely operate within the law... Compliance remains far easier than in some states, most notably California, which requires extensive submission of vehicle-testing data under state oversight.

Tesla "planned to operate only in areas it considered the safest," according to the article, and "plans to avoid bad weather, difficult intersections, and will not carry anyone below the age of 18."

More details from UPI: To get started using the robotaxis, users must download the Robotaxi app and log in with their Tesla account; the app then functions like most ridesharing apps...

"Riders may not always be delivered to their intended destinations or may experience inconveniences, interruptions, or discomfort related to the Robotaxi," the company wrote in a disclaimer in its terms of service. "Tesla may modify or cancel rides in its discretion, including for example due to weather conditions." The terms of service include a clause that Tesla will not be liable for "any indirect, consequential, incidental, special, exemplary, or punitive damages, including lost profits or revenues, lost data, lost time, the costs of procuring substitute transportation services, or other intangible losses" from the use of the robotaxis.

Their article includes a link to the robotaxi's complete Terms of Service: To the fullest extent permitted by law, the Robotaxi, Robotaxi app, and any ride are provided "as is" and "as available" without warranties of any kind, either express or implied... The Robotaxi is not intended to provide transportation services in connection with emergencies, for example emergency transportation to a hospital... Tesla's total liability for any claim arising from or relating to Robotaxi or the Robotaxi app is limited to the greater of the amount paid by you to Tesla for the Robotaxi ride giving rise to the claim, and $100... Tesla may modify these Terms in our discretion, effective upon posting an updated version on Tesla's website. By using a Robotaxi or the Robotaxi app after Tesla posts such modifications, you agree to be bound by the revised Terms.
AI

How the Music Industry is Building the Tech to Hunt Down AI-Generated Songs (theverge.com) 75

The goal isn't to stop generative music, but to make it traceable, reports the Verge — "to identify it early, tag it with metadata, and govern how it moves through the system...."

"Detection systems are being embedded across the entire music pipeline: in the tools used to train models, the platforms where songs are uploaded, the databases that license rights, and the algorithms that shape discovery." Platforms like YouTube and [French music streaming service] Deezer have developed internal systems to flag synthetic audio as it's uploaded and shape how it surfaces in search and recommendations. Other music companies — including Audible Magic, Pex, Rightsify, and SoundCloud — are expanding detection, moderation, and attribution features across everything from training datasets to distribution... Vermillio and Musical AI are developing systems to scan finished tracks for synthetic elements and automatically tag them in the metadata. Vermillio's TraceID framework goes deeper by breaking songs into stems — like vocal tone, melodic phrasing, and lyrical patterns — and flagging the specific AI-generated segments, allowing rights holders to detect mimicry at the stem level, even if a new track only borrows parts of an original. The company says its focus isn't takedowns, but proactive licensing and authenticated release... A rights holder or platform can run a finished track through [Vermillo's] TraceID to see if it contains protected elements — and if it does, have the system flag it for licensing before release.

Some companies are going even further upstream to the training data itself. By analyzing what goes into a model, their aim is to estimate how much a generated track borrows from specific artists or songs. That kind of attribution could enable more precise licensing, with royalties based on creative influence instead of post-release disputes...

Deezer has developed internal tools to flag fully AI-generated tracks at upload and reduce their visibility in both algorithmic and editorial recommendations, especially when the content appears spammy. Chief Innovation Officer Aurélien Hérault says that, as of April, those tools were detecting roughly 20 percent of new uploads each day as fully AI-generated — more than double what they saw in January. Tracks identified by the system remain accessible on the platform but are not promoted... Spawning AI's DNTP (Do Not Train Protocol) is pushing detection even earlier — at the dataset level. The opt-out protocol lets artists and rights holders label their work as off-limits for model training.
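The detection flow described above (per-stem flagging, metadata tagging, and demoting fully AI-generated uploads from recommendations while keeping them accessible) can be sketched as a toy pipeline. All names, fields, and the threshold here are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Stem:
    name: str               # e.g. "vocals", "melody"
    synthetic_score: float  # 0.0 (human) .. 1.0 (AI-generated), from a detector

@dataclass
class Track:
    title: str
    stems: list
    metadata: dict = field(default_factory=dict)

AI_THRESHOLD = 0.8  # hypothetical cutoff for flagging a stem as AI-generated

def tag_and_weight(track: Track) -> float:
    """Tag AI-generated stems in metadata and return a recommendation weight.

    Fully AI-generated tracks stay accessible but are demoted (weight 0.0);
    partially synthetic tracks are tagged per stem so rights holders can
    review them for licensing before release.
    """
    flagged = [s.name for s in track.stems if s.synthetic_score >= AI_THRESHOLD]
    track.metadata["ai_generated_stems"] = flagged
    track.metadata["fully_ai_generated"] = len(flagged) == len(track.stems)
    # Demote fully synthetic uploads from recommendations; keep partial
    # matches discoverable but tagged for stem-level attribution.
    return 0.0 if track.metadata["fully_ai_generated"] else 1.0
```

A track with one flagged stem would keep its normal recommendation weight but carry the stem tag in its metadata, which is the "proactive licensing" posture the article attributes to stem-level systems like TraceID.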

Thanks to long-time Slashdot reader SonicSpike for sharing the article.
AI

What if Customers Started Saying No to AI? (msn.com) 213

An artist cancelled their Duolingo and Audible subscriptions to protest the companies' decisions to use more AI. "If enough people leave, hopefully they kind of rethink this," the artist tells the Washington Post.

And apparently, many more people feel the same way... In thousands of comments and posts about Audible and Duolingo that The Post reviewed across social media — including on Reddit, YouTube, Threads and TikTok — people threatened to cancel subscriptions, voiced concern for human translators and narrators, and said AI creates inferior experiences. "It destroys the purpose of humanity. We have so many amazing abilities to create art and music and just appreciate what's around us," said Kayla Ellsworth, a 21-year-old college student. "Some of the things that are the most important to us are being replaced by things that are not real...."

People in creative jobs are already on edge about the role AI is playing in their fields. On sites such as Etsy, clearly AI-generated art and other products are pushing out some original crafters who make a living on their creations. AI is being used to write romance novels and coloring books, design logos and make presentations... "I was promised tech would make everything easier so I could enjoy life," author Brittany Moone said. "Now it's leaving me all the dishes and the laundry so AI can make the art."

But will this turn into a consumer movement? The article also cites an assistant marketing professor at Washington State University, who found customers are now reacting negatively to the term "AI" in product descriptions, out of fear of losing their jobs (as well as concerns about quality and privacy). And he predicts this could change the way companies use AI.

"There will be some companies that are going to differentiate themselves by saying no to AI." And while it could be a niche market, "The people will be willing to pay more for things just made by humans."
AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."
