Television

The New Dolby Vision 2 HDR Standard is Probably Going To Be Controversial (arstechnica.com) 75

Dolby Vision 2 addresses two widespread TV viewing problems in ways that will likely divide viewers and creators. The format's Content Intelligence feature uses AI and ambient light sensors to brighten notoriously dark content like Game of Thrones' Battle of Winterfell and Apple TV+'s Silo based on room brightness.

Authentic Motion grants filmmakers scene-by-scene control over motion smoothing, a feature most cinephiles despise for creating artifacts and making films look like 60fps home videos. Many filmmakers have criticized motion smoothing for undermining artistic intent. Dolby positions the feature as eliminating unwanted judder while maintaining cinematic feel. The format launches in standard and Max tiers for high-end displays.

AI

FreeBSD Project Isn't Ready To Let AI Commit Code Just Yet (theregister.com) 21

The latest status report from the FreeBSD Project says no thanks to code generated by LLM-based assistants. From a report: The FreeBSD Project's Status Report for the second quarter of 2025 contains updates from various sub-teams that are working on improving the FreeBSD OS, including separate sub-projects such as enabling FreeBSD apps to run on Linux, Chinese translation efforts, support for Solaris-style Extended Attributes, and for Apple's legacy HFS+ file system.

The thing that stood out to us, though, was that the core team is working on what it terms a "Policy on generative AI created code and documentation." The relevant paragraph says: "Core is investigating setting up a policy for LLM/AI usage (including but not limited to generating code). The result will be added to the Contributors Guide in the doc repository. AI can be useful for translations (which seems faster than doing the work manually), explaining long/obscure documents, tracking down bugs, or helping to understand large code bases. We currently tend to not use it to generate code because of license concerns. The discussion continues at the core session at BSDCan 2025 developer summit, and core is still collecting feedback and working on the policy."

Google

Google Critics Think the Search Remedies Ruling is a Total Whiff (theverge.com) 41

Critics are denouncing Tuesday's antitrust remedies ruling against Google, calling the remedies inadequate to restore competition in the search market. DuckDuckGo said the court's decision allows Google to continue using its monopoly to hold back competitors in AI search.

The Open Markets Institute called it "pure judicial cowardice" that leaves Google's power "almost fully intact." Senator Amy Klobuchar said the limited remedies demonstrate why Congress needs to pass legislation stopping dominant platforms from preferencing their own products. The News/Media Alliance criticized Judge Amit Mehta for failing to address Google forcing publishers to provide content for AI offerings to remain in search results.

AI

AI-Powered Drone Swarms Have Now Entered the Battlefield (msn.com) 91

An anonymous reader quotes a report from the Wall Street Journal: On a recent evening, a trio of Ukrainian drones flew under the cover of darkness to a Russian position and decided among themselves exactly when to strike. The assault was an example of how Ukraine is using artificial intelligence to allow groups of drones to coordinate with each other to attack Russian positions, an innovative technology that heralds the future of battle. Military experts say the so-called swarm technology represents the next frontier for drone warfare because of its potential to allow tens or even thousands of drones -- or swarms -- to be deployed at once to overwhelm the defenses of a target, be that a city or an individual military asset.

Ukraine has conducted swarm attacks on the battlefield for much of the past year, according to a senior Ukrainian officer and the company that makes the software. The previously unreported attacks are the first known routine use of swarm technology in combat, analysts say, underscoring Ukraine's position at the vanguard of drone warfare. [...] The drones deployed in the recent Ukrainian attack used technology developed by local company Swarmer. Its software allows groups of drones to decide which one strikes first and adapt if, for instance, one runs out of battery, said Chief Executive Serhii Kupriienko. "You set the target and the drones do the rest," Kupriienko said. "They work together, they adapt."

Swarmer's technology was first deployed by Ukrainian forces to lay mines around a year ago. It has since been used to target Russian soldiers, equipment and infrastructure, according to the Ukrainian military officer. The officer said his drone unit had used Swarmer's technology more than a hundred times, and that other units also have UAVs equipped with the software. He typically uses the technology with three drones, but says others have deployed it with as many as eight. Kupriienko said the software has been tested with up to 25 drones. A common operation uses a reconnaissance drone and two other UAVs carrying small bombs to target a Russian trench, the officer said. An operator gives the drones a target zone to look for an enemy position and the command to engage when it is spotted. The reconnaissance drone maps the route for the bombers to follow and the drones themselves then decide when, and which one, will release the bombs over the target.

Cloud

SAP To Invest Over 20 Billion Euros In 'Sovereign Cloud' (cnbc.com) 18

SAP will invest over 20 billion euros ($23 billion) in European sovereign cloud infrastructure over the next decade. "Innovation and sovereignty cannot be two separate things -- it needs to come together," said Thomas Saueressig, SAP's board member tasked with leading customer services and delivery. CNBC reports: The company said it was expanding its sovereign cloud offerings to include an infrastructure-as-a-service (IaaS) platform enabling companies to access various computing services via its data center network. IaaS is a market dominated by players like Microsoft and Amazon. It will also roll out a new on-site option that allows customers to use SAP-operated infrastructure within their own data centers. The aim of the initiative is to ensure that customer data is stored within the European Union to maintain compliance with regional data protection regulations such as the General Data Protection Regulation, or GDPR.

[...] Saueressig said that SAP is "closely" involved in the creation of the new AI gigafactories but would not be the lead partner for the initiative. He added that the company's more than 20-billion-euro investment in Europe's sovereign cloud capabilities will not alter the company's capital expenditure for the next year and has already been baked into its financial plans.

AI

OpenAI To Acquire Product Testing Startup Statsig, Appoints CTO of Applications (reuters.com) 3

An anonymous reader quotes a report from Reuters: OpenAI said on Tuesday it will acquire Statsig in an all-stock deal valuing the product testing startup at about $1.1 billion based on OpenAI's current valuation of $300 billion. The ChatGPT maker will also appoint Statsig's chief executive officer, Vijaye Raji, as OpenAI's tech chief of applications, in a push to build on its artificial intelligence products amid strong competition from rivals.

[...] In his role, Vijaye will head product engineering for ChatGPT and the company's coding agent, Codex, with responsibilities that span core systems and product lines including infrastructure, the company said. Statsig builds tools to help software developers test and flag new features. It raised $100 million in funding earlier this year. Once the acquisition is finalized, Statsig employees will work for OpenAI but will continue operating independently out of its Seattle office, OpenAI said.

The move follows the acquisition of iPhone designer Jony Ive's startup, io Products, in a $6.5 billion deal to usher in "a new family of products" for the age of artificial general intelligence.

Security

Hackers Threaten To Submit Artists' Data To AI Models If Art Site Doesn't Pay Up (404media.co) 32

An old school ransomware attack has a new twist: threatening to feed data to AI companies so it'll be added to LLM datasets. 404 Media reports: Artists&Clients is a website that connects independent artists with interested clients. Around August 30, a message appeared on Artists&Clients attributed to the ransomware group LunaLock. "We have breached the website Artists&Clients to steal and encrypt all its data," the message on the site said, according to screenshots taken before the site went down on Tuesday. "If you are a user of this website, you are urged to contact the owners and insist that they pay our ransom. If this ransom is not paid, we will release all data publicly on this Tor site, including source code and personal data of users. Additionally, we will submit all artwork to AI companies to be added to training datasets."

LunaLock promised to delete the stolen data and allow users to decrypt their files if the site's owner paid a $50,000 ransom. "Payment is accepted in either Bitcoin or Monero," the notice put on the site by the hackers said. The ransom note included a countdown timer that gave the site's owners several days to cough up the cash. "If you do not pay, all files will be leaked, including personal user data. This may cause you to be subject to fines and penalties under the GDPR and other laws."

Education

85% of College Students Report AI Use (insidehighered.com) 57

College students have integrated generative AI into their academic routines at an unprecedented scale, with 85% reporting use for coursework in the past year, according to new Inside Higher Ed survey data. The majority employ AI tools for brainstorming ideas, seeking tutoring assistance, and exam preparation rather than wholesale academic outsourcing. Only 25% admitted using AI to complete assignments entirely, while 19% generated full essays.

Students overwhelmingly reject institutional policing approaches, with 53% favoring education on ethical AI use over detection software deployment. Despite widespread adoption, 35% of respondents report no change in their perception of college value, while 23% view their degrees as more valuable in the AI era.

AI

Salesforce CEO Says AI Enabled Him To Cut 4,000 Jobs (kron4.com) 81

An anonymous reader shares a report: Speaking to The Logan Bartlett Show on Friday, Salesforce CEO Marc Benioff said the use of AI agents had enabled him to "rebalance" his headcount in the customer support division by trimming 4,000 jobs. "I've reduced it from 9,000 head to about 5,000 because I need less heads," Benioff said. Benioff called the first eight months of 2025, during which an estimated 10,000 jobs have been lost to AI, "eight of the most exciting months of my career."

"There were more than 100 million leads that we have not called back at Salesforce in the last 26 years because we have not had enough people," Benioff said. "We just couldn't call them back. But we now have an agentic sales that is calling back every person that contacts us."

Transportation

'Why Do Waymos Keep Loitering in Front of My House?' (theverge.com) 66

Waymo robotaxis are repeatedly selecting identical parking spots in front of specific Los Angeles and Arizona homes between rides, puzzling residents who document the same vehicles returning to precise locations daily. The company states its vehicles choose parking based on local regulations, existing vehicle distribution, and proximity to high-demand areas but cannot explain the algorithmic specificity.

Carnegie Mellon autonomous vehicle expert Phil Koopman attributes the behavior to machine learning systems optimizing for specific spots without variation. Waymo said it had received neighbor complaints and has designated certain locations as no-parking zones for its fleet. The vehicles comply with three-hour parking limits under Los Angeles Department of Transportation regulations governing commercial passenger vehicles under 22 feet.

Microsoft

Blizzard's 'Diablo' Devs Unionize. There Are Now 3,500 Unionized Microsoft Workers (aftermath.site) 68

PC Gamer reports: The Diablo team is the next in line to unionize at Blizzard. Over 450 developers across multiple disciplines have voted to form a union under the Communications Workers of America (CWA), and they're now the fourth major Blizzard team to do so... A wave of unions have formed at Blizzard in the last year, including the World of Warcraft, Overwatch, and Story and Franchise Development teams. Elsewhere at Microsoft, Bethesda, ZeniMax Online Studios and ZeniMax QA testers have also unionized...

The CWA says over 3,500 Microsoft workers have now organized to fight for fair compensation, job security, and improved working conditions.

CWA is America's largest communications and media labor union, and in a statement, local 9510 president Jason Justice called the successful vote "part of a much larger story about turning the tide in an industry that has long overlooked its labor. Entertainment workers across film, television, music, and now video games are standing together to have a seat at the table. The strength of our movement comes from that solidarity."

And CWA local 6215 president Ron Swaggerty said "Each new organizing effort adds momentum to the nationwide movement for video game worker power."

"What began as a trickle has turned into an avalanche," writes the gaming news site Aftermath, calling the latest vote "a direct result of the union neutrality deal Microsoft struck with CWA in 2022 when it was facing regulatory scrutiny over its $68.7 billion purchase of Activision Blizzard." We've come a long way since small units at Raven and Blizzard Albany fended off Activision Blizzard's pre-acquisition attempts at union busting in 2022 and 2023, and not a moment too soon: Microsoft's penchant for mass layoffs has cut some teams to the bone and left others warily counting down the days until their heads land on the chopping block. This new union, workers hope, will act as a bulwark...

[B]ased on preliminary conversations with prospective members, they can already hazard a few guesses as to what they'll be arm-wrestling management over at the bargaining table: pay equity, AI, crediting, and remote work.

AI

First 'AI Music Creator' Signed by Record Label. More Ahead, or Just a Copyright Quandary? (apnews.com) 101

"I have no musical talent at all," says Oliver McCann. "I can't sing, I can't play instruments, and I have no musical background at all!"

But the Associated Press describes 37-year-old McCann as a British "AI music creator" — and last month McCann signed with an independent record label "after one of his tracks racked up 3 million streams, in what's billed as the first time a music label has inked a contract with an AI music creator." McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by a fictitious group, Velvet Sundown, that went viral even though all its songs, lyrics and album art were created by AI. Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it's impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming.

The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they only account for a tiny amount of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven't released any figures on AI music... "It's a total boom. It's a tsunami," said Josh Antonuccio, director of Ohio University's School of Media Arts and Studies. The amount of AI generated music "is just going to only exponentially increase" as young people grow up with AI and become more comfortable with it, he said. [Antonuccio says later the cost of making a hit record "just keeps winnowing down from a major studio to a laptop to a bedroom. And now it's like a text prompt — several text prompts." Though there's a lack of legal clarity over copyright issues.]

Generative AI, with its ability to spit out seemingly unique content, has divided the music world, with musicians and industry groups complaining that recorded works are being exploited to train AI models that power song generation tools... Three major record companies, Sony Music Entertainment, Universal Music Group and Warner Records, filed lawsuits last year against Suno and Udio for copyright infringement. In June, the two sides also reportedly entered negotiations that could go beyond settling the lawsuits and set rules for how artists are paid when AI is used to remix their songs.

GEMA, a German royalty collection society, has sued Suno, accusing it of generating music similar to songs like "Mambo No. 5" by Lou Bega and "Forever Young" by Alphaville. More than 1,000 musicians, including Kate Bush, Annie Lennox and Damon Albarn, released a silent album to protest proposed changes to U.K. laws on AI they fear would erode their creative control.

Meanwhile, other artists, such as will.i.am, Timbaland and Imogen Heap, have embraced the technology. Some users say the debate is just a rehash of old arguments about once-new technology that eventually became widely used, such as AutoTune, drum machines and synthesizers.

AI

OpenAI Is Scanning Users' ChatGPT Conversations and Reporting Content To Police (futurism.com) 72

Futurism reports: Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening.

"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.

"Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..."

Thanks to long-time Slashdot reader schwit1 for sharing the news.

AI

Humans Are Being Hired to Make AI Slop Look Less Sloppy (nbcnews.com) 78

Graphic designer Lisa Carstens "spends a good portion of her day working with startups and individual clients looking to fix their botched attempts at AI-generated logos," reports NBC News: Such gigs are part of a new category of work spawned by the generative AI boom that threatened to displace creative jobs across the board: Anyone can now write blog posts, produce a graphic or code an app with a few text prompts, but AI-generated content rarely makes for a satisfactory final product on its own... Fixing AI's mistakes is not their ideal line of work, many freelancers say, as it tends to pay less than traditional gigs in their area of expertise. But some say it's what helps pay the bills....

As companies struggle to figure out their approach to AI, recent data provided to NBC News from freelance job platforms Upwork, Freelancer and Fiverr also suggest that demand for various types of creative work surged this year, and that clients are increasingly looking for humans who can work alongside AI technologies without relying on or rejecting them entirely. Data from Upwork found that although AI is already automating lower-skilled and repetitive tasks, the platform is seeing growing demand for more complex work such as content strategy or creative art direction. And over the past six months, Fiverr said it has seen a 250% boost in demand for niche tasks across web design and book illustration, from "watercolor children story book illustration" to "Shopify website design." Similarly, Freelancer saw a surge in demand this year for humans in writing, branding, design and video production, including requests for emotionally engaging content like "heartfelt speeches...."

The low pay from clients who have already cheaped out on AI tools has affected gig workers across industries, including more technical ones like coding. For India-based web and app developer Harsh Kumar, many of his clients say they had already invested much of their budget in "vibe coding" tools that couldn't deliver the results they wanted. But others, he said, are realizing that shelling out for a human developer is worth the headaches saved from trying to get an AI assistant to fix its own "crappy code." Kumar said his clients often bring him vibe-coded websites or apps that resulted in unstable or wholly unusable systems.

"Even outside of any obvious mistakes made by AI tools, some artists say their clients simply want a human touch to distinguish themselves from the growing pool of AI-generated content online..."

AI

Are AI Web Crawlers 'Destroying Websites' In Their Hunt for Training Data? (theregister.com) 85

"AI web crawlers are strip-mining the web in their perpetual hunt for ever more content to feed into their Large Language Model mills," argues Steven J. Vaughan-Nichols at the Register.

And "when AI searchbots, with Meta (52% of AI searchbot traffic), Google (23%), and OpenAI (20%) leading the way, clobber websites with as much as 30 Terabits in a single surge, they're damaging even the largest companies' site performance..." How much traffic do they account for? According to Cloudflare, a major content delivery network (CDN) force, 30% of global web traffic now comes from bots. Leading the way and growing fast? AI bots... Anyone who runs a website, though, knows there's a huge, honking difference between the old-style crawlers and today's AI crawlers. The new ones are site killers. Fastly warns that they're causing "performance degradation, service disruption, and increased operational costs." Why? Because they're hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes.

Moreover, AI crawlers are much more aggressive than standard crawlers. As the web hosting company InMotion Hosting notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts. The result? If you're using a shared server for your website, as many small businesses do, even if your site isn't being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site's performance drops through the floor even if an AI crawler isn't raiding your website...

AI crawlers don't direct users back to the original sources. They kick our sites around, return nothing, and we're left trying to decide how we're to make a living in the AI-driven web world. Yes, of course, we can try to fend them off with logins, paywalls, CAPTCHA challenges, and sophisticated anti-bot technologies. You know one thing AI is good at? It's getting around those walls. As for robots.txt files, the old-school way of blocking crawlers? Many — most? — AI crawlers simply ignore them... There are efforts afoot to supplement robots.txt with llms.txt files. This is a proposed standard to provide LLM-friendly content that LLMs can access without compromising the site's performance. Not everyone is thrilled with this approach, though, and it may yet come to nothing.
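For site owners who still want to try, a minimal robots.txt aimed at the major AI crawlers looks like the sketch below. The user-agent tokens shown (GPTBot, ClaudeBot, CCBot, Google-Extended) are ones the respective vendors have documented, but as noted above, honoring them is entirely voluntary on the crawler's part:

```
# robots.txt: opt out of known AI training crawlers.
# Each token is published by its vendor; compliance is voluntary.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Anthropic's crawler
User-agent: ClaudeBot
Disallow: /

# Common Crawl, whose corpus is widely used as LLM training data
User-agent: CCBot
Disallow: /

# Google's control token for AI training use of crawled content
User-agent: Google-Extended
Disallow: /
```

Note that Google-Extended is a control token rather than a separate crawler: it tells Google not to use fetched pages for AI training, while ordinary Googlebot search indexing continues unaffected.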

In the meantime, to combat excessive crawling, some infrastructure providers, such as Cloudflare, now offer default bot-blocking services to block AI crawlers and provide mechanisms to deter AI companies from accessing their data.

Facebook

What Made Meta Suddenly Ban Tens of Thousands of Accounts? (bbc.com) 105

"For months, tens of thousands of people around the world have been complaining Meta has been banning their Instagram and Facebook accounts in error..." the BBC reported this month... More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended — but some also speak of the profound personal toll it has taken on them, including concerns that the police could become involved.

Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied there is a wider issue on Facebook or Instagram at all. It has repeatedly refused to comment on the problems its users are facing — though it has frequently overturned bans when the BBC has raised individual cases with it.

One example is a woman who lost the Instagram profile for her boutique dress shop. ("Over 5,000 followers, gone in an instant.") "After the BBC sent questions about her case to Meta's press office, her Instagram accounts were reinstated... Five minutes later, her personal Instagram was suspended again — but the account for the dress shop remained."

Another user spent a month appealing. ("In June, the BBC understands a human moderator double checked," but concluded he'd breached a policy.) And then "his account was abruptly restored at the end of July. 'We're sorry we've got this wrong,' Instagram said in an email to him, adding that he had done nothing wrong." Hours after the BBC contacted Meta's press office to ask questions about his experience, he was banned again on Instagram and, for the first time, Facebook... His Facebook account was back two days later — but he was still blocked from Instagram.

None of the banned users in the BBC's examples were ever told what post breached the platform's rules. Over 36,000 people have signed a petition accusing Meta of falsely banning accounts; thousands more are in Reddit forums or on social media posting about it. Their central accusation — Meta's AI is unfairly banning people, with the tech also being used to deal with the appeals. The only way to speak to a human is to pay for Meta Verified, and even then many are frustrated.

Meta has not commented on these claims. Instagram states AI is central to its "content review process" and Meta has outlined how technology and humans enforce its policies.

The Guardian reports there's been "talk of a class action against Meta over the bans." Users report Meta has typically been unresponsive to their pleas for assistance, often with standardised responses to requests for review, almost all of which have been rejected... But the company claims there has not been an increase in incorrect account suspension, and the volume of users complaining was not indicative of new targeting or over-enforcement. "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson for Meta said.

"It happened to me this morning," writes long-time Slashdot reader Daemon Duck, asking if any other Slashdot readers had their personal (or business) account unreasonably banned. (And wondering what to do next...)

Music

Five Indie Bands Quit Spotify After Founder's AI Weapons Tech Investment (theguardian.com) 48

At the moment, the Spotify exodus of 2025 is a trickle rather than a flood, writes the Guardian, citing the departure of five notable bands "liked in indie circles," but not "the sorts to rack up billions of listens."

"Still, it feels significant if only because, well, this sort of thing wasn't really supposed to happen any more." Plenty of bands and artists refused to play ball with Spotify in its early years, when the streamer still had work to do before achieving total ubiquity. But at some point there seemed to be a collective recognition that resistance was futile, that Spotify had won and those bands would have to bend to its less-than-appealing model... This artist acquiescence happened in tandem — surely not coincidentally — with a closer relationship between Spotify and the record labels that once viewed it as their destroyer. Some of the bigger labels have found a way to make a lot of money from streaming: Spotify paid out $10bn in royalties last year — though many artists would point out that only a small fraction of that reaches them after their label takes its share...

So why have those five bands departed in quick succession? The trigger was the announcement that Spotify founder Daniel Ek had led a €600m fundraising push into a German defence company specialising in AI weapons technology. That was enough to prompt Deerhoof, the veteran San Francisco oddball noise pop band, to jump. "We don't want our music killing people," was how they bluntly explained their move on Instagram. That seems to have also been the animating factor for the rest of the departed, though GY!BE, who aren't on any social media platforms, removed their music from Spotify — and indeed all other platforms aside from Bandcamp — without issuing a statement, while Hotline TNT's statement seemed to frame it as one big element in a broader ideological schism. "The company that bills itself as the steward of all recorded music has proven beyond the shadow of a doubt that it does not align with the band's values in any way," the statement read.

That speaks to a wider artist discontent in a company that has, even by its own standards, had a controversial couple of years. There was of course the publication of Liz Pelly's marmalade-dropper of a book Mood Machine, with its blow-by-blow explanation of why Spotify's model is so deleterious to musicians, including allegations that the streamer is filling its playlists with "ghost artists" to further push down the number of streams, and thus royalty payments, to real artists (Spotify denies this). The streamer continues to amend its model in ways that have caused frustration — demonetising artists with fewer than 1,000 streams, or by introducing a new bundling strategy resulting in lower royalty fees. Meanwhile, the company — along with other streamers — has struggled to police a steady flow of AI-generated tracks and artists on to the platform...

[R]emoving yourself from such an important platform is highly risky. But if they can pull it off, the sacrifice might just be worth it. "A cooler world is possible," as Hotline TNT put it in their statement.

The Guardian's culture editor adds that "I've been using Bandcamp more, even — gasp — buying albums..."

"Maybe weaning ourselves off not just Spotify, but the way that Spotify has convinced us to consume music is the only answer. Then a cooler world might be possible."

AI

Did Will Smith Upload an AI-Enhanced Video - and Is This Just the Beginning? (hollywoodreporter.com) 28

After Will Smith uploaded a video of an adoring crowd, blogger Andy Baio "conducted a detailed analysis that suggests Will Smith's team might have used AI to turn photos from his recent concerts into videos," writes BGR. But there's more to the story: Google recently ran an experiment for YouTube Shorts in which it used AI (machine learning) to improve the quality of Shorts without asking the creator for permission. People complained the videos looked like they were AI generated. It seems that Will Smith's YouTube Shorts clip that attracted criticism from fans this week might have been a victim of this experiment... The signs are real. The man who claimed Will Smith's song helped him cure cancer was there. The woman in front of him was holding the sign with him. The "Lov U" sign appeared in photos the singer posted on his social media channels before the clip was shared.
"Will Smith has not denied the use of AI in these promotional clips," the article adds.

But the Hollywood Reporter also calls it "just the beginning of AI chaos," noting that "influencers and spinmeisters have been using AI upscaling for years, if quietly, the way you might round up your current salary in a job interview." It's only going to grow more popular as the tools get better. (And they will — you just need some tweaks to the model and increases in compute to erase these hallucinations.) In fact, when the chapter on the early AI Age is written, the line about this moment is less likely to be, "Remember when Will Smith did something cringily AI?" and more, "Remember when AI was still seen as so cringe that we made fun of Will Smith for it?" Experts differ on the timeline, but everyone agrees it's just years if not months before we'll stop being able to spot an AI video. [Will Smith's video] had the particular misfortune of coming out at this interregnum moment: good enough for someone to use but not so good we can't spot it.

That moment will be over soon enough, and, I suspect, so will our pearl-clutching. The main effect of this new age of the synthetic is that video will stop being a meaningful measure of truth. We have long stopped believing everything we read, and AI image-generators have killed what photoshop wounded. But video until now has been the last bastion of objectivity — incontrovertible evidence that an event took place the way it seemed to....

But there is an upside. (Really.) Without a format that can telegraph objectivity, we'll need to (if we care to) turn to other ways to assure ourselves of the facts: the source of the video. That could mean the human-led content creator will matter more. After years of seeing news brands take a beating in the trust department, they'll soon become the only hope we have of knowing whether something happened. We no longer will be able to trust the medium. But we may newly believe the media.

Power

Fusion Power Company CFS Raises $863M More From Google, Nvidia, and Many Others (techcrunch.com) 71

When it comes to nuclear fusion energy, "How do we advance fusion as fast as possible?" asks the CEO of Commonwealth Fusion Systems. They've just raised $863 million from Nvidia, Google, the Bill Gates-founded Breakthrough Energy Ventures and nearly two dozen more investors, which "may prove helpful as the company develops its supply chain and searches for partners to build its power plants and buy electricity," reports TechCrunch.

Commonwealth's CEO/co-founder Bob Mumgaard says "This round of capital isn't just about fusion just generally as a concept... It's about how do we go to make fusion into a commercial industrial endeavor." The Massachusetts-based company has raised nearly $3 billion to date, the most of any fusion startup. Commonwealth Fusion Systems (CFS) previously raised a $1.8 billion round in 2021...

CFS is currently building a prototype reactor called Sparc in a Boston suburb. The company expects to turn that device on later next year and achieve scientific breakeven in 2027, a milestone in which the fusion reaction produces more energy than was required to ignite it. Though Sparc isn't designed to sell power to the grid, it's still vital to CFS's success. "There are parts of the modeling and the physics that we don't yet understand," Saskia Mordijck, an associate professor of physics at the College of William and Mary, told TechCrunch. "It's always an open question when you turn on a completely new device that it might go into plasma regimes we've never been into, that maybe we uncover things that we just did not expect." Assuming Sparc doesn't reveal any major problems, CFS expects to begin construction on Arc, its commercial-scale power plant, in Virginia starting in 2027 or 2028...

"We know that this kind of idea should work," Mordijck said. "The question is naturally, how will it perform?" Investors appear to like what they've seen so far. The list of participants in the Series B2 round is lengthy. No single investor led the round, and a number of existing investors increased their stakes, said Ally Yost, CFS's senior vice president of corporate development... The new round will help CFS make progress on Sparc, but it will not be enough to build Arc, which will likely cost several billion dollars, Mumgaard said.

"As advances in computing and AI have quickened the pace of research and development, the sector has become a hotbed of startup and investor activity," the article points out.

And CEO Mumgaard told TechCrunch that their Sparc prototype will prove the soundness of the science — but it's also important to learn "the capabilities that you need to be able to deliver it. It's also to have the receipts, know what these things cost!"
Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses that continuously listen to conversations and display their insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merrill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their video on TikTok about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
