Android

Android Phones Will Soon Reboot Themselves After Sitting Unused For 3 Days (arstechnica.com) 98

An anonymous reader shares a report: A silent update rolling out to virtually all Android devices will make your phone more secure, and all you have to do is not touch it for a few days. The new feature automatically restarts a locked device after three days of inactivity, helping keep your personal data secure. It's coming as part of a Google Play Services update, though, so there's nothing you can do to speed the process along.

Google is preparing to release a new update to Play Services (v25.14), which brings a raft of tweaks and improvements to myriad system features. First spotted by 9to5Google, the update was officially released on April 14, but as with all Play Services updates, it could take a week or more to reach all devices. When 25.14 arrives, Android devices will see a few minor improvements, including prettier settings screens, improved connection with cars and watches, and content previews when using Quick Share.

AI

Publishers and Law Professors Back Authors in Meta AI Copyright Battle 14

Publishers and law professors have filed amicus briefs supporting authors who sued Meta over its AI training practices, arguing that the company's use of "thousands of pirated books" fails to qualify as fair use under copyright law.

The filings [PDF] in California's Northern District federal court came from copyright law professors, the International Association of Scientific, Technical and Medical Publishers (STM), Copyright Alliance, and Association of American Publishers. The briefs counter earlier support for Meta from the Electronic Frontier Foundation and IP professors.

While Meta's defenders pointed to the 2015 Google Books ruling as precedent, the copyright professors distinguished Meta's use, arguing Google Books told users something "about" books without "exploiting expressive elements," whereas AI models leverage the books' creative content.

"Meta's use wasn't transformative because, like the AI models, the plaintiffs' works also increased 'knowledge and skill,'" the professors wrote, warning of a "cascading effect" if Meta prevails. STM is specifically challenging Meta's data sources: "While Meta attempts to label them 'publicly available datasets,' they are only 'publicly available' because those perpetuating their existence are breaking the law."
China

Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com) 43

An anonymous reader quotes a report from Forbes: Robotaxi development is moving at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects and other domestic players. China has 4 main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously, it was one of the few companies to get a California "no safety driver" test permit, then lost it after a crash and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. Companies must get approval for each level of operation, which controls where they can test and operate and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat with access to an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.

The big jump can then come to allow testing with nobody in the vehicle, but with full-time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then expanding the service to more complex areas. Later they can go further and drop full-time remote monitoring, though there do need to be remote employees able to monitor and assist part time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has an application in process for 15 vehicles per operator, but it declined to comment on just how many vehicles it actually has per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement, so the numbers can't be readily compared. Chinese companies have no discretion on what is reported, though they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted.
On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.
AI

OpenAI Unveils Coding-Focused GPT-4.1 While Phasing Out GPT-4.5 13

OpenAI unveiled its GPT-4.1 model family on Monday, prioritizing coding capabilities and instruction following while expanding context windows to 1 million tokens -- approximately 750,000 words. The lineup includes standard GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano variants, all available via API but not ChatGPT.

The flagship model scores 54.6% on SWE-bench Verified, lagging behind Google's Gemini 2.5 Pro (63.8%) and Anthropic's Claude 3.7 Sonnet (62.3%) on the same software engineering benchmark, according to TechCrunch. However, it achieves 72% accuracy on Video-MME's long video comprehension tests -- a significant improvement over GPT-4o's 65.3%.

OpenAI simultaneously announced plans to retire GPT-4.5 -- its largest model, released just two months ago -- from API access by July 14. The company claims GPT-4.1 delivers "similar or improved performance" at substantially lower costs. Pricing follows a tiered structure: GPT-4.1 costs $2 per million input tokens and $8 per million output tokens, while GPT-4.1 nano -- OpenAI's "cheapest and fastest model ever" -- runs at just $0.10 per million input tokens.
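
For a sense of scale, here is a quick sketch of what a single large request would cost at the quoted GPT-4.1 rates (the request sizes below are hypothetical, chosen only for illustration):

const INPUT_RATE = 2.0;  // GPT-4.1: $2 per million input tokens (quoted above)
const OUTPUT_RATE = 8.0; // GPT-4.1: $8 per million output tokens (quoted above)

function requestCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_RATE + (outputTokens / 1e6) * OUTPUT_RATE;
}

// Filling the full 1M-token context window and getting a 2,000-token reply:
console.log(requestCostUSD(1_000_000, 2_000).toFixed(2)); // "2.02"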

All models feature a June 2024 knowledge cutoff, providing more current contextual understanding than previous iterations.
Chrome

Chrome To Patch Decades-Old 'Browser History Sniffing' Flaw That Let Sites Peek At Your History (theregister.com) 34

Slashdot reader king*jojo shared this article from The Register: A 23-year-old side-channel attack for spying on people's web browsing histories will get shut down in the forthcoming Chrome 136, released last Thursday to the Chrome beta channel. At least that's the hope.

The privacy attack, referred to as browser history sniffing, involves reading the color values of web links on a page to see if the linked pages have been visited previously... Web publishers and third parties capable of running scripts have used this technique to present links on a web page to a visitor and then check how the visitor's browser set the color for those links on the rendered web page... The attack was mitigated about 15 years ago, though not effectively. Other ways to check link color information beyond the getComputedStyle method were developed... Chrome 136, due to see stable channel release on April 23, 2025, "is the first major browser to render these attacks obsolete," explained Google software engineer Kyra Seevers in a blog post.
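
For illustration, here is a minimal sketch of the classic getComputedStyle probe described above. It no longer works as written: since the mitigation of roughly 15 years ago, browsers report the unvisited color to scripts regardless of :visited state, and Chrome 136 aims to render the whole attack class obsolete.

// Sketch of the historic history-sniffing probe (illustrative only).
const VISITED_PURPLE = "rgb(85, 26, 139)"; // the traditional default :visited color

function probeVisited(urls: string[]): string[] {
  return urls.filter((url) => {
    const link = document.createElement("a");
    link.href = url;
    document.body.appendChild(link);
    const color = getComputedStyle(link).color; // leaked the real state pre-mitigation
    link.remove();
    return color === VISITED_PURPLE; // matched => the URL was in the history
  });
}

console.log(probeVisited(["https://example.com/", "https://www.theregister.com/"]));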

This is something of a turnabout for the Chrome team, which twice marked Chromium bug reports for the issue as "won't fix." David Baron, presently a Google software engineer who worked for Mozilla at the time, filed a Firefox bug report about the issue back on May 28, 2002... On March 9, 2010, Baron published a blog post outlining the issue and proposing some mitigations...

AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy.

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Google

Google DeepMind Has a Weapon in the AI Talent Wars: Aggressive Noncompete Rules (businessinsider.com) 56

The battle for AI talent is so hot that Google would rather give some employees a paid one-year vacation than let them work for a competitor. From a report: Some Google DeepMind staff in the UK are subject to noncompete agreements that prevent them from working for a competitor for up to 12 months after they finish work at Google, according to four former employees with direct knowledge of the matter who asked to remain anonymous because they were not permitted to share these details with the press.

Aggressive noncompetes are one tool tech companies wield to retain a competitive edge in the AI wars, which show no sign of slowing down as companies launch new bleeding-edge models and products at a rapid clip. When an employee signs one, they agree not to work for a competing company for a certain period of time. Google DeepMind has put some employees with a noncompete on extended garden leave. These employees are still paid by DeepMind but no longer work for it for the duration of the noncompete agreement.

Several factors, including a DeepMind employee's seniority and how critical their work is to the company, determine the length of noncompete clauses, those people said. Two of the former staffers said six-month noncompetes are common among DeepMind employees, including for individual contributors working on Google's Gemini AI models. There have been cases where more senior researchers have received yearlong stipulations, they said.

Google

Google Maps is Launching Tools To Help Cities Analyze Infrastructure and Traffic (theverge.com) 9

Google is opening up its Google Maps Platform data so that cities, developers, and other business decision makers can more easily access information about things like infrastructure and traffic. The Verge: Google is integrating new datasets for Google Maps Platform directly into BigQuery, the tech giant's fully managed data analytics service, for the first time. This should make it easier for people to access data from Google Maps Platform products, including Imagery Insights, Roads Management Insights, and Places Insights.
Google

Samsung and Google Partner To Launch Ballie Home Robot with Built-in Projector (engadget.com) 25

Samsung Electronics and Google Cloud are jointly entering the consumer robotics market with Ballie, a yellow, soccer-ball-shaped robot equipped with a video projector and powered by Google's Gemini AI models. First previewed in 2020, the long-delayed device will finally launch this summer in the US and South Korea. The mobile companion uses small wheels to navigate homes autonomously and integrates with Samsung's SmartThings platform to control smart home devices.

Running on Samsung's Tizen operating system, Ballie can manage calendars, answer questions, handle phone calls, and project video content from services including YouTube and Netflix. Samsung EVP Jay Kim described it as a "completely new Ballie" compared to the 2020 version, with Google Cloud integration being the most significant change. The robot leverages Gemini for understanding commands, searching the web, and processing visual data for navigation, while using Samsung's AI models for accessing personal information.
Social Networks

The Tumblr Revival is Real - and Gen Z is Leading the Charge (fastcompany.com) 35

"Gen Z is rediscovering Tumblr — a chaotic, cozy corner of the internet untouched by algorithmic gloss and influencer overload..." writes Fast Company, "embracing the platform as a refuge from an internet saturated with influencers and algorithm fatigue." Thanks to Gen Z, the site has found new life. As of 2025, Gen Z makes up 50% of Tumblr's active monthly users and accounts for 60% of new sign-ups, according to data shared with Business Insider's Amanda Hoover, who recently reported on the platform's resurgence. User numbers spiked in January during the near-ban of TikTok and jumped again last year when Brazil temporarily banned X. In response, Tumblr users launched dedicated communities to archive and share their favorite TikToks...

To keep up with the momentum, Tumblr introduced Reddit-style Communities in December, letting users connect over shared interests like photography and video games. In January, it debuted Tumblr TV — a TikTok-like feature that serves as both a GIF search engine and a short-form video platform. But perhaps Tumblr's greatest strength is that it isn't TikTok or Facebook. Currently the 10th most popular social platform in the U.S., according to analytics firm Similarweb, Tumblr is dwarfed by giants like Instagram and X. For its users, though, that's part of the appeal.

First launched in 2007, Tumblr peaked at over 100 million users in 2014, according to the article. Trends like Occupy Wall Street had been born on Tumblr, notes Business Insider, calling the blogging platform "Gen Z's safe space... as the rest of the social internet has become increasingly commodified, polarized, and dominated by lifestyle influencers." Tumblr was also "one of the most hyped startups in the world before fading into obsolescence — bought by Yahoo for $1.1 billion in 2013... then acquired by Verizon, and later offloaded for fractions of pennies on the dollar in a distressed sale.

"That same Tumblr, a relic of many millennials' formative years, has been having a moment among Gen Z..." "Gen Z has this romanticism of the early-2000s internet," says Amanda Brennan, an internet librarian who worked at Tumblr for seven years, leaving her role as head of content in 2021... Part of the reason young people are hanging out on old social platforms is that there's nowhere new to go. The tech industry is evolving at a slower pace than it was in the 2000s, and there's less room for disruption. Big Tech has a stranglehold on how we socialize. That leaves Gen Z to pick up the scraps left by the early online millennials and attempt to craft them into something relevant. They love Pinterest (founded in 2010) and Snapchat (2011), and they're trying out digital point-and-shoot cameras and flip phones for an early-2000s aesthetic — and learning the valuable lesson that sometimes we look better when blurrier.

More Gen Zers and millennials are signing up for Yahoo. Napster, surprising many people with its continued existence, just sold for $207 million. The trend is fueled by nostalgia for Y2K aesthetics and a longing for a time when people could make mistakes on the internet and move past them. The pandemic also brought more Gen Z users to Tumblr...

And Tumblr still works much like an older internet, where people have more control over what they see and rely less on algorithms. "You curate your own stuff; it takes a little bit of work to put everything in place, but when it's working, you see the content you want to see," says Fjodor Everaerts, a 26-year-old in Belgium who has made some 250,000 posts since he joined Tumblr when he was 14... Under Automattic, Tumblr is finally in the home that serves it, [says Ari Levine, the head of brand partnerships at Tumblr]. "We've had ups and downs along the way, but we're in the most interesting position and place that we've been in 18 years," he says... And following media companies (including Business Insider) and social platforms like Reddit, Automattic in 2024 was making a deal with OpenAI and Midjourney to allow the systems to train on Tumblr posts.


"The social internet is fractured," the article argues. ("Millennials are running Reddit. Gen Xers and Baby Boomers have a home on Facebook. Bluesky, one of the new X alternatives, has a tangible elder-millennial/Gen X vibe. Gen Zers have created social apps like BeReal and the Myspace-inspired Noplace, but they've so far generated more hype than influence....")

But in a world where megaplatforms "flatten our online experiences and reward content that fits a mold," the article suggests, "smaller communities can enrich them."
EU

As Stocks (and Cryptocurrencies) Drop After Tariffs, France Considers Retaliating Against US Big Tech (politico.eu) 277

"U.S. stock market futures plunged on Sunday evening," reports Yahoo Finance, "after the new U.S. tariff policy began collecting duties over the weekend..."

The EU will vote on $28 billion in retaliatory tariffs Wednesday, Reuters reports. (And those tariffs will be approved unless "a qualified majority of 15 EU members representing 65% of the EU's population oppose it. They would enter force in two stages, a smaller part on April 15 and the rest a month later.")

But France's Economy and Finance Minister Eric Lombard has an idea: more strictly regulating how data is used by America's Big Tech companies. Politico EU reports: "We may strengthen certain administrative requirements or regulate the use of data," Lombard said in an interview with Le Journal Du Dimanche. He added that another option could be to "tax certain activities," without being more specific.

A French government spokesperson already said last week that the EU's retaliation against U.S. tariffs could include "digital services that are currently not taxed." That suggestion was fiercely rejected by Ireland, which hosts the European headquarters of several U.S. Big Tech firms...

Technology is seen as a possible area for Europe to retaliate. The European Union has a €157 billion trade surplus in goods, which means it exports more than it imports, but it runs a deficit of €109 billion in services, including digital services. Big Tech giants like Apple, Microsoft, Amazon, Google and Meta dominate many parts of the market in Europe.

Amid the market turmoil, what about cryptocurrencies, often seen as a "proxy" for the level of risk felt by investors? In the 10 weeks after October 6, the price of Bitcoin skyrocketed 67% to $106,490 by December 10th. But by January 30th it had started dropping again, and now sits at $77,831 — still up 22% for the last six months, but down nearly 27% over the last 10 weeks. Yet even after all that volatility, Bitcoin suddenly fell again more than 6% on Sunday, reports Reuters, "as markets plunged amid tariff tensions. Ether, the second largest cryptocurrency, fell more than 10% on Sunday."
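
Those quoted percentages hang together; a quick sanity check of the implied figures:

// Sanity-checking the quoted Bitcoin moves (all prices in USD).
const dec10 = 106_490;     // December 10 peak quoted above
const oct6 = dec10 / 1.67; // implied October 6 price (~63,766) from the 67% run-up
const now = 77_831;        // current price quoted above

const pctChange = (from: number, to: number) => ((to - from) / from) * 100;

console.log(pctChange(oct6, dec10).toFixed(1)); // "67.0" -- the 10-week run-up
console.log(pctChange(dec10, now).toFixed(1));  // "-26.9" -- "down nearly 27%"
console.log(pctChange(oct6, now).toFixed(1));   // "22.1" -- "up 22% for the last six months"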
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, though these require physical access to exploit.
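
The report doesn't include the vulnerable code, but the integer-overflow class it describes follows a familiar pattern in filesystem parsers: a size computed from attacker-controlled fields wraps around and under-allocates a buffer. A generic sketch (32-bit C arithmetic simulated with Math.imul; none of this is actual GRUB2 or U-Boot code):

// Hypothetical parser math, not code from any real bootloader.
function allocSizeUnchecked(entryCount: number, entrySize: number): number {
  // Simulates C's wrapping uint32 multiplication.
  return Math.imul(entryCount, entrySize) >>> 0;
}

const entryCount = 0x40000001; // attacker-controlled field in a crafted image
console.log(allocSizeUnchecked(entryCount, 4)); // 4 -- wrapped, so the buffer is far too small
// Copying entryCount * 4 bytes into that 4-byte buffer overflows it.
// A hardened parser rejects the field first: entryCount <= MAX_BYTES / entrySize.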

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-Boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-Boot and Barebox maintainers released updates on February 19, 2025.")

They add that during their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content. Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings..."

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and have a significant impact on those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
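
The announced library is a Python package, so the following is only a conceptual sketch of the core idea: hash every file in the model's directory tree into a manifest, then sign a single digest over that manifest rather than the multi-gigabyte weights. None of the names below are the library's actual API (requires Node 20.12+):

import { createHash } from "node:crypto";
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function hashTree(dir: string): Record<string, string> {
  const manifest: Record<string, string> = {};
  // Walk the model directory and record a SHA-256 per file.
  for (const entry of readdirSync(dir, { withFileTypes: true, recursive: true })) {
    if (!entry.isFile()) continue;
    const filePath = join(entry.parentPath, entry.name);
    manifest[filePath] = createHash("sha256").update(readFileSync(filePath)).digest("hex");
  }
  return manifest;
}

const manifest = hashTree("./my-model"); // hypothetical model directory
// Serialize with sorted keys so the digest is deterministic.
const canonical = JSON.stringify(manifest, Object.keys(manifest).sort());
const digest = createHash("sha256").update(canonical).digest("hex");
console.log(digest); // this one digest is what gets signed, e.g. via Sigstore's keyless flow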

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

AI

Two Teenagers Built 'Cal AI', a Photo Calorie App With Over a Million Users (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: In a world filled with "vibe coding," Zach Yadegari, teen founder of Cal AI, stands in ironic, old-fashioned contrast. Ironic because Yadegari and his co-founder, Henry Langmack, are both just 18 years old and still in high school. Yet their story, so far, is a classic. Launched in May, Cal AI has generated over 5 million downloads in eight months, Yadegari says. Better still, he tells TechCrunch that the customer retention rate is over 30% and that the app generated over $2 million in revenue last month. [...]

The concept is simple: Take a picture of the food you are about to consume, and let the app log calories and macros for you. It's not a unique idea. For instance, the big dog in calorie counting, MyFitnessPal, has its Meal Scan feature. Then there are apps like SnapCalorie, which was released in 2023 and created by the founder of Google Lens. Cal AI's advantage, perhaps, is that it was built wholly in the age of large image models. It uses models from Anthropic and OpenAI, along with retrieval-augmented generation (RAG), to improve accuracy, and is trained on open-source food calorie and image databases from sites like GitHub.

"We have found that different models are better with different foods," Yadegari tells TechCrunch. Along the way, the founders coded through technical problems like recognizing ingredients from food packages or in jumbled bowls. The result is an app that the creators say is 90% accurate, which appears to be good enough for many dieters.
The report says Yadegari began mastering Python and C# in middle school and went on to build his first business in ninth grade -- a website called Totally Science that gave students access to unblocked games (cleverly named to evade school filters). He sold the company at age 16 to FreezeNova for $100,000.

Following the sale, Yadegari immersed himself in the startup scene, watching Y Combinator videos and networking on X, where he met co-founder Blake Anderson, known for creating ChatGPT-powered apps like RizzGPT. Together, they launched Cal AI and moved to a hacker house in San Francisco to develop their prototype.
Security

Google Launches Sec-Gemini v1 AI Model To Improve Cybersecurity Defense 2

Google has introduced Sec-Gemini v1, an experimental AI model built on its Gemini platform and tailored for cybersecurity. BetaNews reports: Sec-Gemini v1 is built on top of Gemini, but it's not just some repackaged chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant's threat reports. This gives it the ability to help with root cause analysis, threat identification, and vulnerability triage.

Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than competitors. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it's doing something right.
Access is currently limited to select researchers and professionals for early testing. If you meet those criteria, you can request access here.
Government

Trump Extends TikTok Deadline For the Second Time (cnbc.com) 74

For the second time, President Trump has extended the deadline for ByteDance to divest TikTok's U.S. operations by 75 days. The TikTok deal "requires more work to ensure all necessary approvals are signed," said Trump in a post on his Truth Social platform. The extension will "keep TikTok up and running for an additional 75 days."

"We hope to continue working in Good Faith with China, who I understand are not very happy about our Reciprocal Tariffs (Necessary for Fair and Balanced Trade between China and the U.S.A.!)," Trump added. CNBC reports: ByteDance has been in discussion with the U.S. government, the company told CNBC, adding that any agreement will be subject to approval under Chinese law. "An agreement has not been executed," a spokesperson for ByteDance said in a statement. "There are key matters to be resolved." Before Trump's decision, ByteDance faced an April 5 deadline to carry out a "qualified divestiture" of TikTok's U.S. business as required by a national security law signed by former President Joe Biden in April 2024.

ByteDance's original deadline to sell TikTok was on Jan. 19, but Trump signed an executive order when he took office the next day that gave the company 75 more days to make a deal. Although the law would penalize internet service providers and app store owners like Apple and Google for hosting and providing services to TikTok in the U.S., Trump's executive order instructed the attorney general to not enforce it.
"This proves that Tariffs are the most powerful Economic tool, and very important to our National Security!," Trump said in the Truth Social post. "We do not want TikTok to 'go dark.' We look forward to working with TikTok and China to close the Deal. Thank you for your attention to this matter!"
AI

Google's NotebookLM AI Can Now 'Discover Sources' For You 6

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

Google started rolling out both features on Wednesday. It should be available for all users in about "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
There's also an "I'm Feeling Curious" button (a reference to its iconic "I'm feeling lucky" search button) that generates sources on a random topic you might find interesting.
Piracy

Massive Expansion of Italy's Piracy Shield Underway (techdirt.com) 21

An anonymous reader quotes a report from Techdirt: Walled Culture has been following closely Italy's poorly designed Piracy Shield system. Back in December we reported how copyright companies used their access to the Piracy Shield system to order Italian Internet service providers (ISPs) to block access to all of Google Drive for the entire country, and how malicious actors could similarly use that unchecked power to shut down critical national infrastructure. Since then, the Computer & Communications Industry Association (CCIA), an international, not-for-profit association representing computer, communications, and Internet industry firms, has added its voice to the chorus of disapproval. In a letter (PDF) to the European Commission, it warned about the dangers of the Piracy Shield system to the EU economy [...]. It also raised an important new issue: the fact that Italy brought in this extreme legislation without notifying the European Commission under the so-called "TRIS" procedure, which allows others to comment on possible problems [...].

As well as Italy's failure to notify the Commission about its new legislation in advance, the CCIA believes that this anti-piracy mechanism is in breach of several other EU laws. That includes the Open Internet Regulation, which prohibits ISPs from blocking or slowing internet traffic unless required by a legal order. Blocking under the Piracy Shield also contradicts the Digital Services Act (DSA) in several respects, notably Article 9, which requires certain elements to be included in orders to act against illegal content. More broadly, the Piracy Shield is aligned with neither the Charter of Fundamental Rights nor the Treaty on the Functioning of the EU -- as it hinders freedom of expression, the freedom to provide internet services, the principle of proportionality, and the right to an effective remedy and a fair trial.

Far from taking these criticisms to heart, or acknowledging that Piracy Shield has failed to convert people to paying subscribers, the Italian government has decided to double down, and to make Piracy Shield even worse. Massimiliano Capitanio, Commissioner at AGCOM, the Italian Authority for Communications Guarantees, explained on LinkedIn how Piracy Shield was being extended in far-reaching ways (translation by Google Translate, original in Italian). [...] That is, Piracy Shield will apply to live content far beyond sports events, its original justification, and to streaming services. Even DNS and VPN providers will be required to block sites, a serious technical interference in the way the Internet operates, and a threat to people's privacy. Search engines, too, will be forced to de-index material. The only minor concession to ISPs is to unblock domain names and IP addresses that are no longer allegedly being used to disseminate unauthorized material. There are, of course, no concessions to ordinary Internet users affected by Piracy Shield blunders.
In the future, Italy's Piracy Shield will add:
- 30-minute blackout orders not only for pirate sports events, but also for other live content;
- the extension of blackout orders to VPNs and public DNS providers;
- the obligation for search engines to de-index pirate sites;
- the procedures for unblocking domain names and IP addresses obscured by Piracy Shield that are no longer used to spread pirate content;
- the new procedure to combat piracy on #linear and "on demand" television, for example to protect #film and #serietv.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Media

AV1 is Supposed To Make Streaming Better, So Why Isn't Everyone Using It? (theverge.com) 46

Despite promises of more efficient streaming, the AV1 video codec hasn't achieved widespread adoption seven years after its 2018 debut, even with backing from tech giants Netflix, Microsoft, Google, Amazon, and Meta. The Alliance for Open Media (AOMedia) claims AV1 is 30% more efficient than standards like HEVC, delivering higher-quality video at lower bandwidth while remaining royalty-free.

Major services including YouTube, Netflix, and Amazon Prime Video have embraced the technology, with Netflix encoding approximately 95% of its content using AV1. However, adoption faces significant hurdles. Many streaming platforms, including Max, Peacock, and Paramount Plus, haven't implemented AV1, partly due to hardware limitations. Devices require specific decoders to properly support AV1, though recent products from Apple, Nvidia, AMD, and Intel have begun including them. "In order to get its best features, you have to accept a much higher encoding complexity," Larry Pearlstein, associate professor at The College of New Jersey, told The Verge. "But there is also higher decoding complexity, and that is on the consumer end."
