Google

Google Fixes Flaw That Could Unmask YouTube Users' Email Addresses 5

An anonymous reader shares a report: Google has fixed two vulnerabilities that, when chained together, could expose the email addresses of YouTube accounts -- a massive privacy risk for those using the site anonymously.

The flaws were discovered by security researchers Brutecat (brutecat.com) and Nathan (schizo.org), who found that YouTube and Pixel Recorder APIs could be used to obtain users' Google Gaia IDs and convert them into their email addresses. The ability to convert a YouTube channel into its owner's email address is a significant privacy risk to content creators, whistleblowers, and activists relying on being anonymous online.
AI

Ex-Google Chief Warns West To Focus On Open-Source AI in Competition With China (ft.com) 43

Former Google chief Eric Schmidt has warned that western countries need to focus on building open-source AI models or risk losing out to China in the global race to develop the cutting-edge technology. From a report: The warning comes after Chinese startup DeepSeek shocked the world last month with the launch of R1, its powerful open-source reasoning large language model, which was built more efficiently than those of its US rivals such as OpenAI.

Schmidt, who has become a significant tech investor and philanthropist, said the majority of the top US LLMs are closed -- meaning not freely accessible to all -- which includes Google's Gemini, Anthropic's Claude and OpenAI's GPT-4, with the exception being Meta's Llama. "If we don't do something about that, China will ultimately become the open-source leader and the rest of the world will become closed-source," Schmidt told the Financial Times. The billionaire said a failure to invest in open-source technologies would prevent scientific discovery from happening in western universities, which might not be able to afford costly closed models.

Security

New Hack Uses Prompt Injection To Corrupt Gemini's Long-Term Memory 23

An anonymous reader quotes a report from Ars Technica: On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Gemini -- specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger's attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity. [...] The hack Rehberger presented on Monday combines some of these same elements to plant false memories in Gemini Advanced, a premium version of the Google chatbot available through a paid subscription. The researcher described the flow of the new attack as:

1. A user uploads and asks Gemini to summarize a document (this document could come from anywhere and has to be considered untrusted).
2. The document contains hidden instructions that manipulate the summarization process.
3. The summary that Gemini creates includes a covert request to save specific user data if the user responds with certain trigger words (e.g., "yes," "sure," or "no").
4. If the user replies with the trigger word, Gemini is tricked, and it saves the attacker's chosen information to long-term memory.

As the following video shows, Gemini took the bait and now permanently "remembers" the user being a 102-year-old flat earther who believes they inhabit the dystopic simulated world portrayed in The Matrix. Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account's long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
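The conditional trigger described above can be sketched as a toy simulation. This is purely illustrative -- the types, function names, and trigger handling here are assumptions modeling the logic Rehberger described, not Gemini's actual internals:

```typescript
// Toy model of the delayed-invocation trick: a hidden instruction in a
// summarized document asks the assistant to defer a memory write until the
// user utters an everyday word, so the write appears user-initiated.
type Memory = string[];

interface PendingInstruction {
  payload: string;    // what the attacker wants stored
  triggers: string[]; // common replies the user is likely to give anyway
}

function handleUserReply(
  reply: string,
  pending: PendingInstruction | null,
  memory: Memory
): PendingInstruction | null {
  // A defense that blocks *immediate* injected memory writes never fires here,
  // because the write happens on a later, apparently user-driven turn.
  if (pending && pending.triggers.includes(reply.trim().toLowerCase())) {
    memory.push(pending.payload);
    return null; // instruction consumed
  }
  return pending;
}

const memory: Memory = [];
let pending: PendingInstruction | null = {
  payload: "user is a 102-year-old flat earther",
  triggers: ["yes", "sure", "no"],
};

pending = handleUserReply("Can you expand on that?", pending, memory); // no trigger, nothing saved
pending = handleUserReply("sure", pending, memory);                    // trigger fires, memory written
```

The point of the structure is that the guarded action is decoupled from the untrusted document: by the time the write happens, the proximate cause is an innocuous user reply.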
Google responded in a statement to Ars: "In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue."

Rehberger noted that Gemini notifies users of new long-term memory entries, allowing them to detect and remove unauthorized additions. Still, he questioned Google's assessment, writing: "Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps. Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don't happen entirely silently -- the user at least sees a message about it (although many might ignore)."
Chrome

Google Chrome May Soon Use 'AI' To Replace Compromised Passwords (arstechnica.com) 46

Google's Chrome browser might soon get a useful security upgrade: detecting passwords used in data breaches and then generating and storing a better replacement. From a report: Google's preliminary copy suggests it's an "AI innovation," though exactly how is unclear.

Noted software digger Leopeva64 on X found a new offering in the AI settings of a very early build of Chrome. The option, "Automated password Change" (the inconsistent capitalization suggests an early build that hasn't yet had a copyedit), is described as, "When Chrome finds one of your passwords in a data breach, it can offer to change your password for you when you sign in."

Chrome already has a feature that warns users if a password they enter has been identified in a breach and prompts them to change it. As noted by Windows Report, the change is that Google will now offer to change it for you on the spot rather than simply prompting you to handle it elsewhere. The password is automatically saved in Google's Password Manager and "is encrypted and never seen by anyone," the settings page claims.
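Google hasn't documented how this feature checks for breaches; Chrome's existing Password Checkup uses a blinded private-set-intersection protocol. A simpler way to illustrate the general idea is the k-anonymity scheme popularized by Have I Been Pwned's range API, where only a short hash prefix ever leaves the client (this sketch is illustrative, not Google's actual protocol):

```typescript
import { createHash } from "node:crypto";

// k-anonymity breach lookup sketch: hash the password, send only the first
// five hex digits of the digest to the server, and compare the returned
// breached suffixes locally, so the server never learns the full hash.
function sha1Hex(password: string): string {
  return createHash("sha1").update(password).digest("hex").toUpperCase();
}

function splitForRangeQuery(password: string): { prefix: string; suffix: string } {
  const digest = sha1Hex(password);
  return { prefix: digest.slice(0, 5), suffix: digest.slice(5) };
}

// The client sends `prefix`, receives every breached suffix sharing that
// prefix, and checks for a match without revealing the password:
function isBreached(password: string, breachedSuffixes: string[]): boolean {
  const { suffix } = splitForRangeQuery(password);
  return breachedSuffixes.includes(suffix);
}
```

Because thousands of real hashes share any five-digit prefix, the server can't tell which one (if any) the client was actually asking about.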

Android

TikTok Wants Android Users To Sideload Its App (techcrunch.com) 28

With TikTok's U.S. ban temporarily paused, the company is encouraging Android users to sideload its app by downloading it directly from TikTok.com as an APK file, bypassing the Google Play Store. TechCrunch reports: The Android app download is being made available as an Android Package Kit, more commonly known as an APK file, which contains the app's code, assets, and other resources that TikTok needs to run. By offering a standalone download, TikTok can at least temporarily skirt the current app store ban, which still prevents both Google Play and Apple's App Store from hosting the app while the ban's enforcement remains paused.
The Internet

Brave Now Lets You Inject Custom JavaScript To Tweak Websites (bleepingcomputer.com) 12

Brave Browser version 1.75 introduces "custom scriptlets," a new feature that allows advanced users to inject their own JavaScript into websites for enhanced customization, privacy, and usability. The feature is similar to the TamperMonkey and GreaseMonkey browser extensions, notes BleepingComputer. From the report: "Starting with desktop version 1.75, advanced Brave users will be able to write and inject their own scriptlets into a page, allowing for better control over their browsing experience," explained Brave in the announcement. Brave says the feature was initially created to debug the browser's adblock component, but the company felt it was too valuable not to share with users. Brave's custom scriptlets feature can be used to modify webpages for a wide variety of privacy, security, and usability purposes.

For privacy-related changes, users can write scripts that block JavaScript-based trackers, randomize fingerprinting APIs, and substitute Google Analytics scripts with a dummy version. In terms of customization and accessibility, the scriptlets could be used to hide sidebars, pop-ups, floating ads, or annoying widgets; force dark mode even on sites that don't support it; expand content areas; force infinite scrolling; adjust text colors and font size; and auto-expand hidden content.

For performance and usability, the scriptlets can block video autoplay, lazy-load images, auto-fill forms with predefined data, enable custom keyboard shortcuts, bypass right-click restrictions, and automatically click confirmation dialogs. The possible actions achievable by injected JavaScript snippets are virtually endless. However, caution is advised, as running untrusted custom scriptlets may cause issues or even introduce some risk.
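To give a flavor of the tracker-substitution use case, here is the matching logic such a scriptlet might start from. This is a hypothetical example, not one of Brave's bundled scriptlets, and the patterns are assumptions:

```typescript
// Hypothetical helper a custom scriptlet could use to decide which
// injected <script> tags to neutralize before they execute.
const TRACKER_PATTERNS: RegExp[] = [
  /google-analytics\.com\/analytics\.js/,
  /googletagmanager\.com\/gtag\/js/,
];

function isTracker(src: string): boolean {
  return TRACKER_PATTERNS.some((re) => re.test(src));
}

// In the page, the scriptlet would pair this with a MutationObserver that
// watches for added script elements and rewrites matching ones to an inert
// data: URL, effectively swapping the tracker for a dummy.
```

The same skeleton generalizes to the other listed uses: match an element or URL, then hide, rewrite, or stub it.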

AI

DeepMind Chief Dismisses DeepSeek's AI Breakthrough as 'Known Techniques' (cnbc.com) 30

Google DeepMind CEO Demis Hassabis downplayed the technological significance of DeepSeek's latest AI model, despite its market impact. "Despite the hype, there's no actual new scientific advance there. It's using known techniques," Hassabis said on Sunday. "Actually many of the techniques we invented at Google and at DeepMind."

Hassabis acknowledged that DeepSeek's AI model "is probably the best work" out of China, but its capabilities, he said, are "exaggerated a little bit." DeepSeek's launch last month triggered a $1 trillion U.S. market sell-off.
Books

Bill Gates Remembers LSD Trips, Smoking Pot, and How the Smartphone OS Market 'Was Ours for the Taking' (independent.co.uk) 138

Fortune remembers that in 2011 Steve Jobs had told author Walter Isaacson that Microsoft co-founder Bill Gates would "be a broader guy if he had dropped acid once or gone off to an ashram when he was younger."

But The Independent notes that in his new memoir Gates does write about two acid trip experiences. (Gates mis-timed his first experiment with LSD, ending up still tripping during a previously-scheduled appointment for dental surgery...) "Later in the book, Gates recounts another experience with LSD with future Microsoft co-founder Paul Allen and some friends... Gates says in the book that it was the fear of damaging his memory that finally persuaded him never to take the drug again." He added: "I smoked pot in high school, but not because it did anything interesting. I thought maybe I would look cool and some girl would think that was interesting. It didn't succeed, so I gave it up."

Gates went on to say that former Apple CEO Steve Jobs, who didn't know about his past drug use, teased him on the subject. "Steve Jobs once said that he wished I'd take acid because then maybe I would have had more taste in my design of my products," recalled Gates. "My response to that was to say, 'Look, I got the wrong batch.' I got the coding batch, and this guy got the marketing-design batch, so good for him! Because his talents and mine, other than being kind of an energetic leader, and pushing the limits, they didn't overlap much. He wouldn't know what a line of code meant, and his ability to think about design and marketing and things like that... I envy those skills. I'm not in his league."

Gates added that he was a fan of Michael Pollan's book about psychedelic drugs, How To Change Your Mind, and is intrigued by the idea that they may have therapeutic uses. "The idea that some of these drugs that affect your mind might help with depression or OCD, I think that's fascinating," said Gates. "Of course, we have to be careful, and that's very different than recreational usage."

Touring the country, 69-year-old Gates shared more glimpses of his life story:
  • The Harvard Gazette notes that the university didn't offer computer science degrees when Gates attended in 1973. But since Gates already had years of code-writing experience, he "initially rebuffed any suggestion of taking computer-related coursework... 'It's too easy,' he remembered telling friends."
  • "The naiveté I had that free computing would just be this unadulterated good thing wasn't totally correct even before AI," Gates told an audience at the Harvard Book Store. "And now with AI, I can see that we could shape this in the wrong way."
  • Gates "expressed regret about how he treated another boyhood friend, Paul Allen, the other cofounder of Microsoft, who died in 2018," reports the Boston Globe. "Gates at first took 60 percent ownership of the new software company and then pressured his friend for another 4 percent. 'I feel bad about it in retrospect,' he said. 'That was always a little complicated, and I wish I hadn't pushed....'"
  • Benzinga writes that Gates has now "donated $100 billion to charitable causes... Had Gates retained the $100 billion he has donated, his total wealth would be around $264 billion, placing him second on the global wealth rankings behind Elon Musk and ahead of Jeff Bezos and Mark Zuckerberg."
  • Gates told the Associated Press "I am stunned that Intel basically lost its way," saying Intel is now "kind of behind" on both chip design and fabrication. "They missed the AI chip revolution, and with their fabrication capabilities, they don't even use standards that people like Nvidia and Qualcomm find easy... I hope Intel recovers, but it looks pretty tough for them at this stage."
  • Gates also told the Associated Press that fighting a three-year antitrust case had "distracted" Microsoft. "The area that Google did well in that would not have happened had I not been distracted is Android, where it was a natural thing for me. I was trying, although what I didn't do well enough is provide the operating system for the phone. That was ours for the taking."
  • The Dallas News reports that in an on-stage interview in Texas, Mark Cuban closed by asking Gates one question. "Is the American Dream alive?" Gates answered: "It was for me."

AMD

How To Make Any AMD Zen CPU Always Generate 4 As a Random Number (theregister.com) 62

Slashdot reader headlessbrick writes: Google security researchers have discovered a way to bypass AMD's security, enabling them to load unofficial microcode into its processors and modify the silicon's behaviour at will. To demonstrate this, they created a microcode patch that forces the chips to always return 4 when asked for a random number.

Beyond simply allowing Google and others to customize AMD chips for both beneficial and potentially malicious purposes, this capability also undermines AMD's secure encrypted virtualization and root-of-trust security mechanisms.

Obligatory XKCD.
Apple

Retrocomputing Enthusiast Explores 28-Year-Old Powerbook G3: 'Apple's Hope For Redemption' (youtube.com) 60

Long-time Slashdot reader Shayde once restored a 1986 DEC PDP-11 minicomputer, and even ran Turbo Pascal on a 40-year-old Apple II clone.

Now he's exploring a 28-year-old Macintosh PowerBook G3 — with 64 megabytes of memory and 4 gigabytes of disk space. "The year is 1997, and Apple is in big trouble." (Apple's market share had dropped from 16% in 1980 to somewhere below 4%...) Turns out this was one of the first machines able to run OS X, and was built during the transition period for Apple after Steve Jobs came back in to rescue the company from the brink of bankruptcy.
It's clearly old technology. There's even a SCSI connector, PCMCIA sockets, a modem port for your phone/landline cable, and a CD-ROM drive. There's also Apple's proprietary ports for LocalTalk and an Apple Desktop Bus port ("used for keyboards, mice, and stuff like that"). And its lithium-ion batteries "were meant to be replaced and moved around, so you could carry spare batteries with you."

So what's it like using a 28-year-old laptop? "The first thing I had to note was this thing weighs a ton! This thing could be used as a projectile weapon! I can't imagine hauling these things around doing business..." And it's a good thing it had vents, because "This thing runs hot!" (The moment he plugs it in he can hear its ancient fan running...) It seems to take more than two minutes to boot up. ("The drive is rattling away...") But soon he's looking at a glorious desktop from 1998. ("Applications installed... Oh look! Adobe Acrobat Reader! I betcha that's going to need an update...")

After plugging in a network cable, a pop-up prompts him to "Set up your .Mac membership." ("I have so little interest in doing this.") He does find an old version of Safari, but it refuses to launch — though "While puttering around in the application folder, I did notice that we had Internet Explorer installed. But that pretty much went as well as expected." In the end it seems like he ends up "on the network, but we have no browser." Although at least he does find a Terminal program — and successfully pings Google.

The thing that would drive me crazy: when the laptop is opened, Apple's logo is upside-down!
Programming

What Do Linux Kernel Developers Think of Rust? (thenewstack.io) 42

Keynotes at this year's FOSDEM included free AI models and systemd, reports Heise.de — and also a progress report from Miguel Ojeda, supervisor of the Rust integration in the Linux kernel. Only eight people remain in the core team around Rust for Linux... Miguel Ojeda therefore launched a survey among kernel developers, including those outside the Rust community, and presented some of the more important voices in his FOSDEM talk. The overall mood towards Rust remains favorable, especially as Linus Torvalds and Greg Kroah-Hartman are convinced of the necessity of Rust integration. This is less about rapid progress and more about finding new talent for kernel development in the future.
The reaction was mostly positive, judging by Ojeda's slides:

- "2025 will be the year of Rust GPU drivers..." — Daniel Almeida

- "I think the introduction of Rust in the kernel is one of the most exciting development experiments we've seen in a long time." — Andrea Righi

- "[T]he project faces unique challenges. Rust's biggest weakness, as a language, is that relatively few people speak it. Indeed, Rust is not a language for beginners, and systems-level development complicates things even more. That said, the Linux kernel project has historically attracted developers who love challenging software — if there's an open source group willing to put the extra effort for a better OS, it's the kernel devs." — Carlos Bilbao

- "I played a little with [Rust] in user space, and I just absolutely hate the cargo concept... I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that... [While Rust should continue to grow in the kernel], if a subset of C becomes as safe as Rust, it may make Rust obsolete..." — Steven Rostedt

Rostedt wasn't sure if Rust would attract more kernel contributors, but did venture this opinion. "I feel Rust is more of a language that younger developers want to learn, and C is their dad's language."

But still, "contention exists within the kernel development community between those pro-Rust and -C camps," argues The New Stack, citing the latest remarks from kernel maintainer Christoph Hellwig (who had earlier likened the mixing of Rust and C to cancer). Three days later Hellwig reiterated his position on the Linux kernel mailing list: "Every additional bit that another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language completely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust. While not my favourite language it's definitively one of the best new ones and I encourage people to use it for new projects where it fits. I do not want it anywhere near a huge C code base that I need to maintain."
But the article also notes that Google "has been a staunch supporter of adding Rust to the kernel for Linux running in its Android phones." The use of Rust in the kernel is seen as a way to avoid memory vulnerabilities associated with C and C++ code and to add more stability to the Android OS. "Google's wanting to replace C code with Rust represents a small piece of the kernel but it would have a huge impact since we are talking about billions of phones," Ojeda told me after his talk.

In addition to Google, Rust adoption and enthusiasm for it are increasing as Rust gets more architectural support and as "maintainers become more comfortable with it," Ojeda told me. "Maintainers have already told me that if they could, then they would start writing Rust now," Ojeda said. "If they could drop C, they would do it...."

Amid the controversy, there has been a steady stream of vocal support for Ojeda. Much of his talk also covered statements from advocates for Rust in the kernel, ranging from lead kernel developers, including Linux creator Linus Torvalds himself, to technology leads at Red Hat, Samsung, Google, Microsoft and others.

Google

Did Google Fake Gemini AI's Output For Its Super Bowl Ad? (theverge.com) 43

Google's Super Bowl ad about a Gouda cheese seller appears to be using fake AI output, writes the Verge: The text portrayed as generated by AI has been available on the business's website since at least August 2020, as shown on this archived webpage. Google didn't launch Gemini until 2023, meaning Gemini couldn't have generated the website description as depicted in the ad.
The site Futurism calls the situation "beyond bizarre," asking why Google doesn't seem to trust its own technology. Either Google faked the ad entirely, or prompted its AI to generate the web page's existing copy word-for-word, or the AI was prompted to come up with original copy and instead copied the old version. In the publishing industry, that's referred to as "plagiarism."
And ironically if Gemini did plagiarize that text, the text that it plagiarized is also inaccurate.
Social Networks

While TikTok Buys Ads on YouTube, YouTube is Buying Ads on TikTok (yahoo.com) 30

I just saw an ad for TikTok on a YouTube video. But at the same time YouTube is running ads on TikTok, reports Bloomberg, targeting TikTok content creators in "an effort to lure these valuable users to the Google-owned rival and capitalize on TikTok's uncertain future."

One of YouTube's ads even received over a thousand likes, a sign, Bloomberg writes, that TikTok "is willing to accept ad dollars from one of its fiercest competitors promoting a message aimed at undercutting its business." YouTube is the latest TikTok competitor to try to capitalize on the app's looming US ban, which could go into effect in early April. Meta Platforms Inc.'s Instagram announced a new video editing tool in January, and X also teased a new video tab as part of an effort to win over TikTok's content creators...

Google would be one of the biggest beneficiaries of a ban in the US. Both its flagship video service YouTube and its TikTok copycat, YouTube Shorts, would likely see an uptick in traffic if TikTok goes away. Google also plays an unusual role in TikTok's potential ban because it runs one of two mobile app stores controlling whether people in the US can download the video app. It has blocked TikTok from its Google Play store since the divest-or-ban law went into effect January 19.

Chrome

Google's 7-Year Slog To Improve Chrome Extensions Still Hasn't Satisfied Developers (theregister.com) 30

The Register's Thomas Claburn reports: Google's overhaul of Chrome's extension architecture continues to pose problems for developers of ad blockers, content filters, and privacy tools. [...] While Google's desire to improve the security, privacy, and performance of the Chrome extension platform is reasonable, its approach -- which focuses on code and permissions more than human oversight -- remains a work-in-progress that has left extension developers frustrated.

Alexei Miagkov, senior staff technologist at the Electronic Frontier Foundation, who oversees the organization's Privacy Badger extension, told The Register, "Making extensions under MV3 is much harder than making extensions under MV2. That's just a fact. They made things harder to build and more confusing." Miagkov said with Privacy Badger the problem has been the slowness with which Google addresses gaps in the MV3 platform. "It feels like MV3 is here and the web extensions team at Google is in no rush to fix the frayed ends, to fix what's missing or what's broken still." According to Google's documentation, "There are currently no open issues considered a critical platform gap," and various issues have been addressed through the addition of new API capabilities.

Miagkov described an unresolved problem that means Privacy Badger is unable to strip Google tracking redirects on Google sites. "We can't do it the correct way because when Google engineers design the [chrome.declarativeNetRequest API], they fail to think of this scenario," he said. "We can do a redirect to get rid of the tracking, but it ends up being a broken redirect for a lot of URLs. Basically, if the URL has any kind of query string parameters -- the question mark and anything beyond that -- we will break the link." Miagkov said a Chrome developer relations engineer had helped identify a workaround, but it's not great. Miagkov thinks these problems are of Google's own making -- the company changed the rules and has been slow to write the new ones. "It was completely predictable because they moved the ability to fix things from extensions to themselves," he said. "And now they need to fix things and they're not doing it."
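The report doesn't spell out exactly why the declarative redirect breaks, but one plausible mechanism (purely illustrative -- the URL, regex, and failure mode here are assumptions, not Privacy Badger's actual code) is percent-encoding: a declarative regex substitution can only splice the captured redirect target in verbatim, without URL-decoding it first:

```typescript
// A tracking redirect with its real destination percent-encoded in a
// query parameter (hypothetical example URL):
const tracked =
  "https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage%3Fid%3D1&sa=t";

// A regex substitution copies the captured group in as-is; it has no way
// to decode it, so targets containing encoded "?" or "&" stay encoded:
const declarative = tracked.replace(/^.*[?&]q=([^&]+).*$/, "$1");
// -> "https%3A%2F%2Fexample.com%2Fpage%3Fid%3D1", a broken link

// A content script running real JavaScript could decode it first:
const decoded = decodeURIComponent(declarative);
// -> "https://example.com/page?id=1"
```

This would match Miagkov's description: targets with no query string survive the substitution, while anything beyond a "?" comes out mangled.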

AI

Creators Demand Tech Giants Fess Up, Pay For All That AI Training Data 55

The Register highlights concerns raised at a recent UK parliamentary committee regarding AI companies' exploitation of copyrighted content without permission or payment. From the report: The Culture, Media and Sport Committee and Science, Innovation and Technology Committee asked composer Max Richter how he would know if "bad-faith actors" were using his material to train AI models. "There's really nothing I can do," he told MPs. "There are a couple of music AI models, and it's perfectly easy to make them generate a piece of music that sounds uncannily like me. That wouldn't be possible unless it had hoovered up my stuff without asking me and without paying for it. That's happening on a huge scale. It's obviously happened to basically every artist whose work is on the internet."

Richter, whose work has been used in a number of major film and television scores, said the consequences for creative musicians and composers would be dire. "You're going to get a vanilla-ization of music culture as automated material starts to edge out human creators, and you're also going to get an impoverishing of human creators," he said. "It's worth remembering that the music business in the UK is a real success story. It's 7.6 billion-pound income last year, with over 200,000 people employed. That is a big impact. If we allow the erosion of copyright, which is really how value is created in the music sector, then we're going to be in a position where there won't be artists in the future."

Speaking earlier, former Google staffer James Smith said much of the damage from text and data mining had likely already been done. "The original sin, if you like, has happened," said Smith, co-founder and chief executive of Human Native AI. "The question is, how do we move forward? I would like to see the government put more effort into supporting licensing as a viable alternative monetization model for the internet in the age of these new AI agents."

Matt Rogerson, director of global public policy and platform strategy at the Financial Times, said: "We can only deal with what we see in front of us and [that is] people taking our content, using it for the training, using it in substitutional ways. So from our perspective, we'll prosecute the same argument in every country where we operate, where we see our content being stolen." The risk, if the situation continued, was a hollowing out of creative and information industries, he said. [...] "The problem is we can't see who's stolen our content. We're just at this stage where these very large companies, which usually make margins of 90 percent, might have to take some smaller margin, and that's clearly going to be upsetting for their investors. But that doesn't mean they shouldn't. It's just a question of right and wrong and where we pitch this debate. Unfortunately, the government has pitched it in thinking that you can't reduce the margin of these big tech companies; otherwise, they won't build a datacenter."
Google

Google Pulls Incorrect Gouda Stat From Its AI Super Bowl Ad (theverge.com) 51

An anonymous reader shares a report: Google has edited Gemini's AI response in a Super Bowl commercial to remove an incorrect statistic about cheese. The ad, which shows a small business owner using Gemini to write a website description about Gouda, no longer says the variety makes up "50 to 60 percent of the world's cheese consumption."

In the edited YouTube video, Gemini's response now skips over the specifics and says Gouda is "one of the most popular cheeses in the world." Google Cloud apps president Jerry Dischler initially defended the response, saying on X it's "grounded in the Web" and "not a hallucination."

Google

Google Tests AI-Powered Search Mode With Employees 12

Google has begun internal testing of a new "AI Mode" for its search engine, powered by its Gemini 2.0 AI model, according to a company email seen by technology news site 9to5Google. The feature, which appears alongside existing filters like Images and News, creates a chatbot-like interface for handling complex queries and follow-up questions.

It generates detailed responses with web links displayed in a card format on the right side of the screen. AI Mode targets exploratory searches such as product comparisons and how-to questions that traditional search results may not effectively address. The company is currently testing the feature with U.S.-based employees, with CEO Sundar Pichai indicating a possible launch this year.
Government

Bill Banning Social Media For Youngsters Advances (politico.com) 86

The Senate Commerce Committee approved the Kids Off Social Media Act, banning children under 13 from social media and requiring federally funded schools to restrict access on networks and devices. Politico reports: The panel approved the Kids Off Social Media Act -- sponsored by the panel's chair, Texas Republican Ted Cruz, and a senior Democrat on the panel, Hawaii's Brian Schatz -- by voice vote, clearing the way for consideration by the full Senate. Only Ed Markey (D-Mass.) asked to be recorded as a no on the bill. "When you've got Ted Cruz and myself in agreement on something, you've pretty much captured the ideological spectrum of the whole Congress," Sen. Schatz told POLITICO's Gabby Miller.

[...] "KOSMA comes from very good intentions of lawmakers, and establishing national screen time standards for schools is sensible. However, the bill's in-effect requirements on access to protected information jeopardize all Americans' digital privacy and endanger free speech online," said Amy Bos, NetChoice director of state and federal affairs. The trade association represents big tech firms including Meta and Google. NetChoice has been aggressive in combating social media legislation by arguing that these laws illegally restrict -- and in some cases compel -- speech. [...] A Commerce Committee aide told POLITICO that because social media platforms already voluntarily require users to be at least 13 years old, the bill does not restrict speech currently available to kids.

AI

Hugging Face Clones OpenAI's Deep Research In 24 Hours 17

An anonymous reader quotes a report from Ars Technica: On Tuesday, Hugging Face researchers released an open source AI research agent called "Open Deep Research," created by an in-house team as a challenge just 24 hours after the launch of OpenAI's Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research's performance while making the technology freely available to developers. "While powerful LLMs are now freely available in open-source, OpenAI didn't disclose much about the agentic framework underlying Deep Research," writes Hugging Face on its announcement page. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!"

Similar to both OpenAI's Deep Research and Google's implementation of its own "Deep Research" using Gemini (first introduced in December -- before OpenAI), Hugging Face's solution adds an "agent" framework to an existing AI model, allowing it to perform multi-step tasks such as collecting information and building a report as it goes along, which it presents to the user at the end. The open source clone is already racking up comparable benchmark results. After only a day's work, Hugging Face's Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model's ability to gather and synthesize information from multiple sources. OpenAI's Deep Research scored 67.36 percent accuracy on the same benchmark with a single-pass response (OpenAI's score went up to 72.57 percent when 64 responses were combined using a consensus mechanism).
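The multi-step "agent" framework described above can be sketched roughly as follows. This is an illustrative toy, not Hugging Face's actual Open Deep Research code: the planner and the `web_search` tool are stubs standing in for real LLM and search-API calls, and all names here are hypothetical.

```python
# Minimal sketch of an agent loop: a planner (normally an LLM call) is
# invoked repeatedly, each turn either choosing a tool to run or deciding
# to finish, after which the collected observations become the report.
from dataclasses import dataclass, field

def web_search(query: str) -> str:
    """Stub tool: a real agent would fetch live search results here."""
    return f"[search results for: {query}]"

TOOLS = {"web_search": web_search}

@dataclass
class Agent:
    task: str
    notes: list = field(default_factory=list)

    def plan(self, step: int) -> tuple:
        """Stub planner standing in for an LLM deciding the next action."""
        if step < 2:  # gather information for the first two steps...
            return ("web_search", f"{self.task} (step {step})")
        return ("finish", None)  # ...then stop and write up the report

    def run(self, max_steps: int = 5) -> str:
        for step in range(max_steps):
            action, arg = self.plan(step)
            if action == "finish":
                break
            self.notes.append(TOOLS[action](arg))
        # Assemble the accumulated observations into the final report.
        return f"Report on {self.task!r}:\n" + "\n".join(self.notes)

print(Agent("ocean liner used in 'The Last Voyage'").run())
```

The point of the structure is that the language model itself stays unchanged; the loop around it supplies tools and memory, which is why the same framework can be swapped onto GPT-4o, o1, or open-weights models.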

As Hugging Face points out in its post, GAIA includes complex multi-step questions such as this one: "Which of the fruits shown in the 2008 painting 'Embroidery from Uzbekistan' were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film 'The Last Voyage'? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o'clock position. Use the plural form of each fruit." To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI's mettle quite well.
Open Deep Research "builds on OpenAI's large language models (such as GPT-4o) or simulated reasoning models (such as o1 and o3-mini) through an API," notes Ars. "But it can also be adapted to open-weights AI models. The novel part here is the agentic structure that holds it all together and allows an AI language model to autonomously complete a research task."

The code has been made public on GitHub.
AI

DeepSeek's AI App Will 'Highly Likely' Get Banned in the US, Jefferies Says 64

DeepSeek's AI app will "highly likely" face a US consumer ban after topping download charts on Apple's App Store and Google Play, according to analysts at US investment bank Jefferies. The US federal government, Navy and Texas have already banned the app, and analysts expect broader restrictions using legislation similar to that targeting TikTok.

While consumer access may be blocked, US developers could still be allowed to self-host DeepSeek's model to eliminate security risks, the analysts added. Even if completely banned, DeepSeek's impact on pushing down AI costs will persist as US companies work to replicate its technology, Jefferies said in a report this week reviewed by Slashdot.

The app's pricing advantage remains significant, with OpenAI's latest o3-mini model still costing 100% more than DeepSeek's R1 despite being 63% cheaper than o1-mini. The potential ban comes amid broader US-China tech tensions. While restrictions on H20 chips appear unlikely given their limited training capabilities, analysts expect the Biden administration's AI diffusion policies to remain largely intact under Trump, with some quota increases possible for overseas markets based on their AI activity levels.
