AI

DeepMind Chief Dismisses DeepSeek's AI Breakthrough as 'Known Techniques' (cnbc.com) 30

Google DeepMind CEO Demis Hassabis downplayed the technological significance of DeepSeek's latest AI model, despite its market impact. "Despite the hype, there's no actual new scientific advance there. It's using known techniques," Hassabis said on Sunday. "Actually many of the techniques we invented at Google and at DeepMind."

Hassabis acknowledged that DeepSeek's AI model "is probably the best work" out of China, but its capabilities, he said, are "exaggerated a little bit." DeepSeek's launch last month triggered a $1 trillion U.S. market sell-off.
Books

Bill Gates Remembers LSD Trips, Smoking Pot, and How the Smartphone OS Market 'Was Ours for the Taking' (independent.co.uk) 138

Fortune remembers that in 2011 Steve Jobs had told author Walter Isaacson that Microsoft co-founder Bill Gates would "be a broader guy if he had dropped acid once or gone off to an ashram when he was younger."

But The Independent notes that in his new memoir Gates does write about two acid trip experiences. (Gates mistimed his first experiment with LSD, ending up still tripping during a previously-scheduled appointment for dental surgery...) "Later in the book, Gates recounts another experience with LSD with future Microsoft co-founder Paul Allen and some friends... Gates says in the book that it was the fear of damaging his memory that finally persuaded him never to take the drug again." He added: "I smoked pot in high school, but not because it did anything interesting. I thought maybe I would look cool and some girl would think that was interesting. It didn't succeed, so I gave it up."

Gates went on to say that former Apple CEO Steve Jobs, who didn't know about his past drug use, teased him on the subject. "Steve Jobs once said that he wished I'd take acid because then maybe I would have had more taste in my design of my products," recalled Gates. "My response to that was to say, 'Look, I got the wrong batch.' I got the coding batch, and this guy got the marketing-design batch, so good for him! Because his talents and mine, other than being kind of an energetic leader, and pushing the limits, they didn't overlap much. He wouldn't know what a line of code meant, and his ability to think about design and marketing and things like that... I envy those skills. I'm not in his league."

Gates added that he was a fan of Michael Pollan's book about psychedelic drugs, How To Change Your Mind, and is intrigued by the idea that they may have therapeutic uses. "The idea that some of these drugs that affect your mind might help with depression or OCD, I think that's fascinating," said Gates. "Of course, we have to be careful, and that's very different than recreational usage."

Touring the country, 69-year-old Gates shared more glimpses of his life story:
  • The Harvard Gazette notes that the university didn't offer computer science degrees when Gates attended in 1973. But since Gates already had years of code-writing experience, he "initially rebuffed any suggestion of taking computer-related coursework... 'It's too easy,' he remembered telling friends."
  • "The naiveté I had that free computing would just be this unadulterated good thing wasn't totally correct even before AI," Gates told an audience at the Harvard Book Store. "And now with AI, I can see that we could shape this in the wrong way."
  • Gates "expressed regret about how he treated another boyhood friend, Paul Allen, the other cofounder of Microsoft, who died in 2018," reports the Boston Globe. "Gates at first took 60 percent ownership of the new software company and then pressured his friend for another 4 percent. 'I feel bad about it in retrospect,' he said. 'That was always a little complicated, and I wish I hadn't pushed....'"
  • Benzinga writes that Gates has now "donated $100 billion to charitable causes... Had Gates retained the $100 billion he has donated, his total wealth would be around $264 billion, placing him second on the global wealth rankings behind Elon Musk and ahead of Jeff Bezos and Mark Zuckerberg."
  • Gates told the Associated Press "I am stunned that Intel basically lost its way," saying Intel is now "kind of behind" on both chip design and fabrication. "They missed the AI chip revolution, and with their fabrication capabilities, they don't even use standards that people like Nvidia and Qualcomm find easy... I hope Intel recovers, but it looks pretty tough for them at this stage."
  • Gates also told the Associated Press that fighting a three-year antitrust case had "distracted" Microsoft. "The area that Google did well in that would not have happened had I not been distracted is Android, where it was a natural thing for me. I was trying, although what I didn't do well enough is provide the operating system for the phone. That was ours for the taking."
  • The Dallas News reports that in an on-stage interview in Texas, Mark Cuban closed by asking Gates one question. "Is the American Dream alive?" Gates answered: "It was for me."

AMD

How To Make Any AMD Zen CPU Always Generate 4 As a Random Number (theregister.com) 62

Slashdot reader headlessbrick writes: Google security researchers have discovered a way to bypass AMD's security, enabling them to load unofficial microcode into its processors and modify the silicon's behaviour at will. To demonstrate this, they created a microcode patch that forces the chips to always return 4 when asked for a random number.

Beyond simply allowing Google and others to customize AMD chips for both beneficial and potentially malicious purposes, this capability also undermines AMD's secure encrypted virtualization and root-of-trust security mechanisms.

Obligatory XKCD.
Apple

Retrocomputing Enthusiast Explores 28-Year-Old PowerBook G3: 'Apple's Hope For Redemption' (youtube.com) 60

Long-time Slashdot reader Shayde once restored a 1986 DEC PDP-11 minicomputer, and even ran Turbo Pascal on a 40-year-old Apple II clone.

Now he's exploring a 27-year-old Macintosh PowerBook G3 — with 64 megabytes of memory and 4 gigabytes of disk space. "The year is 1997, and Apple is in big trouble." (Apple's market share had dropped from 16% in 1980 to somewhere below 4%...) Turns out this was one of the first machines able to run OS X, and it was built during Apple's transition period, after Steve Jobs came back to rescue the company from the brink of bankruptcy.
It's clearly old technology. There's even a SCSI connector, PCMCIA sockets, a modem port for your phone/landline cable, and a CD-ROM drive. There are also Apple's proprietary LocalTalk port and an Apple Desktop Bus port ("used for keyboards, mice, and stuff like that"). And its lithium-ion batteries "were meant to be replaced and moved around, so you could carry spare batteries with you."

So what's it like using a 27-year-old laptop? "The first thing I had to note was this thing weighs a ton! This thing could be used as a projectile weapon! I can't imagine hauling these things around doing business..." And it's a good thing it had vents, because "This thing runs hot!" (The moment he plugs it in he can hear its ancient fan running...) It seems to take more than two minutes to boot up. ("The drive is rattling away...") But soon he's looking at a glorious desktop from 1998. ("Applications installed... Oh look! Adobe Acrobat Reader! I betcha that's going to need an update...")

After he plugs in a network cable, a pop-up prompts him to "Set up your .Mac membership." ("I have so little interest in doing this.") He does find an old version of Safari, but it refuses to launch -- though "While puttering around in the application folder, I did notice that we had Internet Explorer installed. But that pretty much went as well as expected." In the end he ends up "on the network, but we have no browser." But at least he does find a Terminal program — and successfully pings Google.

The thing that would drive me crazy: when the laptop is open, the Apple logo on the lid is upside-down!
Programming

What Do Linux Kernel Developers Think of Rust? (thenewstack.io) 42

Keynotes at this year's FOSDEM included free AI models and systemd, reports Heise.de — and also a progress report from Miguel Ojeda, supervisor of the Rust integration in the Linux kernel. Only eight people remain in the core team around Rust for Linux... Miguel Ojeda therefore launched a survey among kernel developers, including those outside the Rust community, and presented some of the more important voices in his FOSDEM talk. The overall mood towards Rust remains favorable, especially as Linus Torvalds and Greg Kroah-Hartman are convinced of the necessity of Rust integration. This is less about rapid progress and more about finding new talent for kernel development in the future.
The reaction was mostly positive, judging by Ojeda's slides:

- "2025 will be the year of Rust GPU drivers..." — Daniel Almedia

- "I think the introduction of Rust in the kernel is one of the most exciting development experiments we've seen in a long time." — Andrea Righi

- "[T]he project faces unique challenges. Rust's biggest weakness, as a language, is that relatively few people speak it. Indeed, Rust is not a language for beginners, and systems-level development complicates things even more. That said, the Linux kernel project has historically attracted developers who love challenging software — if there's an open source group willing to put the extra effort for a better OS, it's the kernel devs." — Carlos Bilbao

- "I played a little with [Rust] in user space, and I just absolutely hate the cargo concept... I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that... [While Rust should continue to grow in the kernel], if a subset of C becomes as safe as Rust, it may make Rust obsolete..." Steven Rostedt

Rostedt wasn't sure if Rust would attract more kernel contributors, but did venture this opinion. "I feel Rust is more of a language that younger developers want to learn, and C is their dad's language."

But still "contention exists within the kernel development community between those pro-Rust and -C camps," argues The New Stack, citing the latest remarks from kernel maintainer Christoph Hellwig (who had earlier likened the mixing of Rust and C to cancer). Three days later Hellwig reiterated his position again on the Linux kernel mailing list: "Every additional bit that another language creeps in drastically reduces the maintainability of the kernel as an integrated project. The only reason Linux managed to survive so long is by not having internal boundaries, and adding another language completely breaks this. You might not like my answer, but I will do everything I can do to stop this. This is NOT because I hate Rust. While not my favourite language it's definitively one of the best new ones and I encourage people to use it for new projects where it fits. I do not want it anywhere near a huge C code base that I need to maintain."
But the article also notes that Google "has been a staunch supporter of adding Rust to the kernel for Linux running in its Android phones." The use of Rust in the kernel is seen as a way to avoid memory vulnerabilities associated with C and C++ code and to add more stability to the Android OS. "Google's wanting to replace C code with Rust represents a small piece of the kernel but it would have a huge impact since we are talking about billions of phones," Ojeda told me after his talk.

In addition to Google, Rust adoption and enthusiasm for it are increasing as Rust gets more architectural support and as "maintainers become more comfortable with it," Ojeda told me. "Maintainers have already told me that if they could, then they would start writing Rust now," Ojeda said. "If they could drop C, they would do it...."

Amid the controversy, there has been a steady stream of vocal support for Ojeda. Much of his talk also covered statements from advocates for Rust in the kernel, ranging from lead kernel developers, including Linux creator Linus Torvalds himself, to technology leads at Red Hat, Samsung, Google, Microsoft, and others.

Google

Did Google Fake Gemini AI's Output For Its Super Bowl Ad? (theverge.com) 43

Google's Super Bowl ad about a Gouda cheese seller appears to use fake AI output, writes The Verge: The text portrayed as generated by AI has been available on the business's website since at least August 2020, as shown on this archived webpage. Google didn't launch Gemini until 2023, meaning Gemini couldn't have generated the website description as depicted in the ad.
The site Futurism calls the situation "beyond bizarre," asking why Google doesn't seem to trust its own technology. Either Google faked the ad entirely, or prompted its AI to generate the web page's existing copy word-for-word, or the AI was prompted to come up with original copy and instead copied the old version. In the publishing industry, that's referred to as "plagiarism."
And ironically, if Gemini did plagiarize that text, the text it plagiarized is itself inaccurate.
Social Networks

While TikTok Buys Ads on YouTube, YouTube is Buying Ads on TikTok (yahoo.com) 30

I just saw an ad for TikTok on a YouTube video. But at the same time YouTube is running ads on TikTok, reports Bloomberg, targeting TikTok content creators in "an effort to lure these valuable users to the Google-owned rival and capitalize on TikTok's uncertain future."

One of YouTube's ads even received over a thousand likes, with Bloomberg noting that TikTok "is willing to accept ad dollars from one of its fiercest competitors promoting a message aimed at undercutting its business." YouTube is the latest TikTok competitor to try to capitalize on the app's looming US ban, which could go into effect in early April. Meta Platforms Inc.'s Instagram announced a new video editing tool in January, and X also teased a new video tab as part of an effort to win over TikTok's content creators...

Google would be one of the biggest beneficiaries of a ban in the US. Both its flagship video service YouTube and its TikTok copycat, YouTube Shorts, would likely see an uptick in traffic if TikTok goes away. Google also plays an unusual role in TikTok's potential ban because it runs one of two mobile app stores controlling whether people in the US can download the video app. It has blocked TikTok from its Google Play store since the divest-or-ban law went into effect January 19.

Chrome

Google's 7-Year Slog To Improve Chrome Extensions Still Hasn't Satisfied Developers (theregister.com) 30

The Register's Thomas Claburn reports: Google's overhaul of Chrome's extension architecture continues to pose problems for developers of ad blockers, content filters, and privacy tools. [...] While Google's desire to improve the security, privacy, and performance of the Chrome extension platform is reasonable, its approach -- which focuses on code and permissions more than human oversight -- remains a work-in-progress that has left extension developers frustrated.

Alexei Miagkov, senior staff technologist at the Electronic Frontier Foundation, who oversees the organization's Privacy Badger extension, told The Register, "Making extensions under MV3 is much harder than making extensions under MV2. That's just a fact. They made things harder to build and more confusing." Miagkov said with Privacy Badger the problem has been the slowness with which Google addresses gaps in the MV3 platform. "It feels like MV3 is here and the web extensions team at Google is in no rush to fix the frayed ends, to fix what's missing or what's broken still." According to Google's documentation, "There are currently no open issues considered a critical platform gap," and various issues have been addressed through the addition of new API capabilities.

Miagkov described an unresolved problem that means Privacy Badger is unable to strip Google tracking redirects on Google sites. "We can't do it the correct way because when Google engineers design the [chrome.declarativeNetRequest API], they fail to think of this scenario," he said. "We can do a redirect to get rid of the tracking, but it ends up being a broken redirect for a lot of URLs. Basically, if the URL has any kind of query string parameters -- the question mark and anything beyond that -- we will break the link." Miagkov said a Chrome developer relations engineer had helped identify a workaround, but it's not great. Miagkov thinks these problems are of Google's own making -- the company changed the rules and has been slow to write the new ones. "It was completely predictable because they moved the ability to fix things from extensions to themselves," he said. "And now they need to fix things and they're not doing it."

AI

Creators Demand Tech Giants Fess Up, Pay For All That AI Training Data 55

The Register highlights concerns raised at a recent UK parliamentary committee hearing regarding AI companies' exploitation of copyrighted content without permission or payment. From the report: The Culture, Media and Sport Committee and Science, Innovation and Technology Committee asked composer Max Richter how he would know if "bad-faith actors" were using his material to train AI models. "There's really nothing I can do," he told MPs. "There are a couple of music AI models, and it's perfectly easy to make them generate a piece of music that sounds uncannily like me. That wouldn't be possible unless it had hoovered up my stuff without asking me and without paying for it. That's happening on a huge scale. It's obviously happened to basically every artist whose work is on the internet."

Richter, whose work has been used in a number of major film and television scores, said the consequences for creative musicians and composers would be dire. "You're going to get a vanilla-ization of music culture as automated material starts to edge out human creators, and you're also going to get an impoverishing of human creators," he said. "It's worth remembering that the music business in the UK is a real success story. It's 7.6 billion-pound income last year, with over 200,000 people employed. That is a big impact. If we allow the erosion of copyright, which is really how value is created in the music sector, then we're going to be in a position where there won't be artists in the future."

Speaking earlier, former Google staffer James Smith said much of the damage from text and data mining had likely already been done. "The original sin, if you like, has happened," said Smith, co-founder and chief executive of Human Native AI. "The question is, how do we move forward? I would like to see the government put more effort into supporting licensing as a viable alternative monetization model for the internet in the age of these new AI agents."

Matt Rogerson, director of global public policy and platform strategy at the Financial Times, said: "We can only deal with what we see in front of us and [that is] people taking our content, using it for the training, using it in substitutional ways. So from our perspective, we'll prosecute the same argument in every country where we operate, where we see our content being stolen." The risk, if the situation continued, was a hollowing out of creative and information industries, he said. [...] "The problem is we can't see who's stolen our content. We're just at this stage where these very large companies, which usually make margins of 90 percent, might have to take some smaller margin, and that's clearly going to be upsetting for their investors. But that doesn't mean they shouldn't. It's just a question of right and wrong and where we pitch this debate. Unfortunately, the government has pitched it in thinking that you can't reduce the margin of these big tech companies; otherwise, they won't build a datacenter."
Google

Google Pulls Incorrect Gouda Stat From Its AI Super Bowl Ad (theverge.com) 51

An anonymous reader shares a report: Google has edited Gemini's AI response in a Super Bowl commercial to remove an incorrect statistic about cheese. The ad, which shows a small business owner using Gemini to write a website description about Gouda, no longer says the variety makes up "50 to 60 percent of the world's cheese consumption."

In the edited YouTube video, Gemini's response now skips over the specifics and says Gouda is "one of the most popular cheeses in the world." Google Cloud apps president Jerry Dischler initially defended the response, saying on X it's "grounded in the Web" and "not a hallucination."

Google

Google Tests AI-Powered Search Mode With Employees 12

Google has begun internal testing of a new "AI Mode" for its search engine, powered by its Gemini 2.0 AI model, according to a company email seen by technology news site 9to5Google. The feature, which appears alongside existing filters like Images and News, creates a chatbot-like interface for handling complex queries and follow-up questions.

It generates detailed responses with web links displayed in a card format on the right side of the screen. AI Mode targets exploratory searches such as product comparisons and how-to questions that traditional search results may not effectively address. The company is currently testing the feature with U.S.-based employees, with CEO Sundar Pichai indicating a possible launch this year.
Government

Bill Banning Social Media For Youngsters Advances (politico.com) 86

The Senate Commerce Committee approved the Kids Off Social Media Act, banning children under 13 from social media and requiring federally funded schools to restrict access on networks and devices. Politico reports: The panel approved the Kids Off Social Media Act -- sponsored by the panel's chair, Texas Republican Ted Cruz, and a senior Democrat on the panel, Hawaii's Brian Schatz -- by voice vote, clearing the way for consideration by the full Senate. Only Ed Markey (D-Mass.) asked to be recorded as a no on the bill. "When you've got Ted Cruz and myself in agreement on something, you've pretty much captured the ideological spectrum of the whole Congress," Sen. Schatz told POLITICO's Gabby Miller.

[...] "KOSMA comes from very good intentions of lawmakers, and establishing national screen time standards for schools is sensible. However, the bill's in-effect requirements on access to protected information jeopardize all Americans' digital privacy and endanger free speech online," said Amy Bos, NetChoice director of state and federal affairs. The trade association represents big tech firms including Meta and Google. Netchoice has been aggressive in combating social media legislation by arguing that these laws illegally restrict -- and in some cases compel -- speech. [...] A Commerce Committee aide told POLITICO that because social media platforms already voluntarily require users to be at least 13 years old, the bill does not restrict speech currently available to kids.

AI

Hugging Face Clones OpenAI's Deep Research In 24 Hours 17

An anonymous reader quotes a report from Ars Technica: On Tuesday, Hugging Face researchers released an open source AI research agent called "Open Deep Research," created by an in-house team as a challenge 24 hours after the launch of OpenAI's Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research's performance while making the technology freely available to developers. "While powerful LLMs are now freely available in open-source, OpenAI didn't disclose much about the agentic framework underlying Deep Research," writes Hugging Face on its announcement page. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!"

Similar to both OpenAI's Deep Research and Google's implementation of its own "Deep Research" using Gemini (first introduced in December -- before OpenAI), Hugging Face's solution adds an "agent" framework to an existing AI model to allow it to perform multi-step tasks, such as collecting information and building up a report as it goes along, which it presents to the user at the end. The open source clone is already racking up comparable benchmark results. After only a day's work, Hugging Face's Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model's ability to gather and synthesize information from multiple sources. OpenAI's Deep Research scored 67.36 percent accuracy on the same benchmark with a single-pass response (OpenAI's score went up to 72.57 percent when 64 responses were combined using a consensus mechanism).
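That "consensus mechanism" is conceptually simple: sample many answers to the same question and keep the most common final answer. A minimal sketch of the voting step in Python, with the model call stubbed out (the stub and its canned answers are invented for illustration, not taken from either system):

    # Hedged sketch of consensus scoring: sample N answers and majority-vote.
    from collections import Counter
    import random

    def sample_answer(question: str) -> str:
        # Placeholder for one full model response, reduced to its final answer.
        return random.choice(["408", "408", "480"])

    def consensus_answer(question: str, n: int = 64) -> str:
        votes = Counter(sample_answer(question) for _ in range(n))
        answer, _count = votes.most_common(1)[0]
        return answer

    print(consensus_answer("What is 17 * 24?"))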

As Hugging Face points out in its post, GAIA includes complex multi-step questions such as this one: "Which of the fruits shown in the 2008 painting 'Embroidery from Uzbekistan' were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film 'The Last Voyage'? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o'clock position. Use the plural form of each fruit." To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI's mettle quite well.
Open Deep Research "builds on OpenAI's large language models (such as GPT-4o) or simulated reasoning models (such as o1 and o3-mini) through an API," notes Ars. "But it can also be adapted to open-weights AI models. The novel part here is the agentic structure that holds it all together and allows an AI language model to autonomously complete a research task."
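The announcement doesn't spell out the loop in detail, but the general shape of such an agent framework is straightforward: the model proposes an action (say, a web search), the framework executes it, and the observation is fed back into the context until the model emits a final answer. Here is a minimal, hypothetical sketch of that loop in Python -- not Hugging Face's actual code; the ACTION/FINAL format and the stubbed model and search calls are invented for illustration:

    # Hypothetical ReAct-style agent loop. call_llm() and web_search() are stubs;
    # swap in a real chat-completion client and search API to make it do real work.
    def call_llm(transcript: str) -> str:
        # Placeholder model: answers immediately. A real model would sometimes
        # reply with e.g. "ACTION: search[gouda consumption statistics]".
        return "FINAL: Gouda is a popular Dutch cheese."

    def web_search(query: str) -> str:
        return f"(stub) top results for: {query}"

    def run_agent(task: str, max_steps: int = 5) -> str:
        transcript = f"Task: {task}\n"
        for _ in range(max_steps):
            reply = call_llm(transcript)
            if reply.startswith("FINAL:"):
                return reply.removeprefix("FINAL:").strip()
            if reply.startswith("ACTION: search[") and reply.endswith("]"):
                query = reply[len("ACTION: search["):-1]
                transcript += f"{reply}\nOBSERVATION: {web_search(query)}\n"
            else:
                transcript += f"{reply}\n(Use the ACTION/FINAL format.)\n"
        return "No answer within the step budget."

    print(run_agent("Summarize what Gouda cheese is."))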

The code has been made public on GitHub.
AI

DeepSeek's AI App Will 'Highly Likely' Get Banned in the US, Jefferies Says 64

DeepSeek's AI app will highly likely face a US consumer ban after topping download charts on Apple's App Store and Google Play, according to analysts at US investment bank Jefferies. The US federal government, Navy and Texas have already banned the app, and analysts expect broader restrictions using legislation similar to that targeting TikTok.

While consumer access may be blocked, US developers could still be allowed to self-host DeepSeek's model to eliminate security risks, the analysts added. Even if completely banned, DeepSeek's impact on pushing down AI costs will persist as US companies work to replicate its technology, Jefferies said in a report this week reviewed by Slashdot.

The app's pricing advantage remains significant, with OpenAI's latest o3-mini model still costing 100% more than DeepSeek's R1 despite being 63% cheaper than o1-mini. The potential ban comes amid broader US-China tech tensions. While restrictions on H20 chips appear unlikely given their limited training capabilities, analysts expect the Biden administration's AI diffusion policies to remain largely intact under Trump, with some quota increases possible for overseas markets based on their AI activity levels.
AI

Researchers Created an Open Rival To OpenAI's o1 'Reasoning' Model for Under $50 23

AI researchers at Stanford and the University of Washington were able to train an AI "reasoning" model for under $50 in cloud compute credits, according to a research paper. From a report: The model, known as s1, performs similarly to cutting-edge reasoning models, such as OpenAI's o1 and DeepSeek's R1, on tests measuring math and coding abilities. The s1 model is available on GitHub, along with the data and code used to train it.

The team behind s1 said they started with an off-the-shelf base model, then fine-tuned it through distillation, a process to extract the "reasoning" capabilities from another AI model by training on its answers. The researchers said s1 is distilled from one of Google's reasoning models, Gemini 2.0 Flash Thinking Experimental. Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month.
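The distillation step itself is ordinary supervised fine-tuning on text that pairs questions with the teacher's reasoning traces and answers. A rough sketch of what that looks like with the Hugging Face transformers library -- the base model name, the toy training example, and the hyperparameters below are placeholders, not the s1 paper's actual configuration:

    # Hedged sketch: fine-tune a small base model on (question, teacher reasoning
    # trace, answer) text, i.e. distillation by imitating the teacher's outputs.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder base model

    tok = AutoTokenizer.from_pretrained(BASE)
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE)

    # Each training example concatenates a question with the teacher's
    # reasoning trace and final answer (toy example shown).
    examples = [{"text": "Q: What is 17 * 24?\n"
                         "<think>17*24 = 17*20 + 17*4 = 340 + 68 = 408</think>\n"
                         "A: 408"}]

    def tokenize(example):
        return tok(example["text"], truncation=True, max_length=2048)

    train_ds = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="distilled-reasoner",
                               per_device_train_batch_size=1,
                               num_train_epochs=3,
                               learning_rate=1e-5),
        train_dataset=train_ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()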
The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term's prevalence has increased to the point that the Macquarie Dictionary, Australia's national dictionary, named it word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report:

Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
China

Researchers Link DeepSeek To Chinese Telecom Banned In US (apnews.com) 86

An anonymous reader quotes a report from the Associated Press: The website of the Chinese artificial intelligence company DeepSeek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of DeepSeek's chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company. The code appears to be part of the account creation and user login process for DeepSeek.

In its privacy policy, DeepSeek acknowledged storing data on servers inside the People's Republic of China. But its chatbot appears more directly tied to the Chinese state than previously known through the link revealed by researchers to China Mobile. The U.S. has claimed there are close ties between China Mobile and the Chinese military as justification for placing limited sanctions on the company. [...] The code linking DeepSeek to one of China's leading mobile phone providers was first discovered by Feroot Security, a Canadian cybersecurity company, which shared its findings with The Associated Press. The AP took Feroot's findings to a second set of computer experts, who independently confirmed that China Mobile code is present. Neither Feroot nor the other researchers observed data transferred to China Mobile when testing logins in North America, but they could not rule out that data for some users was being transferred to the Chinese telecom.

The analysis only applies to the web version of DeepSeek. They did not analyze the mobile version, which remains one of the most downloaded pieces of software on both the Apple and the Google app stores. The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing "substantial" national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.
"It's mindboggling that we are unknowingly allowing China to survey Americans and we're doing nothing about it," said Ivan Tsarynny, CEO of Feroot. "It's hard to believe that something like this was accidental. There are so many unusual things to this. You know that saying 'Where there's smoke, there's fire'? In this instance, there's a lot of smoke," Tsarynny said.

Further reading: Senator Hawley Proposes Jail Time For People Who Download DeepSeek
Supercomputing

Google Says Commercial Quantum Computing Applications Arriving Within 5 Years (msn.com) 38

Google aims to release commercial quantum computing applications within five years, challenging Nvidia's prediction of a 20-year timeline. "We're optimistic that within five years we'll see real-world applications that are possible only on quantum computers," founder and lead of Google Quantum AI Hartmut Neven said in a statement. Reuters reports: Real-world applications Google has discussed are related to materials science -- such as building superior batteries for electric cars -- as well as creating new drugs and potentially new energy alternatives. [...] Google has been working on its quantum computing program since 2012 and has designed and built several quantum chips. By using quantum processors, Google said it had managed to solve a computing problem in minutes that would take a classical computer more time than the age of the universe.

Google's quantum computing scientists announced another step on the path to real world applications within five years on Wednesday. In a paper published in the scientific journal Nature, the scientists said they had discovered a new approach to quantum simulation, which is a step on the path to achieving Google's objective.

Cellphones

Robocallers Posing As FCC Staff Blocked After Robocalling Real FCC Staff (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Robocallers posing as employees of the Federal Communications Commission made the mistake of trying to scam real employees of the FCC, the FCC announced yesterday. "On the night of February 6, 2024, and continuing into the morning of February 7, 2024, over a dozen FCC staff and some of their family members reported receiving calls on their personal and work telephone numbers," the FCC said. The calls used an artificial voice that said, "Hello [first name of recipient] you are receiving an automated call from the Federal Communications Commission notifying you the Fraud Prevention Team would like to speak with you. If you are available to speak now please press one. If you prefer to schedule a call back please press two."

You may not be surprised to learn that the FCC does not have any "Fraud Prevention Team" like the one mentioned in the robocalls, and especially not one that demands Google gift cards in lieu of jail time. "The FCC's Enforcement Bureau believes the purpose of the calls was to threaten, intimidate, and defraud," the agency said. "One recipient of an imposter call reported that they were ultimately connected to someone who 'demand[ed] that [they] pay the FCC $1,000 in Google gift cards to avoid jail time for [their] crimes against the state.'" The FCC said it does not "publish or otherwise share staff personal phone numbers" and that it "remains unclear how these individuals were targeted." Obviously, robocallers posing as FCC employees probably wouldn't intentionally place scam calls to real FCC employees. But FCC employees are just as likely to get robocalls as anyone else. This set of schemers apparently only made about 1,800 calls before their calling accounts were terminated.

The FCC described the scheme yesterday when it announced a proposed fine of $4,492,500 against Telnyx, the voice service provider accused of carrying the robocalls. The FCC alleges that Telnyx violated "Know Your Customer (KYC)" rules by providing access to calling services without verifying the customers' identities. When contacted by Ars today, Telnyx denied the FCC's allegations and said it will contest the proposed fine.

Security

First OCR Spyware Breaches Both Apple and Google App Stores To Steal Crypto Wallet Phrases (securelist.com) 24

Kaspersky researchers have discovered malware hiding in both Google Play and Apple's App Store that uses optical character recognition to steal cryptocurrency wallet recovery phrases from users' photo galleries. Dubbed "SparkCat" by the researchers, the malware was embedded in several messaging and food delivery apps, with the infected Google Play apps accumulating over 242,000 downloads combined.

This marks the first known instance of such OCR-based spyware making it into Apple's App Store. The malware, active since March 2024, masquerades as an analytics SDK called "Spark" and leverages Google's ML Kit library to scan users' photos for wallet recovery phrases in multiple languages. It requests gallery access under the guise of allowing users to attach images to support chat messages. When granted access, it searches for specific keywords related to crypto wallets and uploads matching images to attacker-controlled servers.

The researchers found both Android and iOS variants using similar techniques, with the iOS version being particularly notable as it circumvented Apple's typically stringent app review process. The malware's creators appear to be Chinese-speaking actors based on code comments and server error messages, though definitive attribution remains unclear.
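To make the mechanism concrete: stripped of its SDK packaging, a gallery scan of this kind is just OCR plus keyword matching over a folder of images. A small, benign sketch of that idea in Python, using Tesseract via pytesseract instead of ML Kit (the folder path and keyword list are invented; this only flags matching files locally, roughly what a defender might do to check whether seed phrases are sitting in a photo library):

    # Illustrative sketch: OCR every image in a folder and flag any whose text
    # mentions wallet/seed-phrase keywords. Requires the Tesseract binary plus
    # the pytesseract and Pillow packages.
    from pathlib import Path

    import pytesseract
    from PIL import Image

    # A few common BIP-39 words plus wallet-related terms (illustrative list).
    KEYWORDS = {"mnemonic", "seed", "recovery", "wallet", "abandon", "zebra"}

    def scan_gallery(folder: str) -> list[Path]:
        """Return image paths whose OCR'd text mentions any keyword."""
        hits = []
        for path in sorted(Path(folder).glob("*.jpg")):
            text = pytesseract.image_to_string(Image.open(path)).lower()
            if any(word in text for word in KEYWORDS):
                hits.append(path)  # the real malware uploads these; we just flag them
        return hits

    if __name__ == "__main__":
        print(scan_gallery("./photos"))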
