Businesses

The Backlash Against Duolingo Going 'AI-First' Didn't Even Matter 36

Duolingo's decision to go "AI-first" sparked backlash from users, but the company's second-quarter earnings results tell a different story: quarterly revenue exceeded expectations and the stock surged nearly 30%. TechCrunch reports: Now the company anticipates making over $1 billion in revenue this year, and daily active users have grown 40% year-over-year. The growth is significant but falls in the lower range of the company's estimates of growing between 40% and 45%, which an investor brought up to [CEO Luis von Ahn] on Wednesday's quarterly earnings call.

"The reason we came [in] towards the lower end was because I said some stuff about AI, and I didn't give enough context. Because of that, we got some backlash on social media," von Ahn said. "The most important thing is we wanted to make the sentiment on our social media positive. We stopped posting edgy posts and started posting things that would get our sentiment more positive. That has worked."
Microsoft

Microsoft's $30 Windows 10 Security Updates Cover 10 Devices 56

Microsoft's $30 Extended Security Updates license for Windows 10 will cover up to 10 devices under a single Microsoft Account, the company confirmed in updated support documentation. The ESU program, which provides security updates through October 13, 2026, requires a Microsoft Account for all three enrollment options: the $30 one-time purchase, redemption of 1,000 Microsoft Reward points, or free enrollment for users who sync their PC settings to OneDrive. Windows 10's support ends October 14, 2025.
Government

Taiwan's High 20% Tariff Rate Linked To Intel Investment (notebookcheck.net) 123

EreIamJH writes: German tech newsletter Notebookcheck is reporting that the unexpectedly high 20% tariff the U.S. recently imposed on Taiwan is intended to pressure TSMC to buy a 49% minority stake in Intel -- including an IP transfer -- and to spend $400 billion in the U.S., in addition to the $165 billion previously planned.
Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 75

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]
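LFR systems like the Met's typically reduce each face to a numeric embedding and raise an alert when its similarity to a watch-list entry clears a threshold; a false positive like Mr Thompson's occurs when an innocent passer-by's embedding happens to score above that cutoff. A toy sketch of threshold matching (invented vectors and threshold, not any real deployment):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: one watch-list suspect, two passers-by.
watchlist = {"suspect_123": [0.9, 0.1, 0.4]}
probes = {
    "passerby_A": [0.2, 0.9, 0.1],    # clearly different face
    "passerby_B": [0.88, 0.15, 0.38], # coincidentally similar face
}
THRESHOLD = 0.95  # match cutoff; lowering it trades misses for false alarms

for name, emb in probes.items():
    for wanted, wemb in watchlist.items():
        score = cosine(emb, wemb)
        if score >= THRESHOLD:
            print(f"ALERT: {name} matched {wanted} (score {score:.3f})")
```

In this sketch passerby_B triggers an alert despite being a different person, which is exactly the failure mode the legal challenge targets: the threshold controls how often innocent faces clear the bar.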

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
Security

Citizen Lab Director Warns Cyber Industry About US Authoritarian Descent (techcrunch.com) 88

An anonymous reader quotes a report from TechCrunch: Ron Deibert, the director of Citizen Lab, one of the most prominent organizations investigating government spyware abuses, is sounding the alarm to the cybersecurity community and asking them to step up and join the fight against authoritarianism. On Wednesday, Deibert will deliver a keynote at the Black Hat cybersecurity conference in Las Vegas, one of the largest gatherings of information security professionals of the year. Ahead of his talk, Deibert told TechCrunch that he plans to speak about what he describes as a "descent into a kind of fusion of tech and fascism," and the role that the Big Tech platforms are playing, and "propelling forward a really frightening type of collective insecurity that isn't typically addressed by this crowd, this community, as a cybersecurity problem."

Deibert described the recent political events in the United States as a "dramatic descent into authoritarianism," but one that the cybersecurity community can help defend against. "I think alarm bells need to be rung for this community that, at the very least, they should be aware of what's going on and hopefully they can not contribute to it, if not help reverse it," Deibert told TechCrunch. [...] "I think that there comes a point at which you have to recognize that the landscape is changing around you, and the security problems you set out for yourselves are maybe trivial in light of the broader context and the insecurities that are being propelled forward in the absence of proper checks and balances and oversight, which are deteriorating," said Deibert.

Deibert is also concerned that big companies like Meta, Google, and Apple could take a step back in their efforts to fight against government spyware -- sometimes referred to as "commercial" or "mercenary" spyware -- by gutting their threat intelligence teams. [...] Deibert believes there is a "huge market failure when it comes to cybersecurity for global civil society," a part of the population that generally cannot afford to get help from big security companies that typically serve governments and corporate clients. "This market failure is going to get more acute as supporting institutions evaporate and attacks on civil society amplify," he said. "Whatever they can do to contribute to offset this market failure (e.g., pro bono work) will be essential to the future of liberal democracy worldwide," he said. Deibert is concerned that these threat intelligence teams could be cut or at least reduced, given that the same companies have cut their moderation and safety teams. He told TechCrunch that threat intelligence teams, like the ones at Meta, are doing "amazing work," in part by staying siloed and separate from the commercial arms of their wider organizations. "But the question is how long will that last?" said Deibert.

News

Ask Slashdot: Who's Still Using an RSS Reader? 158

alternative_right writes: I use RSS to cover all of my news-reading needs because I like a variety of sources spanning several fields -- politics, philosophy, science, and heavy metal. However, it seems Google wanted to kill off RSS a few years back, and it has since fallen out of favor. Some of us are holding on, but how many? And what software do you use (or did you write your own XML parsers)?
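For the write-your-own-parser crowd, an RSS 2.0 feed is plain XML, so Python's standard library is enough to pull out items without any dedicated reader. A minimal sketch (the feed string is a made-up example):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 feed standing in for a real fetched one.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def parse_items(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_items(FEED):
    print(f"{title}: {link}")
```

Swap the inline string for the response body of an HTTP fetch and you have the core of a roll-your-own reader; Atom feeds differ only in element names and namespaces.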
Government

Coding Error Blamed After Parts of Constitution Disappear From US Website (arstechnica.com) 60

An anonymous reader quotes a report from Ars Technica: The Library of Congress today said a coding error resulted in the deletion of parts of the US Constitution from Congress' website and promised a fix after many Internet users pointed out the missing sections this morning. The missing portions of the Constitution were restored to one part of the website a few hours after the Library of Congress statement and reappeared on a different part of the website another hour or so later. The Constitution Annotated website carried a notice saying it "is currently experiencing data issues. We are working to resolve this issue and regret the inconvenience."

"Upkeep of Constitution Annotated and other digital resources is a critical part of the Library's mission, and we appreciate the feedback that alerted us to the error and allowed us to fix it," the Library of Congress said. We asked the Library of Congress for specific details on the coding error, but we received only a statement that did not include specifics. "Due to a technical error, some sections of Article 1 were temporarily missing on the Constitution Annotated website. This problem has been corrected, and the missing sections have been restored," the statement said.

The deletion happened sometime in the past few weeks, as an Internet Archive capture shows that the text was still on the site until at least July 21. The deletions were being discussed this morning on Reddit and in news articles, with people expressing suspicions based on which parts of the Constitution were missing.

Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 13

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
Movies

Universal Pictures To Big Tech: We'll Sue If You Steal Our Movies For AI (hollywoodreporter.com) 70

Universal Pictures is taking a new approach to combat the mass theft of its movies to train AI systems. From a report: Starting in June with How to Train Your Dragon, the studio has attached a legal warning at the end credits of its films stating that their titles "may not be used to train AI." It's also appeared on Jurassic World Rebirth and Bad Guys 2. "This motion picture is protected under the laws of the United States and other countries," the warning reads. "Unauthorized duplication, distribution or exhibition may result in civil liability and criminal prosecution."
China

Lyft Will Use Chinese Driverless Cars In Britain and Germany (techcrunch.com) 24

An anonymous reader quotes a report from the New York Times: China's automakers have teamed up with software companies to go global with their driverless cars, which are poised to claim a big share of a growing market as Western manufacturers are still preparing to compete. The industry in China is expanding despite tariffs imposed last year by the European Union on electric cars, and despite some worries in Europe about the security implications of relying on Chinese suppliers. Baidu, one of China's biggest software companies, said on Monday that it would supply Lyft, an American ride-hailing service, with self-driving cars assembled by Jiangling Motors of China (source paywalled; alternative source). Lyft is expected to begin operating them next year in Germany and Britain, subject to regulatory approval, the companies said.

The announcement comes three months after Uber and Momenta, a Chinese autonomous driving company, announced their own plans to begin offering self-driving cars in an unspecified European city early next year. Momenta will soon provide assisted driving technology to the Chinese company IM Motors for its cars sold in Britain. While Momenta has not specified the model that Uber will be using, it has already signaled it will choose a Chinese model. In China, "the pace of development and the pressure to deliver at scale push companies to improve quickly," said Gerhard Steiger, the chairman of Momenta Europe. China's state-controlled banking system has been lending money at low interest rates to the country's electric car industry in a bid for global leadership. [...]

Expanding robotaxi services to new cities, not to mention new countries, is not easy. While the individual cars do not have drivers, they typically require one controller for every several cars to handle difficulties and answer questions from users. And the cars often need to be specially programmed for traffic conditions unique to each city. Lyft and Baidu nonetheless said that they had plans for "the fleet scaling to thousands of vehicles across Europe in the following years."

Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 95

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.
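The mechanics at issue are mundane: an embedded analytics SDK serializes named in-app events plus identifying metadata and posts them to the vendor's servers. A schematic Python model of that data flow (field names and the event name are illustrative stand-ins, not Flo's or Meta's actual payloads):

```python
import json

def build_app_event(app_id, event_name, user_token):
    """Model of the payload an embedded analytics SDK might send when
    the host app logs a named 'custom app event'."""
    return json.dumps({
        "app_id": app_id,
        "event": event_name,          # the name alone can be sensitive
        "advertiser_id": user_token,  # ties the event to a person/device
    })

# The event *name* describes what the user did inside the app, so
# forwarding it can leak intimate context even without any form contents.
payload = build_app_event("flo-example", "clicked_pregnancy_section", "user-42")
print(payload)
```

The jury's question reduced to whether event names like the one above, transmitted with an identifier, amounted to eavesdropping on health information, regardless of whether free-text answers ever left the app.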

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
AI

Perplexity Says Cloudflare's Accusations of 'Stealth' AI Scraping Are Based On Embarrassing Errors (zdnet.com) 96

In a report published Monday, Cloudflare accused Perplexity of deploying undeclared web crawlers that masquerade as regular Chrome browsers to access content from websites that have explicitly blocked its official bots. Since then, Perplexity has publicly and loudly announced that Cloudflare's claims are baseless and technically flawed. "This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats," says Perplexity in a blog post. "If you can't tell a helpful digital assistant from a malicious scraper, then you probably shouldn't be making decisions about what constitutes legitimate web traffic."

Perplexity continues: "Technical errors in Cloudflare's analysis aren't just embarrassing -- they're disqualifying. When you misattribute millions of requests, publish completely inaccurate technical diagrams, and demonstrate a fundamental misunderstanding of how modern AI assistants work, you've forfeited any claim to expertise in this space."
Government

Swedish PM Under Fire For Using AI In Role 26

Sweden's Prime Minister Ulf Kristersson has come under fire after admitting that he frequently uses AI tools like ChatGPT for second opinions on political matters. The Guardian reports: ... Kristersson, whose Moderate party leads Sweden's center-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions."

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis." Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there. It is used more as a ballpark," he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umeå University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. "The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope," she told the Dagens Nyheter newspaper. "We must demand that reliability can be guaranteed. We didn't vote for ChatGPT."
The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."
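Why a 20-million-log sample could be statistically sufficient comes down to ordinary survey math: the sampling error of an estimated proportion shrinks with the square root of the sample size, not with the fraction of the population examined. A back-of-the-envelope sketch using the generic margin-of-error formula (the numbers are illustrative, not Berg-Kirkpatrick's actual analysis):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Even if only 1 in 1,000 chats contained regurgitated articles (p = 0.001),
# a 20M-log sample pins the true rate down to a tiny absolute error.
p, n = 0.001, 20_000_000
moe = margin_of_error(p, n)
print(f"estimate {p:.4%} +/- {moe:.6%}")
```

At that scale the margin of error is already orders of magnitude smaller than the quantity being estimated, which is why sampling six times as many logs buys little additional precision while multiplying the privacy exposure.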

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.

Data Storage

DRAM Prices Soar as China Eyes Self-Reliance For High-End Chips (nikkei.com) 30

Standard DDR4 DRAM prices doubled between May and June 2025, with 8-gigabit units reaching $4.12 and 4-gigabit units hitting $3.14 -- the latter's highest level since July 2021, according to electronics trading companies cited by Nikkei Asia. The unprecedented single-month doubling follows speculation that Chinese manufacturer ChangXin Memory Technologies has halted DDR4 production to shift factories toward DDR5 memory for AI applications.

DDR4 currently comprises 60% of desktop PC memory while DDR5 accounts for 40%, per Tokyo-based BCN research. Samsung Electronics, SK Hynix, and Micron Technology controlled 90% of the global DRAM market in Q2 2025.
