Google

Google Loses Epic Games Appeal, Must Open App Store To Rivals (reuters.com) 42

Google on Thursday lost its appeal of a judge's order that will force the tech giant to open its app store to competitors. The 9th Circuit Court of Appeals upheld a lower court ruling requiring Google Play to allow rival marketplaces and billing systems, ending a legal battle that began when Epic Games sued over anticompetitive practices.

A jury sided with Epic in December 2023, finding Google paid phone makers and app developers to use its store exclusively.
United Kingdom

UK Competition Authority Rains on Microsoft and Amazon Cloud Parade (cnbc.com) 8

Britain's Competition and Markets Authority concluded that Microsoft and Amazon hold "significant unilateral market power" in cloud services and recommended investigating both companies under new competition rules. The regulator said it had concerns about practices creating customer "lock-in" effects through egress fees and unfavorable licensing terms that trap businesses in difficult-to-exit contracts.

Microsoft and Amazon each control roughly 30-40% of the infrastructure-as-a-service market, while Google holds 5-10%. Microsoft disputed the findings, calling the cloud market "dynamic and competitive." Amazon said the probe recommendations were "unwarranted."
The Internet

Google Tool Misused To Scrub Tech CEO's Shady Past From Search (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Google is fond of saying its mission is to "organize the world's information," but who gets to decide what information is worthy of organization? A San Francisco tech CEO has spent the past several years attempting to remove unflattering information about himself from Google's search index, and the nonprofit Freedom of the Press Foundation says he's still at it. Most recently, an unknown bad actor used a bug in one of Google's search tools to scrub the offending articles.

The saga began in 2023 when independent journalist Jack Poulson reported on Maury Blackman's 2021 domestic violence arrest. Blackman, who was then the CEO of surveillance tech firm Premise Data Corp., took offense at the publication of his legal issues. The case did not lead to charges after Blackman's 25-year-old girlfriend recanted her claims against the 53-year-old CEO, but Poulson reported on some troubling details of the public arrest report. Blackman has previously used tools like DMCA takedowns and lawsuits to stifle reporting on his indiscretion, but that campaign now appears to have co-opted part of Google's search apparatus. The Freedom of the Press Foundation (FPF) reported on Poulson's work and Blackman's attempts to combat it late last year. In June, Poulson contacted the Freedom of the Press Foundation to report that the article had mysteriously vanished from Google search results.

The foundation began an investigation immediately, which led them to a little-known Google search feature known as Refresh Outdated Content. Google created this tool for users to report links with content that is no longer accurate or that lead to error pages. When it works correctly, Refresh Outdated Content can help make Google's search results more useful. However, Freedom of the Press Foundation now says that a bug allowed an unknown bad actor to scrub mentions of Blackman's arrest from the Internet. Upon investigating, FPF found that its article on Blackman was completely absent from Google results, even through a search with the exact title. Poulson later realized that two of his own Substack articles were similarly affected. The Foundation was led to the Refresh Outdated Content tool upon checking its search console.
The bug in the tool allowed malicious actors to de-index valid URLs from search results by altering the capitalization in the URL slug. Although URLs are typically case-sensitive, Google's tool treated them as case-insensitive. As a result, when someone submitted a slightly altered version of a working URL (for example, changing "anatomy" to "AnAtomy"), Google's crawler would see it as a broken link (404 error) and mistakenly remove the actual page from search results.
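The reported flaw can be sketched in a few lines. This is a minimal simulation of the described behavior, not Google's actual code, and the URL is hypothetical: the tool keys the index case-insensitively, while the origin server treats paths as case-sensitive, so a tampered slug knocks the real page out of the index.

```python
# Minimal simulation of the reported de-indexing bug (hypothetical URLs,
# not Google's actual code).

live_pages = {"https://example.com/anatomy-of-a-story"}

def fetch_status(url):
    """Paths are case-sensitive on the origin server, so a tampered slug 404s."""
    return 200 if url in live_pages else 404

def refresh_outdated(index, submitted_url):
    """Buggy refresh logic: the index is keyed case-insensitively."""
    if fetch_status(submitted_url) == 404:
        # Lowercasing collapses the tampered URL onto the legitimate entry.
        index.pop(submitted_url.lower(), None)

index = {u.lower(): "indexed" for u in live_pages}
refresh_outdated(index, "https://example.com/AnAtomy-of-a-story")
print(index)  # {} -- the legitimate page has been de-indexed
```

A correct implementation would only compare and remove the exact URL that was submitted, since per RFC 3986 only the scheme and host of a URL are case-insensitive, not the path.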

Ironically, Blackman is now CEO of the online reputation management firm The Transparency Company.
Earth

Google's AlphaEarth AI Maps Any 10-Meter Area on Earth Using Satellite Data (blog.google) 8

Google today announced AlphaEarth Foundations, a new AI model that processes terabytes of daily satellite data to track environmental changes across the planet. The system, part of Google's broader Earth AI initiative, uses machine learning to compress satellite imagery into color-coded maps showing material properties, vegetation types, groundwater sources, and human constructions down to 10-meter resolution.

The model uses a technique called "embeddings" that reduces storage requirements by 16 times compared to other AI tools Google tested, while delivering 23.9% higher accuracy than similar systems. AlphaEarth has already mapped complex Antarctic terrain and identified variations in Canadian agricultural land use invisible to direct observation.
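The storage claim is easy to gut-check. The sketch below uses illustrative numbers (the raw band/timestep counts and embedding size are assumptions, not AlphaEarth's actual figures) to show how collapsing a per-pixel stack of observations into a fixed-size embedding yields a ~16x reduction.

```python
# Back-of-envelope check on how an embedding can cut storage ~16x.
# All sizes are illustrative assumptions, not AlphaEarth's actual numbers.

bands, timesteps = 16, 64          # hypothetical raw observations per 10 m pixel
raw_bytes = bands * timesteps * 4  # stored as float32
embedding_dim = 64                 # hypothetical learned embedding size
emb_bytes = embedding_dim * 4

print(raw_bytes // emb_bytes)  # -> 16, i.e. 16x smaller per pixel
```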

The technology currently powers flood and wildfire alerts in Google Search and Maps. Research organizations including Brazil's MapBiomas and the Global Ecosystems Atlas are using the system to analyze rainforests, deserts, and wetlands. The model integrates with Google Earth Engine, providing agencies like NASA and the Forest Service access to over one trillion annual data points for environmental monitoring and mapping applications.
Data Storage

'The Future is Not Self-Hosted' (drewlyton.com) 175

A software developer who built his own home server in response to Amazon's removal of Kindle book downloads now argues that self-hosting "is NOT the future we should be fighting for." Drew Lyton constructed a home server running open-source alternatives to Google Drive, Google Photos, Audible, Kindle, and Netflix after Amazon announced that "Kindle users would no longer be able to download and back up their book libraries to their computers."

The change prompted Amazon to update its Kindle store language to say "users are purchasing licenses -- not books." Lyton's setup involved a Lenovo P520 with 128GB RAM, multiple hard drives, and Docker containers running applications like Immich for photo storage and Jellyfin for media streaming. The technical complexity required "138 words to describe but took me the better part of two weeks to actually do."

The implementation was successful, but Lyton concluded that self-hosting "assumes isolated, independent systems are virtuous. But in reality, this simply makes them hugely inconvenient." He proposes "publicly funded, accessible, at-cost cloud services" as an alternative, suggesting libraries could provide "100GB of encrypted file storage, photo-sharing and document collaboration tools, and media streaming services -- all for free."
Businesses

Amazon Invests In 'Netflix of AI' Start-Up Fable, Which Lets You Make Your Own TV Shows 24

An anonymous reader quotes a report from Variety: Edward Saatchi isn't totally sure people will flock to Showrunner, the new AI-generated TV show service his company is launching publicly this week. But he has a vote of confidence from Amazon, which has invested in Fable, Saatchi's San Francisco-based start-up. The amount of Amazon's funding in Fable isn't being disclosed. The money is going toward building out Showrunner, which Fable has hyped as the "Netflix of AI": a service that lets you type in a few words to create scenes -- or entire episodes -- of a TV show, either from scratch or based on an existing story-world someone else has created.

Fable is launching Showrunner to let users tinker with the animation-focused generative-AI system, following several months in a closed alpha test with 10,000 users. Initially, Showrunner will be free to use but eventually the company plans to charge creators $10-$20 per month for credits allowing them to create hundreds of TV scenes, Saatchi said. Viewing Showrunner-generated content will be free, and anyone can share the AI video on YouTube or other third-party platforms. [...] Fable's Showrunner public launch features two original "shows" -- story worlds with characters users can steer into various narrative arcs. The first is "Exit Valley," described as "a 'Family Guy'-style TV comedy set in 'Sim Francisco' satirizing the AI tech leaders Sam Altman, Elon Musk, et al." The other is "Everything Is Fine," in which a husband and wife, going to Ikea, have a huge fight -- whereupon they're transported to a world where they're separated and have to find each other. [...]

Showrunner is powered by Fable's proprietary AI model, SHOW-2. Last year, the company published a research paper on how it built the SHOW-1 model. As part of that, it released nine AI-generated episodes based on "South Park." The episodes, made without the permission of the "South Park" creators, received more than 80 million views. (Saatchi said he was in touch with the "South Park" team, who were reassured the IP wasn't being deployed commercially.) [...] Out of the gate, Showrunner is focused on animated content because it requires much less processing power than realistic-looking live-action video scenes. Saatchi said Fable wants to stay out of the "knife fight" among big AI companies like OpenAI, Google and Meta that are racing to create photorealistic content. "If you're competing with Google, are you going to win?" Saatchi said. "Our goal is to have the most creative models," he said.
EU

Google Confirms It Will Sign the EU AI Code of Practice (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: In a rare move, Google has confirmed it will sign the European Union's AI Code of Practice, a framework it initially opposed for being too harsh. However, Google isn't totally on board with Europe's efforts to rein in the AI explosion. The company's head of global affairs, Kent Walker, noted that the code could stifle innovation if it's not applied carefully, and that's something Google hopes to prevent. While Google was initially opposed to the Code of Practice, Walker says the input it has provided to the European Commission has been well-received, and the result is a legal framework it believes can provide Europe with access to "secure, first-rate AI tools." The company claims that the expansion of such tools on the continent could boost the economy by 8 percent (about 1.8 trillion euros) annually by 2034.

These supposed economic gains are being dangled like bait to entice business interests in the EU to align with Google on the Code of Practice. While the company is signing the agreement, it appears interested in influencing the way it is implemented. Walker says Google remains concerned that tightening copyright guidelines and forced disclosure of possible trade secrets could slow innovation. Having a seat at the table could make it easier to shape the regulation than if Google followed some of its competitors in eschewing voluntary compliance. [...] The AI Code of Practice aims to provide AI firms with a bit more certainty in the face of a shifting landscape. It was developed with the input of more than 1,000 citizen groups, academics, and industry experts. The EU Commission says companies that adopt the voluntary code will enjoy a lower bureaucratic burden, easing compliance with the bloc's AI Act, which came into force last year.

Under the terms of the code, Google will have to publish summaries of its model training data and disclose additional model features to regulators. The code also includes guidance on how firms should manage safety and security in compliance with the AI Act. Likewise, it includes paths to align a company's model development with EU copyright law as it pertains to AI, a sore spot for Google and others. Companies like Meta that don't sign the code will not escape regulation. All AI companies operating in Europe will have to abide by the AI Act, which includes the most detailed regulatory framework for generative AI systems in the world. The law bans high-risk uses of AI like intentional deception or manipulation of users, social scoring systems, and real-time biometric scanning in public spaces. Companies that violate the rules in the AI Act could be hit with fines as high as 35 million euros ($40.1 million) or up to 7 percent of the offender's global revenue.
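The fine ceiling described above is a simple maximum of two figures. A one-line sketch, assuming the cap applies as "whichever is greater" (the usual structure of EU penalty caps):

```python
# AI Act fine ceiling as described above: up to 35 million euros or
# 7% of global revenue, assumed here to apply as "whichever is greater".

def ai_act_max_fine(global_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_revenue_eur)

print(ai_act_max_fine(1_000_000_000))  # -> 70000000.0 for a 1B euro company
print(ai_act_max_fine(100_000_000))   # -> 35000000 (the floor dominates)
```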

Android

Nothing's Phone 3 Is Stymied By Contentious Design and Price (ndtvprofit.com) 15

Smartphone maker Nothing's $799 Phone 3 has been "mired in controversy among the same customers who rallied behind the company's past products" since its July launch, Bloomberg reported on Wednesday. Tech enthusiasts have "lambasted the company for the phone's peculiar industrial design and what they perceive to be an unreasonable price."

The Android device lacks the most powerful Qualcomm processor found in premium Android phones, and the camera performance "falls short of other handsets in this price bracket," the publication wrote in a scathing review. The phone costs $200 more than its predecessor and matches the pricing of Apple's iPhone 16, Samsung's Galaxy S25, and Google's Pixel 9.

Critics across Reddit and social media have attacked Nothing for removing the signature Glyph Lights from previous models. Comments on Nothing's YouTube channel have been "bruising," focusing on the phone's oddly positioned camera array. "At its current price, the handset is too expensive for what it offers," the review concludes.
Google

Google is Using AI Age Checks To Lock Down User Accounts (theverge.com) 81

Google will soon cast an even wider net with its AI age estimation technology. From a report: After announcing plans to find and restrict underage users on YouTube, the company now says it will start detecting whether Google users based in the US are under 18.

Age estimation is rolling out over the next few weeks and will only impact a "small set" of users to start, though Google plans on expanding it more widely. The company says it will use the information a user has searched for or the types of YouTube videos they watch to determine their age. Google first announced this initiative in February. If Google believes that a user is under 18, it will apply the same restrictions it places on users who proactively identify as underage.

IT

Tech CEO's Negative Coverage Vanished from Google via Security Flaw (404media.co) 16

Journalist Jack Poulson accidentally discovered that Google had completely removed two of his articles from search results after someone exploited a vulnerability in the company's Refresh Outdated Content tool.

The security flaw allowed malicious actors to de-list specific web pages by submitting URLs with altered capitalization to Google's recrawling system. When Google attempted to index these modified URLs, the system received 404 errors and subsequently removed all variations of the page from search results, including the original legitimate articles.

The affected stories concerned tech CEO Delwin Maurice Blackman's 2021 arrest on felony domestic violence charges. In a statement to 404 Media, Google confirmed the vulnerability and said it had deployed a fix for the issue.
Google

Google Execs Say Employees Have To 'Be More AI-Savvy' 88

An anonymous reader quotes a report from CNBC: Google executives are pushing employees to act with more urgency in their use of artificial intelligence as the company looks for ways to cut costs. That was the message at an all-hands meeting last week, featuring CEO Sundar Pichai and Brian Saluzzo, who runs the teams building the technical foundation for Google's flagship products. "Anytime you go through a period of extraordinary investment, you respond by adding a lot of headcount, right?" Pichai said, according to audio obtained by CNBC. "But in this AI moment, I think we have to accomplish more by taking advantage of this transition to drive higher productivity. [...] We are competing with other companies in the world," Pichai said at the meeting. "There will be companies which will become more efficient through this moment in terms of employee productivity, which is why I think it's important to focus on that." [...]

"We are going to be going through a period of much higher investment and I think we have to be frugal with our resources, and I would strive to be more productive and efficient as a company," Pichai said, adding that he's "very optimistic" about how Google is doing. At the meeting, Saluzzo highlighted a number of tools the company is building for software engineers, or SWEs, to help "everybody at Google be more AI-savvy." "We feel the urgency to really quickly and urgently get AI into more of the coding workflows to address top needs so you see a much more rapid increase in velocity," Saluzzo said. Saluzzo said Google has a portfolio of AI products available to employees "so folks can go faster." He mentioned an internal site called "AI Savvy Google" which has courses, toolkits and learning sessions, including some for individual product areas.

Google's engineering education team, which develops courses for internal and external use, partnered with DeepMind on a training called "Building with Gemini" that the company will start promoting soon, Saluzzo said. He also referenced a new internal AI coding tool called Cider that helps software engineers with various aspects of the development process. Since May, when the company first introduced Cider, 50% of users tap the service on a weekly basis, Saluzzo said. Regarding Google's internal AI tools, Saluzzo said that employees should "expect them to continuously get better" and that "they'll become a pretty integral part of most SWE work."
IOS

Jack Dorsey's Bluetooth Messaging App Bitchat Now On App Store 30

Jack Dorsey's new app Bitchat is now available on the iOS App Store. The decentralized, peer-to-peer messaging app uses Bluetooth mesh networks for encrypted, ephemeral chats without requiring accounts, servers, or internet access. Dorsey said he built it over a weekend and cautioned that it "has not received external security review and may contain vulnerabilities..." TechCrunch reports: The app's UX is very minimal. There is no log-in system, and you're immediately brought to an instant messaging box, where you can see what nearby users are saying (if anyone is actually around you and using the app) and set your display name, which can be changed at any time. [...] Dorsey has not directly addressed the fake Bitchat apps on the Google Play store, but he did repost another user's X post that said that Bitchat is not yet on Google Play, and to "beware of fakes."
AI

Cisco Donates the AGNTCY Project to the Linux Foundation 7

Cisco has donated its AGNTCY initiative to the Linux Foundation, aiming to create an open-standard "Internet of Agents" to allow AI agents from different vendors to collaborate seamlessly. The project is backed by tech giants like Google Cloud, Dell, Oracle and Red Hat. "Without such an interoperable standard, companies have been rushing to build specialized AI agents," writes ZDNet's Steven Vaughan-Nichols. "These work in isolated silos that cannot work and play well with each other. This, in turn, makes them less useful for customers than they could be." From the report: AGNTCY was first open-sourced by Cisco in March 2025 and has since attracted support from over 75 companies. By moving it under the Linux Foundation's neutral governance, the hope is that everyone else will jump on the AGNTCY bandwagon, thus making it an industry-wide standard. The Linux Foundation has a long history of providing common ground for what otherwise might be contentious technology battles. The project provides a complete framework to solve the core challenges of multi-agent collaboration:

- Agent Discovery: An Open Agent Schema Framework (OASF) acts like a "DNS for agents," allowing them to find and understand the capabilities of others.
- Agent Identity: A system for cryptographically verifiable identities ensures agents can prove who they are and perform authorized actions securely across different vendors and organizations.
- Agent Messaging: A protocol named Secure Low-latency Interactive Messaging (SLIM) is designed for the complex, multi-modal communication patterns of agents, with built-in support for human-in-the-loop interaction and quantum-safe security.
- Agent Observability: A specialized monitoring framework provides visibility into complex, multi-agent workflows, which is crucial for debugging probabilistic AI systems.
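The "DNS for agents" discovery idea above can be sketched as a capability-indexed directory. The record fields and agent names below are illustrative, not the actual OASF schema:

```python
# Hypothetical sketch of capability-based agent discovery, in the spirit
# of a "DNS for agents". Field names are illustrative, not the OASF schema.

directory = {}

def register(agent):
    """Index an agent record under each capability it advertises."""
    for cap in agent["capabilities"]:
        directory.setdefault(cap, []).append(agent)

def discover(capability):
    """Return the names of agents advertising a given capability."""
    return [a["name"] for a in directory.get(capability, [])]

register({"name": "translator-agent",
          "endpoint": "https://agents.example/translate",
          "capabilities": ["translate"]})
register({"name": "summarizer-agent",
          "endpoint": "https://agents.example/summarize",
          "capabilities": ["summarize", "translate"]})

print(discover("translate"))  # ['translator-agent', 'summarizer-agent']
```

A real directory would add the cryptographic identity layer described above, so callers can verify that a discovered endpoint actually belongs to the agent it claims to be.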

You may well ask, aren't there other emerging AI agency standards? You're right. There are. These include the Agent2Agent (A2A) protocol, which was also recently contributed to the Linux Foundation, and Anthropic's Model Context Protocol (MCP). AGNTCY will help agents using these protocols discover each other and communicate securely. In more detail, it looks like this: AGNTCY enables interoperability and collaboration in three primary ways:

- Discovery: Agents using the A2A protocol and servers using MCP can be listed and found through AGNTCY's directories. This enables different agents to discover each other and understand their functions.
- Messaging: A2A and MCP communications can be transported over SLIM, AGNTCY's messaging protocol designed for secure and efficient agent interaction.
- Observability: The interactions between these different agents and protocols can be monitored using AGNTCY's observability software development kits (SDKs), which increase transparency and help with debugging complex workflows.
You can view AGNTCY's code and documentation on GitHub.
Google

Google Failed To Warn 10 Million of Turkey Earthquake Severity (bbc.com) 16

Google has admitted its earthquake early warning system failed to accurately alert people during Turkey's deadly quake of 2023. From a report: Ten million people within 98 miles of the epicentre could have been sent Google's highest level alert -- giving up to 35 seconds of warning to find safety. Instead, only 469 "Take Action" warnings were sent out for the first 7.8 magnitude quake.

Google told the BBC half a million people were sent a lower level warning, which is designed for "light shaking", and does not alert users in the same prominent way. The tech giant previously told the BBC the system had "performed well" after an investigation in 2023. The alerts system is available in just under 100 countries -- and is described by Google as a "global safety net" often operating in countries with no other warning system. Google's system, named Android Earthquake Alerts (AEA), is run by the Silicon Valley firm -- not individual countries.

Power

AI Boom Sparks Fight Over Soaring Power Costs 88

Utilities across the U.S. are demanding tech companies pay larger shares of electricity infrastructure costs as AI drives unprecedented data center construction, creating tensions over who bears the financial burden of grid upgrades.

Virginia utility Dominion Energy received requests from data center developers requiring 40 gigawatts of electricity by the end of 2024, enough to power at least 10 million homes, and proposed measures requiring longer-term contracts and guaranteed payments. Ohio became one of the first states to mandate companies pay more connection costs after receiving power requests exceeding 50 times existing data center usage.
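The "at least 10 million homes" framing is conservative, as a quick estimate shows. The average household draw below is an assumption (roughly 1.2 kW, about 10,500 kWh per year, in line with typical US figures):

```python
# Sanity check on "40 GW = at least 10 million homes".
# Average household draw of ~1.2 kW is an assumption.

demand_gw = 40
avg_home_kw = 1.2
homes = demand_gw * 1_000_000 / avg_home_kw
print(int(homes))  # -> 33333333, comfortably "at least 10 million"
```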

Tech giants Microsoft, Google, and Amazon plan to spend $80 billion, $85 billion, and $100 billion respectively this year on AI infrastructure, while utilities worry that grid upgrade costs will increase rates for residential customers.

Further reading: The AI explosion means millions are paying more for electricity
Businesses

Tesla Signs $16.5 Billion Contract With Samsung To Make AI Chips 51

An anonymous reader quotes a report from CNBC: Samsung Electronics has entered into a $16.5 billion contract for supplying semiconductors to Tesla, based on a regulatory filing by the South Korean firm and Tesla CEO Elon Musk's posts on X. The memory chipmaker, which had not named the counterparty, mentioned in its filing that the effective start date of the contract was July 26, 2025 -- receipt of orders -- and its end date was Dec. 31, 2033. However, Musk later confirmed in a reply to a post on social media platform X that Tesla was the counterparty.

He also posted: "Samsung's giant new Texas fab will be dedicated to making Tesla's next-generation AI6 chip. The strategic importance of this is hard to overstate. Samsung currently makes AI4. TSMC will make AI5, which just finished design, initially in Taiwan and then Arizona. Samsung agreed to allow Tesla to assist in maximizing manufacturing efficiency. This is a critical point, as I will walk the line personally to accelerate the pace of progress," Musk said on X, and suggested that the deal with Samsung could likely be even larger than the announced $16.5 billion.

Samsung earlier said that details of the deal, including the name of the counterparty, will not be disclosed until the end of 2033, citing a request from the second party "to protect trade secrets," according to a Google translation of the filing in Korean on Monday. "Since the main contents of the contract have not been disclosed due to the need to maintain business confidentiality, investors are advised to invest carefully considering the possibility of changes or termination of the contract," the company said.
Open Source

Google's New Security Project 'OSS Rebuild' Tackles Package Supply Chain Verification (googleblog.com) 13

This week Google's Open Source Security Team announced "a new project to strengthen trust in open source package ecosystems" — by reproducing upstream artifacts.

It includes automation to derive declarative build definitions, new "build observability and verification tools" for security teams, and even "infrastructure definitions" to help organizations rebuild, sign, and distribute provenance by running their own OSS Rebuild instances. (And as part of the initiative, the team also published SLSA Provenance attestations "for thousands of packages across our supported ecosystems.") Our aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency by utilizing a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produces fine-grained, durable, trustworthy security metadata. Building on the hosted infrastructure model that we pioneered with OSS Fuzz for memory issue detection, OSS Rebuild similarly seeks to use hosted resources to address security challenges in open source, this time aimed at securing the software supply chain... We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries — providing rebuild provenance for many of their most popular packages — is just the beginning of our journey...

OSS Rebuild helps detect several classes of supply chain compromise:

- Unsubmitted Source Code: When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.

- Build Environment Compromise: By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.

- Stealthy Backdoors: Even sophisticated backdoors like the one found in xz Utils often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.
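The core mechanism behind all three checks is rebuild-and-compare: reproduce the artifact from public source in a clean environment, then compare digests against what the registry serves. A minimal sketch (function names are illustrative, not OSS Rebuild's API):

```python
# Rebuild-based verification in miniature: if a clean-room rebuild of the
# public source does not match the published artifact byte-for-byte, the
# package may contain unsubmitted code or a compromised build step.
# Function names are illustrative, not OSS Rebuild's actual API.

import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def rebuild_matches(published: bytes, rebuilt: bytes) -> bool:
    """True when the published artifact is reproducible from source."""
    return digest(published) == digest(rebuilt)

print(rebuild_matches(b"wheel-bytes", b"wheel-bytes"))    # True: attestable
print(rebuild_matches(b"wheel-bytes", b"wheel-bytes+x"))  # False: flag it
```

In practice reproducibility also requires normalizing timestamps, file ordering, and other build nondeterminism before the hashes can be expected to match.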


For enterprises and security professionals, OSS Rebuild can...

- Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.

- Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture...

- Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions...


The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface.

"With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention."
China

Huawei Shows Off 384-Chip AI Computing System That Rivals Nvidia's Top Product (msn.com) 118

Long-time Slashdot reader hackingbear writes: China's Huawei Technologies showed off an AI computing system on Saturday that can rival Nvidia's most advanced offering, even though the company faces U.S. export restrictions. The CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth. The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips, optically connected through an all-to-all topology, and outperforms Nvidia's GB200 NVL72 on some metrics, which uses 72 B200 chips, according to SemiAnalysis. A full CloudMatrix system can now deliver 300 PFLOPs of dense BF16 compute, almost double that of the GB200 NVL72. With more than 3.6x aggregate memory capacity and 2.1x more memory bandwidth, Huawei and China "now have AI system capabilities that can beat Nvidia's," according to a report by SemiAnalysis.

The trade-off is that it takes 4.1x the power of a GB200 NVL72, with 2.5x worse power per FLOP, 1.9x worse power per TB/s memory bandwidth, and 1.2x worse power per TB HBM memory capacity, but SemiAnalysis noted that China has no power constraints, only chip constraints. Nvidia had announced DGX H100 NVL256 "Ranger" Platform [with 256 GPUs], SemiAnalysis writes, but "decided to not bring it to production due to it being prohibitively expensive, power hungry, and unreliable due to all the optical transceivers required and the two tiers of network. The CloudMatrix Pod requires an incredible 6,912 400G LPO transceivers for networking, the vast majority of which are for the scaleup network."
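The quoted ratios are internally consistent, which is a quick check worth doing on vendor-comparison figures. Taking the GB200 NVL72 at roughly 180 PFLOPs dense BF16 (an assumption implied by "almost double"):

```python
# Consistency check on the SemiAnalysis figures quoted above.
# The NVL72 baseline of ~180 PFLOPs dense BF16 is an assumption
# inferred from "almost double"; the rest are the quoted numbers.

cloudmatrix_pflops = 300
nvl72_pflops = 180
power_ratio = 4.1  # CloudMatrix draws 4.1x the power

compute_ratio = cloudmatrix_pflops / nvl72_pflops  # ~1.67x the compute
power_per_flop_penalty = power_ratio / compute_ratio
print(round(power_per_flop_penalty, 1))  # -> 2.5, matching the quoted figure
```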

Also at this event, Chinese e-commerce giant Alibaba released a new flagship open-source reasoning model, Qwen3-235B-A22B-Thinking-2507, which has "already topped key industry benchmarks, outperforming powerful proprietary systems from rivals like Google and OpenAI," according to industry reports. On the AIME25 benchmark, a test designed to evaluate sophisticated, multi-step problem-solving skills, Qwen3-Thinking-2507 achieved a remarkable score of 92.3, placing it ahead of some of the most powerful proprietary models, notably Google's Gemini-2.5 Pro. On LiveCodeBench, it secured a top score of 74.1, comfortably ahead of both Gemini-2.5 Pro and OpenAI's o4-mini, demonstrating its practical utility for developers and engineering teams.
United Kingdom

VPN Downloads Surge in UK as New Age-Verification Rules Take Effect (msn.com) 96

Proton VPN reported a 1,400 percent hourly increase in signups over its baseline Friday — the day the UK's age verification law went into effect. For UK users, "apps with explicit content must now verify visitors' ages via methods such as facial recognition and banking info," notes Mashable: Proton VPN previously documented a 1,000 percent surge in new subscribers in June after Pornhub left France, its second-biggest market, amid the enactment of an age verification law there... A Proton VPN spokesperson told Mashable that it saw an increase in new subscribers right away at midnight Friday, then again at 9 a.m. BST. The company anticipates further surges over the weekend, they added. "This clearly shows that adults are concerned about the impact universal age verification laws will have on their privacy," the spokesperson said... Search interest for the term "Proton VPN" also saw a seven-day spike in the UK around 2 a.m. BST Friday, according to a Google Trends chart.
The Financial Times notes that VPN apps "made up half of the top 10 most popular free apps on the UK's App Store for iOS this weekend, according to Apple's rankings." Proton VPN leapfrogged ChatGPT to become the top free app in the UK, according to Apple's daily App Store charts, with similar services from developers Super Unlimited and Nord Security also rising over the weekend... Data from Google Trends also shows a significant increase in search queries for VPNs in the UK this weekend, with up to 10 times more people looking for VPNs at peak times...

"This is what happens when people who haven't got a clue about technology pass legislation," Anthony Rose, a UK-based tech entrepreneur who helped to create BBC iPlayer, the corporation's streaming service, said in a social media post. Rose said it took "less than five minutes to install a VPN" and that British people had become familiar with using them to access the iPlayer outside the UK. "That's the beauty of VPNs. You can be anywhere you like, and anytime a government comes up with stupid legislation like this, you just turn on your VPN and outwit them," he added...

Online platforms found in breach of the new UK rules face penalties of up to £18mn or 10 percent of global turnover, whichever is greater... However, opposition to the new rules has grown in recent days. A petition submitted through the UK parliament website demanding that the Online Safety Act be repealed has attracted more than 270,000 signatures, with the vast majority submitted in the past week. Ministers must respond to a petition, and parliament has to consider its topic for a debate, if signatures surpass 100,000.

X, Reddit and TikTok have also "introduced new 'age assurance' systems and controls for UK users," according to the article. But Mashable summarizes the situation succinctly.

"Initial research shows that VPNs make age verification laws in the U.S. and abroad tricky to enforce in practice."
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he wrote it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. The program came with a generator for fake credit card numbers (which could fool AOL's sign-up process) and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
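
The fake-card trick worked because AOL reportedly validated numbers offline at sign-up, checking only that they were well-formed (via the standard Luhn mod-10 check) rather than contacting a bank. A minimal sketch of how a Luhn-valid number is produced — the function names are mine, not AOHell's:

```python
import random

def luhn_checksum(digits):
    """Luhn mod-10 checksum: double every second digit from the right,
    subtracting 9 from any doubled digit above 9, then sum mod 10."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def make_luhn_valid(prefix, length=16):
    """Generate a number with the given prefix whose Luhn checksum is 0."""
    body = [int(c) for c in prefix]
    body += [random.randint(0, 9) for _ in range(length - len(body) - 1)]
    # Choose the final check digit so the whole number passes the Luhn test
    check = (10 - luhn_checksum(body + [0])) % 10
    return "".join(map(str, body + [check]))

number = make_luhn_valid("4")  # "4" prefix: Visa-style, illustrative only
assert luhn_checksum([int(c) for c in number]) == 0
```

A number like this is syntactically valid but not a real account, which is why the ruse collapsed once AOL began verifying cards with payment processors.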

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
