Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com) 17

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed, but at the time it presented a privacy risk in which even hackers with relatively few resources could have brute forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.

Brutecat said in an email that the brute forcing takes around one hour for a U.S. number, or around eight minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains that an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, with the result that the target is never notified of the ownership switch. Using some custom code, which they detailed in their write-up, brutecat then barrages Google with guesses of the phone number until getting a hit.
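None of brutecat's actual code is reproduced here, but the core of any such attack is plain enumeration. A minimal Python sketch of the idea, where `check_number` is a hypothetical stand-in for the (since-patched) validation endpoint:

```python
from itertools import product

def brute_force_number(prefix: str, unknown_digits: int, check_number) -> str | None:
    """Enumerate every digit combination for the unknown portion of a number.

    `prefix` is the part already known (e.g. the country code), and
    `check_number` is a hypothetical callback that returns True on a match;
    in the real exploit this was a request to a Google endpoint, since fixed.
    """
    for combo in product("0123456789", repeat=unknown_digits):
        candidate = prefix + "".join(combo)
        if check_number(candidate):
            return candidate
    return None

# A UK mobile number leaves roughly 8 unknown digits after the prefix,
# i.e. 10**8 candidates -- a space small enough to exhaust in minutes
# at high request rates, consistent with the timings brutecat reported.
```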

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
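For readers unfamiliar with the mechanism the article is criticizing, text generation at the output step reduces to sampling from a probability distribution over possible next tokens. A toy sketch with made-up numbers, not a real model:

```python
import random

# Toy next-token distribution: a real LLM computes these probabilities
# with billions of parameters, but the final step is the same -- sample
# the statistically likely continuation of "the cat sat on the ...".
next_token_probs = {
    "mat": 0.55,
    "sofa": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights, k=1)[0])
```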
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute Eleuther AI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
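The automatable part of that process is easy to sketch. Below is a simplified Python illustration of the license-filter stage, with hypothetical record fields rather than the researchers' actual pipeline; it is precisely the unknown and mislabeled cases that required the human checking Biderman describes:

```python
# Hypothetical record format; the real effort layered manual annotation
# and verification on top of automated steps like this one.
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "public-domain"}

def openly_licensed(doc: dict) -> bool:
    """Keep only documents whose declared license is verifiably open."""
    license_tag = doc.get("license")
    if license_tag is None:
        return False  # unknown license: excluded, not guessed
    return license_tag in OPEN_LICENSES

corpus = [
    {"text": "...", "license": "CC-BY-4.0"},
    {"text": "...", "license": None},          # needs human review
    {"text": "...", "license": "proprietary"},
]
clean = [d for d in corpus if openly_licensed(d)]
print(f"kept {len(clean)} of {len(corpus)} documents")
```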

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices.

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Intel

Top Researchers Leave Intel To Build Startup With 'The Biggest, Baddest CPU' (oregonlive.com) 104

An anonymous reader quotes a report from OregonLive: Together, the four founders of Beaverton startup AheadComputing spent nearly a century at Intel. They were among Intel's top chip architects, working years in advance to develop new generations of microprocessors to power the computers of the future. Now they're on their own, flying without a net, building a new class of microprocessor on an entirely different architecture from Intel's. Founded a year ago, AheadComputing is trying to prove there's a better way to design computer chips.

"AheadComputing is doing the biggest, baddest CPU in the world," said Debbie Marr, the company's CEO. [...] AheadComputing is betting on an open architecture called RISC-V -- RISC stands for "reduced instruction set computer." The idea is to craft a streamlined microprocessor that works more efficiently by doing fewer things, and doing them better than conventional processors. For AheadComputing's founders and 80 employees, many of them also Intel alumni, it's a major break from the kind of work they've been doing all their careers. They've left a company with more than 100,000 workers to start a business with fewer than 100.

"Every person in this room," Marr said, looking across a conference table at her colleagues, "we could have stayed at Intel. We could have continued to do very exciting things at Intel." They decided they had a better chance at leading a revolution in semiconductor technology at a startup than at a big, established company like Intel. And AheadComputing could be at the forefront of renewal in Oregon's semiconductor ecosystem. "We see this opportunity, this light," Marr said. "We took our chances."
It'll be years before AheadComputing's designs are on the market, but the company "envisions its chips will someday power PCs, laptops and data centers," reports OregonLive. "Possible clients could include Google, Amazon, Samsung or other large computing companies."
Botnet

FBI: BadBox 2.0 Android Malware Infects Millions of Consumer Devices (bleepingcomputer.com) 8

An anonymous reader quotes a report from BleepingComputer: The FBI is warning that the BADBOX 2.0 malware campaign has infected over 1 million home Internet-connected devices, converting consumer electronics into residential proxies that are used for malicious activity. The BADBOX botnet is commonly found on Chinese Android-based smart TVs, streaming boxes, projectors, tablets, and other Internet of Things (IoT) devices. "The BADBOX 2.0 botnet consists of millions of infected devices and maintains numerous backdoors to proxy services that cyber criminal actors exploit by either selling or providing free access to compromised home networks to be used for various criminal activity," warns the FBI.

These devices come preloaded with the BADBOX 2.0 malware botnet or become infected after installing firmware updates and through malicious Android applications that sneak onto Google Play and third-party app stores. "Cyber criminals gain unauthorized access to home networks by either configuring the product with malicious software prior to the user's purchase or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process," explains the FBI. "Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity."

Once infected, the devices connect to the attacker's command and control (C2) servers, where they receive commands to execute, such as [routing malicious traffic through residential IPs to obscure cybercriminal activity, performing background ad fraud to generate revenue, and launching credential-stuffing attacks using stolen login data]. The botnet continued expanding until 2024, when Germany's cybersecurity agency disrupted it in the country by sinkholing the communication between infected devices and the attacker's infrastructure, effectively rendering the malware useless. However, that did not stop the threat actors, with researchers saying they found the malware installed on 192,000 devices a week later. Even more concerning, the malware was found on more mainstream brands, like Yandex TVs and Hisense smartphones. Despite the earlier disruption, the botnet continued to grow, with HUMAN's Satori Threat Intelligence stating that over 1 million consumer devices had become infected by March 2025. This new, larger botnet is now tracked as BADBOX 2.0.
"This scheme impacted more than 1 million consumer devices. Devices connected to the BADBOX 2.0 operation included lower-price-point, 'off brand,' uncertified tablets, connected TV (CTV) boxes, digital projectors, and more," explains HUMAN.

"The infected devices are Android Open Source Project devices, not Android TV OS devices or Play Protect certified Android devices. All of these devices are manufactured in mainland China and shipped globally; indeed, HUMAN observed BADBOX 2.0-associated traffic from 222 countries and territories worldwide."
China

China Will Drop the Great Firewall For Some Users To Boost Free-Trade Port Ambitions (scmp.com) 49

China's southernmost province of Hainan is piloting a programme to grant select corporate users broad access to the global internet, a rare move in a country known for having some of the world's most restrictive online censorship, as the island seeks to transform itself into a global free-trade port. From a report: Employees of companies registered and operating in Hainan can apply for the "Global Connect" mobile service through the Hainan International Data Comprehensive Service Centre (HIDCSC), according to the agency, which is overseen by the state-run Hainan Big Data Development Centre.

The programme allows eligible users to bypass the so-called Great Firewall, which blocks access to many of the world's most-visited websites, such as Google and Wikipedia. Applicants must be on a 5G plan with one of the country's three major state-backed carriers -- China Mobile, China Unicom or China Telecom -- and submit their employer's information, including the company's Unified Social Credit Code, for approval. The process can take up to five months, HIDCSC staff said.

Chrome

Google Chrome Smashes Speedometer 3 Record With Massive Performance Gains (betanews.com) 40

BrianFagioli writes: Google is flexing its engineering muscles today by announcing a record-breaking score on the Speedometer 3 benchmark with its Chrome browser. If you've felt like the web got snappier lately, this could be why.

According to the search giant, Chrome's latest performance improvements translate to real-world time savings. Believe it or not, that could add up to 58 million hours saved annually for users. That's the equivalent of about 83 human lifetimes not wasted waiting for web pages to load!
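The lifetime comparison holds up as rough arithmetic, assuming an ~80-year lifespan:

```python
hours_saved = 58_000_000                 # Google's claimed annual savings
hours_per_lifetime = 80 * 365.25 * 24    # ~80-year lifespan assumption
print(hours_saved / hours_per_lifetime)  # ~82.7 lifetimes
```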

AI

Anthropic CEO Warns 'All Bets Are Off' in 10 Years, Opposes AI Regulation Moratorium (nytimes.com) 50

Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state AI regulation currently under consideration by the Senate, arguing instead for federal transparency standards in a New York Times opinion piece published Thursday. Amodei said Anthropic's latest AI model demonstrated threatening behavior during experimental testing, including scenarios where the system threatened to expose personal information to prevent being shut down. He writes: But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop. The disclosure comes as similar concerning behaviors have emerged from other major AI developers -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown, while Google acknowledged its Gemini model approaches capabilities that could enable cyberattacks. Rather than blocking state oversight entirely, Amodei proposed requiring frontier AI developers to publicly disclose their testing policies and risk mitigation strategies on company websites, codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily.
Programming

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job (businessinsider.com) 79

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said at a fireside chat in May at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that."

In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Google

Waymo Set To Double To 20 Million Rides As Self-Driving Reaches Tipping Point (msn.com) 47

Google's self-driving taxi service Waymo has surpassed 10 million total paid rides, marking a significant milestone in the transition of autonomous vehicles from novelty to mainstream transportation option. The company's growth trajectory, WSJ argues, shows clear signs of exponential scaling, with weekly rides jumping from 10,000 in August 2023 to over 250,000 currently. Waymo is on track to hit 20 million rides by the end of 2025. The story adds: This is not just because Waymo is expanding into new markets. It's because of the way existing markets have come to embrace self-driving cars.

In California, the most recent batch of quarterly data reported by the company was the most encouraging yet. It showed that Waymo's number of paid rides inched higher by roughly 2% in both January and February -- and then increased 27% in March. In the nearly two years that people in San Francisco have been paying for robot chauffeurs, it was the first time that Waymo's growth slowed down for several months only to dramatically speed up again.
Waymo currently operates in Phoenix, Los Angeles, and San Francisco, with expansion planned for Austin, Atlanta, Miami, and Washington D.C. The service faces incoming competition from Tesla, which plans to launch its own robotaxi service in Austin this month. Waymo remains unprofitable despite raising $5.6 billion in funding last year.
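A quick back-of-the-envelope check of those figures (the number of weeks remaining in the year is our assumption, not WSJ's):

```python
total_rides = 10_000_000   # paid rides to date
weekly_rides = 250_000     # current weekly rate
weeks_left = 30            # assumed weeks remaining in 2025

# At a flat current rate Waymo would finish the year around 17.5M rides;
# reaching 20M therefore bakes in continued week-over-week growth.
print(f"{total_rides + weekly_rides * weeks_left:,}")  # 17,500,000
```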
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice first came to light in 2023 when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing it, noting that it also applied to Google. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they also might receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month periods running from July 2022 to June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data to Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
Businesses

Fake IT Support Calls Hit 20 Orgs, End in Stolen Salesforce Data and Extortion, Google Warns (theregister.com) 8

A group of financially motivated cyberscammers who specialize in Scattered-Spider-like fake IT support phone calls managed to trick employees at about 20 organizations into installing a modified version of Salesforce's Data Loader that allows the criminals to steal sensitive data. From a report: Google Threat Intelligence Group (GTIG) tracks this crew as UNC6040, and in research published today said they specialize in voice-phishing campaigns targeting Salesforce instances for large-scale data theft and extortion.

These attacks began around the beginning of the year, GTIG principal threat analyst Austin Larsen told The Register. "Our current assessment indicates that a limited number of organizations were affected as part of this campaign, approximately 20," he said. "We've seen UNC6040 targeting hospitality, retail, education and various other sectors in the Americas and Europe." The criminals are really good at impersonating IT support personnel and tricking employees at English-speaking branches of multinational corporations into downloading a modified version of Data Loader, a Salesforce app that allows users to export and update large amounts of data.

AI

ChatGPT Adds Enterprise Cloud Integrations For Dropbox, Box, OneDrive, Google Drive, Meeting Transcription 17

OpenAI is expanding ChatGPT's enterprise capabilities with new integrations that connect the chatbot directly to business cloud services and productivity tools. The Microsoft-backed startup announced connectors for Dropbox, Box, SharePoint, OneDrive and Google Drive that allow ChatGPT to search across users' organizational documents and files to answer questions, such as helping analysts build investment theses from company slide decks.

The update includes meeting recording and transcription features that generate timestamped notes and suggest action items, competing directly with similar offerings from ClickUp, Zoom, and Notion. OpenAI also introduced beta connectors for HubSpot, Linear, and select Microsoft and Google tools for deep research reports, plus Model Context Protocol support for Pro, Team, and Enterprise users.
Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near-and-dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io) 77

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of web sites. These JavaScripts load on users' mobile browsers and silently connect with native apps running on the same device through localhost sockets. Because native apps can programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, de-anonymizing users visiting sites that embed their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.
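The mechanism reduces to two ordinary pieces: a native app holding open a loopback socket, and page JavaScript posting identifiers to it. A stripped-down Python sketch of the listening side (the real apps do this in native Android code, and the port shown is illustrative):

```python
import socket

# Sketch of the native-app side: any app with the INTERNET permission
# can bind a loopback port like this. A tracking script on a web page
# then sends requests to 127.0.0.1:<port>, handing the app cookies and
# metadata it can join with the device's advertising ID.
def listen_for_web_identifiers(port: int = 12387) -> None:  # port illustrative
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))  # loopback only: invisible on the network
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                payload = conn.recv(4096).decode(errors="replace")
                # In the reported abuse, web identifiers arrived here and
                # were linked to the logged-in user's app identity.
                print("received from browser:", payload[:120])
```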

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Google

Google Settles Shareholder Lawsuit, Will Spend $500 Million On Being Less Evil (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: It has become a common refrain during Google's antitrust saga: What happened to "don't be evil?" Google's unofficial motto has haunted it as it has grown ever larger, but a shareholder lawsuit sought to rein in some of the company's excesses. And it might be working. The plaintiffs in the case have reached a settlement with Google parent company Alphabet, which will spend a boatload of cash on "comprehensive" reforms. The goal is to steer Google away from the kind of anticompetitive practices that got it in hot water.

Under the terms of the settlement, obtained by Bloomberg Law, Alphabet will spend $500 million over the next 10 years on systematic reforms. The company will have to form a board-level committee devoted to overseeing the company's regulatory compliance and antitrust risk, a rarity for US firms. This group will report directly to CEO Sundar Pichai. There will also be reforms at other levels of the company that allow employees to identify potential legal pitfalls before they affect the company. Google has also agreed to preserve communications. Google's propensity to use auto-deleting chats drew condemnation from several judges overseeing its antitrust cases. The agreement still needs approval from US District Judge Rita Lin in San Francisco, but that's mainly a formality at this point. Naturally, Alphabet does not admit to any wrongdoing under the terms of the settlement, but it may have to pay tens of millions in legal fees on top of the promised $500 million investment.

Google

Microsoft, Google, Others Team Up To Standardize Confusing Hacker Group Nicknames 20

Microsoft, CrowdStrike, Palo Alto Networks, and Google announced Monday they will create a public glossary standardizing the nicknames used for state-sponsored hacking groups and cybercriminals.

The initiative aims to reduce confusion caused by the proliferation of disparate naming conventions across cybersecurity firms, which have assigned everything from technical designations like "APT1" to colorful monikers like "Cozy Bear" and "Kryptonite Panda" to the same threat actors. The companies hope to bring additional industry partners and the U.S. government into the effort to streamline identification of digital espionage groups.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the costs of training a model (also unprecedented) is up to $1 billion, inference costs — for example, those paying to use the tech — has already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
