Google

Google Defeats RNC Lawsuit Claiming Email Spam Filters Harmed Republican Fundraising 84

A U.S. judge has thrown out a Republican National Committee lawsuit accusing Alphabet's Google of intentionally misdirecting the political party's email messages to users' spam folders. From a report: U.S. District Judge Daniel Calabretta in Sacramento, California, on Wednesday dismissed the RNC's lawsuit for a second time, and said the organization would not be allowed to refile it. While expressing some sympathy for the RNC's allegations, he said it had not made an adequate case that Google violated California's unfair competition law.

The lawsuit alleged Google had intentionally or negligently sent RNC fundraising emails to Gmail users' spam folders and cost the group hundreds of thousands of dollars in potential donations. Google denied any wrongdoing.
Chrome

Chrome is Going To Use AI To Help You Compare Products From Across Your Tabs 41

Google wants to help ease the pain of comparison shopping across multiple tabs in Chrome with a new AI-powered tool that can summarize your tabs into one page. From a report: The tool, which Google is calling "tab compare," will use generative AI to pull product data from tabs you have open and collect it all into one table. Assuming it works and pulls accurate information, the tool seems like it could be a handy way to look at a number of different products in one unified view.

But while it's potentially useful, the tool could also take away traffic from sites that collect and compare product information -- which might be especially worrying for independent publishers that are already struggling to be seen on Google. I'm also skeptical that Google will correctly pull all of the finer details about various products into the tables it creates with tab compare. I don't always trust Google's accuracy right now! There are some limits on what tab compare can do. The tables it creates are limited to 10 items because "we've just found the column layout doesn't scale very well beyond that," Google spokesperson Joshua Cruz tells The Verge.
Mozilla

Mozilla Follows Google in Losing Trust in Entrust's TLS Certificates (theregister.com) 14

Mozilla is following in Google Chrome's footsteps in officially distrusting Entrust as a root certificate authority (CA) following what it says was a protracted period of compliance failures. From a report: A little over a month ago, Google was the first to make the bold step of dropping Entrust as a CA, saying it noted a "pattern of concerning behaviors" from the company. Entrust has apologized to Google, Mozilla, and the wider web community, outlining its plans to regain the trust of browsers, but these appear to be unsatisfactory to both Google and Mozilla.

In an email shared by Mozilla's Ben Wilson on Wednesday, the root store manager said the decision wasn't taken lightly, but equally Entrust's response to Mozilla's concerns didn't inspire confidence that the situation would materially change for the better. "Mozilla previously requested that Entrust provide a detailed report on these recent incidents and their root causes, an evaluation of Entrust's recent actions in light of their previous commitments given in the aftermath of similarly serious incidents in 2020, and a proposal for how Entrust will re-establish Mozilla's and the community's trust," said Wilson.
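
For readers wondering whether a site they rely on chains up to an Entrust root, here is a rough sketch using Python's standard ssl module (our illustration, not an official tool; the hostname is a placeholder):

    import socket, ssl

    def leaf_issuer(hostname, port=443):
        # Fetch the server's certificate and return its issuer fields.
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        return dict(pair for rdn in cert["issuer"] for pair in rdn)

    issuer = leaf_issuer("example.com")
    print(issuer.get("organizationName"))
    # Note: this shows the leaf's immediate issuer (usually an intermediate
    # CA); tracing back to the root requires inspecting the full chain.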

Social Networks

Reddit CEO Says Microsoft and Others Need To Pay To Search the Site (theverge.com) 78

After striking deals with Google and OpenAI, Reddit CEO Steve Huffman is calling on Microsoft and others to pay if they want to continue scraping the site's data. From a report: "Without these agreements, we don't have any say or knowledge of how our data is displayed and what it's used for, which has put us in a position now of blocking folks who haven't been willing to come to terms with how we'd like our data to be used or not used," Huffman said in an interview this week. He specifically called out Microsoft, Anthropic, and Perplexity for refusing to negotiate, saying it has been "a real pain in the ass to block these companies."

Reddit has been escalating its fight against crawlers in recent months. At the beginning of July, its robots.txt file was updated to block web crawlers it doesn't have agreements with. Then people began noticing that Reddit results were only visible in Google results -- where Reddit is paid for its data to be shown -- and not other search engines like Bing. Huffman said that Microsoft has been using Reddit's data to train its AI and summarizing its content in Bing results "without telling us" and that Reddit's data has also been sold through the Bing API to other search engines.
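
The robots.txt mechanism at issue is simple: well-behaved crawlers fetch the file and check it before crawling. A minimal check with Python's standard library (the unlicensed crawler's user-agent string is a hypothetical example):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.reddit.com/robots.txt")
    rp.read()

    # Compare a hypothetical unlicensed crawler against Googlebot.
    for agent in ("SomeAICrawler", "Googlebot"):
        print(agent, rp.can_fetch(agent, "https://www.reddit.com/r/programming/"))

Of course, robots.txt is purely advisory, which is why Huffman describes actively blocking the crawlers that ignore it.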

AI

Google Updates Its Search Algorithm To Tackle AI Deepfakes (pcmag.com) 8

Google is updating its search algorithm and removal request process to make it easier for victims to combat unwanted sexually explicit AI deepfakes. "When reported AI deepfakes are identified, Google Search will automatically filter out related search results that might pop up in the future so users won't have to repeatedly report similar images or duplicates of an image to Google," reports PCMag. Additionally, Google will demote sites repeatedly hosting non-consensual deepfakes and aims to differentiate between consensual and non-consensual explicit content. From the report: Google says its Search algorithm update will lower the chances of explicit deepfakes appearing in Search. The search engine will also attempt to differentiate between real sexually explicit content made consensually (such as adult film stars' work, for example) and AI-generated media made without the person's consent. But Google says doing this is a "technical challenge," so these efforts may not be entirely accurate or effective. Regardless, Google claims that the changes it's already made to Search have reduced the resurfacing of such deepfakes by more than 70%. "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images," Google said.
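
Google hasn't disclosed how it matches "duplicates of an image," but perceptual hashing is a standard technique for this kind of near-duplicate detection. A minimal average-hash sketch (using the Pillow library; purely an illustration of the idea, not Google's system, and the filenames are hypothetical):

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to an 8x8 grayscale thumbnail, then set one bit per pixel
        # that is brighter than the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Hashes that differ in only a few bits likely depict the same image,
    # even after resizing or recompression:
    # hamming(average_hash("reported.png"), average_hash("candidate.png")) < 5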
Google

W3C Slams Google U-turn on Third-Party Cookie Removal (w3.org) 26

The World Wide Web Consortium (W3C) has expressed disappointment with Google's decision to retain third-party cookies, stating it undermines collaborative efforts. Google's reversal follows a five-year initiative to develop privacy-focused ad technology. While some advertising industry representatives welcomed the move, the W3C's criticism highlights the ongoing debate over online privacy and advertising practices. W3C writes: Third-party cookies are not good for the web. They enable tracking, which involves following your activity across multiple websites. They can be helpful for use cases like login and single sign-on, or putting shopping choices into a cart -- but they can also be used to invisibly track your browsing activity across sites for surveillance or ad-targeting purposes. This hidden personal data collection hurts everyone's privacy.

We aren't the only ones who are worried. The updated RFC that defines cookies says that third-party cookies have "inherent privacy issues" and that therefore web "resources cannot rely upon third-party cookies being treated consistently by user agents for the foreseeable future." We agree. Furthermore, tracking and subsequent data collection and brokerage can support micro-targeting of political messages, which can have a detrimental impact on society, as identified by Privacy International and other organizations. Regulatory authorities, such as the UK's Information Commissioner's Office, have also called for the blocking of third-party cookies.
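
Mechanically, the tracking the W3C describes needs nothing exotic. When many sites embed a resource from the same third party, the browser sends that party's cookie on every one of those page loads, along these lines (hostnames are illustrative):

    GET /pixel.gif HTTP/1.1
    Host: tracker.example
    Referer: https://news-site-a.example/article
    Cookie: uid=abc123

    GET /pixel.gif HTTP/1.1
    Host: tracker.example
    Referer: https://shop-b.example/checkout
    Cookie: uid=abc123

The same uid arriving from both sites lets tracker.example join the two visits into a single browsing profile, with neither site's operator necessarily aware of it.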

The job of the TAG as stewards of the architecture of the web has us looking at the big picture (the whole web platform) and the details (proposed features and specs). We try to provide guidance to spec authors so that their new technologies fill holes that need to be filled, don't conflict with other parts of the web, and don't set us up for avoidable trouble in the future. We've been working with Chrome's Privacy Sandbox team (as well as others in the W3C community) for several years, trying to help them create better approaches for the things that third-party cookies do. While we haven't always agreed with the Privacy Sandbox team, we have made substantial progress together. This announcement came out of the blue, and undermines a lot of the work we've done together to make the web work without third-party cookies.

The unfortunate climb-down will also have secondary effects, as it is likely to delay cross-browser work on effective alternatives to third-party cookies. We fear it will have an overall detrimental impact on the cause of improving privacy on the web. We sincerely hope that Google reverses this decision and re-commits to a path towards removal of third-party cookies.

AI

AI Won't Replace Human Workers, But People Who Use It Will Replace Those Who Don't, Andrew Ng Says (businessinsider.in) 109

An anonymous reader writes: AI experts tend to agree that rapid advances in the technology will impact jobs. But there's a clear division growing between those who see that as a cause for concern and those who believe it heralds a future of growth. Andrew Ng, the founder of Google Brain and a professor at Stanford University, is in the latter camp. He's optimistic about how AI will transform the labor market. For one, he doesn't think it's going to replace jobs.

"For the vast majority of jobs, if 20-30% is automated, then what that means is the job is going to be there," Ng said in a recent talk organized by Chulalongkorn University in Bangkok, Thailand. "It also means AI won't replace people, but maybe people that use AI will replace people that don't."

Chrome

Forbes Estimates Google's Chrome Temporarily Lost Millions of Saved Passwords (forbes.com) 28

An unexpected disappearance of saved passwords "impacted Chrome web browser users from all over the world," writes Forbes, "leaving them unable to find any passwords already saved using the Chrome password manager." Newly saved passwords were also rendered invisible to the affected users. Google, which has now fixed the issue, said that the problem was limited to the M127 version of Chrome Browser on the Windows platform.

The precise number of users to be hit by the Google password manager vanishing act is hard to pin down. However, working on the basis that there are more than 3 billion Chrome web browser users, with Windows users counting for the vast majority of these, it's possible to come up with an estimated number. Google said that 25% of the user base saw the configuration change rolled out, which, by my calculations, is around 750 million. Of these, around 2%, according to Google's estimation, were hit by the password manager issue. That means around 15 million users have seen their passwords vanish into thin air.
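
Forbes' back-of-the-envelope math, restated (all inputs are the article's estimates, not confirmed Google figures):

    chrome_users   = 3_000_000_000   # rough global Chrome user base
    rollout_share  = 0.25            # Google: 25% saw the configuration change
    affected_share = 0.02            # Google: ~2% of those hit the bug

    print(f"{chrome_users * rollout_share * affected_share:,.0f}")  # 15,000,000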

Google said that an interim workaround was provided at the time, which involved the particularly user-unfriendly process of launching the Chrome browser with the command-line flag "--enable-features=SkipUndecryptablePasswords". Thankfully, the full fix that has now been rolled out only requires users to restart their Chrome browser.
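
In practice, applying that workaround on Windows meant launching Chrome from a shortcut or terminal along these lines (the install path shown is the typical default and varies by machine):

    "C:\Program Files\Google\Chrome\Application\chrome.exe" --enable-features=SkipUndecryptablePasswords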

Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.
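
To make Pennarun's "0.2 requests per second" point concrete: the entire capacity a 500,000-views-per-month site needs fits in a few lines of standard-library Python, as in this rough sketch, and like his 1990s scripts it logs every request to stderr immediately:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            # At 0.2 requests/second this handler sits idle ~99.9% of the time.
            body = b"hello, web scale\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), Hello).serve_forever()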

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Google

Crooks Bypassed Google's Email Verification To Create Workspace Accounts, Access 3rd-Party Services (krebsonsecurity.com) 7

Brian Krebs writes via KrebsOnSecurity: Google says it recently fixed an authentication weakness that allowed crooks to circumvent the email verification required to create a Google Workspace account, and leverage that to impersonate a domain holder at third-party services that allow logins through Google's "Sign in with Google" feature. [...] Google Workspace offers a free trial that people can use to access services like Google Docs, but other services such as Gmail are only available to Workspace users who can validate control over the domain name associated with their email address. The weakness Google fixed allowed attackers to bypass this validation process. Google emphasized that none of the affected domains had previously been associated with Workspace accounts or services.

"The tactic here was to create a specifically-constructed request by a bad actor to circumvent email verification during the signup process," [said Anu Yamunan, director of abuse and safety protections at Google Workspace]. "The vector here is they would use one email address to try to sign in, and a completely different email address to verify a token. Once they were email verified, in some cases we have seen them access third party services using Google single sign-on." Yamunan said none of the potentially malicious workspace accounts were used to abuse Google services, but rather the attackers sought to impersonate the domain holder to other services online.

Google

Pixel 9 AI Will Add You To Group Photos Even When You're Not There (androidheadlines.com) 54

Google's upcoming Pixel 9 smartphones are set to introduce new AI-powered features, including "Add Me," a tool that will allow users to insert themselves into group photos after those pictures have been taken, according to a leaked promotional video obtained by Android Headlines. This feature builds on the Pixel 8's "Best Take" function, which allowed face swapping in group shots.
Youtube

Russia To Slow YouTube Speeds (yahoo.com) 71

Russia admitted that it's deliberately slowing YouTube's loading speeds and said it plans to throttle the download speeds on the Google platform by up to 70% by the end of next week. Russia is taking this stand in response to Google's refusal to comply with the demands of the Russian authorities, local lawmaker Alexander Khinshtein said. From a report: Khinshtein, the head of the State Duma's Information Policy Committee, claimed that the move is "not aimed against Russian users, but against the administration of a foreign resource that still believes that it can violate and ignore our legislation with impunity."
Chrome

New Chrome Feature Scans Password-Protected Files For Malicious Content (thehackernews.com) 24

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice.

Google is also adding what's called automatic deep scans for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.

AI

AI Models Face Collapse If They Overdose On Their Own Output 106

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text, for example, in training datasets, which means subsequent models trained on the output cannot carry forward those nuances. Training new models on the output of earlier models in this way ends up in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said.

"If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
AI

OpenAI To Launch 'SearchGPT' in Challenge To Google 31

OpenAI is launching an online search tool in a direct challenge to Google, opening up a new front in the tech industry's race to commercialise advances in generative artificial intelligence. From a report: The experimental product, known as SearchGPT [non-paywalled], will initially only be available to a small group of users, with the San Francisco-based company opening a 10,000-person waiting list to test the service on Thursday. The product is visually distinct from ChatGPT as it goes beyond generating a single answer by offering a rail of links -- similar to a search engine -- that allows users to click through to external websites.

[...] SearchGPT will "provide up-to-date information from the web while giving you clear links to relevant sources," according to OpenAI. The new search tool will be able to access sites even if they have opted out of training OpenAI's generative AI tools, such as ChatGPT.
Google

Google DeepMind's AI Systems Can Now Solve Complex Math Problems (technologyreview.com) 40

Google DeepMind has announced that its AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal performance at the 2024 International Mathematical Olympiad (IMO), solving four out of six problems and scoring 28 out of 42 possible points in a significant breakthrough for AI in mathematical reasoning. This marks the first time an AI system has reached such a high level of performance in this prestigious competition, which has long been considered a benchmark for advanced mathematical reasoning capabilities in machine learning.

AlphaProof, a system that combines a pre-trained language model with reinforcement learning techniques, demonstrated its new capability by solving two algebra problems and one number theory problem, including the competition's most challenging question. Meanwhile, AlphaGeometry 2 successfully tackled a complex geometry problem, Google wrote in a blog post. The systems' solutions were formally verified and scored by prominent mathematicians, including Fields Medal winner Prof Sir Timothy Gowers and IMO Problem Selection Committee Chair Dr Joseph Myers, lending credibility to the achievement.

The development of these AI systems represents a significant step forward in bridging the gap between natural language processing and formal mathematical reasoning, the company argued. By fine-tuning a version of Google's Gemini model to translate natural language problem statements into formal mathematical language, the researchers created a vast library of formalized problems, enabling AlphaProof to train on millions of mathematical challenges across various difficulty levels and topic areas. While the systems' performance is impressive, challenges remain, particularly in the field of combinatorics where both AI models were unable to solve the given problems. Researchers at Google DeepMind continue to investigate these limitations, the company said, aiming to further improve the systems' capabilities across all areas of mathematics.
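
DeepMind's announcement describes formalizing problems in the Lean proof language, where a proof either type-checks or it doesn't, which is what makes machine verification of the solutions possible. As a toy illustration of the format (nowhere near IMO difficulty), the statement "addition of natural numbers is commutative" looks like this in Lean 4:

    -- Formal statement and proof: a + b = b + a for all natural numbers.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b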
AI

AI Video Generator Runway Trained On Thousands of YouTube Videos Without Permission (404media.co) 81

samleecole writes: A leaked document obtained by 404 Media shows a company-wide effort at generative AI company Runway, where employees collected thousands of YouTube videos and pirated content for training data for its Gen-3 Alpha model. The model -- initially codenamed Jupiter and released officially as Gen-3 -- drew widespread praise from the AI development community and technology outlets covering its launch when Runway released it in June. Last year, Runway raised $141 million from investors including Google and Nvidia, at a $1.5 billion valuation.

The spreadsheet of training data viewed by 404 Media and our testing of the model indicates that part of its training data is popular content from the YouTube channels of thousands of media and entertainment companies, including The New Yorker, VICE News, Pixar, Disney, Netflix, Sony, and many others. It also includes links to channels and individual videos belonging to popular influencers and content creators, including Casey Neistat, Sam Kolder, Benjamin Hardman, Marques Brownlee, and numerous others.

Security

Cyber Firm KnowBe4 Hired a Fake IT Worker From North Korea (cyberscoop.com) 49

In a blog post on Tuesday, security firm KnowBe4 revealed that a remote software engineer hire was a North Korean threat actor using a stolen identity and AI-augmented images. "Detailing a seemingly thorough interview process that included background checks, verified references and four video conference-based interviews, KnowBe4 founder and CEO Stu Sjouwerman said the worker avoided being caught by using a valid identity that was stolen from a U.S.-based individual," reports CyberScoop. "The scheme was further enhanced by the actor using a stock image augmented by artificial intelligence." From the report: An internal investigation started when KnowBe4's InfoSec Security Operations Center team detected "a series of suspicious activities" from the new hire. The remote worker was sent an Apple laptop, which was flagged by the company on July 15 when malware was loaded onto the machine. The AI-filtered photo, meanwhile, was flagged by the company's Endpoint Detection and Response software. Later that evening, the SOC team had "contained" the fake worker's systems after he stopped responding to outreach. During a roughly 25-minute period, "the attacker performed various actions to manipulate session history files, transfer potentially harmful files, and execute unauthorized software," Sjouwerman wrote in the post. "He used a [single-board computer] raspberry pi to download the malware." From there, the company shared its data and findings with the FBI and with Mandiant, the Google-owned cyber firm, and came to the conclusion that the worker was a fictional persona operating from North Korea.

KnowBe4 said the fake employee likely had his workstation connected "to an address that is basically an 'IT mule laptop farm.'" They'd then use a VPN to work the night shift from where they actually reside -- in this case, North Korea "or over the border in China." That work would take place overnight, making it appear that they're logged on during normal U.S. business hours. "The scam is that they are actually doing the work, getting paid well, and give a large amount to North Korea to fund their illegal programs," Sjouwerman wrote. "I don't have to tell you about the severe risk of this." Despite the intrusion, Sjouwerman said "no illegal access was gained, and no data was lost, compromised, or exfiltrated on any KnowBe4 systems." He chalked up the incident to a threat actor that "demonstrated a high level of sophistication in creating a believable cover identity" and identified "weaknesses in the hiring and background check processes."

Google

Google's Exclusive Reddit Access (404media.co) 43

Google is now the only search engine that can surface results from Reddit, making one of the web's most valuable repositories of user-generated content exclusive to the internet's already dominant search engine. 404 Media: If you use Bing, DuckDuckGo, Mojeek, Qwant or any other alternative search engine that doesn't rely on Google's indexing and search Reddit by using "site:reddit.com," you will not see any results from the last week.

DuckDuckGo is currently turning up seven links when searching Reddit, but provides no data on where the links go or why, instead only saying that "We would like to show you a description here but the site won't allow us." Older results will still show up, but these search engines are no longer able to "crawl" Reddit, meaning that Google is the only search engine that will turn up results from Reddit going forward. Searching for Reddit still works on Kagi, an independent, paid search engine that buys part of its search index from Google. The news shows how Google's near monopoly on search is now actively hindering other companies' ability to compete at a time when Google is facing increasing criticism over the quality of its search results.
The news follows Google signing a $60 million deal with Reddit early this year to use the social network's content to train its LLMs.