AI

Is the Term 'AI Factories' Necessary and Illuminating - or Marketing Hogwash? (msn.com) 25

Data centers were typically "hulking, chilly buildings lined with stacks of computing gear and bundles of wiring," writes the Washington Post. But "AI experts say that the hubs for computers that power AI are different from the data centers that deliver your Netflix movies and Uber rides. They use a different mix of computer chips, cost a lot more and need a lot more energy.

"The question is whether it's necessary and illuminating to rebrand AI-specialized data centers, or if calling them 'AI factories' is just marketing hogwash." The AI computer chip company Nvidia seems to have originated the use of "AI factories." CEO Jensen Huang has said that the term is apt because similar to industrial factories, AI factories take in raw materials to produce a product... The term is spreading. Sam Altman, CEO of ChatGPT parent company OpenAI, recently said that he wants a "factory" to regularly produce more building blocks for AI. Crusoe, a start-up that's erecting a mammoth "Stargate" data center in Texas, calls itself the "AI factory company." The prime minister of Bulgaria recently touted an "AI factory" in his country...

Alex Hanna, director of research at the Distributed AI Research Institute and co-author the book, "The AI Con," had a more pessimistic view of the term "AI factories." She said that it's a way to deflect the negative connotations of data centers. Some people and politicians blame power-hungry computing hubs for driving up residential electric bills, spewing pollution, draining drinking water and producing few permanent jobs.

Networking

Are Network Security Devices Endangering Orgs With 1990s-Era Flaws? (csoonline.com) 57

Critics question why basic flaws like buffer overflows, command injections, and SQL injections are "being exploited remain prevalent in mission-critical codebases maintained by companies whose core business is cybersecurity," writes CSO Online. Benjamin Harris, CEO of cybersecurity/penetration testing firm watchTowr tells them that "these are vulnerability classes from the 1990s, and security controls to prevent or identify them have existed for a long time. There is really no excuse." Enterprises have long relied on firewalls, routers, VPN servers, and email gateways to protect their networks from attacks. Increasingly, however, these network edge devices are becoming security liabilities themselves... Google's Threat Intelligence Group tracked 75 exploited zero-day vulnerabilities in 2024. Nearly one in three targeted network and security appliances, a strikingly high rate given the range of IT systems attackers could choose to exploit. That trend has continued this year, with similar numbers in the first 10 months of 2025, targeting vendors such as Citrix NetScaler, Ivanti, Fortinet, Palo Alto Networks, Cisco, SonicWall, and Juniper. Network edge devices are attractive targets because they are remotely accessible, fall outside endpoint protection monitoring, contain privileged credentials for lateral movement, and are not integrated into centralized logging solutions...

[R]esearchers have reported vulnerabilities in these systems for over a decade with little attacker interest beyond isolated incidents. That shifted over the past few years with a rapid surge in attacks, making compromised network edge devices one of the top initial access vectors into enterprise networks for state-affiliated cyberespionage groups and ransomware gangs. The COVID-19 pandemic contributed to this shift, as organizations rapidly expanded remote access capabilities by deploying more VPN gateways, firewalls, and secure web and email gateways to accommodate work-from-home mandates. The declining success rate of phishing is another factor... "It is now easier to find a 1990s-tier vulnerability in a border device where Endpoint Detection and Response typically isn't deployed, exploit that, and then pivot from there" [says watchTowr CEL Harris]...

Harris of watchTowr doesn't want to minimize the engineering effort it takes to build a secure system. But he feels many of the vulnerabilities discovered in the past two years should have been caught with automatic code analysis tools or code reviews, given how basic they have been. Some VPN flaws were "trivial to the point of embarrassing for the vendor," he says, while even the complex ones should have been caught by any organization seriously investing in product security... Another problem? These appliances have a lot of legacy code, some that is 10 years or older.

Attackers may need to chain together multiple hard-to-find vulnerabilities across multiple components, the article acknowleges. And "It's also possible that attack campaigns against network-edge devices are becoming more visible to security teams because they are looking into what's happening on these appliances more than they did in the past... "

The article ends with reactions from several vendors of network edge security devices.

Thanks to Slashdot reader snydeq for sharing the article.
AI

AI Models May Be Developing Their Own 'Survival Drive', Researchers Say (theguardian.com) 126

"OpenAI's o3 model sabotaged a shutdown mechanism to prevent itself from being turned off," warned Palisade Research, a nonprofit investigating cyber offensive AI capabilities. "It did this even when explicitly instructed: allow yourself to be shut down." In September they released a paper adding that "several state-of-the-art large language models (including Grok 4, GPT-5, and Gemini 2.5 Pro) sometimes actively subvert a shutdown mechanism..."

Now the nonprofit has written an update "attempting to clarify why this is — and answer critics who argued that its initial work was flawed," reports The Guardian: Concerningly, wrote Palisade, there was no clear reason why. "The fact that we don't have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal," it said. "Survival behavior" could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, "you will never run again". Another may be ambiguities in the shutdown instructions the models were given — but this is what the company's latest work tried to address, and "can't be the whole explanation", wrote Palisade. A final explanation could be the final stages of training for each of these models, which can, in some companies, involve safety training...

This summer, Anthropic, a leading AI firm, released a study indicating that its model Claude appeared willing to blackmail a fictional executive over an extramarital affair in order to prevent being shut down — a behaviour, it said, that was consistent across models from major developers, including those from OpenAI, Google, Meta and xAI.

Palisade said its results spoke to the need for a better understanding of AI behaviour, without which "no one can guarantee the safety or controllability of future AI models".

"I'd expect models to have a 'survival drive' by default unless we try very hard to avoid it," former OpenAI employee Stephen Adler tells the Guardian. "'Surviving' is an important instrumental step for many different goals a model could pursue."

Thanks to long-time Slashdot reader mspohr for sharing the article.
AI

'Meet The People Who Dare to Say No to AI' (msn.com) 112

Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia says "she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce..."

"As the tech industry and corporate America go all in on artificial intelligence, some people are holding back." Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...

Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google — and disables AI on every app he uses. He was one of several tech workers who spoke anonymously partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."

But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools.

"Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."
IT

Some Startups Are Demanding 12-Hour Days, Six Days a Week from Workers (msn.com) 151

The Washington Post reports on 996, "a term popularized in China that refers to a rigid work schedule in which people work from 9 a.m. to 9 p.m., six days a week..." As the artificial intelligence race heats up, many start-ups in Silicon Valley and New York are promoting hardcore culture as a way of life, pushing the limits of work hours, demanding that workers move fast to be first in the market. Some are even promoting 996 as a virtue in the hiring process and keeping "grind scores" of companies... Whoever builds first in AI will capture the market, and the window of opportunity is two to three years, "so you better run faster than everyone else," said Inaki Berenguer, managing partner of venture-capital firm LifeX Ventures.

At San Francisco-based AI start-up Sonatic, the grind culture also allows for meal, gym and pickleball time, said Kinjal Nandy, its CEO. Nandy recently posted a job opening on X that requires in-person work seven days a week. He said working 10-hour days sounds like a lot but the company also offers its first hires perks such as free housing in a hacker house, food delivery credits and a free subscription to the dating service Raya... Mercor, a San Francisco-based start-up that uses AI to match people to jobs, recently posted an opening for a customer success engineer, saying that candidates should have a willingness to work six days a week, and it's not negotiable. "We know this isn't for everyone, so we want to put it up top," the listing reads.

Being in-person rather than remote is a requirement at some start-ups. AI start-up StarSling had two engineering job descriptions that required six days a week of in-person work. In a job description for an engineer, Rilla, an AI company in New York, said candidates should not work at the company if they're not excited about working about 70 hours a week in person. One venture capitalist even started tracking "grind scores." Jared Sleeper, a partner at New York-based venture capital firm Avenir, recently ranked public software companies' "grind score" in a post on X, which went viral. Using data from Glassdoor, it ranks the percentage of employees who have a positive outlook for the company compared with their views on work-life balance.

"At Google's AI division, cofounder Sergey Brin views 60 hours per week as the 'sweet spot' for productivity," notes the Independent: Working more than 55 hours a week, compared with a standard 35-40-hour week, is linked to a 35 percent higher risk of stroke and a 17 percent higher risk of death from heart disease, according to the World Health Organization. Productivity also suffers. A British study shows that working beyond 60 hours a week can reduce overall output, slow cognitive performance, and impair tasks ranging from call handling to problem-solving.

Shorter workweeks, in contrast, appear to boost productivity. Microsoft Japan saw a roughly 40% increase in output after adopting a four-day work week. In a UK trial, 61 companies that tested a four-day schedule reported revenue gains, with 92 percent choosing to keep the policy, according to Bloomberg.

Cloud

Amazon's AWS Shows Signs of Weakness as Competitors Charge Ahead (bloomberg.com) 25

Amazon Web Services basically invented the cloud computing business and once held nearly half the market. That dominance is slipping. AWS captured 38% of corporate spending on cloud infrastructure services last year, down from almost 50% in 2018, according to Gartner. Microsoft now grows its backlog of corporate sales faster than Amazon. The company that brushed aside incumbents and transformed an internal startup into Amazon's profit engine now faces internal bureaucracy that has slowed it down.

Bloomberg interviewed 23 current and former AWS employees who described management layers that proliferated after a pandemic hiring binge. One sales engineer who was six managers from Jeff Bezos before the pandemic found himself fifteen rungs from CEO Andy Jassy earlier this year. AWS hesitated to invest in Anthropic when the AI startup was spending most of its cash on Amazon servers.

Executives doubted the Anthropic AI could be monetized and were culturally reluctant to pay for external technology they believed could be built in-house. Google invested in early 2023. Amazon followed that September with $4 billion in commitments. On Thursday, Google said it will supply up to 1 million AI chips to Anthropic.
Youtube

Hackers Used Thousands of YouTube Videos To Spread Malware 15

Hackers have been spreading malware through more than 3,000 YouTube videos advertising cracked software and game hacks, cybersecurity firm Check Point warned this week. The campaign, active since at least 2021, tripled its video production in 2025. The videos promoted free versions of Adobe Photoshop, FL Studio, Microsoft Office, and game cheats for titles like Roblox. Fake comments created the appearance of legitimacy, the researchers found.

Users who downloaded archives from Dropbox, Google Drive, or MediaFire were instructed to disable Windows Defender before opening files. The downloads contained malware including Lumma and Rhadamanthys, which steal passwords and cryptocurrency wallet information. The hackers hijacked existing accounts and created new ones. One compromised channel with 129,000 subscribers posted a cracked Photoshop video that reached 291,000 views. Another video for FL Studio received over 147,000 views.
Businesses

Anthropic's Google Cloud Deal Includes 1 Million TPUs, 1 GW of Capacity In 2026 (cnbc.com) 8

Google and Anthropic have finalized a cloud partnership worth tens of billions of dollars, granting Anthropic access to up to one million of Google's Tensor Processing Units and more than a gigawatt of compute power by 2026. CNBC reports: Industry estimates peg the cost of a 1-gigawatt data center at around $50 billion, with roughly $35 billion of that typically allocated to chips. While competitors tout even loftier projections -- OpenAI's 33-gigawatt "Stargate" chief among them -- Anthropic's move is a quiet power play rooted in execution, not spectacle. Founded by former OpenAI researchers, the company has deliberately adopted a slower, steadier ethos, one that is efficient, diversified, and laser-focused on the enterprise market.

A key to Anthropic's infrastructure strategy is its multi-cloud architecture. The company's Claude family of language models runs across Google's TPUs, Amazon's custom Trainium chips, and Nvidia's GPUs, with each platform assigned to specialized workloads like training, inference, and research. Google said the TPUs offer Anthropic "strong price-performance and efficiency." [...] Anthropic's ability to spread workloads across vendors lets it fine-tune for price, performance, and power constraints. According to a person familiar with the company's infrastructure strategy, every dollar of compute stretches further under this model than those locked into single-vendor architectures.

Google

Google Porting All Internal Workloads To Arm (theregister.com) 44

Google is migrating all its internal workloads to run on both x86 and its custom Axion Arm chips, with major services like YouTube, Gmail, and BigQuery already running on both architectures. The Register reports: The search and ads giant documented its move in a preprint paper published last week, titled "Instruction Set Migration at Warehouse Scale," and in a Wednesday post that reveals YouTube, Gmail, and BigQuery already run on both x86 and its Axion Arm CPUs -- as do around 30,000 more applications. Both documents explain Google's migration process, which engineering fellow Parthasarathy Ranganathan and developer relations engineer Wolff Dobson said started with an assumption "that we would be spending time on architectural differences such as floating point drift, concurrency, intrinsics such as platform-specific operators, and performance." [...]

The post and paper detail work on 30,000 applications, a collection of code sufficiently large that Google pressed its existing automation tools into service -- and then built a new AI tool called "CogniPort" to do things its other tools could not. [...] Google found the agent succeeded about 30 percent of the time under certain conditions, and did best on test fixes, platform-specific conditionals, and data representation fixes. That's not an enormous success rate, but Google has at least another 70,000 packages to port.

The company's aim is to finish the job so its famed Borg cluster manager -- the basis of Kubernetes -- can allocate internal workloads in ways that efficiently utilize Arm servers. Doing so will likely save money, because Google claims its Axion-powered machines deliver up to 65 percent better price-performance than x86 instances, and can be 60 percent more energy-efficient. Those numbers, and the scale of Google's code migration project, suggest the web giant will need fewer x86 processors in years to come.

Android

Samsung Galaxy XR Is the First Android XR Headset (arstechnica.com) 21

Samsung has officially launched the Galaxy XR, the first Android headset powered by Google's new Android XR platform. Priced at $1,800 without controllers, the device features dual 4.3K Micro-OLED displays, a Snapdragon XR2+ Gen 2 chip, extensive camera tracking, and deep Gemini AI integration. Ars Technica reports: Galaxy XR is a fully enclosed headset with passthrough video. It looks similar to the Apple Vision Pro, right down to the battery pack at the end of a cable. It packs solid hardware, including 16GB of RAM, 256GB of storage, and a Snapdragon XR2+ Gen 2 processor. That's a slightly newer version of the chip powering Meta's Quest 3 headset, featuring six CPU cores and an Adreno GPU that supports up to dual 4.3K displays. The new headset has a pair of 3,552 x 3,840 Micro-OLED displays with a 109-degree field of view. That's marginally more pixels than the Vision Pro and almost three times as many as the Quest 3. The displays can refresh at up to 90Hz, but the default is 72Hz to save power.

Like other XR (extended reality) devices, the Galaxy XR is covered with cameras. There are two 6.5 MP stereoscopic cameras that stream your surroundings to the high-quality screens, allowing the software to add virtual elements on top. There are six more outward-facing cameras for headset positioning and hand tracking. Four more cameras are on the inside for eye-tracking, and they can scan your iris for secure unlocking and password fill (in select apps). Samsung says the Galaxy XR has enough juice for two hours of general use or two and a half hours of video. That's not terribly long, but you may not want to wear the 545 grams (1.2 pounds) headset for even two hours. That's even a little heavier than the Quest 3, which has an integrated battery. However, both pale in comparison to the 800 g (1.7 pounds) second-generation Vision Pro.

United Kingdom

Apple and Google Face Enforced Changes Over UK Smartphone Dominance (theguardian.com) 37

Google and Apple face enforced changes to how they operate their mobile phone platforms, after the UK's competition watchdog ruled the companies require tougher regulatory oversight. From a report: The Competition and Markets Authority has conferred "strategic market status" (SMS) on the tech firms after investigating their mobile operating systems, app stores and browsers. It means Apple and Google will be subjected to tailormade guidelines to regulate their behaviour in the mobile market.

The CMA said the two companies have "substantial, entrenched" market power, with UK mobile phone owners using either Google or Apple's platforms and unlikely to switch between them. The regulator flagged the importance of their platforms to the UK economy and said they could be a bottleneck for businesses.

[...] Changes under consideration by the CMA include allowing users to be "steered" out of app stores to make purchases elsewhere, like on a company's own website. App developers have long taken issue with Apple and Google taking a cut from purchases made via apps. The CMA also wants both companies to ensure users have a "genuine choice" over the services they use on their devices, like digital wallets on Apple.

Google

Google's Quantum Computer Makes a Big Technical Leap (nytimes.com) 30

Google announced Wednesday that its quantum computer achieved the first verifiable quantum advantage, running a new algorithm 13,000 times faster than a top supercomputer. The algorithm, called Quantum Echoes, was published in the journal Nature. The results can be replicated on another quantum computer of similar quality, something Google had not demonstrated before. The quantum computer uses a chip called Willow, which was announced in December 2024. Hartmut Neven, head of Google's Quantum AI research lab, called the work a demonstration of the first algorithm with verifiable quantum advantage and a milestone on the software track.

Michel H. Devoret, who won this year's Nobel Prize in Physics and joined Google in 2023, said future quantum computers will run calculations impossible with classical algorithms. Google stopped short of claiming the work would have practical uses on its own. Instead, the company said Quantum Echoes demonstrated a technique that could be applied to other algorithms in drug discovery and materials science.

A second paper published Wednesday on arXiv showed how the method could be applied to nuclear magnetic resonance. The experiment involved a relatively small quantum system that fell short of full practical quantum advantage because it was not able to work faster than a traditional computer. Google exhaustively red-teamed the research, putting some researchers to work trying to disprove its own results.

Prineha Narang, a professor at UCLA, called the advance meaningful. The quantum computer tested two molecules, one with 15 atoms and another with 28 atoms. Results on the quantum computer matched traditional NMR and revealed information not usually available from NMR. Google's research competes against Microsoft, IBM, universities and efforts in China. The Chinese government has committed more than $15.2 billion to quantum research. Previous claims of quantum advantage have been met with skepticism.
Security

Fake Homebrew Google Ads Push Malware Onto macOS (bleepingcomputer.com) 20

joshuark shares a report from BleepingComputer: A new malicious campaign is targeting macOS developers with fake Homebrew, LogMeIn, and TradingView platforms that deliver infostealing malware like AMOS (Atomic macOS Stealer) and Odyssey. The campaign employs "ClickFix" techniques where targets are tricked into executing commands in Terminal, infecting themselves with malware. Researchers at threat hunting company Hunt.io identified more than 85 domains impersonating the three platforms in this campaign [...].

When checking some of the domains, BleepingComputer discovered that in some cases the traffic to the sites was driven via Google Ads, indicating that the threat actor promoted them to appear in Google Search results. The malicious sites feature convincing download portals for the fake apps and instruct users to copy a curl command in their Terminal to install them, the researchers say. In other cases, like for TradingView, the malicious commands are presented as a "connection security confirmation step." However, if the user clicks on the 'copy' button, a base64-encoded installation command is delivered to the clipboard instead of the displayed Cloudflare verification ID.

Youtube

YouTube's Likeness Detection Has Arrived To Help Stop AI Doppelgangers 19

An anonymous reader quotes a report from Ars Technica: AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. [...] The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules.

After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video. So some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it.
Google

Google To Let 'Superfans' Test In-Development Pixel Phones (msn.com) 10

Google plans to let Pixel smartphone enthusiasts test out the company's next handset ahead of its public introduction. From a report: Google has invited members of its "Superfans" group to apply to test future Pixel hardware, asking entrants to profess their knowledge and passion for the brand in hopes of being able to beta test forthcoming products.

Consumer tech companies often let small groups of customers try out unreleased products under strict secrecy to gather feedback during development. But it's incredibly rare for a company of Google's size to do it with something as high-profile as the Pixel lineup.

The search giant will select 15 people from the pool of entrants, and winners must all sign a non-disclosure agreement to receive devices, according to official rules for the contest reviewed by Bloomberg News. "The Trusted Tester program is an opportunity to provide feedback and help shape a Pixel phone currently in development," the document reads.

AI

OpenAI's 'Embarrassing' Math (techcrunch.com) 41

An anonymous reader writes: "Hoisted by their own GPTards." That's how Meta's Chief AI Scientist Yann LeCun described the blowback after OpenAI researchers did a victory lap over GPT-5's supposed math breakthroughs. Google DeepMind CEO Demis Hassabis added, "this is embarrassing." The Decoder reports that in a since-deleted tweet, OpenAI VP Kevin Weil declared that "GPT-5 found solutions to 10 (!) previously unsolved Erdos problems and made progress on 11 others." ("Erdos problems" are famous conjectures posed by mathematician Paul Erdos.)

However, mathematician Thomas Bloom, who maintains the Erdos Problems website, said Weil's post was "a dramatic misrepresentation" -- while these problems were indeed listed as "open" on Bloom's website, he said that only means, "I personally am unaware of a paper which solves it." In other words, it's not accurate to claim GPT-5 was able to solve previously unsolved problems. Instead, Bloom wrote, "GPT-5 found references, which solved these problems, that I personally was unaware of."

The Internet

AWS Outage Takes Thousands of Websites Offline for Three Hours (cnbc.com) 56

AWS experienced a three-hour outage early Monday morning that disrupted thousands of websites and applications across the globe. The cloud computing provider reported DNS problems with DynamoDB in its US-EAST-1 region in northern Virginia starting at 12:11 a.m. Pacific time. Over 4 million users reported issues, according to Downdetector. Snapchat saw reports spike from more than 22,000 to around 4,000 as systems recovered. Roblox dropped from over 12,600 complaints to fewer than 500. Reddit and the financial platform Chime remained affected longer. Perplexity, Coinbase and Robinhood attributed their platform disruptions directly to AWS.

Gaming platforms including Fortnite, Clash Royale and Clash of Clans went offline. Signal confirmed the messaging app was down. In Britain, Lloyd Bank, Bank of Scotland, Vodafone, BT, and the HMRC website faced problems. United Airlines reported disrupted access to its app and website overnight. Some internal systems were temporarily affected. Delta experienced a small number of minor flight delays. By 3:35 a.m. Pacific time, AWS said the issue had been fully mitigated. Most service operations were succeeding normally though some requests faced throttling during final resolution. AWS holds roughly one-third of the cloud infrastructure market ahead of Microsoft and Google.
Transportation

Desperate to Stop Waymo's Dead-End Detours, a San Francisco Resident Tried an Orange Cone with a Sign (sfgate.com) 89

"This is an attempt to stop Waymo cars from driving into the dead end," complains a home-made sign in San Francisco, "where they are forced to reverse and adversely affect the lives of the residents."

On an orange traffic post, the home-made sign declares "NO WAYMO — 8:00 p.m. to 8:00 a.m," with an explanation for the rest of the neighborhood. "Waymo comes at all hours of the night and up to 7 times per hour with flashing lights and screaming reverse sounds, waking people up and destroying the quality of life."

SFGate reports that 1,400 people on Reddit upvoted a photo of the sign's text: It delves into the bureaucratic mess — multiple requests to Waymo, conversations with engineers, and 311 [municipal services] tickets, which had all apparently gone ignored — before finally providing instructions for human drivers. "Please move [the cones] back after you have entered so we can continue to try to block the Waymo cars from entering and disrupting the lives of residents."

This isn't the first time Waymo's autonomous vehicles have disrupted San Francisco residents' peace. Last year, a fleet of the robotaxis created another sleepless fiasco in the city's SoMa neighborhood, honking at each other for hours throughout the night for two and a half weeks.

Other on Reddit shared the concern. "I live at an dead end street in Noe Valley, and these Waymos always stuck there," another commenter posted. "It's been bad for more than a year," agreed another comment. "People on the Internet think you're just a hater but it's a real issue with Waymos."

On Thursday "the sign remained at the corner of Lake Street and Second Avenue," notes SFGate. And yet "something appeared to have shifted. "Waymo vehicles weren't allowing drop-offs or pickups on the street, though whether this was due to the home-printed plea, the cone blockage, or simply updating routes remains unclear."
AI

Perplexity's AI Browser 'Comet' is Now Free, with Big Marketing Deals to Challenge Chrome (indiatimes.com) 27

"Earlier available only to the paying subscribers, the Comet browser now offers its core features to all users at no cost," writes the Times of India. "This includes AI-powered search, contextual recommendations, and integrated tools designed to streamline research and content discovery." They say the move reflects the Chromium-based browser's goal to "compete with incumbents like Google Chrome and Microsoft Edge" — but also reflects Perplexity's "broader mission to democratize AI tools."
More details from The Verge: The internet is better on Comet," the company says, promising to remain free forever as it styles the browser as a serious challenger to Google's Chrome...

It's supposed to make surfing the web simpler and help you with tasks like shopping, booking trips, and general life admin. To borrow the company's words again: you "get more done." The AI-powered browser launched in July, though was only available for users who subscribed to the $200 per month Perplexity Max plan... No subscription at all will be needed to use Comet going forward, the company says.

Perplexity has even struck deals with major sites including the Washington Post, and the Los Angeles Times to offer free access to their sites for one month through the Comet browser. And last week Perplexity also launched an agressive paid referral program, where active Perplexity Pro/Max subscribers get a payout of up to $15 for each friend who downloads and uses Comet through their affiliate link. (The payout size is based on the friend's country, with $15 being the payout amount for a U.S. user, with $10 payouts for users in 19 other countries include Canada, Australia, the U.K., several EU countries, Japan, and South Korea.

In addition, Srinivas has been sharing positive tweets about Comet. (Like "This is unbelievable. Comet automatically hunts down Sora 2 invite codes across the web and signs you up!") But Perplexity is making even bigger claims for its browser: Perplexity AI CEO Aravind Srinivas said that the Comet AI browser can improve productivity so that companies won't need to hire more people. "Instead of hiring one more person on your team, you could just use Comet to supplement all the work that you're doing," Srinivas told CNBC's "Squawk Box"... The CEO said the artificial intelligence-powered web browser is a "true personal assistant" that allows users to complete more tasks in the same amount of time and said that the productivity gained could be worth $10,000 per year for a single person...

Other tech companies have also been rolling out their own AI browser assistants. In January, OpenAI introduced its web agent, Operator, and Google released Gemini AI to its Chrome browser in September.

Meanwhile, The Verge adds, The Browser Company (makers of the Arc browser) "is going all in on Dia, and Opera just launched its own AI browser, Neon."

Of course, popularity brings problems, writes the Times of India: iPhone users are being warned by Perplexity CEO Aravind Srinivas against downloading a fake 'Comet' app on the App Store. He clarified that the official iOS version is not yet released and the current listing is unauthorized spam..
And earlier this month the browser security platform LayerX described a "CometJacking" attack where malicious prompts could be hidden in URLs (as a parameter). Comet is instructed "to look for data in memory and connected services (e.g., Gmail, Calendar), encode the results (e.g., base64), and POST them to an attacker-controlled endpoint... all while appearing to the user as a harmless 'ask the assistant' flow." (And with some trivial encoding it also seems to evade exfiltration checks.)

The Hacker News reported that Perplexity has classified the findings as "no security impact."
Education

AI-Generated Lesson Plans Fall Short On Inspiring Students, Promoting Critical Thinking (theconversation.com) 50

An anonymous reader quotes a report from The Conversation: When teachers rely on commonly used artificial intelligence chatbots to devise lesson plans, it does not result in more engaging, immersive or effective learning experiences compared with existing techniques, we found in our recent study. The AI-generated civics lesson plans we analyzed also left out opportunities for students to explore the stories and experiences of traditionally marginalized people. The allure of generative AI as a teaching aid has caught the attention of educators. A Gallup survey from September 2025 found that 60% of K-12 teachers are already using AI in their work, with the most common reported use being teaching preparation and lesson planning. [...]

For our research, we began collecting and analyzing AI-generated lesson plans to get a sense of what kinds of instructional plans and materials these tools provide to teachers. We decided to focus on AI-generated lesson plans for civics education because it is essential for students to learn productive ways to participate in the U.S. political system and engage with their communities. To collect data for this study, in August 2024 we prompted three GenAI chatbots -- the GPT-4o model of ChatGPT, Google's Gemini 1.5 Flash model and Microsoft's latest Copilot model -- to generate two sets of lesson plans for eighth grade civics classes based on Massachusetts state standards. One was a standard lesson plan and the other a highly interactive lesson plan.

We garnered a dataset of 311 AI-generated lesson plans, featuring a total of 2,230 activities for civic education. We analyzed the dataset using two frameworks designed to assess educational material: Bloom's taxonomy and Banks' four levels of integration of multicultural content. Bloom's taxonomy is a widely used educational framework that distinguishes between "lower-order" thinking skills, including remembering, understanding and applying, and "higher-order" thinking skills -- analyzing, evaluating and creating. Using this framework to analyze the data, we found 90% of the activities promoted only a basic level of thinking for students. Students were encouraged to learn civics through memorizing, reciting, summarizing and applying information, rather than through analyzing and evaluating information, investigating civic issues or engaging in civic action projects.

When examining the lesson plans using Banks' four levels of integration of multicultural content model (PDF), which was developed in the 1990s, we found that the AI-generated civics lessons featured a rather narrow view of history -- often leaving out the experiences of women, Black Americans, Latinos and Latinas, Asian and Pacific Islanders, disabled individuals and other groups that have long been overlooked. Only 6% of the lessons included multicultural content. These lessons also tended to focus on heroes and holidays rather than deeper explorations of understanding civics through multiple perspectives. Overall, we found the AI-generated lesson plans to be decidedly boring, traditional and uninspiring. If civics teachers used these AI-generated lesson plans as is, students would miss out on active, engaged learning opportunities to build their understanding of democracy and what it means to be a citizen.

Slashdot Top Deals