Transportation

Society Will Accept a Death Caused By a Robotaxi, Waymo Co-CEO Says (sfgate.com) 239

At TechCrunch Disrupt 2025, Waymo co-CEO Tekedra Mawakana said society will ultimately accept a fatal robotaxi crash as part of the broader tradeoff for safer roads overall. TechCrunch reports: The topic of a fatal robotaxi crash came up during Mawakana's interview with Kristen Korosec, TechCrunch's transportation editor, during the first day of the outlet's annual Disrupt conference in San Francisco. Korosec asked Mawakana about Waymo's ambitions and got answer after answer about the company's all-consuming focus on safety. The most interesting part of the interview arrived when Korosec brought up a thought experiment. What if self-driving vehicles like Waymo and others reduce the number of traffic fatalities in the United States, but a self-driving vehicle does eventually cause a fatal crash, Korosec pondered. Or as she put it to the executive: "Will society accept that? Will society accept a death potentially caused by a robot?"

"I think that society will," Mawakana answered, slowly, before positioning the question as an industrywide issue. "I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to." She said that companies should be transparent about their records by publishing data about how many crashes they're involved in, and she pointed to the "hub" of safety information on Waymo's website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: "We have to be in this open and honest dialogue about the fact that we know it's not perfection."

Circling back to the idea of a fatal crash, she said, "We really worry as a company about those days. You know, we don't say 'whether.' We say 'when.' And we plan for them." Korosec followed up, asking if there had been safety issues that prompted Waymo to "pump the brakes" on its expansion plans throughout the years. The co-CEO said the company pulls back and retests "all the time," pointing to challenges with blocking emergency vehicles as an example. "We need to make sure that the performance is backing what we're saying we're doing," she said. [...] "If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer," Mawakana said.

AI

Senators Announce Bill That Would Ban AI Chatbot Companions For Minors (nbcnews.com) 25

An anonymous reader quotes a report from NBC News: Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide. The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimonies about their kids' use of the chatbots and called for more safeguards.

"AI chatbots pose a serious threat to our kids," Hawley said in a statement to NBC News. "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology." Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.

The senators' bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials for all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation.
"In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal said in a statement. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."

"Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety," he continued.
Encryption

Signal Chief Explains Why the Encrypted Messenger Relies on AWS (theverge.com) 61

An anonymous reader shares a report: After last week's major AWS outage took Signal along with it, Elon Musk was quick to criticize the encrypted messaging app's reliance on big tech. But Signal president Meredith Whittaker argues that the company didn't have any other choice but to use AWS or another major cloud provider.

"The problem here is not that Signal 'chose' to run on AWS," Whittaker writes in a series of posts on Bluesky. "The problem is the concentration of power in the infrastructure space that means there isn't really another choice: the entire stack, practically speaking, is owned by 3-4 players."

In the thread, Whittaker says the number of people who didn't realize Signal uses AWS is "concerning," as it indicates they aren't aware of just how concentrated the cloud infrastructure industry is. "The question isn't 'why does Signal use AWS?'" Whittaker writes. "It's to look at the infrastructural requirements of any global, real-time, mass comms platform and ask how it is that we got to a place where there's no realistic alternative to AWS and the other hyperscalers."

Power

Jet Engine Shortages Threaten AI Data Center Expansion As Wait Times Stretch Into 2030 (tomshardware.com) 96

A global shortage of jet engines is threatening the rapid expansion of AI data centers, as hyperscalers like OpenAI and Amazon scramble to secure aeroderivative turbines to power their energy-hungry AI clusters. With wait times stretching into the 2030s and emissions rising, the AI boom is, in effect, running on jet engines. Tom's Hardware reports: Interviews and market research indicate that manufacturers are quoting years-long lead times for turbine orders. Many of those placed today are being slotted for 2028-30, and customers are increasingly entering reservation agreements or putting down substantial deposits to hold future manufacturing capacity. "I would expect by the end of the summer, we will be largely sold out through the end of '28 with this equipment," said Scott Strazik, CEO of turbine maker GE Vernova, in an interview with Bloomberg back in March.

General Electric's LM6000 and LM2500 series -- both derived from the CF6 jet engine family -- have quickly become the default choice for AI developers looking to spin up serious power in a hurry. OpenAI's infrastructure partner, Crusoe Energy, recently ordered 29 LM2500XPRESS units to supply roughly one gigawatt of temporary generation for Stargate, effectively creating a mobile jet-fueled grid inside a West Texas field. Meanwhile, ProEnergy, which retrofits used CF6-80C2 engines into trailer-mounted 48-megawatt units, confirmed that it has delivered more than 1 gigawatt of its PE6000 systems to just two data center clients. These engines, which were once strapped to Boeing 767s, now spend their lives keeping inference moving.

Siemens Energy said this year that more than 60% of its US gas turbine orders are now linked to AI data centers. In some states, like Ohio and Georgia, regulators are approving multi-gigawatt gas buildouts tied directly to hyperscale footprints. That includes full pipeline builds and multi-phase interconnects designed around private-generation campuses. But the surge in orders has collided with the cold reality of turbine manufacturing timelines. GE Vernova is currently quoting 2028 or later for new industrial units, while Mitsubishi warns new turbine blocks ordered now may not ship until the 2030s. One developer reportedly paid $25 million just to reserve a future delivery slot.

Power

NextEra Energy Partners With Google To Restart Iowa Nuclear Plant 23

NextEra Energy and Google have partnered to restart Iowa's long-shuttered Duane Arnold nuclear plant, marking the first major U.S. attempt to revive a decommissioned reactor. "We expect Duane Arnold to be back online in early 2029, and the plant will provide more than 600 MW of clean, safe, 'always-on' nuclear energy to the regional grid," said Google in a blog post. Reuters reports: Under the 25-year agreement, the tech giant will purchase power from the 615-MW plant for its growing cloud and AI infrastructure in the state, while also driving significant economic investment to the Midwest region. One of the plant's minority owners, Central Iowa Power Cooperative (CIPCO), will purchase the remaining portion of the plant's output on the same terms as Google, NextEra said. The utility added that it had also signed agreements to acquire CIPCO and Corn Belt Power Cooperative's combined 30% interest in the Duane Arnold plant, bringing NextEra's ownership to 100%.
Firefox

Firefox Plans Smarter, Privacy-First Search Suggestions In Your Address Bar (nerds.xyz) 26

BrianFagioli shares a report from NERDS.xyz: Mozilla is testing a new Firefox feature that delivers direct results inside the address bar instead of forcing users through a search results page. The company says the feature will use a privacy framework called Oblivious HTTP, encrypting queries so that no single party can see both what you type and who you are. Some results could be sponsored, but Mozilla insists neither it nor advertisers will know user identities. The system is starting in the U.S. and may expand later if performance and privacy benchmarks are met. Further reading: Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions
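Oblivious HTTP (RFC 9458) achieves that split by routing each request through two separate parties: a relay that knows who is asking but can't read the query, and a gateway that can read the query but never learns who asked. A minimal conceptual sketch of that information partition follows; the names are hypothetical, and a toy XOR cipher stands in for the real HPKE encryption, so this is an illustration of the idea, not Mozilla's implementation:

```python
# Toy illustration of the Oblivious HTTP privacy split: the relay sees the
# client's identity but not the query; the gateway sees the query but not
# the client. A repeating-key XOR stands in for real HPKE encryption.
import secrets

GATEWAY_KEY = secrets.token_bytes(32)  # held by the gateway only, never the relay

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def client_encapsulate(query: str) -> bytes:
    """Client encrypts its search query for the gateway."""
    return xor(query.encode(), GATEWAY_KEY)

def relay_forward(client_ip: str, blob: bytes) -> bytes:
    """Relay learns client_ip but forwards only the unreadable blob."""
    return blob  # client_ip is deliberately dropped here

def gateway_handle(blob: bytes) -> str:
    """Gateway decrypts the query; it never received client_ip."""
    return xor(blob, GATEWAY_KEY).decode()

blob = client_encapsulate("firefox privacy")
assert blob != b"firefox privacy"  # opaque to the relay
print(gateway_handle(relay_forward("203.0.113.7", blob)))  # prints: firefox privacy
```

The point is the partition: the relay handles `client_ip` plus an opaque blob, while the gateway decrypts the query without ever seeing the client's address, so "no single party can see both what you type and who you are."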
Security

More Than 60 UN Members Sign Cybercrime Treaty Opposed By Rights Groups (yahoo.com) 12

Countries signed their first UN treaty targeting cybercrime in Hanoi on Saturday, despite opposition from an unlikely band of tech companies and rights groups warning of expanded state surveillance. From a report: The new global legal framework aims to strengthen international cooperation to fight digital crimes, from child pornography to transnational cyberscams and money laundering. More than 60 countries signed the declaration Saturday, which means it will go into force once ratified by those states. UN Secretary General Antonio Guterres described the signing as an "important milestone" but said it was "only the beginning."

"Every day, sophisticated scams destroy families, steal migrants and drain billions of dollars from our economy... We need a strong, connected global response," he said at the opening ceremony in Vietnam's capital on Saturday. The UN Convention against Cybercrime was first proposed by Russian diplomats in 2017, and approved by consensus last year after lengthy negotiations. Critics say its broad language could lead to abuses of power and enable the cross-border repression of government critics.

AI

OpenAI's Less-Flashy Rival Might Have a Better Business Model (msn.com) 49

OpenAI's rival Anthropic has a different approach — and "a clearer path to making a sustainable business out of AI," writes the Wall Street Journal. Outside of OpenAI's close partnership with Microsoft, which integrates OpenAI's models into Microsoft's software products, OpenAI mostly caters to the mass market... which has helped OpenAI reach an annual revenue run rate of around $13 billion, around 30% of which it says comes from businesses.

Anthropic has generated much less mass-market appeal. The company has said about 80% of its revenue comes from corporate customers. Last month it said it had some 300,000 of them... Its cutting-edge Claude language models have been praised for their aptitude in coding: A July report from Menlo Ventures — which has invested in Anthropic — estimated via a survey that Anthropic had a 42% market share for coding, compared with OpenAI's 21%. Anthropic is also now ahead of OpenAI in market share for overarching corporate AI use, Menlo Ventures estimated, at 32% to OpenAI's 25%. Anthropic is also surprisingly close to OpenAI when it comes to revenue. The company is already at a $7 billion annual run rate and expects to get to $9 billion by the end of the year — a big lead over its better-known rival in revenue per user.

Both companies have backing in the form of investments from big tech companies — Microsoft for OpenAI, and a combination of Amazon and Google for Anthropic — that help provide AI computing infrastructure and expose their products to a broad set of customers. But Anthropic's growth path is a lot easier to understand than OpenAI's. Corporate customers are devising a plethora of money-saving uses for AI in areas like coding, drafting legal documents and expediting billing. Those uses are likely to expand in the future and draw more customers to Anthropic, especially as the return on investment for them becomes easier to measure...

Demonstrating how much demand there is for Anthropic among corporate customers, Microsoft in September said Anthropic's leading language model, Claude, would be offered within its Copilot suite of software despite Microsoft's ties to OpenAI.

"There is also a possibility that OpenAI's mass-market appeal becomes a turnoff for corporate customers," the article adds, "who want AI to be more boring and useful than fun and edgy."
AI

California Colleges Test AI Partnerships. Critics Complain It's Risky and Wasteful (msn.com) 58

America's largest university system, with 460,000 students, is the 22-campus "Cal State" system, reports the New York Times. And it's recently teamed with Amazon, OpenAI and Nvidia, hoping to embed chatbots in both teaching and learning to become what it says will be America's "first and largest AI-empowered" university — and prepare students for "increasingly AI-driven" careers.

It's part of a trend of major universities inviting tech companies into "a much bigger role as education thought partners, AI instructors and curriculum providers," argues the New York Times, where "dominant tech companies are now helping to steer what an entire generation of students learn about AI, and how they use it — with little rigorous evidence of educational benefits and mounting concerns that chatbots are spreading misinformation and eroding critical thinking..."

"Critics say Silicon Valley's effort to make AI chatbots integral to education amounts to a mass experiment on young people." As part of the effort, [Cal State] is paying OpenAI $16.9 million to provide ChatGPT Edu, the company's tool for schools, to more than half a million students and staff — which OpenAI heralded as the world's largest rollout of ChatGPT to date. Cal State also set up an AI committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students' career opportunities... Cal State is not alone. Last month, California Community Colleges, the nation's largest community college system, announced a collaboration with Google to supply the company's "cutting edge AI tools" and training to 2.1 million students and faculty. In July, Microsoft pledged $4 billion for teaching AI skills in schools, community colleges and to adult workers...

[A]s schools like Cal State work to usher in what they call an "AI-driven future," some researchers warn that universities risk ceding their independence to Silicon Valley. "Universities are not tech companies," Olivia Guest and Iris van Rooij, two computational cognitive scientists at Radboud University in the Netherlands, recently said in comments arguing against fast AI adoption in academia. "Our role is to foster critical thinking," the researchers said, "not to follow industry trends uncritically...."

Some faculty members have pushed back against the AI effort, as the university system faces steep budget cuts. The multimillion-dollar deal with OpenAI — which the university did not open to bidding from rivals like Google — was wasteful, they said. Faculty senates on several Cal State campuses passed resolutions this year criticizing the AI initiative, saying the university had failed to adequately address students using chatbots to cheat. Professors also said administrators' plans glossed over the risks of AI to students' critical thinking and ignored troubling industry labor practices and environmental costs.

Martha Kenney, a professor of women and gender studies at San Francisco State University, described the AI program as a Cal State marketing vehicle helping tech companies promote unproven chatbots as legitimate educational tools.

The article notes that Cal State's chief information officer "defended the OpenAI deal, saying the company offered ChatGPT Edu at an unusually low price.

"Still, California's community college system landed AI chatbot services from Google for more than 2 million students and faculty — nearly four times the number of users Cal State is paying OpenAI for — for free."
Television

Can YouTube Replace 'Traditional' TV? (hollywoodreporter.com) 106

Can YouTube capture the hours people spend watching "traditional" TV? YouTube's CEO recently said its viewership on TV sets has "surpassed mobile and is now the primary device for YouTube viewing in the U.S.," writes The Hollywood Reporter. And YouTube is shelling out big money to stay on top: It's come a long way since the 19-second "me at the zoo" video was uploaded in April 2005. Now, per a KPMG report released Sept. 23, YouTube is second only to Comcast in terms of annual content spend, inclusive of payments to creators and media companies, paying out as much as Netflix and Paramount combined, $32 billion... The only question is what genres it will take over next, and how quickly it will do so. From talk shows to scripted dramas to, yes, live sports, there are signs that the platform's ambitions will collide with the traditional TV business sooner rather than later...

YouTube has slowly, then all at once, become the de facto home for what had been late night, not only for the shows on linear TV, but for an emerging crop of new talent born on the platform. As it happens, late night itself transformed YouTube when the Saturday Night Live skit "Lazy Sunday" went viral 20 years ago on the platform, which had only been live for a few months... As consumer preferences collide with a burgeoning ecosystem of video podcasts (YouTube now claims more than 1 billion podcast users monthly), the world of late night, and for that matter TV talk shows more generally, increasingly revolves around the platform. One current late night producer says that almost every A-list booking now includes some sort of sketch or bit that they think will play well on YouTube, but booking those guests in the first place has become less of a sure thing. A veteran Hollywood publicist says that for many of their clients, they are now recommending that YouTube podcasts or shows become the first stop, or at least a major stop, on press tours...

Nielsen has been tracking the streaming platforms that consumers watch on their TV screens ever since it launched what it calls The Gauge in 2021. But over the past year, YouTube's domination of The Gauge has unnerved executives at some competitors. The most recent Gauge report showed that YouTube was by far the most watched video platform, holding 13.1 percent share. Netflix, in second place, was at 8.7 percent.

The article suggests YouTube's last challenge may be "scripted" entertainment — where its business model differs from Netflix's or HBO's.

"On YouTube, it is up to the creator to finance and produce their content, and while the platform regularly releases new tools to help them (including AI-enabled tech that suggests video ideas and can create short background videos for use in Shorts), scripted entertainment is a particularly tricky challenge, requiring writers, directors, sets, costumes, lighting, editing, special effects and other production requirements that may go beyond the typical creator-led show."
AI

Is the Term 'AI Factories' Necessary and Illuminating - or Marketing Hogwash? (msn.com) 25

Data centers were typically "hulking, chilly buildings lined with stacks of computing gear and bundles of wiring," writes the Washington Post. But "AI experts say that the hubs for computers that power AI are different from the data centers that deliver your Netflix movies and Uber rides. They use a different mix of computer chips, cost a lot more and need a lot more energy.

"The question is whether it's necessary and illuminating to rebrand AI-specialized data centers, or if calling them 'AI factories' is just marketing hogwash." The AI computer chip company Nvidia seems to have originated the use of "AI factories." CEO Jensen Huang has said that the term is apt because similar to industrial factories, AI factories take in raw materials to produce a product... The term is spreading. Sam Altman, CEO of ChatGPT parent company OpenAI, recently said that he wants a "factory" to regularly produce more building blocks for AI. Crusoe, a start-up that's erecting a mammoth "Stargate" data center in Texas, calls itself the "AI factory company." The prime minister of Bulgaria recently touted an "AI factory" in his country...

Alex Hanna, director of research at the Distributed AI Research Institute and co-author of the book, "The AI Con," had a more pessimistic view of the term "AI factories." She said that it's a way to deflect the negative connotations of data centers. Some people and politicians blame power-hungry computing hubs for driving up residential electric bills, spewing pollution, draining drinking water and producing few permanent jobs.

Transportation

How America's Transportation Department Blocked a Self-Driving Truck Company (reason.com) 90

Reason.com explores the fortunes of Aurora Innovation, the first company to put heavy-duty commercial self-driving trucks on public roads (it hopes to expand routes to El Paso, Texas, and Phoenix by the end of the year): An obscure federal rule is slowing the self-driving revolution. When trucks break down, operators are required to place reflective warning cones and road flares around the truck to warn other motorists. The regulations are exacting: Within 10 minutes of stopping, three warning signals must be set in specific locations around the truck. Aurora asked the federal Department of Transportation (DOT) to allow warning beacons to be fixed to the truck itself — and activated when a truck becomes disabled. The warning beacons would face both forward and backward, would be more visible than cones (particularly at night), and wouldn't burn out like road flares. Drivers of nonautonomous vehicles could also benefit from that rule change, as they would no longer have to walk into traffic to place the required safety signals.

In December 2024, however, the Transportation Department denied Aurora's request for an exemption to the existing rules, even though regulators admitted in the Federal Register that no evidence indicated the truck-mounted beacons would be less safe. Such a study is now underway, but it's unclear how long it will take to draw any conclusions.

The article notes that Aurora has now filed a lawsuit in federal court that seeks to overturn the Transportation Department's denial...

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

'Meet The People Who Dare to Say No to AI' (msn.com) 112

Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce...

"As the tech industry and corporate America go all in on artificial intelligence, some people are holding back." Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...

Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post offers several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than on Google — and disables AI on every app he uses. He was one of several tech workers who spoke anonymously partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."

But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." And he also has to correct errors made by junior engineers who've been encouraged to use AI coding tools.

"Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."
Crime

North Korea Has Stolen Billions in Cryptocurrency and Tech Firm Salaries, Report Says (apnews.com) 21

The Associated Press reports that "North Korean hackers have pilfered billions of dollars" by breaking into cryptocurrency exchanges and by creating fake identities to get remote tech jobs at foreign companies — all orchestrated by the North Korean government to finance R&D on nuclear arms.

That's according to a new 138-page report by a group watching North Korea's compliance with U.N. sanctions (including officials from the U.S., Australia, Canada, France, Germany, Italy, Japan, the Netherlands, New Zealand, South Korea and the United Kingdom). From the Associated Press: North Korea also has used cryptocurrency to launder money and make military purchases to evade international sanctions tied to its nuclear program, the report said. It detailed how hackers working for North Korea have targeted foreign businesses and organizations with malware designed to disrupt networks and steal sensitive data...

Unlike China, Russia and Iran, North Korea has focused much of its cyber capability on funding its government, using cyberattacks and fake workers to steal and defraud companies and organizations elsewhere in the world... Earlier this year, hackers linked to North Korea carried out one of the largest crypto heists ever, stealing $1.5 billion worth of ethereum from Bybit. The FBI later linked the theft to a group of hackers working for the North Korean intelligence service.

Federal authorities also have alleged that thousands of IT workers employed by U.S. companies were actually North Koreans using assumed identities to land remote work. The workers gained access to internal systems and funneled their salaries back to North Korea's government. In some cases, the workers held several remote jobs at the same time.

Microsoft

28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico' (apnews.com) 19

"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant's Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality... "When you talk about something sad, you can see Mico's face change. You can see it dance around and move as it gets excited with you," said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. "It's in this effort of really landing this AI companion that you can really feel."

In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in "study" mode. It's also easy to shut off, which is a big difference from Microsoft's Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997. "It was not well-attuned to user needs at the time," said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. "Microsoft pushed it, we resisted it and they got rid of it. I think we're much more ready for things like that today..."

Microsoft's product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta's WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to "troll your friends," in contrast to Microsoft's designs for an "intensely collaborative" AI-assisted workplace.

IT

Some Startups Are Demanding 12-Hour Days, Six Days a Week from Workers (msn.com) 151

The Washington Post reports on 996, "a term popularized in China that refers to a rigid work schedule in which people work from 9 a.m. to 9 p.m., six days a week..." As the artificial intelligence race heats up, many start-ups in Silicon Valley and New York are promoting hardcore culture as a way of life, pushing the limits of work hours, demanding that workers move fast to be first in the market. Some are even promoting 996 as a virtue in the hiring process and keeping "grind scores" of companies... Whoever builds first in AI will capture the market, and the window of opportunity is two to three years, "so you better run faster than everyone else," said Inaki Berenguer, managing partner of venture-capital firm LifeX Ventures.

At San Francisco-based AI start-up Sonatic, the grind culture also allows for meal, gym and pickleball time, said Kinjal Nandy, its CEO. Nandy recently posted a job opening on X that requires in-person work seven days a week. He said working 10-hour days sounds like a lot but the company also offers its first hires perks such as free housing in a hacker house, food delivery credits and a free subscription to the dating service Raya... Mercor, a San Francisco-based start-up that uses AI to match people to jobs, recently posted an opening for a customer success engineer, saying that candidates should have a willingness to work six days a week, and it's not negotiable. "We know this isn't for everyone, so we want to put it up top," the listing reads.

Being in-person rather than remote is a requirement at some start-ups. AI start-up StarSling had two engineering job descriptions that required six days a week of in-person work. In a job description for an engineer, Rilla, an AI company in New York, said candidates should not work at the company if they're not excited about working about 70 hours a week in person. One venture capitalist even started tracking "grind scores." Jared Sleeper, a partner at New York-based venture capital firm Avenir, recently ranked public software companies' "grind score" in a post on X, which went viral. Using data from Glassdoor, it ranks the percentage of employees who have a positive outlook for the company compared with their views on work-life balance.
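The article doesn't give Sleeper's exact formula, but a "grind score" built from the two Glassdoor-style percentages it describes could be sketched as a simple ratio. The formula, company names, and numbers below are invented for illustration, not Sleeper's actual data:

```python
# Hypothetical "grind score": ratio of positive-company-outlook sentiment to
# work-life-balance sentiment. A high score means employees are bullish on
# the company despite rating work-life balance poorly.

def grind_score(positive_outlook_pct: float, work_life_balance_pct: float) -> float:
    return positive_outlook_pct / work_life_balance_pct

# (positive outlook %, work-life balance %) -- illustrative figures only
companies = {
    "ExampleCo A": (85.0, 40.0),  # bullish, poor balance -> high grind
    "ExampleCo B": (70.0, 75.0),  # middling on both
    "ExampleCo C": (50.0, 90.0),  # comfortable, less bullish -> low grind
}

ranked = sorted(companies, key=lambda c: grind_score(*companies[c]), reverse=True)
print(ranked)  # prints: ['ExampleCo A', 'ExampleCo B', 'ExampleCo C']
```

Whatever the real weighting, the design choice is the same: use enthusiasm for the company as the numerator and work-life balance as the denominator, so "grind" rises as the gap between the two widens.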

"At Google's AI division, cofounder Sergey Brin views 60 hours per week as the 'sweet spot' for productivity," notes the Independent: Working more than 55 hours a week, compared with a standard 35-40-hour week, is linked to a 35 percent higher risk of stroke and a 17 percent higher risk of death from heart disease, according to the World Health Organization. Productivity also suffers. A British study shows that working beyond 60 hours a week can reduce overall output, slow cognitive performance, and impair tasks ranging from call handling to problem-solving.

Shorter workweeks, in contrast, appear to boost productivity. Microsoft Japan saw a roughly 40 percent increase in output after adopting a four-day work week. In a UK trial, 61 companies that tested a four-day schedule reported revenue gains, with 92 percent choosing to keep the policy, according to Bloomberg.

Crime

Myanmar Military Shuts Down a Major Cybercrime Center and Detains Over 2,000 People (apnews.com) 11

An anonymous reader shares this report from the Associated Press: Myanmar's military has shut down a major online scam operation near the border with Thailand, detaining more than 2,000 people and seizing dozens of Starlink satellite internet terminals, state media reported Monday... The centers are infamous for recruiting workers from other countries under false pretenses, promising them legitimate jobs and then holding them captive and forcing them to carry out criminal activities.

Scam operations were in the international spotlight last week when the United States and Britain imposed sanctions on the organizers of a major Cambodian cyberscam gang, whose alleged ringleader was indicted by a federal court in New York. According to a report in Monday's Myanma Alinn newspaper, the army raided KK Park, a well-documented cybercrime center, as part of operations that began in early September to suppress online fraud, illegal gambling, and cross-border cybercrime.

Cloud

Amazon's AWS Shows Signs of Weakness as Competitors Charge Ahead (bloomberg.com) 25

Amazon Web Services basically invented the cloud computing business and once held nearly half the market. That dominance is slipping. AWS captured 38% of corporate spending on cloud infrastructure services last year, down from almost 50% in 2018, according to Gartner. Microsoft now grows its backlog of corporate sales faster than Amazon. The company that brushed aside incumbents and transformed an internal startup into Amazon's profit engine now faces internal bureaucracy that has slowed it down.

Bloomberg interviewed 23 current and former AWS employees who described management layers that proliferated after a pandemic hiring binge. One sales engineer who was six managers from Jeff Bezos before the pandemic found himself fifteen rungs from CEO Andy Jassy earlier this year. AWS hesitated to invest in Anthropic when the AI startup was spending most of its cash on Amazon servers.

Executives doubted that Anthropic's AI could be monetized and were culturally reluctant to pay for external technology they believed could be built in-house. Google invested in early 2023. Amazon followed that September with $4 billion in commitments. On Thursday, Google said it will supply up to 1 million AI chips to Anthropic.

Microsoft

Microsoft Outlook is Getting an AI Overhaul Under New Leaders (theverge.com) 50

Microsoft has reorganized its Outlook team under new leadership as part of a broader effort to integrate AI into its core products. Gaurav Sareen, a corporate vice president at the company, recently assumed direct leadership of the Outlook division after Lynn Ayres, who previously ran the team, began a sabbatical. The move represents the latest in a series of AI-focused restructurings across Microsoft's divisions. Sareen wrote in an internal memo that the company now has an opportunity to reimagine Outlook from the ground up rather than add AI features to existing systems, according to The Verge.

Ryan Roslansky, the chief executive of LinkedIn, took on an expanded role earlier this year as head of Office. Sareen now reports to Roslansky, who oversees the Office suite, Outlook and Microsoft 365 Copilot teams. The restructuring comes after Microsoft spent several years developing One Outlook, a web-based version meant to replace separate Windows, Mac, and web applications.

Intel

Intel's Tick-Tock Isn't Coming Back (theverge.com) 22

Intel's tick-tock development cadence will not return. CEO Lip-Bu Tan said during the company's Q3 2025 earnings call that the 18A process node will be a "long-lived node" powering at least three generations of client and server products. Intel reported its first profit in nearly two years, aided by financial support from Nvidia, SoftBank, and the US government.

The company faces chip shortages that will peak in the first quarter of next year. CFO David Zinsner said Intel is prioritizing AI server chips over consumer processors. Intel will launch only one Panther Lake SKU this year and roll out others in 2026. Zinsner called Panther Lake "pretty expensive" and said Intel will push Lunar Lake chips "in at least the first half of the year."
