Google

Google Relaunches Cameyo To Entice Businesses From Windows To ChromeOS (theverge.com) 27

After acquiring software virtualization company Cameyo last year, Google has relaunched a version of the service that makes it easier for Windows-based organizations to migrate over to ChromeOS. From a report: Now called "Cameyo by Google," the Virtual App Delivery (VAD) solution allows users to run legacy Windows apps in the Chrome browser or as web apps, so organizations are no longer tied to Microsoft's operating system. Google says the new Cameyo experience is more efficient than switching between separate virtual desktop environments, allowing users to stream the specific apps they need instead of virtualizing the entire desktop. That allows Windows-based programs like Excel and AutoCAD to run side-by-side with Chrome and other web apps, giving businesses the flexibility to use a mix of Microsoft and Google services.
AI

AI Bubble Is Ignoring Michael Burry's Fears (bloomberg.com) 60

An anonymous reader shares a report: Costing tens of thousands of dollars each, Nvidia's pioneering AI chips make up a hefty chunk of the $400 billion that Big Tech plans to invest this year -- a bill expected to hit $3 trillion by 2029. But unlike 19th-century railroads, or the Dotcom boom's fiber-optic cables, the GPUs fueling today's AI mania are short-lived assets with a shelf life of perhaps five years.

As with your iPhone, this stuff tends to lose value and may need upgrading soon because Nvidia and its rivals aim to keep launching better models. Customers like OpenAI will have to deploy them to stay competitive. So while it's comforting that the companies spending most wildly have mountains of cash to throw around (OpenAI aside), the brief useful life of the chips and the generous accounting assumptions underpinning all of this investment are less consoling.
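The accounting point is easy to make concrete. A toy straight-line depreciation sketch (the dollar figure and useful-life assumptions here are illustrative, not numbers from the report):

```python
def straight_line_depreciation(cost, useful_life_years):
    """Annual expense when an asset's cost is spread evenly over its life."""
    return cost / useful_life_years

gpu_cost = 40_000  # hypothetical per-GPU price in dollars

# The longer the assumed useful life, the smaller the expense hitting the
# income statement each year, which is why depreciation schedules matter.
generous = straight_line_depreciation(gpu_cost, 6)      # optimistic lifespan
conservative = straight_line_depreciation(gpu_cost, 3)  # chips upgraded fast

print(f"6-year schedule: ${generous:,.0f}/yr")      # lower reported expense
print(f"3-year schedule: ${conservative:,.0f}/yr")  # nearly double per year
```

Stretching the assumed life from three years to six roughly halves the annual expense on paper, without changing how quickly the hardware actually becomes uncompetitive.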

Michael Burry, who made his name betting against US housing and who's recently turned to the AI boom, waded in this week, warning on X that hyperscalers -- industry jargon for the giant companies building gargantuan data centers -- are underestimating depreciation. Far from being a one-off outlay, there's a danger of AI capex becoming a huge recurring expense. That's great for Nvidia and co., but not necessarily for hyperscalers such as Google and Microsoft. Some face a depreciation tsunami that's forcing them to be extra vigilant about controlling other costs. Amazon has plans to eliminate roughly 14,000 jobs.

And while Wall Street is used to financing fast-depreciating assets such as aircraft and autos, it's worrying that private credit funds are increasingly using GPUs as collateral to finance loans. This includes lending to more speculative startups known as neoclouds, which offer GPUs for rent. Microsoft alone has signed more than $60 billion of neocloud deals.

Apple

The iPad Pro at 10: a Decade of Unrealized Potential (theverge.com) 59

The iPad Pro went on sale ten years ago, launching with a 12.9-inch screen that Apple believed would redefine computing through size alone. The company initially resisted making the device a laptop replacement and maintained strict limitations on multitasking, browser capabilities, and app installation. Over the past decade, Apple reversed course. The iPad Pro gained USB-C ports, external drive support, keyboard and trackpad accessories, and an improved Files app.

The current M5 model includes OLED screens in 13- and 11-inch sizes. iPadOS 26 added free-form multitasking, a menu bar and the Preview app. The webcam now sits in landscape orientation. Despite these advances, the device remains constrained by App Store-only software installation, The Verge writes, limited system access, and the absence of desktop-class browsers. Apple spent years positioning the iPad as a third category between phones and computers. The hardware and accessories now support full computer functionality, but artificial software limitations remain in place.
Education

UK Secondary Schools Pivoting From Narrowly Focused CS Curriculum To AI Literacy 64

Longtime Slashdot reader theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16-18-year-olds." The move aims to correct unintended consequences of a shift made more than a decade ago from the then-existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, to a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups to address concerns about the UK's programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" with the support of its partners, including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.
Security

A Jailed Hacking Kingpin Reveals All About Cybercrime Gang (bbc.com) 19

Slashdot reader alternative_right shares an exclusive BBC interview with Vyacheslav "Tank" Penchukov, once a top-tier cyber-crime boss behind Jabber Zeus, IcedID, and major ransomware campaigns. His story traces the evolution of modern cybercrime from early bank-theft malware to today's lucrative ransomware ecosystem, marked by shifting alliances, Russian security-service ties, and the paranoia that ultimately consumes career hackers. Here's an excerpt from the report: In the late 2000s, he and the infamous Jabber Zeus crew used revolutionary cyber-crime tech to steal directly from the bank accounts of small businesses, local authorities and even charities. Victims saw their savings wiped out and balance sheets upended. In the UK alone, there were more than 600 victims, who lost more than $5.2 million in just three months. Between 2018 and 2022, Penchukov set his sights higher, joining the thriving ransomware ecosystem with gangs that targeted international corporations and even a hospital. [...]

Penchukov says he did not think about the victims, and he does not seem to do so much now, either. The only sign of remorse in our conversation was when he talked about a ransomware attack on a disabled children's charity. His only real regret seems to be that he became too trusting with his fellow hackers, which ultimately led to him and many other criminals being caught. "You can't make friends in cyber-crime, because the next day, your friends will be arrested and they will become an informant," he says. "Paranoia is a constant friend of hackers," he says. But success leads to mistakes. "If you do cyber-crime long enough you lose your edge," he says, wistfully.

EU

Critics Call Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com) 27

An anonymous reader quotes a report from Reuters: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues, which have in turn faced pushback from companies and the U.S. government.

EU tech chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.
"The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts," Austrian privacy group noyb said in a statement. "This would be a massive downgrading of Europeans' privacy 10 years after the GDPR was adopted," noyb's Max Schrems said.

"These proposals would change how the EU protects what happens inside your phone, computer and connected devices," European Digital Rights policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post. "That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement," she said.
Media

PDF Will Support JPEG XL Format As 'Preferred Solution' (theregister.com) 18

The PDF Association is adding JPEG XL (JXL) support to the PDF specification, giving the advanced image format a new path to relevance despite Google's decision to declare it obsolete and remove it from Chromium. The Register reports: Peter Wyatt, CTO of the PDF Association, said: "We need to adopt a new image [format] that can support HDR [High Dynamic Range] content ... we have picked JPEG XL as our preferred solution." Wyatt also praised other benefits of JXL, including wide-gamut images, support for ultra-high-resolution images with more than 1 billion pixels, and up to 4099 channels with up to 32 bits per channel.

The association is responsible for developing PDF specifications and standards and manages the ISO committee for PDF. JPEG XL is an advanced image format that was designed to be both more efficient and richer in features than JPEG. It was based on a combination of the Free Lossless Image Format (FLIF) from Cloudinary and a Google project called PIK, first released in late 2020, and fully standardized in October 2021 as ISO/IEC 18181. There is a reference implementation called libjxl. A second edition of the ISO standard was published in 2024.

JXL appeared to have wide industry support, including experimental implementation in Chrome and Chromium, until it was killed by Google in October 2022 and removed from its web browser engine. The company stated that "there is not enough interest from the entire ecosystem to continue experimenting with JPEG XL." Many in the community disagreed with the decision, including FLIF inventor Jon Sneyers, who perceived it as the outcome of an internal battle between proponents of JXL and a rival format, AVIF. "AVIF proponents within Chrome are essentially being prosecutor, judge and executioner at the same time," he said.

Power

Data Centers in Nvidia's Hometown Stand Empty Awaiting Power (yahoo.com) 40

Two of the world's biggest data center developers have projects in Nvidia's hometown that may sit empty for years because the local utility isn't ready to supply electricity. From a report: In Santa Clara, California, where the world's biggest supplier of artificial-intelligence chips is based, Digital Realty Trust applied in 2019 to build a data center. Roughly six years later, the development remains an empty shell awaiting full energization. Stack Infrastructure, which was acquired earlier this year by Blue Owl Capital, has a nearby 48-megawatt project that's also vacant, while the city-owned utility, Silicon Valley Power, struggles to upgrade its capacity.

The fate of the two facilities highlights a major challenge for the US tech sector and indeed the wider economy. While demand for data centers has never been greater, driven by the boom in cloud computing and AI, access to electricity is emerging as the biggest constraint. That's largely because of aging power infrastructure, a slow build-out of new transmission lines and a variety of regulatory and permitting hurdles. And the pressure on power systems is only going to increase. Electricity requirements from AI computing will likely more than double in the US alone by 2035, based on BloombergNEF projections. Nvidia's Jensen Huang and OpenAI's Sam Altman are among corporate leaders who have predicted trillions of dollars will pour into building new AI infrastructure.

The Internet

Tim Berners-Lee Says AI Will Not Destroy the Web (theverge.com) 54

Tim Berners-Lee thinks AI will help the web, not destroy it. The inventor of the World Wide Web has spent years warning about platform concentration and social media's corrosive effects, but he views AI differently. AI has accomplished what his Semantic Web project could not. The technology extracts structured data from websites regardless of how the information was formatted. Berners-Lee spent decades trying to convince database owners to make their systems machine-readable voluntarily. AI companies simply took the data anyway. They achieved the machine-readable internet through extraction rather than cooperation, but the result is the same.

Berners-Lee also weighed in on the growing browser competition. OpenAI released Atlas a few weeks ago. Perplexity has launched Comet. Google has expanded AI features in Chrome. All these browsers run on Chromium, which Berners-Lee acknowledges is not ideal, though he concedes that browser engines are expensive to build. He thinks Apple's decision to restrict iPhones to WebKit prevents web apps from competing with native apps.
Network

Subsea Cable Investment Set To Double As Tech Giants Accelerate AI Buildout (cnbc.com) 9

Investment in subsea cable projects is expected to reach around $13 billion between 2025 and 2027, almost twice the amount invested between 2022 and 2024, according to telecommunications data provider TeleGeography. Tech giants Meta, Google, Amazon and Microsoft now represent about 50% of the overall market, up from a negligible share a decade ago.

The companies are expanding their subsea infrastructure to connect growing networks of data centers needed for AI development. Meta announced Project Waterworth in February, a 50,000-kilometer cable connecting five continents that will be the world's longest subsea cable project. Amazon announced its first wholly owned subsea cable called Fastnet, connecting Maryland to Ireland. Google has invested in over 30 subsea cables. Over 95% of international data and voice call traffic travels through nearly a million miles of underwater cables.
iPhone

Apple Explores New Satellite Features for Future iPhones (macobserver.com) 23

The iPhone 14 introduced emergency satellite service in 2022, and there's now also support for roadside assistance and the ability to send and receive text messages.

But for future iPhones, Apple is now reportedly working on five new satellite features, reports LiveMint: As per Bloomberg's Mark Gurman, Apple is building an API that would allow developers to add satellite connections to their own apps. However, the implementation is said to depend on app makers, and not every feature or service may be compatible with this system. The iPhone maker is also reportedly working on bringing satellite connectivity to Apple Maps, which would give users the chance to navigate without having access to a SIM card or Wi-Fi. The company is also said to be working on improved satellite messages that could support sending photos and not be limited to just text messages. Apple currently relies on the satellite network run by Globalstar to power current features on iPhones. However, Globalstar is said to be exploring a potential sale, and Elon Musk's SpaceX could be a possible purchaser.
The Mac Observer notes Bloomberg also reported Apple "has discussed building its own satellite service instead of depending on partners." And while some Apple executives pushed back, "the company continues to fund satellite research and infrastructure upgrades with the goal of offering a broader range of features."

And "Future iPhones will use satellite links to extend 5G coverage in low-signal regions, ensuring that users remain connected even when cell towers are out of range.... Apple's slow but steady progress shows how the company wants iPhone satellite technology to move from emergency use to everyday convenience."
Biotech

Genetically Engineered Babies Are Banned in the US. But Tech Titans Are Trying to Make One Anyway (msn.com) 91

"For months, a small company in San Francisco has been pursuing a secretive project: the birth of a genetically engineered baby," reports the Wall Street Journal: Backed by OpenAI chief executive Sam Altman and his husband, along with Coinbase co-founder and CEO Brian Armstrong, the startup — called Preventive — has been quietly preparing what would amount to a biological first. They are working toward creating a child born from an embryo edited to prevent a hereditary disease.... Editing genes in embryos with the intention of creating babies from them is banned in the U.S. and many countries. Preventive has been searching for places to experiment where embryo editing is allowed, including the United Arab Emirates, according to correspondence reviewed by The Wall Street Journal...

Preventive is in the vanguard of a growing number of startups, funded by some of the most powerful people in Silicon Valley, that are pushing the boundaries of fertility and working to commercialize reproductive genetic technologies. Some are working on embryo editing, while others are already selling genetic screening tools that seek to account for the influence of dozens or hundreds of genes on a trait. They say their ultimate goal is to produce babies who are free of genetic disease and resilient against illnesses. Some say they can also give parents the ability to choose embryos that will have higher IQs and preferred traits such as height and eye color. Armstrong, the cryptocurrency billionaire, is leading the charge to make embryo editing a reality. He has told people that gene-editing technology could produce children who are less prone to heart disease, with lower cholesterol and stronger bones to prevent osteoporosis. According to documents and people briefed on his plans, he is already an investor or in talks with embryo editing ventures...

After the Journal approached people close to the company last month to ask about its work, Preventive announced on its website that it had raised $30 million in investment to explore embryo editing. The statement pledged not to advance to human trials "if safety cannot be established through extensive research..." Other embryo editing startups are Manhattan Genomics, co-founded by Thiel Fellow Cathy Tie, and Bootstrap Bio, which plans to conduct tests in Honduras. Both companies are in early stages.

The article notes the only known instance of children born from edited embryos was in 2018, when Chinese scientist He Jiankui "shocked the world with news that he had produced three children genetically altered as embryos to be immune to HIV. He was sentenced to prison in China for three years for the illegal practice of medicine.

"He hasn't publicly shared the children's identities but says they are healthy.
AI

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases (indianexpress.com) 135

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers...

Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters , Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that nobody clicked "share" or was given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."
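A site owner could spot this kind of leakage in their own Search Console export with a crude length filter: real keyword queries are short, while pasted chatbot prompts can run to hundreds of characters. A minimal sketch (the threshold and sample strings are illustrative assumptions, not data from the article):

```python
def likely_leaked_prompts(queries, min_chars=150):
    """Flag Search Console 'queries' long enough to look like pasted
    chatbot prompts rather than typed search keywords."""
    return [q for q in queries if len(q) >= min_chars]

sample_queries = [
    "best running shoes 2025",
    "quantable analytics",
    # A fabricated example of the kind of 300-character conversational
    # text the article describes showing up as a "query":
    "my business partner and i disagree about whether to take outside "
    "funding and every conversation turns into an argument, can you help "
    "me write a message that explains my concerns about giving up equity "
    "without making him feel like i do not trust his judgment",
]

for q in likely_leaked_prompts(sample_queries):
    print(q[:60] + "...")
```

Anything this filter catches is worth a closer look, since ordinary search keywords almost never approach that length.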

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console.."
Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com) 59

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.
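As described, the enforcement logic amounts to a simple threshold rule. A sketch of that decision policy (the 95% ban threshold comes from the reported documents; the "likely scammer" cutoff and the penalty multiplier below are invented placeholders):

```python
def ad_enforcement(fraud_probability, base_ad_rate):
    """Illustrative version of the policy the documents describe: ban only
    at very high certainty, otherwise price likely scammers higher."""
    if fraud_probability >= 0.95:        # threshold reported in the documents
        return ("ban", None)
    if fraud_probability >= 0.50:        # assumed "likely scammer" cutoff
        return ("penalty_rate", base_ad_rate * 1.5)  # assumed multiplier
    return ("standard_rate", base_ad_rate)

print(ad_enforcement(0.97, 10.0))  # banned outright
print(ad_enforcement(0.80, 10.0))  # likely scammer: charged a premium
print(ad_enforcement(0.10, 10.0))  # treated as a normal advertiser
```

The structural point the documents make is visible in the middle branch: an advertiser the system believes is probably fraudulent keeps running ads, just at a higher price.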

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
The Almighty Buck

You Can't Leave Unless You Buy Something (sfgate.com) 195

An anonymous reader quotes a report from SFGATE: At the Safeway on San Francisco's King Street, you now can't leave the store unless you buy something. The Mission Bay grocery store recently installed new anti-theft measures at the entrance and exit. New gates at the entrance automatically swing open when customers walk in, but they're set to trigger an alarm if someone attempts to back out. And if you walk into Safeway and change your mind about grocery shopping, you might find yourself trapped: Another gate that only opens if you scan your receipt blocks the store's sole exit.

During my Monday visit, I purchased a kombucha and went through the check-out line without incident. (No high-tech gates block the exit if you go through the line like normal.) But for journalism's sake, I then headed back into the store to try going out the new gate. While I watched some customers struggle with the new technology, my receipt scanned immediately. The glass doors slid open, and I was free. But if, as this person on the San Francisco subreddit recounted, I hadn't bought anything, my only means of exit would have been to beg the security guard to let me out.

Social Networks

Denmark's Government Aims To Ban Access To Social Media For Children Under 15 (apnews.com) 35

An anonymous reader quotes a report from the Associated Press: Denmark's government on Friday announced an agreement to ban access to social media for anyone under 15, ratcheting up pressure on Big Tech platforms as concerns grow that kids are getting too swept up in a digitized world of harmful content and commercial interests. The move would give some parents -- after a specific assessment -- the right to let their children access social media from age 13.

It wasn't immediately clear how such a ban would be enforced: Many tech platforms already restrict pre-teens from signing up. Officials and experts say such restrictions don't always work. Such a measure would be among the most sweeping steps yet by a European Union government to limit use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world.
"We've given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven't done it," said Caroline Stage, Denmark's minister for digital affairs. "So now we will take over the steering wheel and make sure that our children's futures are safe."

"I can assure you that Denmark will hurry, but we won't do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through," Stage said.
Sci-Fi

Why Does So Much New Technology Feel Inspired by Dystopian Sci-Fi Movies? (nytimes.com) 111

In a recent article published in the New York Times, author Casey Michael Henry argues that today's tech industry keeps borrowing dystopian sci-fi aesthetics and ideas -- often the parts that were meant as warnings -- and repackages them as exciting products without recognizing that they were originally cautionary tales to avoid. "The tech industry is delivering on some of the futuristic notions of late-20th-century science fiction," writes Henry. "Yet it seems, at times, bizarrely unaware that many of those notions were meant to be dystopian or satirical -- dismal visions of where our worst and dumbest habits could lead us." Here's an excerpt from the report: You worry that someone in today's tech world might watch "Gattaca" -- a film that features a eugenicist future in which people with ordinary DNA are relegated to menial jobs -- and see it as an inspirational launching point for a collaboration between 23andMe and a charter school. The material on Sora, for instance, can feel oddly similar to the jokes about crass entertainment embedded in dystopian films and postmodern novels. In the movie "Idiocracy," America loved a show called "Ow! My Balls!" in which a man is hit in the testicles in increasingly florid ways. "Robocop" imagined a show about a goggle-eyed pervert with an inane catchphrase. "The Running Man" had a game show in which contestants desperately collected dollar bills and climbed a rope to escape ravenous dogs. That Sora could be prompted to imagine a game show in which Michel Foucault chokeslams Ronald Reagan, or Prince battles an anaconda, doesn't feel new; it feels like a gag from a 1990s writer or a film about social decay.

The echoes aren't all accidental. Modern design has been influenced by our old techno-dystopias -- particularly the cyberpunk variety, with its neon-noir gloss and "high tech, low life" allure. From William Gibson novels to films like "The Matrix," the culture has taken in countless ruined cityscapes, all-controlling megacorporations, high-tech body modifications, V.R.-induced illnesses, deceptive A.I. paramours, mechanical assassins and leather-clad hacker antiheroes, navigating a dissociative cyberspace with savvily repurposed junk-tech. This was not a world many people wanted to live in, but its style and ethos seem to reverberate in the tech industry's boldest visions of the future.

The Courts

Why Sam Altman Was Booted From OpenAI, According To New Testimony (theverge.com) 38

An anonymous reader quotes a report from The Verge: "What did Ilya see?" Two years ago, it was the meme seen 'round the world (or at least 'round the tech industry). OpenAI CEO Sam Altman had been briefly ousted in November 2023 by members of the company's board of directors, including his longtime collaborator and fellow cofounder Ilya Sutskever. The board claimed Altman "was not consistently candid in his communications with the board," undermining their confidence in him. He was out for less than a week before being reinstated after hundreds of employees threatened to resign. But observers wondered: What hadn't Altman been candid about? And what led Sutskever to turn against him?

Now, new details have come to light in a legal deposition involving Sutskever, part of Musk's ongoing lawsuit against Altman and OpenAI. For nearly 10 hours on October 1st, bookended by repeated sniping between Musk's and Sutskever's attorneys, Sutskever answered questions about the turmoil around Altman's ouster, from conflicts between executives to short-lived merger talks with Anthropic. He testified that from personal experience and documentation he'd viewed, he'd seen Altman pit high-ranking executives against each other and offer conflicting information about his plans for the company, telling people what they wanted to hear.

The testimony paints a picture of a leader who could be manipulative and chameleon-like in the relentless pursuit of his own agenda -- though Sutskever expressed hesitation about his reliance on some of the secondhand accounts later in testimony, saying he "learned the critical importance of firsthand knowledge for matters like this." In a statement to The Verge, OpenAI spokesperson Liz Bourgeois said that "The events of 2023 are behind us. These claims were fully examined during the board's independent review, which unanimously concluded Sam and Greg are the right leaders for OpenAI." The comment echoes a 2024 statement by board chair Bret Taylor, following an investigation conducted by the company.

Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another," reads a quote from Sutskever's memo. Altman told him and Jakub Pachocki, who is now OpenAI's chief scientist, "conflicting things about the way the company would be run," leading to internal conflict and repeated undermining.

Sutskever said he also faulted Altman for "not accepting or rejecting" former OpenAI research executive Dario Amodei's conditions when he wanted to run all research and fire OpenAI president Greg Brockman, implying Altman played both sides.

Furthermore, then-OpenAI CTO Mira Murati surfaced claims that Altman left Y Combinator for "similar behaviors. He was creating chaos, starting lots of new projects, pitting people against each other, and thus was not managing YC well."
Google

Google Plans Secret AI Military Outpost on Tiny Island Overrun By Crabs (arstechnica.com) 39

An anonymous reader shares a report: On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia's military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia at a location military strategists consider critical for monitoring Chinese naval activity.

Aside from its strategic military position, the island is famous for its massive annual crab migration, in which over 100 million red crabs make their way across the island to spawn in the ocean. The migration matters here because the tech giant has applied for environmental approvals to build a subsea cable connecting the island to Darwin, where US Marines are stationed for six months each year.

[...] Christmas Island's annual crab migration is a natural phenomenon that Sir David Attenborough reportedly once described as one of his greatest TV moments when he visited the site in 1990. Every year, millions of crabs emerge from the forest and swarm across roads, streams, rocks, and beaches to reach the ocean, where each female can produce up to 100,000 eggs. The tiny baby crabs that survive take about nine days to march back inland to the safety of the plateau.
