Chrome

OpenAI Would Buy Google's Chrome, Exec Testifies At Trial (reuters.com) 60

At Google's antitrust trial, OpenAI's head of product revealed the company would consider buying Chrome if regulators force Alphabet to sell it, arguing such a move could help improve ChatGPT's search capabilities. Reuters reports: ChatGPT head of product Nick Turley made the statement while testifying at trial in Washington, where the U.S. Department of Justice seeks to require Google to undertake far-reaching measures to restore competition in online search. The judge overseeing the trial found last year that Google has a monopoly in online search and related advertising. Google has not offered Chrome for sale. The company plans to appeal the ruling that it holds a monopoly.

Turley wrote last year that ChatGPT was leading in the consumer chatbot market and did not see Google as its biggest competitor, according to an internal OpenAI document Google's lawyer showed at trial. He testified that the document was meant to inspire OpenAI employees and that the company would still benefit from distribution partnerships. Turley, a witness for the government, testified earlier in the day that Google shot down a bid by OpenAI to use its search technology within ChatGPT. OpenAI had reached out to Google after experiencing issues with its own search provider, Turley said, without naming the provider. ChatGPT uses technology from Microsoft's search engine, Bing. "We believe having multiple partners, and in particular Google's API, would enable us to provide a better product to users," OpenAI told Google, according to an email shown at trial.

OpenAI first reached out in July, and Google declined the request in August, saying it would involve too many competitors, according to the email. "We have no partnership with Google today," Turley said. The DOJ's proposal to make Google share search data with competitors as one means of restoring competition would help accelerate efforts to improve ChatGPT, Turley said. Search is a critical part of ChatGPT to provide answers to user queries that are up to date and factual, Turley said. ChatGPT is years away from its goal of being able to use its own search technology to answer 80% of queries, he added.

AI

Anthropic Warns Fully AI Employees Are a Year Away 71

Anthropic predicts AI-powered virtual employees will start operating within companies in the next year, introducing new risks such as account misuse and rogue behavior. Axios reports: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios. Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators. Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords. They would have a level of autonomy that far exceeds what agents have today. "In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said.

Those problems include how to secure the AI employee's user accounts, what network access it should be given and who is responsible for managing its actions, Clinton added. Anthropic believes it has two responsibilities to help navigate AI-related security challenges. First, to thoroughly test Claude models to ensure they can withstand cyberattacks, Clinton said. The second is to monitor safety issues and mitigate the ways that malicious actors can abuse Claude.

AI employees could go rogue and hack the company's continuous integration system -- where new code is merged and tested before it's deployed -- while completing a task, Clinton said. "In an old world, that's a punishable offense," he said. "But in this new world, who's responsible for an agent that was running for a couple of weeks and got to that point?" Clinton says virtual employee security is one of the biggest security areas where AI companies could be making investments in the next few years.
Businesses

Companies Ditch Fluorescent Lights in Battle for Office Return (msn.com) 96

Offices nationwide are ditching harsh fluorescent lighting in favor of advanced systems designed to improve cognitive function and entice remote workers back to physical workplaces. Companies are investing in circadian-tuned lighting that adjusts intensity and color temperature throughout the day to mimic natural light patterns, syncing with employees' biological rhythms, according to WSJ.

The technology arsenal includes faux skylights displaying virtual suns and moons, AI-controlled self-tinting windows, and customizable lighting zones that can be adjusted via remote control. Research suggests these innovations may improve brain function during tasks requiring sustained attention. "We've known for a long time that natural light is better and makes people feel better," says Peter Cappelli, professor at Wharton School. The innovations stem from discoveries in the early 2000s of photosensitive retinal cells that affect biology independent of vision. Industry specialists report a "huge uptick in requests," though implementation adds 20-30% to project costs, potentially slowing mainstream adoption.
AI

AI Floods Amazon With Strange Political Books Before Canadian Election (msn.com) 24

An anonymous reader shares a report: Canada has seen a boom in political books created with generative artificial intelligence, adding to concerns about how new technologies are affecting the information voters receive during the election campaign.

Prime Minister Mark Carney was the subject of at least 16 books published in March and listed on Amazon.com, according to a review of the site on April 16. Five of those were published on a single day. In total, some 30 titles were published about Carney this year and made available on Amazon -- but most were taken down from the site after inquiries from Bloomberg News.

One author, James A. Powell, put his name to at least three books about the former central banker, who's now leading the Liberal Party and is narrowly favored to win the election. Among the titles that Amazon removed: "Carney's Code: Climate Capitalism, Digital Currencies, and the Technocratic Takeover of the Global Economy -- Inside Mark Carney's Blueprint for the Post-Democratic World."

Apple

Apple Removes 'Available Now' Claim from Intelligence Page Following NAD Review (theverge.com) 21

Apple has quietly removed the "available now" designation from its Apple Intelligence marketing page following a National Advertising Division review. The change came after the NAD recommended Apple "discontinue or modify" the claim, which "reasonably conveyed the message" that all promoted AI features were immediately available with iPhone 16 devices.

The NAD, part of the Better Business Bureau, determined Apple's footnote explaining feature availability was "neither sufficiently clear and conspicuous nor close to the triggering claims."

Further reading:
Apple Delays 'More Personalized Siri' Apple Intelligence Features;
'Something Is Rotten in the State of Cupertino';
Apple Shakes Up AI Executive Ranks in Bid to Turn Around Siri.
Movies

Movies Made With AI Can Win Oscars, Academy Says (bbc.com) 24

Films made with the help of AI will be able to win top awards at the Oscars, according to its organisers. From a report: The Academy of Motion Picture Arts and Sciences issued new rules on Monday which said the use of AI and other digital tools would "neither help nor harm the chances of achieving a nomination."

[...] The Academy said it would still consider human involvement when selecting its winners. The Academy said its new language around eligibility for films made using generative AI tools was recommended by its Science and Technology Council. Under further rule changes announced on Monday, Academy members must now watch all nominated films in each category in order to be able to take part in the final round of voting, which decides upon winners.

Google

Google Pays Samsung 'Enormous Sums' for Gemini AI App Installs (msn.com) 27

Google pays Samsung an "enormous sum of money" every month to preinstall Google's generative AI app, Gemini, on its phones and devices, according to court testimony, even though the company's practice of paying for installations has twice been found to violate the law. From a report: The company began paying Samsung for Gemini in January, according to Peter Fitzgerald, Google's vice president of platforms and device partnerships, who testified Monday in Washington federal court as part of the Justice Department's antitrust case. The contract, set to run at least two years, provides fixed monthly payments for each device that preinstalls Gemini and pays Samsung a percentage of the revenue Google earns from advertisements within the app, Fitzgerald told Judge Amit Mehta, who is overseeing the case.
Google

Google Says DOJ Breakup Would Harm US In 'Global Race With China' (cnbc.com) 55

Google has argued in court that the U.S. Department of Justice's proposal to break up its Chrome and Android businesses would weaken national security and harm the country's position in the global AI race, particularly against China. CNBC reports: The remedies trial in Washington, D.C., follows a judge's ruling in August that Google has held a monopoly in its core market of internet search, the most-significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago. The Justice Department has called for Google to divest its Chrome browser unit and open its search data to rivals.

Google said in a blog post on Monday that such a move is not in the best interest of the country as the global battle for supremacy in artificial intelligence rapidly intensifies. In the first paragraph of the post, Google named China's DeepSeek as an emerging AI competitor. The DOJ's proposal would "hamstring how we develop AI, and have a government-appointed committee regulate the design and development of our products," Lee-Anne Mulholland, Google's vice president of regulatory affairs, wrote in the post. "That would hold back American innovation at a critical juncture. We're in a fiercely competitive global race with China for the next generation of technology leadership, and Google is at the forefront of American companies making scientific and technological breakthroughs."

Security

AI Hallucinations Lead To a New Cyber Threat: Slopsquatting 51

Researchers have uncovered a new supply chain attack called Slopsquatting, where threat actors exploit hallucinated, non-existent package names generated by AI coding tools like GPT-4 and CodeLlama. These believable yet fake packages, representing almost 20% of the samples tested, can be registered by attackers to distribute malicious code. CSO Online reports: Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquatting, threat actors rely on an AI model's mistake. A significant share of packages recommended in the test samples, 19.7% (205,000 packages), were found to be fake. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones like GPT-4 (5.2%). Researchers found CodeLlama (hallucinating in over a third of its outputs) to be the worst offender and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.

These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts." This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be "semantically convincing." Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.
The research can be found in a paper on arXiv.org (PDF).
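The practical mitigation follows directly from the attack: treat AI-suggested dependencies as untrusted input and check them against a vetted allowlist before anything reaches the package installer. A minimal sketch in Python; the allowlist and the package names, including the fake "fastjsonix", are invented for this example and are not taken from the study:

```python
# Minimal defense against slopsquatting: AI-suggested package names
# are untrusted input, so partition them against a locally vetted
# allowlist before installing. All names here are illustrative.

VETTED = {"requests", "numpy", "flask", "pandas"}

def split_suggestions(suggested):
    """Partition AI-suggested package names into vetted and suspect."""
    vetted = [p for p in suggested if p in VETTED]
    suspect = [p for p in suggested if p not in VETTED]
    return vetted, suspect

vetted, suspect = split_suggestions(["requests", "fastjsonix", "numpy"])
print(vetted)    # ['requests', 'numpy']
print(suspect)   # ['fastjsonix']
```

In a real pipeline the allowlist would come from an internal package mirror or a lockfile with pinned hashes rather than a hard-coded set, but the principle is the same: nothing an AI tool names gets installed sight unseen.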
Microsoft

Microsoft Implements Stricter Performance Management System With Two-Year Rehire Ban (businessinsider.com) 52

Microsoft is intensifying performance scrutiny through new policies that target underperforming employees, according to an internal email from Chief People Officer Amy Coleman. The company has introduced a formalized Performance Improvement Plan (PIP) system that gives struggling employees two options: accept improvement targets or exit the company with a Global Voluntary Separation Agreement.

The policy establishes a two-year rehire blackout period for employees who leave with low performance ratings (zero to 60% on Microsoft's 0-200 scale) or during a PIP process. These employees are also barred from internal transfers while still at the company.

Coming months after Microsoft terminated 2,000 underperformers without severance, the company is also developing AI-supported tools to help managers "prepare for constructive or challenging conversations" through interactive practice environments. "Our focus remains on enabling high performance to achieve our priorities spanning security, quality, and leading AI," Coleman wrote, emphasizing that these changes aim to create "a globally consistent and transparent experience" while fostering "accountability and growth."
AI

Cursor AI's Own Support Bot Hallucinated Its Usage Policy (theregister.com) 9

Cursor AI users recently encountered an ironic AI failure when the platform's support bot falsely claimed a non-existent login restriction policy. Co-founder Michael Truell apologized for the issue, clarified that no such policy exists, and attributed the mishap to AI hallucination and a session management bug. The Register reports: Users of the Cursor editor, designed to generate and fix source code in response to user prompts, have sometimes been booted from the software when trying to use the app in multiple sessions on different machines. Some folks who inquired about the inability to maintain multiple logins for the subscription service across different machines received a reply from the company's support email indicating this was expected behavior. But the person on the other end of that email wasn't a person at all, but an AI support bot. And it evidently made that policy up.

In an effort to placate annoyed users this week, Michael Truell, co-founder of Cursor creator Anysphere, published a note to Reddit to apologize for the snafu. "Hey! We have no such policy," he wrote. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation." Truell added that Cursor provides an interface for viewing active sessions in its settings and apologized for the confusion.

In a post to the Hacker News discussion of the snafu, Truell again apologized and acknowledged that something had gone wrong. "We've already begun investigating, and some very early results: Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support." He said the developer who raised this issue had been refunded. The session logout issue, now fixed, appears to have been the result of a race condition that arises on slow connections and spawns unwanted sessions.
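The class of bug described here is easy to illustrate. The following is not Cursor's actual code, just a generic check-then-act sketch of how two concurrent logins can each observe "no session yet" and spawn duplicates, and how making the check and the insert atomic prevents it:

```python
import threading

sessions = {}          # user -> list of active sessions
lock = threading.Lock()

def login_racy(user):
    # Unsafe check-then-act: two concurrent logins can both pass the
    # "no session" check before either appends, so each spawns a
    # fresh session. A slow connection widens the race window.
    if not sessions.get(user):
        sessions.setdefault(user, []).append({"user": user})
    return sessions[user]

def login_safe(user):
    # Holding the lock makes the check and the append one atomic
    # step, so concurrent logins reuse the single existing session.
    with lock:
        if not sessions.get(user):
            sessions.setdefault(user, []).append({"user": user})
        return sessions[user]
```

In a web service the "lock" is typically a database transaction or unique constraint on the session table rather than an in-process mutex, but the invariant is the same: check and create must be indivisible.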

Software

Over 100 Public Software Companies Getting 'Squeezed' by AI, Study Finds (businessinsider.com) 37

Over 100 mid-market software companies are caught in a dangerous "squeeze" between AI-native startups and tech giants, according to a new AlixPartners study released Monday. The consulting firm warns many face "threats to their survival over the next 24 months" as generative AI fundamentally reshapes enterprise software.

The squeeze reflects a dramatic shift: AI agents are evolving from mere assistants to becoming applications themselves, potentially rendering traditional SaaS architecture obsolete. The share of high-growth companies in this sector plummeted from 57% in 2023 to 39% in 2024, with further decline expected. Customer stickiness is also deteriorating, with median net dollar retention falling from 120% in 2021 to 108% in Q3 2024.
Books

Should the Government Have Regulated the Early Internet - or Our Future AI? (hedgehogreview.com) 45

In February tech journalist Nicholas Carr published Superbloom: How Technologies of Connection Tear Us Apart.

A University of Virginia academic journal says the book "appraises the past and present" of information technology while issuing "a warning about its future." Specifically, Carr argues that the government ignored historical precedent by failing to regulate the early internet in the 1990s. But as he goes on to remind us, the early 1990s were also when the triumphalism of America's Cold War victory, combined with the utopianism of Silicon Valley, convinced a generation of decision-makers that "an unfettered market seemed the best guarantor of growth and prosperity" and "defending the public interest now meant little more than expanding consumer choice." So rather than try to anticipate the dangers and excesses of commercialized digital media, Congress gave it free rein in the Telecommunications Act of 1996, which, as Carr explains,

"...erased the legal and ethical distinction between interpersonal communication and broadcast communications that had governed media in the twentieth century. When Google introduced its Gmail service in 2004, it announced, with an almost imperial air of entitlement, that it would scan the contents of all messages and use the resulting data for any purpose it wanted. Our new mailman would read all our mail."

As for the social-media platforms, Section 230 of the Act shields them from liability for all but the most egregiously illegal content posted by users, while explicitly encouraging them to censor any user-generated content they deem offensive, "whether or not such material is constitutionally protected" (emphasis added). Needless to say, this bizarre abdication of responsibility has led to countless problems, including what one observer calls a "sociopathic rendition of human sociability." For Carr, this is old news, but he warns us once again that the compulsion "to inscribe ourselves moment by moment on the screen, to reimagine ourselves as streams of text and image...[fosters] a strange, needy sort of solipsism. We socialize more than ever, but we're also at a further remove from those we interact with."

Carr's book suggests "frictional design" to slow posting (and reposting) on social media might "encourage civil behavior" — but then decides it's too little, too late, because our current frictionless efficiency "has burrowed its way too deeply into society and the social mind."

Based on all of this, the article's author looks ahead to the next revolution — AI — and concludes "I do not think it wise to wait until these kindly bots are in place before deciding how effective they are. Better to roll them off the nearest cliff today..."
Space

Space Investor Sees Opportunities in Defense-Related Startups and AI-Driven Systems (yahoo.com) 12

Chad Anderson is the founder/managing partner of the early-stage VC Space Capital (and an investor in SpaceX, along with dozens of other space companies). Space Capital produces quarterly reports on the space economy, and he says today, unlike 2021, "the froth is gone. But so is the hype. What's left is a more grounded — and investable — space economy."

On Yahoo Finance he shares several of the report's insights — including the emergence of "investable opportunities across defense-oriented startups in space domain awareness, AI-driven command systems, and hardened infrastructure." The same geopolitical instability that's undermining public markets is driving national urgency around space resilience. China's simulated space "dogfights" prompted the US Department of Defense to double down on orbital supremacy, with the proposed "Golden Dome" missile shield potentially unleashing a new wave of federal spending...

Defense tech is on fire, but commercial location-based services and logistics are freezing over. Companies like Shield AI and Saronic raised monster rounds, while others are relying on bridge financings to stay afloat...

Q1 also saw a breakout quarter for geospatial artificial intelligence (GeoAI). Software developer Niantic launched a spatial computing platform. SkyWatch partnered with GIS software supplier Esri. Planet Labs collaborated with Anthropic. And Xona Space Systems inked a deal with Trimble to boost precision GPS. This is the next leg of the space economy, where massive volumes of satellite data are finally made useful through machine learning, semantic indexing, and real-time analytics.

Distribution-layer companies are doing more with less. They remain underfunded relative to infrastructure and applications but are quietly powering the most critical systems, such as resilient communications, battlefield networks, and edge-based geospatial analysis. Don't let the low round count fool you; innovation here is quietly outpacing capital.

The article includes several predictions, insights, and possible trends (going beyond the fact that defense spending "will carry the sector...")
  • "AI's integration into space (across geospatial intelligence, satellite communications, and sensor fusion) is not a novelty. It's a competitive necessity."
  • "Focusing solely on rockets and orbital assets misses where much of the innovation and disruption is occurring: the software-defined layers that sit atop the physical backbone..."
  • "For years, SpaceX faced little serious competition, but that's starting to change." [He cites Blue Origin's progress toward approval for launching U.S. military satellites, and how Rocket Lab and Stoke Space "have also joined the competition for lucrative government launch contracts." Even Relativity Space may make a comeback, with former Google CEO Eric Schmidt acquiring a controlling stake.]
  • "An infrastructure reset is coming. The imminent ramp-up of SpaceX's Starship could collapse the cost structure for the infrastructure layer. When that happens, legacy providers with fixed-cost-heavy business models will be at risk. Conversely, capital-light innovators in station design, logistics, and in-orbit servicing could suddenly be massively undervalued."

AI

Can You Run the Llama 2 LLM on DOS? (yeokhengmeng.com) 26

Slashdot reader yeokm1 is the Singapore-based embedded security researcher whose side projects include installing Linux on a 1993 PC and building a ChatGPT client for MS-DOS.

He's now sharing his latest project — installing Llama 2 on DOS: Conventional wisdom states that running LLMs locally will require computers with high-performance specifications, especially GPUs with lots of VRAM. But is this actually true?

Thanks to an open-source llama2.c project [originally created by Andrej Karpathy], I ported it so that vintage machines running DOS can actually run inference with Llama 2 LLM models. Of course there are severe limitations, but the results will surprise you.

"Everything is open sourced with the executable available here," according to the blog post. (They even addressed an early "gotcha" with DOS filenames being limited to eight characters.)

"As expected, the more modern the system, the faster the inference speed..." it adds. "Still, I'm amazed what can still be accomplished with vintage systems."
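For context, the filename "gotcha" refers to DOS's 8.3 scheme: at most eight characters for the base name plus a three-character extension. A rough Python sketch of that kind of mangling, purely as an illustration and not the project's actual code (real DOS short-name generation also appends "~1"-style suffixes to avoid collisions):

```python
def to_dos_83(filename):
    """Roughly mangle a long filename into DOS 8.3 form: uppercase,
    drop spaces and extra dots, truncate the base name to eight
    characters and the extension to three."""
    base, _, ext = filename.rpartition(".")
    if not base:                 # no dot: the whole thing is the base name
        base, ext = filename, ""
    base = base.replace(".", "").replace(" ", "").upper()[:8]
    ext = ext.replace(" ", "").upper()[:3]
    return f"{base}.{ext}" if ext else base

print(to_dos_83("stories260K.bin"))   # STORIES2.BIN
```

Any model or tokenizer file a port ships would need names that survive this truncation, which is why the limit surfaces as an early porting issue.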
AI

Famed AI Researcher Launches Controversial Startup to Replace All Human Workers Everywhere (techcrunch.com) 177

TechCrunch looks at Mechanize, an ambitious new startup "whose founder — and the non-profit AI research organization he founded called Epoch — is being skewered on X..." Mechanize was launched on Thursday via a post on X by its founder, famed AI researcher Tamay Besiroglu. The startup's goal, Besiroglu wrote, is "the full automation of all work" and "the full automation of the economy."

Does that mean Mechanize is working to replace every human worker with an AI agent bot? Essentially, yes. The startup wants to provide the data, evaluations, and digital environments to make worker automation of any job possible. Besiroglu even calculated Mechanize's total addressable market by aggregating all the wages humans are currently paid. "The market potential here is absurdly large: workers in the US are paid around $18 trillion per year in aggregate. For the entire world, the number is over three times greater, around $60 trillion per year," he wrote.

Besiroglu did, however, clarify to TechCrunch that "our immediate focus is indeed on white-collar work" rather than manual labor jobs that would require robotics...

Besiroglu argues to the naysayers that having agents do all the work will actually enrich humans, not impoverish them, through "explosive economic growth." He points to a paper he published on the topic. "Completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can't even imagine today," he told TechCrunch.

TechCrunch wonders how jobless humans will produce goods — and whether wealth will simply concentrate around whoever owns the agents.

But they do concede that Besiroglu may be right that "If each human worker has a personal crew of agents which helps them produce more work, economic abundance could follow..."
AI

Open Source Advocate Argues DeepSeek is 'a Movement... It's Linux All Over Again' (infoworld.com) 33

Matt Asay answered questions from Slashdot readers in 2010 (as the then-COO of Canonical). He currently runs developer relations at MongoDB (after holding similar positions at AWS and Adobe).

This week he contributed an opinion piece to InfoWorld arguing that DeepSeek "may have originated in China, but it stopped being Chinese the minute it was released on Hugging Face with an accompanying paper detailing its development." Soon after, a range of developers, including the Beijing Academy of Artificial Intelligence (BAAI), scrambled to replicate DeepSeek's success, but this time as open source software. BAAI, for its part, launched OpenSeek, an ambitious effort to take DeepSeek's open-weight models and create a project that surpasses DeepSeek while uniting "the global open source communities to drive collaborative innovation in algorithms, data, and systems."

If that sounds cool to you, it didn't to the U.S. government, which promptly put BAAI on its "baddie" list. Someone needs to remind U.S. (and global) policymakers that no single country, company, or government can contain community-driven open source... DeepSeek didn't just have a moment. It's now very much a movement, one that will frustrate all efforts to contain it. DeepSeek, and the open source AI ecosystem surrounding it, has rapidly evolved from a brief snapshot of technological brilliance into something much bigger — and much harder to stop. Tens of thousands of developers, from seasoned researchers to passionate hobbyists, are now working on enhancing, tuning, and extending these open source models in ways no centralized entity could manage alone.

For example, it's perhaps not surprising that Hugging Face is actively attempting to reverse engineer and publicly disseminate DeepSeek's R1 model. Hugging Face, while important, is just one company, just one platform. But Hugging Face has attracted hundreds of thousands of developers who actively contribute to, adapt, and build on open source models, driving AI innovation at a speed and scale unmatched even by the most agile corporate labs.

Hugging Face by itself could be stopped. But the communities it enables and accelerates cannot. Through the influence of Hugging Face and many others, variants of DeepSeek models are already finding their way into a wide range of applications. Companies like Perplexity are embedding these powerful open source models into consumer-facing services, proving their real-world utility. This democratization of technology ensures that cutting-edge AI capabilities are no longer locked behind the walls of large corporations or elite government labs but are instead openly accessible, adaptable, and improvable by a global community.

"It's Linux all over again..." Asay writes at one point. "What started as the passion project of a lone developer quickly blossomed into an essential, foundational technology embraced by enterprises worldwide," winning out "precisely because it captivated developers who embraced its promise and contributed toward its potential."

We are witnessing a similar phenomenon with DeepSeek and the broader open source AI ecosystem, but this time it's happening much, much faster...

Organizations that cling to proprietary approaches (looking at you, OpenAI!) or attempt to exert control through restrictive policies (you again, OpenAI!) are not just swimming upstream — they're attempting to dam an ocean. (Yes, OpenAI has now started to talk up open source, but it's a long way from releasing a DeepSeek/OpenSeek equivalent on GitHub.)

AI

US Chipmakers Fear Ceding China's AI Market to Huawei After New Trump Restrictions (msn.com) 99

The Trump administration is "taking measures to restrict the sale of AI chips by Nvidia, Advanced Micro Devices and Intel," especially in China, reports the New York Times. But that's triggered a series of dominoes. "In the two days after the limits became public, shares of Nvidia, the world's leading AI chipmaker, fell 8.4%. AMD's shares dropped 7.4%, and Intel's were down 6.8%." (AMD expects up to $800 million in charges after the move, according to CNBC, while Nvidia said it would take a quarterly charge of about $5.5 billion.)

The Times notes hopeful remarks Thursday from Jensen Huang, CEO of Nvidia, during a meeting with the China Council for the Promotion of International Trade. "We're going to continue to make significant effort to optimize our products that are compliant within the regulations and continue to serve China's market." But America's chipmakers also have a greater fear, according to the article: "that their retreat could turn the Chinese tech giant Huawei into a global chip-making powerhouse." "For the U.S. semiconductor industry, China is gone," said Handel Jones, a semiconductor consultant at International Business Strategies, which advises electronics companies. He projects that Chinese companies will have a majority share of chips in every major category in China by 2030... Huang's message spoke to one of his biggest fears. For years, he has worried that Huawei, China's telecommunications giant, will become a major competitor in AI. He has warned U.S. officials that blocking U.S. companies from competing in China would accelerate Huawei's rise, said three people familiar with those meetings who spoke on the condition of anonymity.

If Huawei gains ground, Huang and others at Nvidia have painted a dark picture of a future in which China will use the company's chips to build AI data centers across the world for the Belt and Road Initiative, a strategic effort to increase Beijing's influence by paying for infrastructure projects around the world, a person familiar with the company's thinking said...

Nvidia's previous generation of chips perform about 40% better than Huawei's best product, said Gregory C. Allen, who has written about Huawei in his role as director of the Wadhwani AI Center at the Center for Strategic and International Studies. But that gap could dwindle if Huawei scoops up the business of its American rivals, Allen said. Nvidia was expected to make more than $16 billion in sales this year from the H20 in China before the restriction. Huawei could use that money to hire more experienced engineers and make higher-quality chips. Allen said the U.S. government's restrictions also could help Huawei bring on customers like DeepSeek, a leading Chinese AI startup. Working with those companies could help Huawei improve the software it develops to control its chips. Those kinds of tools have been one of Nvidia's strengths over the years.

TechRepublic identifies this key quote from an earlier article: "This kills NVIDIA's access to a key market, and they will lose traction in the country," Patrick Moorhead, a tech analyst with Moor Insights & Strategy, told The New York Times. He added that Chinese companies will buy from local rival Huawei instead.
AI

Could AI and Automation Find Better Treatments for Cancer - and Maybe Aging? (cnn.com) 28

CNN looks at "one field that's really benefitting" from the use of AI: "the discovery of new medicines".

The founder/CEO of London-based LabGenius says their automated robotic system can assemble "thousands of different DNA constructs, each of which encodes a completely unique therapeutic molecule that we'll then test in the lab. This is something that historically would've had to have been done by hand." In short, CNN says, their system lets them "design and conduct experiments, and learn from them in a circular process that creates molecular antibodies at a rate far faster than a human researcher."
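The "circular process" CNN describes is, abstractly, a design-test-learn loop. The sketch below is purely illustrative and assumes nothing about LabGenius's actual system: the DNA-string candidates, the `mutate` step, and the toy `lab_test` scoring function are all invented stand-ins for the robotic assay.

```python
import random

def mutate(candidate):
    """Hypothetical 'design' step: perturb one position of a candidate."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice("ACGT") + candidate[i + 1:]

def lab_test(candidate):
    """Stand-in for a robotic assay; here just a toy score."""
    return candidate.count("G")  # pretend G-content tracks selectivity

def design_test_learn(seed, rounds=10, batch=8):
    """Each round: design variants, 'test' them, keep the best (learn)."""
    best = seed
    for _ in range(rounds):
        designs = [mutate(best) for _ in range(batch)]  # design
        scored = [(lab_test(d), d) for d in designs]    # test
        top_score, top = max(scored)                    # learn
        if top_score > lab_test(best):
            best = top
    return best

random.seed(0)
winner = design_test_learn("ACGTACGT")
```

The point of automating the loop is throughput: each round's results feed directly into the next round's designs without a human in between.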

While many cancer treatments have debilitating side effects, CNN notes that LabGenius "reengineers therapeutic molecules so they can selectively target just the diseased cells." But more importantly, their founder says they've now discovered "completely novel molecules with over 400x improvement in [cell] killing selectivity."

A senior lecturer at Imperial College London tells CNN that LabGenius seems to have created an efficient process with seamless connections, identifying a series of antibodies that look like they can target cancer cells very selectively: "that's as good as any results I've ever seen for this." (Although the final proof will be what happens when they test them on patients...) "And that's the next step for LabGenius," says CNN. "They aim to have their first therapeutics entering clinics in 2027."

Finally, CNN asks, if it succeeds, is there potential beyond cancer treatment? "If you take one step further," says the company's CEO/founder, "you could think about knocking out senescent cells or aging cells as a way to treat the underlying cause of aging."
Space

High School Student Discovers 1.5M New Astronomical Objects by Developing an AI Algorithm (smithsonianmag.com) 21

For combining machine learning with astronomy, high school senior Matteo Paz won $250,000 in the Regeneron Science Talent Search, reports Smithsonian magazine: The young scientist's tool processed 200 billion data entries from NASA's now-retired Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) telescope. His model revealed 1.5 million previously unknown potential celestial bodies.... [H]e worked on an A.I. model that sorted through the raw data in search of tiny changes in infrared radiation, which could indicate the presence of variable objects.
Working with a mentor at the Planet Finder Academy at Caltech, Paz eventually flagged 1.5 million potential new objects, according to the article, including supernovas and black holes.
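A classic way to flag "tiny changes" in a source's brightness is a chi-square test of the light curve against a constant-flux model; this is a standard time-domain astronomy technique sketched here for illustration, not Paz's actual AI algorithm (his was a machine-learning model trained on the raw NEOWISE data).

```python
def variability_chi2(fluxes, errors):
    """Reduced chi-square of a light curve against a constant-flux fit.
    Values well above 1 suggest a genuinely variable source."""
    # Inverse-variance weighted mean flux (the best constant-flux model)
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return chi2 / (len(fluxes) - 1)  # one fitted parameter (the mean)

# A steady source vs. one that brightens and fades
steady = [10.0, 10.1, 9.9, 10.0, 10.05]
variable = [10.0, 14.0, 18.0, 13.0, 9.0]
errs = [0.1] * 5
print(variability_chi2(steady, errs))    # near 1: not flagged
print(variability_chi2(variable, errs))  # very large: flagged as variable
```

At NEOWISE scale (200 billion entries), even a cheap statistic like this has to be computed per source across many epochs, which is why an automated pipeline is essential.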

And that mentor says other Caltech researchers are using Paz's catalog of potential variable objects to study binary star systems.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
