AI

Massive Study Detects AI Fingerprints In Millions of Scientific Papers

A team of U.S. and German researchers analyzed over 15 million biomedical papers and found that AI-generated content has subtly infiltrated academic writing, with telltale stylistic shifts -- such as a rise in flowery verbs and adjectives. "Their investigation revealed that since the emergence of LLMs there has been a corresponding increase in the frequency of certain stylistic word choices within the academic literature," reports Phys.Org. "These data suggest that at least 13.5% of the papers published in 2024 were written with some amount of LLM processing." From the report: The researchers modeled their investigation on prior COVID-19 public-health research, which was able to infer COVID-19's impact on mortality by comparing excess deaths before and after the pandemic. By applying the same before-and-after approach, the new study analyzed patterns of excess word use prior to the emergence of LLMs and after. The researchers found that after the release of LLMs, there was a significant shift away from the excess use of "content words" to an excess use of "stylistic and flowery" word choices, such as "showcasing," "pivotal," and "grappling."

By manually assigning parts of speech to each excess word, the authors determined that before 2024, 79.2% of excess word choices were nouns. During 2024 there was a clearly identifiable shift: 66% of excess word choices were verbs and 14% were adjectives. The team also identified notable differences in LLM usage between research fields, countries, and venues.
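The before-and-after comparison at the heart of the study is simple to sketch. Below is a toy Python illustration of the excess-word idea, with two made-up corpora standing in for the millions of abstracts the researchers analyzed; the paper's actual statistics are considerably more careful than this frequency-ratio test.

```python
from collections import Counter

# Toy corpora; the real study compared ~15 million biomedical
# abstracts published before and after the arrival of LLMs.
PRE = [
    "the results of the assay were measured in patients",
    "we measured gene expression in the study cohort",
]
POST = [
    "showcasing pivotal findings underscores the groundbreaking assay",
    "delving into pivotal results showcasing notable insights",
]

def word_freqs(texts):
    """Relative frequency of each lowercase token in a corpus."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def excess_words(pre, post, min_ratio=2.0, floor=1e-9):
    """Words used disproportionately often after the cutoff,
    mirroring the study's excess-usage comparison."""
    before, after = word_freqs(pre), word_freqs(post)
    scored = [(w, f / before.get(w, floor)) for w, f in after.items()]
    return sorted((s for s in scored if s[1] >= min_ratio),
                  key=lambda s: -s[1])

print(excess_words(PRE, POST)[:5])  # surfaces "pivotal", "showcasing", ...
```

Run over real pre- and post-LLM corpora, a comparison in this spirit is what surfaces stylistic outliers like "showcasing" and "pivotal."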
The findings have been published in the journal Science Advances.
AI

People Are Using AI Chatbots To Guide Their Psychedelic Trips

An anonymous reader quotes a report from Wired: Trey had struggled with alcoholism for 15 years, eventually drinking heavily each night before quitting in December. But staying sober was a struggle for the 36-year-old first responder from Atlanta, who did not wish to use his real name due to professional concerns. Then he discovered Alterd, an AI-powered journaling app that invites users to "explore new dimensions," geared toward psychedelic and cannabis consumers, meditators, and alcohol drinkers. In April, using the app as a tripsitter -- a term for someone who soberly watches over another while they trip on psychedelics to provide reassurance and support -- he took a huge dose of 700 micrograms of LSD. (A typical recreational dose is considered to be 100 micrograms.) "I went from craving compulsions to feeling true freedom and not needing or wanting alcohol," he says.

He recently asked the app's "chat with your mind" function how he had become more wise through all his AI-assisted psychedelic trips. It responded: "I trust my own guidance now, not just external rules or what others think. I'm more creative, less trapped by fear, and I actually live by my values, not just talk about them. The way I see, reflect, and act in the world is clearer and more grounded every day." "It's almost like your own self that you're communicating with," says Trey, adding he's tripped with his AI chatbot about a dozen times since April. "It's like your best friend. It's kind of crazy."
The article mentions several different chatbot tools and AI systems that are being used for psychedelic therapy.

ChatGPT: "Already, many millions of people are using ChatGPT on a daily basis, and the developments may have helped democratize access to psychotherapy-style guidance, albeit in a dubious Silicon Valley style with advice that is often flush with untruths," reports Wired. The general-purpose AI chatbot is being used for emotional support, intention-setting, and even real-time guidance during psychedelic trips. While not designed for therapy, it has been used informally as a trip companion, offering customized music playlists, safety reminders, and existential reflections. Experts caution that its lack of emotional nuance and clinical oversight poses significant risks during altered states.

Alterd: Alterd is a personalized AI journal app that serves as a reflective tool by analyzing a user's entries, moods, and behavior patterns. Its "mind chat" function acts like a digital subconscious, offering supportive insights while gently confronting negative habits like substance use. Users credit it with deepening self-awareness and maintaining sobriety, particularly in the context of psychedelic-assisted growth.

Mindbloom's AI Copilot: Integrated into Mindbloom's at-home ketamine therapy program, the AI copilot helps clients set pretrip intentions, process post-trip emotions, and stay grounded between sessions. It generates custom reflections and visual art based on voice journals, aiming to enhance the therapeutic journey even outside of human-guided sessions. The company plans to evolve the tool into a real-time, intelligent assistant capable of interacting more dynamically with users.

Orb AI/Shaman Concepts (Speculative): Conceptual "orb" interfaces imagine an AI-powered, shaman-like robot facilitating various aspects of psychedelic therapy, from intake to trip navigation. While still speculative, such designs hint at a future where AI plays a central, embodied role in guiding altered states. These ideas raise provocative ethical and safety questions about replacing human presence with machines in deeply vulnerable psychological contexts.

AI in Virtual Reality and Brain Modulation Systems: Researchers are exploring how AI could coordinate immersive virtual reality environments and brain-modulating devices to enhance psychedelic therapy. These systems would respond to real-time emotional and physiological signals, using haptic suits and VR to deepen and personalize the psychedelic experience. Though still in the conceptual phase, this approach represents the fusion of biotech, immersive tech, and AI in pursuit of therapeutic transformation.
AI

Tennis Players Criticize AI Technology Used By Wimbledon

Wimbledon's use of AI-powered electronic line-calling has sparked backlash from players who say the system made several incorrect calls, affecting match outcomes and creating accessibility issues. "This is the first year the prestigious tennis tournament, which is still ongoing, replaced human line judges, who determine if a ball is in or out, with an electronic line calling system (ELC)," notes TechCrunch. From the report: British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out, which instead had to be played as if it were in. On a television replay, the ball indeed looked out, the Telegraph reported. Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was "100 percent accurate."

Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn't hear the new automated speaker system, with one deaf player saying that without the human hand signals from the line judges, she was unable to tell whether she had won a point.

The technology also hit a blip at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova, when a ball went out but the system failed to make the call. The umpire had to step in to stop the rally and told the players to replay the point because the ELC had failed to track it. Wimbledon later apologized, saying it was "human error": the technology had been accidentally shut off during the match. It also adjusted the technology so that, ideally, the mistake could not be repeated.

Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, "When we did have linesmen, we were constantly asked why we didn't have electronic line calling because it's more accurate than the rest of the tour."
Open Source

The Open-Source Software Saving the Internet From AI Bot Scrapers (404media.co)

An anonymous reader quotes a report from 404 Media: For someone who says she is fighting AI bot scrapers just in her free time, Xe Iaso seems to be putting up an impressive fight. Since she launched it in January, Anubis, a program "designed to help protect the small internet from the endless storm of requests that flood in from AI companies," has been downloaded nearly 200,000 times, and is being used by notable organizations including GNOME, the popular open-source desktop environment for Linux; FFmpeg, the open-source software project for handling video and other media; and UNESCO, the United Nations organization for education, science, and culture. [...]

"Anubis is an uncaptcha," Iaso explains on her site. "It uses features of your browser to automate a lot of the work that a CAPTCHA would, and right now the main implementation is by having it run a bunch of cryptographic math with JavaScript to prove that you can run JavaScript in a way that can be validated on the server." Essentially, Anubis verifies that any visitor to a site is a human using a browser as opposed to a bot. One of the ways it does this is by making the browser do a type of cryptographic math with JavaScript or other subtle checks that browsers do by default but bots have to be explicitly programmed to do. This check is invisible to the user, and most browsers since 2022 are able to complete this test. In theory, bot scrapers could pretend to be users with browsers as well, but the additional computational cost of doing so on the scale of scraping the entire internet would be huge. This way, Anubis creates a computational cost that is prohibitively expensive for AI scrapers that are hitting millions and millions of sites, but marginal for an individual user who is just using the internet like a human.

Anubis is free, open source, lightweight, can be self-hosted, and can be implemented almost anywhere. It also appears to be a pretty good solution for what we've repeatedly reported is a widespread problem across the internet, which helps explain its popularity. But Iaso is still putting a lot of work into improving it and adding features. She told me she's working on a non-cryptographic challenge so it taxes users' CPUs less, and is also thinking about a version that doesn't require JavaScript, which some privacy-minded users disable in their browsers. The biggest challenge in developing Anubis, Iaso said, is finding the balance. "The balance between figuring out how to block things without people being blocked, without affecting too many people with false positives," she said. "And also making sure that the people running the bots can't figure out what pattern they're hitting, while also letting people that are caught in the web be able to figure out what pattern they're hitting, so that they can contact the organization and get help. So that's like, you know, the standard, impossible scenario."

Transportation

Waymo Starts Robotaxi Testing In Philadelphia and NYC (techcrunch.com)

Waymo has launched new "road trips" to Philadelphia and New York City, "signaling the Alphabet-owned company's interest in expanding into Northeastern cities," reports TechCrunch. While these trips don't guarantee commercial launches, they follow a pattern that previously led to deployments in cities like Los Angeles. Other road trips this year are planned for Houston, Orlando, Las Vegas, San Diego, and San Antonio. From the report: Typically, the trips involve sending a small fleet of human-driven vehicles equipped with Waymo's autonomous driving system to map out the new city. Then Waymo tests the vehicles autonomously, though still with a human behind the wheel, before taking any data and learnings back to its engineers to improve the AI driver's performance. In some cases, these road trips have led to commercial launches. In 2023, the company made a road trip to Santa Monica, a city in Los Angeles County. The company now operates a commercial service in Los Angeles, including Santa Monica, Beverly Hills, and Hollywood.

For its Philadelphia trip, Waymo plans to place vehicles in the most complex parts of the city, including downtown and freeways, according to a spokesperson. She noted folks will see Waymo vehicles driving "at all hours throughout various Philadelphia neighborhoods, from North Central to Eastwick, University City, and as far east as the Delaware River."

In NYC, Waymo will drive its cars manually in Manhattan just north of Central Park down to The Battery and parts of Downtown Brooklyn. The company will also map parts of Jersey City and Hoboken in New Jersey. Waymo applied last month for a permit to test its AVs in New York City with a human behind the wheel. The company has not yet received approval.

Google

OpenAI Says It Has No Plan To Use Google's In-house Chip (reuters.com)

An anonymous reader shares a report: OpenAI said it has no active plans to use Google's in-house chip to power its products, two days after Reuters and other news outlets reported on the AI lab's move to turn to its competitor's artificial intelligence chips to meet growing demand.

A spokesperson for OpenAI said on Sunday that while the AI lab is in early testing with some of Google's tensor processing units (TPUs), it has no plans to deploy them at scale right now.

Science

Springer Nature Book on Machine Learning is Full of Made-Up Citations (retractionwatch.com)

Springer Nature published a $169 machine learning textbook in April containing citations that appear to be largely fabricated, according to an investigation by Retraction Watch. The site checked 18 of the 46 citations in "Mastering Machine Learning: From Basics to Advanced" by Govindakumar Madhavan and found two-thirds either did not exist or contained substantial errors.

Three researchers contacted by Retraction Watch confirmed their supposedly authored works were fake or incorrectly cited. Yehuda Dar of Ben-Gurion University said a paper cited as appearing in IEEE Signal Processing Magazine was actually an unpublished arXiv preprint. Aaron Courville of Université de Montréal confirmed he was cited for a section of his "Deep Learning" book that "doesn't seem to exist."

The pattern of nonexistent citations matches known hallmarks of large language model-generated text. Madhavan did not answer whether he used AI to generate the book's content. The book contains no AI disclosure despite Springer Nature policies requiring authors to declare AI use beyond basic copy editing.
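Spot checks like Retraction Watch's can be partly automated against the public Crossref REST API (a real service; the helper below and its loose matching are hypothetical simplifications, so a human still has to confirm each hit).

```python
import requests

def citation_candidates(title, author=None, rows=3):
    """Ask Crossref for works matching a cited title; a fabricated
    citation typically returns only loosely related results."""
    params = {"query.bibliographic": title, "rows": rows}
    if author:
        params["query.author"] = author
    r = requests.get("https://api.crossref.org/works",
                     params=params, timeout=10)
    r.raise_for_status()
    items = r.json()["message"]["items"]
    return [((i.get("title") or ["(no title)"])[0], i.get("DOI"))
            for i in items]

# Hypothetical cited work; compare returned titles against the citation.
for title, doi in citation_candidates("Mastering the Bibliography That Never Was"):
    print(title, doi)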
China

The Startup-Filled Coder 'Village' at the Heart of China's AI Frenzy (msn.com)

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," the Wall Street Journal noted this weekend.

But what does that look like? The New York Times visits Liangzhu, "the coder 'village' at the heart of China's AI frenzy... a quiet suburb of the eastern Chinese city of Hangzhou... As China faces off with the United States over tech primacy, Hangzhou has become the centre of China's AI frenzy," with its proximity to tech companies like Alibaba and DeepSeek... In Liangzhu, many engineers said they were killing time until they could create their own startups, waiting out noncompete agreements they had signed at bigger companies like ByteDance... But some said the government support for Hangzhou's tech scene had scared off some investors. Several company founders, who asked not to be named so they could discuss sensitive topics, said it was difficult for them to attract funds from foreign venture capital firms, frustrating their ambitions to grow outside China. The nightmare situation, they said, would be to end up like ByteDance, the Chinese parent of TikTok, whose executives have been questioned before Congress about the company's ties to the Chinese government. Founders described choosing between two paths for their companies' growth: Take government funding and tailor their product to the Chinese market, or raise enough money on their own to set up offices in a country like Singapore to pitch foreign investors. For most, the first was the only feasible option.

Another uncertainty is access to the advanced computer chips that power artificial intelligence systems. Washington has spent years trying to prevent Chinese companies from buying these chips, and Chinese companies like Huawei and Semiconductor Manufacturing International Corp. are racing to produce their own. So far, the Chinese-made chips work well enough to help companies like ByteDance provide some of their AI services in China. Many Chinese companies have created stockpiles of Nvidia chips despite Washington's controls. But it is not clear how long that supply will last, or how quickly China's chipmakers can catch up to their American counterparts...

Liangzhu villagers have been hosting film nights. They had recently gathered to watch "The Matrix." Afterward, they decided the movie should be required viewing, Lin said. Its theme — people finding their way out of a vast system controlling society — provided spot-on inspiration. Aspiring founders in Liangzhu, even those who did not go to top universities, believe they could start the next world-changing tech company, said Felix Tao [a 36-year-old former Facebook and Alibaba employee.] "Many of them are super brave to make a choice to explore their own way, because in China that is not the common way to live your life."

Science

Citizen Scientists Just Helped Discover Nearly 8,000 New Eclipsing Binary Stars (spokesman.com)

"Citizen scientists have successfully located thousands of previously unknown pairs of 'eclipsing binary' stars," reports the Washington Post, citing a recent announcement from NASA. The ongoing initiative helps space researchers hunt for "eclipsing binary" stars, a rare phenomenon in which two stars orbit one another, periodically blocking each other's light. These star pairs offer important data to astrophysicists, who consider the many measurable properties of eclipsing binaries — and the information they bear about the history of star formation and destruction — as a foundation of the field...

The citizen science project in question, the Eclipsing Binary Patrol, validates images from NASA's Transiting Exoplanet Survey Satellite (TESS) mission. The satellite, launched in 2018, is "exceptionally capable at detecting varying stars," the researchers write in a preprint paper describing the initiative. The researchers used machine learning to identify about 1.2 million potential eclipsing star pairs. Citizen scientists then validated a subset of about 60,000... manually inspecting hundreds of thousands of images of eclipse-like events and separating actual binaries from images that tricked the algorithm. "Thankfully," the researchers write, "to the rescue come volunteers from all walks of life that boost the capacity of bandwidth-limited professional astronomers many-fold and help tackle the ever-increasing volume of publicly available astronomical data."

Universe Today describes how they limited the dataset to only stars with a magnitude brighter than 15, then used a Python tool to generate a massive dataset of millions of light curves... All this work resulted in the identification of 10,001 eclipsing binary systems: 7,936 are new to science, while the other 2,065 were previously known, though the study provided updated, more accurate parameters for their periods, as the TESS dataset offered better insight. There were also some particularly interesting systems that could hold new discoveries, including several with variable eclipse timings, many that might have a third star, and some that show a significant dynamic between the star being orbited and the one orbiting it.
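The kind of signal the volunteers vetted is easy to picture. Here is a self-contained toy sketch, using synthetic data rather than real TESS photometry, that injects periodic eclipses into a noisy light curve and recovers the period by phase-folding:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic light curve: a flat star with noise, dimming every 2.7 days.
t = np.linspace(0, 27, 4000)             # days of observation
flux = 1.0 + rng.normal(0, 0.002, t.size)
period, width, depth = 2.7, 0.1, 0.02
flux[(t % period) < width] -= depth      # inject the eclipses

def fold_range(t, flux, trial_period, nbins=200):
    """Depth range of the binned, phase-folded curve: the right
    period stacks eclipses into a few bins, maximizing the range."""
    phase = (t % trial_period) / trial_period
    bins = np.digitize(phase, np.linspace(0, 1, nbins))
    means = np.array([flux[bins == b].mean()
                      for b in range(1, nbins) if np.any(bins == b)])
    return means.max() - means.min()

trials = np.linspace(2.0, 3.5, 300)
best = trials[np.argmax([fold_range(t, flux, p) for p in trials])]
print(f"recovered period ~ {best:.2f} d (true: {period} d)")
```

Real vetting is harder, of course: the volunteers had to separate true eclipses from variability and artifacts that fooled the classifier.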

All of those systems await further research, but there's another, unspoken factor at play in this data — exoplanets. TESS was originally designed as an exoplanet hunter, and this kind of large-scale AI/human collaboration on light-curve analysis is exactly the kind of work that could potentially produce even more accurate exoplanet catalogues, as evidenced by some of the work already done in this paper. That seems to be the next step for this dataset, with Dr. Kostov telling an interviewer "I can't wait to search them for exoplanets!" Given the data has already been collected, and the team has already been assembled, it's very likely he'll get his chance soon.

AI

Google DeepMind's Spinoff Company 'Very Close' to Human Trials for Its AI-Designed Drugs (fortune.com)

Google DeepMind's chief business officer says Alphabet's drug-discovery company Isomorphic Labs "is preparing to launch human trials of AI-designed drugs," according to a report in Fortune, "pairing cutting-edge AI with pharma veterans to design medicines faster, cheaper, and more accurately." "There are people sitting in our office in King's Cross, London, working, and collaborating with AI to design drugs for cancer," said Colin Murdoch [DeepMind's chief business officer and president of Isomorphic Labs]. "That's happening right now."

After years in development, Murdoch says human clinical trials for Isomorphic's AI-assisted drugs are finally in sight. "The next big milestone is actually going out to clinical trials, starting to put these things into human beings," he said. "We're staffing up now. We're getting very close."

The company, which was spun out of DeepMind in 2021, was born from one of DeepMind's most celebrated breakthroughs, AlphaFold, an AI system capable of predicting protein structures with a high level of accuracy. Iterations of AlphaFold progressed from accurately predicting individual protein structures to modeling how proteins interact with other molecules like DNA and drugs. These leaps made it far more useful for drug discovery, helping researchers design medicines faster and more precisely, turning the tool into a launchpad for a much larger ambition... In 2024, the same year it released AlphaFold 3, Isomorphic signed major research collaborations with pharma companies Novartis and Eli Lilly. A year later, in April 2025, Isomorphic Labs raised $600 million in its first-ever external funding round, led by Thrive Capital. The deals are part of Isomorphic's plan to build a "world-class drug design engine..."

Today, pharma companies often spend millions attempting to bring a single drug to market, sometimes with just a 10% chance of success once trials begin. Murdoch believes Isomorphic's tech could radically improve those odds. "We're trying to do all these things: speed them up, reduce the cost, but also really improve the chance that we can be successful," he says. He wants to harness AlphaFold's technology to get to a point where researchers have 100% conviction that the drugs they are developing are going to work in human trials. "One day we hope to be able to say — well, here's a disease, and then click a button and out pops the design for a drug to address that disease," Murdoch said. "All powered by these amazing AI tools."

China

Chinese Film Foundation Plans to Use AI to 'Revitalize' 100 Classic Kung Fu Films (msn.com)

"The China Film Foundation, a nonprofit fund under the Chinese government, plans to use AI to revitalize 100 kung fu classics including Police Story, Once Upon a Time in China and Fist of Fury, featuring Jackie Chan, Jet Li and Bruce Lee, respectively," reports the Los Angeles Times.

"The foundation said it will partner with businesses including Shanghai Canxing Culture & Media Co., which will license 100 Hong Kong films to AI companies to reintroduce those movies to younger audiences globally." The foundation said there are opportunities to use AI to tell those stories through animation, for example. There are plans to release an animated version of director John Woo's 1986 film A Better Tomorrow that uses AI to "reinterpret" Woo's "signature visual language," according to an English transcript of the announcement....

The project raised eyebrows among U.S. artists, many of whom are deeply wary of the use of AI in creative pursuits. The Directors Guild of America said AI is a creative tool that should only be used to enhance the creative storytelling process and "it should never be used retroactively to distort or destroy a filmmaker's artistic work... The DGA strongly opposes the use of AI or any other technology to mutilate a film or to alter a director's vision," the DGA said in a statement. "The Guild has a longstanding history of opposing such alterations on issues like colorization or sanitization of films to eliminate so-called 'objectionable content', or other changes that fundamentally alter a film's original style, meaning, and substance."

The project highlights widely divergent views on AI's potential to reshape entertainment as the two countries compete for dominance in the highly competitive AI space.... During the project's announcement, supporters touted the opportunity AI will bring to China to further its cultural message globally and generate new work for creatives. At the same time, they touted AI's disruption of the filmmaking process, saying the A Better Tomorrow remake was completed with just 30 people, significantly fewer than a typical animated project. China is a "more brutal society in that sense," said Eric Harwit, professor of Asian studies at the University of Hawaii at Manoa. "If somebody loses their job because artificial intelligence is taking over, well, that's just the cost of China's moving forward.... You don't have those freestanding labor organizations, so they don't have that kind of clout to protest against the Chinese using artificial intelligence in a way that might reduce their job opportunities or lead to layoffs in the sector..."

The kung fu revitalization efforts will extend into other areas, including the creation of a martial arts video game.

The article also includes an interesting statistic. "Many people in China embrace AI, with 83% feeling confident that AI systems are designed to act in the best interest of society, much higher than the U.S. where it's 37%, according to a survey from the United Nations Development Program."
Education

Recent College Graduates Face Higher Unemployment Than Other Workers - for the First Time in Decades (msn.com)

"A growing group of young, college-educated Americans are struggling to find work," reports the Minnesota Star Tribune, "as the unemployment rate for recent graduates outpaces overall unemployment for the first time in decades." While the national unemployment rate has hovered around 4% for months, the rate for 20-something degree holders is nearly 6%, data from the Federal Reserve Bank of New York shows. [And for young workers (ages 22 to 27) without a degree it's 6.9%.] The amount of time young workers report being unemployed is also on the rise.

Economists attribute some of the shift to normal post-pandemic cooling of the labor market, which is making it harder for job-seekers of all ages to land a gig. But there's also widespread economic uncertainty causing employers to pull back on hiring, and signs that AI could replace entry-level positions....

Business schools nationwide were among the first to see the labor market shift in early 2023 as tech industry cuts bled into other sectors, said Maggie Tomas, Business Career Center executive director at Carlson. Tariffs and stock market volatility have only added to the uncertainty, she said. In 2022, when workers had their pick of jobs, 98% of full-time Carlson MBA graduates had a job offer in a field related to their degree within three months of graduation, according to the school. That number, which Tomas said is usually 90% or higher, dropped to 89% in 2023 and 83% in 2024.

Part of the challenge, she said, is recent graduates are now competing with more experienced workers who are re-entering the market amid layoffs and hiring freezes... After doing a lot of hiring in 2021 and 2022, Securian Financial in St. Paul is prioritizing internal hires, said Human Resources Director Leah Henrikson. Many entry-level roles have gone to current employees looking for a change, she said. "We are still looking externally, it's just the folks that we are looking for externally tend ... to fulfill a specific skill gap we may have at that moment in time," Henrikson said.

AI

Is China Quickly Eroding America's Lead in the Global AI Race? (msn.com)

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal.

And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," reports the Wall Street Journal, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns.

OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace...

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues:
  • "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI."
  • "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

GNU is Not Unix

The FSF Faces Active 'Ongoing and Increasing' DDoS Attacks (fsf.org)

The Free Software Foundation's services face "ongoing (and increasing) distributed denial of service (DDoS) attacks," senior systems administrator Ian Kelling wrote Wednesday. But "Even though we are under active attack, gnu.org, ftp.gnu.org, and savannah.gnu.org are up with normal response times at the moment, and have been for the majority of this week, largely thanks to hard work from the Savannah hackers Bob, Corwin, and Luke who've helped us, your sysadmins."

"We've shielded these sites for almost a full year of intense attacks now, and we'll keep on fighting these attacks for as long as they continue." Our infrastructure has been under attack since August 2024. Large Language Model (LLM) web crawlers have been a significant source of the attacks, and as for the rest, we don't expect to ever know what kind of entity is targeting our sites or why.

- In the fall Bulletin, we wrote about the August attack on gnu.org. That attack continues, but we have mitigated it. Judging from the pattern and scope, the goal was likely to take the site down and it was not an LLM crawler. We do not know who or what is behind the attack, but since then, we have had more attacks with even higher severity.

- To begin with, GNU Savannah, the FSF's collaborative software development system, was hit by a massive botnet controlling about five million IPs starting in January. As of this writing, the attack is still ongoing, but the botnet's current iteration is mitigated. The goal is likely to build an LLM training dataset. We do not know who or what is behind this.

- Furthermore, gnu.org and ftp.gnu.org were targets in a new DDoS attack starting on May 27, 2025. Its goal seems to be to take the site down. It is currently mitigated. It has had several iterations, and each has caused some hours of downtime while we figured out how to defend ourselves against it. Here again, the goal was likely to take our sites down and we do not know who or what is behind this.

- In addition, directory.fsf.org, the server behind the Free Software Directory, has been under attack since June 18. This is likely an LLM scraper designed to specifically target MediaWiki sites with a botnet. This attack is very active and now partially mitigated...

The full-time FSF tech staff is just two systems administrators, "and we currently lack the funds to hire more tech staff any time soon," Kelling points out. Kelling titled his post "our small team vs millions of bots," suggesting that supporters purchase FSF memberships "to improve our staffing situation... Can you join us in our crucial work to guard user freedom and defy dystopia?"

Kelling also points out they're facing "run-of-the-mill standard crawlers, SEO crawlers, crawlers pretending to be normal users, crawlers pretending to be other crawlers, uptime systems, vulnerability scanners, carrier-grade network address translation, VPNs, and normal browsers hitting our sites..."

"Some of the abuse is not unique to us, and it seems that the health of the web has some serious problems right now."
AI

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media (boston.com)

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

"It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

Programming

Diffusion + Coding = DiffuCoder. How Apple Released a Weirdly Interesting Coding Language Model (9to5mac.com)

"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once."

"The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested...

Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCoder-7B-cpGRPO, which builds on a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.

Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.
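The checkpoint itself is public on Hugging Face. A hedged loading sketch using the standard transformers API is below; because diffusion decoding is not vanilla `generate()`, actual sampling goes through the model card's custom code path (hence `trust_remote_code=True`), and the exact interface may differ from what is shown here.

```python
from transformers import AutoModel, AutoTokenizer

# Loading sketch only; see the model card for the diffusion-style
# generation loop, which this snippet does not reproduce.
NAME = "apple/DiffuCoder-7B-cpGRPO"  # checkpoint named in the article
tokenizer = AutoTokenizer.from_pretrained(NAME, trust_remote_code=True)
model = AutoModel.from_pretrained(NAME, trust_remote_code=True)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt")
print(model.config.model_type, inputs["input_ids"].shape)
```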

"Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out.

But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."
AI

'Vibe Coder' Who Doesn't Know How to Code Keeps Winning Hackathons in San Francisco (sfstandard.com)

An anonymous reader shared this report from the San Francisco Standard: About an hour into my meeting with the undisputed hackathon king of San Francisco, Rene Turcios asked if I wanted to smoke a joint with him. I politely declined, but his offer hardly surprised me. Turcios has built a reputation as a cannabis-loving former professional Yu-Gi-Oh! player who resells Labubus out of his Tenderloin apartment when he's not busy attending nearly every hackathon happening in the city. Since 2023, Turcios, 29, has attended more than 200 events, where he's won cash, software credits, and clout. "I'm always hustling," he said.

The craziest part: he doesn't even know how to code.

"Rene is the original vibe coder," said RJ Moscardon, a friend and fellow hacker who watched Turcios win second place at his first-ever hackathon at the AGI House mansion in Hillsborough. "All the engineers with prestigious degrees scoffed at him at first. But now they're all doing exactly the same thing...." Turcios was vibe coding long before the technique had a name — and was looked down upon by longtime hackers for using AI. But as Tiger Woods once said, "Winning takes care of everything...."

Instead of vigorously coding until the deadline, he finished his projects hours early by getting AI to do the technical work for him. "I didn't write a single line of code," Turcios said of his first hackathon where he prompted ChatGPT using plain English to generate a program that can convert any song into a lo-fi version. When the organizers announced Turcios had won second place, he screamed in celebration.... "I realized that I could compete with people who have degrees and fancy jobs...."

Turcios is now known for being able to build anything quickly. Businesses reach out to him to contract out projects that would take software engineering teams weeks — and he delivers in hours. He's even started running workshops to teach non-technical groups and experienced software engineers how to get the most out of AI for coding.

"He grew up in Missouri to parents who worked in an international circus, taming bears and lions..."
Programming

How Do You Teach Computer Science in the Age of AI? (thestar.com.my)

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case."

The article notes that in the last three years there's been a 65% drop in postings from companies seeking workers with two years of experience or less (according to an analysis by the technology research/education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work."

So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.

The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase."

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."
Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz)

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, like the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourced contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open-source package too, which would make Copilot Chat the new hub for both chat and suggestions.
