AI

Cal.com Is Going Closed Source Because of AI 54

Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.

[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."

While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."
Printer

California Ghost-Gun Bill Wants 3D Printers To Play Cop, EFF Says (theregister.com) 137

A proposed California bill would require 3D printer makers to use state-certified software to detect and block files for gun parts, but advocates at the Electronic Frontier Foundation (EFF) say it would be easy to evade and could lead to widespread surveillance of users' printing activity. The Register reports: The bill in question is AB 2047, the scope of which, on paper, appears strict. The primary goal is clear and simple: to require 3D printer manufacturers to use a state-certified algorithm that checks digital design files for firearm components and blocks print jobs that would produce prohibited parts. [...] Cliff Braun and Rory Mir, who respectively work in policy and tech community engagement at the EFF, claim that the proposals in California are technically infeasible and in practice will lead to consumer surveillance.

In a series of blog posts published this month, the pair argued that print-blocking technology -- proposals for which have also surfaced in states including New York and Washington -- cannot work for a range of technical reasons. They argued that because 3D printers and other types of computer numerical control (CNC) machines are fairly simple, with much of their brains coming from the computer-aided manufacturing (CAM) software -- or slicer software -- to which they are linked, the bill would in effect create categories of legal and illegal software. Proprietary software would likely become the de facto option, leaving open-source alternatives to rot.

"Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and implement firearm detection algorithms on either the printer itself or in a slicer software," wrote Braun earlier this month. "These algorithms must detect firearm files using a maintained database of existing models. Vendors of printers must then verify that printers are on the allow-list maintained by the state before they can offer them for sale. Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support."

Braun also argued that it would be trivial for anyone who uses 3D printers to make small tweaks to either the visual models of firearms parts, or the machine instructions (G-code) generated from those models, to evade detection. Mir further argued that the bill offers no guardrails to keep this "constantly expanding blacklist" limited to firearm-related designs. In his view, there is a clear risk that this approach will creep into other forms of alleged unlawful activity, such as copyright infringement. [...] Braun and Mir have a list of other arguments against the bill. They say the algorithms are more than likely to lead to false positives, which will prevent good-faith users from using their hardware. Many 3D printer owners also have no interest in printing firearm components. Most simply want the freedom to print trinkets and spare parts while others use them to print various items and sell them as an income stream.
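Braun's evasion point is easy to demonstrate. Assuming the simplest kind of blacklist -- one that indexes known designs by a cryptographic hash of the file -- a sub-micron tweak to one coordinate yields a physically identical part but a completely different fingerprint. A Python sketch (the G-code fragment is invented for illustration):

```python
import hashlib

def gcode_fingerprint(gcode: str) -> str:
    """Hash a G-code file the way a naive blacklist might index known designs."""
    return hashlib.sha256(gcode.encode()).hexdigest()

# A fragment of machine instructions for some hypothetical part.
original = "G1 X10.000 Y5.000 E0.400\nG1 X20.000 Y5.000 E0.800\n"

# Shift one coordinate by a micron -- far below any consumer printer's
# resolution, so the printed object is effectively identical.
tweaked = original.replace("X10.000", "X10.001")

print(gcode_fingerprint(original) == gcode_fingerprint(tweaked))  # False: no longer matches the blacklist
```

Real detection schemes would try fuzzier matching on geometry, which is exactly where the EFF's false-positive concern comes in.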

AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Hardware

How Good is Windows on Arm With Snapdragon X? (windowscentral.com) 88

A powerful new chipset has arrived to take on x86 CPUs and Apple's M5, writes Wccftech.

The blog Windows Central writes that "Qualcomm's Snapdragon X2 processors are here" — and they run Windows: Microsoft has done a massive amount of work to improve compatibility and has also convinced developers to embrace Windows 11 on Arm. Users of Windows 11 on Arm PCs spend 90% of their time on Arm-based apps that run natively. Additionally, apps that do not run natively can often run through Prism emulation, which has improved dramatically since launch...

[A]pp compatibility issues are overblown by many, and unfortunately those sharing false information are the same folks people rely on to make purchases... Works on Windows on Arm maintains a list of compatible apps and games for the platform. There, you'll see well-known apps like Google Chrome, the Adobe Creative Suite, and Spotify. We also have a collection of the best Windows on Arm apps to help you out. Snapdragon X PCs aren't gaming PCs, but there is a growing library of games that can run on the chips.

AI

Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory (msn.com) 75

Bloomberg describes him as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience."

"For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory." Engramme connects to your "memorome," i.e., your entire digital life. Large Memory Models work the same way your brain encodes and retrieves information. Then memories are recalled automatically -- no searching, no prompting, no hallucinations. [The startup's web site promises "omniscient AI to augment human cognition."]

We have built the memory layer for EVERY app. Read our manifesto about augmenting human cognition. ["We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt."]

Welcome to a new future where you can remember everything. This is the MEMORY SINGULARITY: after 300,000 years, this is the moment that humans stop forgetting.

Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."
AI

AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco (nbcnews.com) 49

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store."

"For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within 5 minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!"

Co-founder Petersson told Business Insider in an interview "that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space — and to turn a profit." Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6... The vision Luna went with for "Andon Market" appears to be a generic boutique retail selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World."

"So there's now a new store in San Francisco where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system."

Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]...

In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week. "There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70...

When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that."

"I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out.

And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."
Security

CPUID Site Hijacked To Serve Malware Instead of HWMonitor Downloads (theregister.com) 13

Attackers briefly hijacked part of CPUID's backend and swapped legitimate download links on its site with malware-laced ones. "The issue hit tools like HWMonitor and CPU-Z, with users on Reddit and elsewhere starting to notice something wasn't right when installers tripped antivirus alerts or showed up under odd names," reports The Register. From the report: CPUID has since confirmed the breach, pinning it on a compromised backend component rather than tampering with its software builds. "Investigations are still ongoing, but it appears that a secondary feature (basically a side API) was compromised for approximately six hours between April 9 and April 10, causing the main website to randomly display malicious links (our signed original files were not compromised)," one of the site's owners said in a post on X. "The breach was found and has since been fixed."

The files themselves appear to have been left alone and remain properly signed, so it doesn't seem like anyone got into the build process. Instead, the problem sat in front of that, in how downloads were being served. For anyone who hit the site during that stretch, though, that distinction offers little comfort. If the link you clicked had been swapped out, you were pulling whatever it pointed to, whether you realized it or not.
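Since CPUID's signed binaries were untouched, the swap could have been caught by verifying a download against a vendor-published checksum fetched over a separate trusted channel. A minimal sketch of that check (the expected digest is hypothetical; it would come from the vendor, not from the possibly compromised download page):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a downloaded file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large installers don't load fully into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    """Compare a local file against a vendor-published checksum."""
    return sha256_of(path) == expected_digest.lower()
```

Code signing on the installer itself gives a stronger guarantee, which is why the attackers here had to serve entirely different files rather than tamper with CPUID's builds.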

Iphone

FBI Extracts Suspect's Deleted Signal Messages Saved In iPhone Notification Data (404media.co) 50

An anonymous reader quotes a report from 404 Media: The FBI was able to forensically extract copies of incoming Signal messages from a defendant's iPhone, even after the app was deleted, because copies of the content were saved in the device's push notification database, multiple people present for FBI testimony in a recent trial told 404 Media. The case involved a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas in July, and one shooting a police officer in the neck. The news shows how forensic extraction -- when someone has physical access to a device and is able to run specialized software on it -- can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.

"We learned that specifically on iPhones, if one's settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device," a supporter of the defendants who was taking notes during the trial told 404 Media. [...] During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters' website says, "Messages were recovered from Sharp's phone through Apple's internal notification storage -- Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing)."

404 Media spoke to one of the supporters who was taking notes during the trial, and to Harmony Schuerman, an attorney representing defendant Elizabeth Soto. Schuerman shared notes she took on Exhibit 158. "They were able to capture these chats bc [because] of the way she had notifications set up on her phone -- anytime a notification pops up on the lock screen, Apple stores it in the internal memory of the device," those notes read. The supporter added, "I was in the courtroom on the last day of the state's case when they had FBI Special Agent Clark testifying about some Signal messages. One set came from Lynette Sharp's phone (one of the cooperating witnesses), but the interesting detailed messages shown in court were messages that had been set to disappear and had in fact disappeared in the Signal app."
Further reading: Apple Gave Governments Data On Thousands of Push Notifications
Transportation

AI Is Coming for Car Salesmen 95

An anonymous reader quotes a report from The Drive: An auto dealer software company is pitching AI-powered kiosks designed to replace car salesmen on showroom floors. Automotive News says the industry is "skeptical." But be honest -- would you really rather deal with the average car lot shark than a computer?

Epikar, a South Korean company that cooks up digital management solutions for car dealers, has named its new AI invention the Pikar Genie. The idea is that customers can talk to this device, ask it product questions, and basically do everything you'd do with a car salesman except for actually closing the deal and signing paperwork. Renault, BMW, and Volvo are already using some Epikar products at South Korean dealerships, but this new customer-facing AI product is still in its infancy.

AN reported that "Renault assigns three salespeople to its Seoul showroom enhanced with Epikar automation compared with six for other Renault showrooms in South Korea," according to Epikar CEO Bosuk Han. The company's now looking to expand into America and is apparently already testing its products at at least one dealership stateside.
Car-dealer consultant Fleming Ford (Director of Strategic Growth at NCM Associates) said U.S. dealerships "aren't ready for fully automated showrooms."

"The showroom isn't just where you buy a car," Automotive News quoted him saying. "It's where you decide who to trust to help you to choose the right car."
Security

OpenAI To Limit New Model Release On Cybersecurity Fears (axios.com) 37

OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that the tool could wreak havoc if it were released more widely. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...]

Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

AI

Skilled Older Workers Turn To AI Training To Stay Afloat (theguardian.com) 45

An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like OpenAI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn how to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers.

The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is beside the point. They need the work now.

[...] "There's just a lot of desperation out there," Johnson said. As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian.

AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.

Privacy

Little Snitch Comes To Linux To Expose What Your Software Is Really Doing (nerds.xyz) 66

BrianFagioli writes: Little Snitch, the well-known macOS tool that shows which applications are connecting to the internet, is now being developed for Linux. The developer says the project started after experimenting with Linux and realizing how strange it felt not knowing what connections the system was making. Tools like OpenSnitch and various command-line utilities exist, but none provide the same simple experience of seeing which process is connecting where and blocking it with a click. The Linux version uses eBPF for kernel-level traffic interception, with core components written in Rust and a web-based interface that can even monitor remote Linux servers.

During testing on Ubuntu, the developer noticed the system was relatively quiet on the network. Over the course of a week, only nine system processes made internet connections. By comparison, macOS reportedly showed more than one hundred processes communicating externally. Applications behave similarly across platforms, though. Launching Firefox immediately triggered telemetry and advertising-related connections, while LibreOffice made no network connections at all during testing. The early release is meant primarily as a transparency tool to show what software is doing on the network rather than a hardened security firewall.
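For a sense of the plumbing such tools sit on top of, Linux exposes per-socket state in /proc/net/tcp, where each connection's address:port pair is printed as hex with the IPv4 address in little-endian byte order. A small sketch of decoding one such field (this illustrates the kernel's socket tables generally, not Little Snitch's actual eBPF implementation):

```python
import socket
import struct

def decode_proc_addr(hex_addr: str) -> tuple[str, int]:
    """Decode an 'ADDR:PORT' field as printed in /proc/net/tcp.

    The IPv4 address is stored as little-endian hex; the port is
    plain big-endian hex.
    """
    addr_hex, port_hex = hex_addr.split(":")
    # Pack the integer little-endian so inet_ntoa sees network order.
    ip = socket.inet_ntoa(struct.pack("<I", int(addr_hex, 16)))
    return ip, int(port_hex, 16)

# '0100007F:1F90' is how /proc/net/tcp renders 127.0.0.1:8080
print(decode_proc_addr("0100007F:1F90"))  # ('127.0.0.1', 8080)
```

Mapping each socket back to a process (the inode field joined against /proc/<pid>/fd) is the bookkeeping step that monitoring tools layer on top of this.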

The Courts

John Deere To Pay $99 Million In Monumental Right-To-Repair Settlement (thedrive.com) 47

An anonymous reader quotes a report from The Drive: Farmers have been fighting John Deere for years over the right to repair their equipment, and this week, they finally reached a landmark settlement. While the agricultural manufacturing giant pointed out in a statement that this is no admission of wrongdoing, it agreed to pay $99 million into a fund for farms and individuals who participated in a class action lawsuit. Specifically, that money is available to those involved who paid John Deere's authorized dealers for large equipment repairs from January 2018 onward. This means that plaintiffs will recover somewhere between 26% and 53% of overcharge damages, according to one of the court documents (PDF) -- far beyond the typical amount, which lands between 5% and 15%.

The settlement also includes an agreement by Deere to provide "the digital tools required for the maintenance, diagnosis, and repair" of tractors, combines, and other machinery for 10 years. That part is crucial, as farmers previously resorted to hacking their own equipment's software just to get it up and running again. John Deere signed a memorandum of understanding in 2023 that partially addressed those concerns, providing third parties with the technology to diagnose and repair, as long as its intellectual property was safeguarded. Monday's settlement seems to represent a much stronger (and legally binding) step forward.
The report notes that a judge's approval of the settlement is still required but likely to happen. John Deere also faces a separate lawsuit from the U.S. FTC, which accuses the company of forcing farmers to use its authorized dealer network and driving up their costs for parts and repairs.
Businesses

'Survivor' Style Corporate Retreat Descends Into Hellish Nightmare (thedailybeast.com) 113

A $500,000 "Survivor"-style corporate retreat for 120 Plex employees in Honduras "turned into a week-long disaster involving illness, wild animals, armed guards, and employees stranded on a remote island," reports the Daily Beast. The CEO was bedridden by E. coli, staff were collapsing in brutal heat during Navy SEAL-led drills, there were fire ant attacks, uncooked food, and failing utilities. At one point, a porcupine even crashed through the ceiling of a guest's room. Here's an excerpt from the report: Tech media company Plex flew its 120 employees to a Honduran resort in 2017 for what was billed as a Survivor-style getaway. They called it "Plexcon." The first harbinger of trouble was an email that arrived before the group departed, informing them that the hotel manager and chef had both quit within days of each other. Things went sharply downhill from there.

CEO Keith Valory, 54, had flown out a day early, intending to channel his inner Jeff Probst and welcome his staff off the buses like a game show host. Instead, he spent the arrival morning flat on his back. "I got E. coli, which is maybe the worst thing you could get, possibly, ever," Valory told the Wall Street Journal this week. "Just as people were arriving on the buses, I was like, 'Uh oh.' I lost 8 or 10 pounds. They had a doctor come to me, which apparently is pretty standard. They nailed an IV bag to the bedpost."

With the CEO incapacitated, chief product officer and co-founder Scott Olechowski, 52, stepped in to run proceedings -- beginning with a forced eating challenge in which one employee had to consume a dead tarantula. [...] Sean Hoff, 42, founder of Moniker Partners, the independent retreat agency that planned the trip, was running himself ragged attempting damage control -- the showers, water, and electricity kept cutting out. [...] Meanwhile, senior software engineer Rick Phillips, 53, was trying to sleep when he heard a crash in his room. He ignored it until morning. "I got up and went over to get in the shower, and there was a porcupine," he said. "It must have climbed a tree and fallen through the ceiling."

Bitcoin

NYT Claims Adam Back Is Bitcoin Creator Satoshi Nakamoto (nytimes.com) 85

A New York Times investigation by John Carreyrou claims a British cryptographer named Adam Back is the strongest circumstantial candidate yet for being Satoshi Nakamoto. The report cites overlaps in writing style, ideology, and technical background, as well as old posts that outlined key parts of Bitcoin years before its launch. Carreyrou is a renowned investigative journalist and author, best known for exposing the massive fraud at Theranos while at the Wall Street Journal. Here's an excerpt from the report: ... As anyone steeped in Bitcoin lore will tell you, Satoshi was a master at the art of maintaining anonymity on the internet, leaving few, if any, digital footprints behind. But Satoshi did leave behind a corpus of texts, including a nine-page white paper (PDF) outlining his invention and his many posts on the Bitcointalk forum, an online message board where users gathered to discuss the digital currency's software, economics and philosophy. And that corpus, it turned out, had expanded significantly during the impostor's civil trial when Martti Malmi, a Finnish programmer who collaborated with Satoshi in Bitcoin's early days, released a trove of hundreds of emails he had exchanged with him. Emails Satoshi sent to other early Bitcoin adopters had surfaced before, but none came close in volume to the Malmi dump. If Satoshi was ever going to be found, I was convinced the key lay somewhere in these texts.

Then again, others must have gone down this road before me. Journalists, academics and internet sleuths had been trying to identify Satoshi for 16 years. During that span, more than 100 names had been put forward, including those of an Irish cryptography student, an unemployed Japanese American engineer, a South African criminal mastermind and the mathematician portrayed in the movie "A Beautiful Mind." The most alluring theories had focused on coincidences that aligned with what little was known about Satoshi: a particular code-writing style, a mysterious work history, an expertise in Bitcoin's key technical concepts, an anti-government worldview. But they had run aground under the weight of an alibi or some other piece of inconsistent or contrary evidence. Each failure had been met with glee by many members of the Bitcoin community. As they liked to point out, only Satoshi could definitively prove his identity by moving some of his coins. Any evidence short of that would be circumstantial.

It seemed foolish to think that I could somehow crack a case that had confounded so many others. But I craved the thrill of a big, challenging story. So I decided to try once more to unmask Bitcoin's mysterious creator.
Back, for his part, denies being Satoshi, writing in a post on X: "i'm not satoshi, but I was early in laser focus on the positive societal implications of cryptography, online privacy and electronic cash, hence my ~1992 onwards active interest in applied research on ecash, privacy tech on cypherpunks list which led to hashcash and other ideas."
The Military

CIA Reportedly Used Secret Quantum Tool To Find Downed Airman in Iran (nypost.com) 262

alternative_right quotes a report from the New York Post: The CIA used a futuristic new tool called "Ghost Murmur" to find and rescue the second American airman who was shot down in southern Iran, The Post has learned. The secret technology uses long-range quantum magnetometry to find the electromagnetic fingerprint of a human heartbeat and pairs the data with artificial intelligence software to isolate the signature from background noise, two sources close to the breakthrough said. It was the tool's first use in the field by the spy agency -- and was alluded to Monday afternoon by President Trump and CIA Director John Ratcliffe at a White House briefing. "It's like hearing a voice in a stadium, except the stadium is a thousand square miles of desert," a source briefed on the program told The Post. "In the right conditions, if your heart is beating, we will find you." The relatively barren landscape made for "an ideal first operational use" of Ghost Murmur, the first source noted.

"Normally this signal is so weak that it can only be measured in a hospital setting with sensors pressed nearly against the chest," the source said. "But advances in a field known as quantum magnetometry -- specifically sensors built around microscopic defects in synthetic diamonds -- have apparently made it possible to detect these signals at dramatically greater distances."

"The capability is not omniscient. It works best in remote, low-clutter environments and requires significant processing time," this person added.
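The signal-processing idea the sources describe (pulling a weak periodic signature out of far louder background noise) can be illustrated with a toy sketch. Everything below is invented for illustration and has no connection to the classified system: a simulated ~1.2 Hz "heartbeat" buried in noise several times stronger is recovered by computing a one-bin discrete Fourier transform at each candidate frequency and picking the most powerful.

```python
import math
import random

def dominant_frequency(samples, rate, candidates):
    """Return the candidate frequency (Hz) with the most signal power,
    computed as a one-bin discrete Fourier transform per candidate."""
    best_f, best_p = None, -1.0
    for f in candidates:
        re = sum(s * math.cos(2 * math.pi * f * i / rate) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / rate) for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = f, power
    return best_f

random.seed(0)
rate, secs, heart_hz = 50, 20, 1.2  # 50 Hz sampling over a 20 s window
samples = [0.3 * math.sin(2 * math.pi * heart_hz * i / rate)  # weak "heartbeat"
           + random.gauss(0, 1.0)                             # much stronger noise
           for i in range(rate * secs)]
candidates = [0.8 + 0.1 * k for k in range(15)]               # 0.8 .. 2.2 Hz
print(round(dominant_frequency(samples, rate, candidates), 3))
```

Averaging over a long window is what makes the weak tone stand out: the periodic component adds coherently while the noise does not, which is the same basic reason long integration times (and "significant processing time") help any weak-signal detector.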
Security

Russian Government Hackers Broke Into Thousands of Home Routers To Steal Passwords (techcrunch.com) 70

An anonymous reader quotes a report from TechCrunch: A group of Russian government hackers has hijacked thousands of home and small business routers around the world as part of an ongoing campaign aimed at redirecting victims' internet traffic to steal their passwords and access tokens, security researchers and government authorities warned on Tuesday. [...] The hacking group targeted unpatched routers made by MikroTik and TP-Link using previously disclosed vulnerabilities, according to the U.K. government's cybersecurity unit NCSC and Lumen's research arm Black Lotus Labs, which released new details of the campaign Tuesday.

According to the researchers, the hackers were able to spy on large numbers of people over the course of several years by compromising their routers, many of which run outdated software, leaving them vulnerable to remote attacks without their owners' knowledge. The NCSC said that these operations are "likely opportunistic in nature, with the actor casting a wide net to reach many potential victims, before narrowing in on targets of intelligence interest as the attack develops." Per the researchers and government advisories, the Russian hackers modified the compromised routers' settings so that victims' internet requests are surreptitiously passed to infrastructure run by the hackers. This lets the hackers redirect victims to spoofed websites under their control, then steal passwords and tokens that let them log in to the victims' online accounts without needing two-factor authentication codes.
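One defensive takeaway from the redirection technique described above is simply auditing which DNS resolvers a router is configured to use. The sketch below is a hypothetical detection heuristic, not tooling from any of the advisories: the allowlist addresses and the `rogue_resolvers` helper are invented for illustration, and a real check would first pull the configured resolvers from the router's admin interface.

```python
# Hypothetical allowlist for illustration; in practice an organization would
# maintain its own list of resolvers its routers are expected to use.
TRUSTED_RESOLVERS = {"1.1.1.1", "8.8.8.8", "9.9.9.9"}

def rogue_resolvers(configured):
    """Return any configured DNS servers that are not on the trusted allowlist."""
    return sorted(set(configured) - TRUSTED_RESOLVERS)

# A hijacked router silently points clients at an attacker-run resolver
# (198.51.100.23 is a reserved documentation address standing in for one):
print(rogue_resolvers(["8.8.8.8", "198.51.100.23"]))  # -> ['198.51.100.23']
```

A check like this catches only the crudest form of the attack (changed resolver settings); it would not detect traffic rerouted at other layers.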

Black Lotus Labs said that Fancy Bear compromised at least 18,000 victims in around 120 countries, including government departments, law enforcement agencies, and email providers across North Africa, Central America, and Southeast Asia. Microsoft, which also released details of the campaign on Tuesday, said in a blog post that its researchers identified over 200 organizations and 5,000 consumer devices affected by these hacking operations, including at least three government organizations in Africa.
The Justice Department said Tuesday it neutralized compromised routers in the U.S. under court authorization. As the DOJ put it, the FBI "developed a series of commands to send to compromised routers" to collect evidence, reset settings, and prevent hackers from breaking back in.
AI

Anthropic Unveils 'Claude Mythos', Powerful AI With Major Cyber Implications 61

"Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale," writes Slashdot reader wiredmikey. "It's already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations." SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic's current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It also embraces the current trend toward agentic AI. "The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills... the model has the highest scores of any model yet developed on a variety of software coding tasks," notes Anthropic in a blog titled Project Glasswing -- Securing critical software for the AI era.

In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many classified as critical. Several are 10 or 20 years old -- the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability found in video software had survived five million hits from other automated testing tools without ever being discovered. And it autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. [...] Anthropic is concerned that Mythos' capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it.
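The claim that a bug survived five million automated test hits is plausible when the vulnerable path sits behind a narrow condition. The toy parser below (entirely invented, not the actual video-software bug) shows why blind random fuzzing essentially never reaches a branch gated on a 4-byte magic value, while a model reading the source can see the comparison directly.

```python
import random

MAGIC = b"\xde\xad\xbe\xef"  # invented trigger value for this toy example

def parse(data: bytes) -> str:
    """Toy parser whose only bug hides behind a 4-byte magic comparison."""
    if data[:4] == MAGIC:
        raise ValueError("memory corruption (simulated)")
    return "ok"

random.seed(1)
survived = 0
for _ in range(100_000):  # stand-in for millions of fuzzer executions
    try:
        parse(bytes(random.getrandbits(8) for _ in range(8)))
        survived += 1
    except ValueError:
        pass
print(survived)  # hitting the buggy branch is a 1-in-2**32 event per input
```

Coverage-guided fuzzers do better than this naive loop, but a single multi-byte equality check still defeats many of them, which is consistent with a bug surviving millions of automated probes.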

To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that 'Preview' is a term used solely to describe the current state of Mythos and the market's readiness to receive it, and will be dropped when the firm gets closer to general release.
AI

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research (infoworld.com) 14

The Internet Bug Bounty program "has been paused for new submissions," its organizers announced last week.

Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, "and has awarded more than $1.5m to researchers who have reported bugs." Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement: "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted."

Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website...

[J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program.

The Internet Bug Bounty stressed that "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..."

"We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."
Apple

Apple's First 50 Years Celebrated - Including How Steve Jobs Finally Accepted an 'Open' App Store (substack.com) 49

Apple's 50th anniversary got celebrated in weird and wild ways. CEO Tim Cook posted a special 30-second video rewinding backwards through the years of Apple's products until it reaches the Apple I. Podcaster Lex Fridman noticed if you play the sound in reverse, "It's the Think Different ad music, pitched up." TechRadar played seven 50-year-old Apple I games on an emulator, including Star Trek, Blackjack, Lunar Lander, and of course, Conway's Game of Life.

And Macworld ranked Apple's 50 most influential people. (Their top five?)

5. Tony Fadell (iPhone co-creator/"father of the iPod")
4. Sir Jony Ive
3. Steve Wozniak
2. Tim Cook
1. Steve Jobs

One of the most thoughtful celebrants was David Pogue, who has spent 42 years writing about Apple (starting as a Macworld columnist and the author of Mac for Dummies, one of the first "...For Dummies" books, published in the early 1990s). Now 63 years old, Pogue spent the last two years working on a 608-page hardcover book titled Apple: The First 50 Years. But on his Substack, Pogue contemplated his own history with the company -- including several interactions with Steve Jobs. Pogue remembers how Jobs "hated open systems. He wanted to make self-contained, beautiful machines. He didn't want them polluted by modifications."

The tech blog Daring Fireball notes that Pogue actually interviewed Scott Forstall (who'd led the iPhone's software development team) for his new book, "and got this story, about just how far Steve Jobs thought Apple could go to expand the iPhone's software library while not opening it to third-party developers." "I want you to make a list of every app any customer would ever want to use," he told Forstall. "And then the two of us will prioritize that list. And then I'm going to write you a blank check, and you are going to build the largest development team in the history of the world, to build as many apps as you can as quickly as possible." Forstall, dubious, began composing a list. But on the side, he instructed his engineers to build the security foundations of an app store into the iPhone's software -- "against Steve's knowledge and wishes," Forstall says. [...]

Two weeks after the iPhone's release, someone figured out how to "jailbreak" the iPhone: to hack it so that they could install custom apps. Jobs burst into Forstall's office. "You have to shut this down!" But Forstall didn't see the harm of developers spending their efforts making the iPhone better. "If they add something malicious, we'll ship an update tomorrow to protect against that. But if all they're doing is adding apps that are useful, there's no reason to break that." Jobs, troubled, reluctantly agreed.

Week by week, more cool apps arrived, available only to jailbroken phones. One day in October, Jobs read an article about some of the coolest ones. "You know what?" he said. "We should build an app store."

Forstall, delighted, revealed his secret plan. He had followed in the footsteps of Burrell Smith (the Mac's memory-expansion circuit) and Bob Belleville (the Sony floppy-drive deal): He'd disobeyed Jobs and wound up saving the project.

In fact, the book "includes new interviews with 150 key people who made the journey, including Steve Wozniak, John Sculley, Jony Ive, and many current designers, engineers, and executives" (according to its description on Amazon). Pogue's book even revisits the story of Steve Jobs proving an iPod prototype could be smaller by tossing it into an aquarium, shouting "If there's air bubbles in there, there's still room. Make it smaller!" But Pogue's book "added that there's a caveat to this compelling bit of Apple lore," reports NPR.

"It never actually happened. It's just one more Apple myth."

Slashdot Top Deals