The Courts

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri 30

Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."

Apple brought certain AI-powered features to the iPhone 16 in the weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
AI

Google DeepMind Workers Vote To Unionize Over Military AI Deals 35

An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management."

[...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pull out of its long-standing contract with the Israeli military, seek greater transparency over how its AI products will be used, and seek some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well."
The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. "A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."
IBM

Moving To Mainframe Can Be Cheaper Than Sticking With VMware (theregister.com) 36

Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility.

That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...]

Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.

Software

'Notepad++ For Mac' Release Is Disavowed By the Creator of the Original (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Andrew Cunningham: As its name implies, the venerable Notepad++ text editor began as a more capable version of the classic Windows Notepad, with features such as line numbering and syntax highlighting. It was created in 2003 by Don Ho, who continues to be its primary author and maintainer, and it has been a Windows-exclusive app throughout its existence (older Notepad++ versions support OSes as old as Windows 95; the current version officially supports everything going back to Windows 7). I'm not a devoted user of the app, but I was aware of its history, which is why I was surprised to see news of a "Notepad++ for Mac" port making the rounds last week, as though it were a port of the original available from the Notepad++ website.

Apparently, this news surprised Ho as well, who claims that the Mac version and its author, Andrey Letov, are "using the Notepad++ trademark (the name) without permission." "This is misleading, inappropriate, and frankly disrespectful to both the project and its users," Ho wrote. "It has already fooled people -- including tech media -- into believing this is an official release. To be crystal clear: Notepad++ has never released a macOS version. Anyone claiming otherwise is simply riding on the Notepad++ name."
Ho repeatedly asked the developer to stop using the brand and eventually reported the trademark use to Cloudflare, the CDN of the Notepad++ for Mac site. "Every day that website remains active, you are in further violation of the law," Ho wrote. "I cannot authorize a 'week or two' of continued trademark infringement."

Letov has since begun rebranding the app as "NextPad++," though the old branding and URL reportedly remained available. The name change is "an homage to NeXT Computer," notes Ars, "and uses a frog icon rather than the Notepad++ lizard."
AI

White House Considers Vetting AI Models Before They Are Released 122

The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic's powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports: In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.

The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself."

The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.
Cellphones

The Pixel 11 Could Be the Next Victim of the RAM Shortage (theverge.com) 36

Google's Pixel 11 lineup could see RAM cuts or lower starting configurations because of the global memory shortage, with leaks suggesting the base model may drop from 12GB to 8GB while Pro models could add 12GB versions below the current 16GB tier. The Verge reports: There will be 16GB configurations available for each, but adding a lower-spec model could mean the 16GB version is getting a price hike. However, the silver lining is that the specs from MysticLeaks also include camera upgrades and brighter displays for the Pro models. The RAM shortage is pushing other phone makers, including Samsung, to raise prices, too.
Science

Infrasound Waves Stop Kitchen Fires, But Can They Replace Sprinklers? (reuters.com) 41

An anonymous reader quotes a report from Ars Technica: In a makeshift demonstration kitchen in Concord, California, cooking oil splatters in and around a frying pan, which catches fire on an unattended gas stove. Within moments, a smoke detector wails. But in this demonstration, something less common happens: An AI-driven sensor activates and wall emitters blast infrasound waves toward the source of the fire in an attempt to put it out. The science of acoustic fire suppression, which has long been known and documented in scientific literature and the press, works by vibrating oxygen molecules away from a fuel source, depriving the fire of a critical component needed for combustion. Indeed, after just a few seconds of infrasound, the tiny kitchen blaze goes out.

"We were able to not just point-and-shoot like a fire extinguisher; we figured out how to run it through ducting and distribute it like a sprinkler system," said Geoff Bruder, co-founder and CEO of Sonic Fire Tech, during the presentation. The company's goal is to replace sprinklers, which are effective at stopping fires but can also do significant water damage to a property. Sonic Fire Tech appears to be the first company trying to commercialize the science of acoustic fire suppression. Its executives have already been touring Southern California; Wednesday's event was the first in the northern half of the state.

The company aims to make this infrasound technique mainstream in both commercial (for instance, a data center, where sprinklers would damage electronics) and in-home installations, given that sprinklers are already required in all new California homes built in 2011 and later. Sonic Fire Tech also hopes to produce a backpack-based system that could be worn by wildland firefighters headed out into the field. "We are making meaningful technological improvements on a monthly basis," Stefan Pollack, a company spokesperson, emailed Ars after the event. But two experts who spoke with Ars raised serious questions about the potential for this technology to supplant traditional sprinklers in a home. They are even more skeptical as to whether the technique can be effective in an uncontrolled wildfire situation, where flames can grow very quickly.
Experts are concerned that infrasound may knock down small flames but does not cool hot surfaces or wet fuel like sprinklers do, which raises the risk of re-ignition, smoldering fires, hidden fires, or blocked fires. Sonic Fire Tech has claimed third-party validation and possible NFPA 13D equivalency, but it has not publicly released full testing details.

Fire officials and outside observers also want more information about reliability, maintenance, calibration, and how system failures would be detected and communicated.
AI

ChatGPT Became So Obsessed With Goblins That OpenAI Had to Intervene (msn.com) 63

The Wall Street Journal reports that OpenAI "recently gave its popular ChatGPT strict instructions. Stop talking about goblins." Recent models of the artificial-intelligence chatbot have been bringing up the creatures in conversations with users seemingly out of the blue, as well as gremlins, trolls and ogres. The goblin-speak caught the attention of programmers, who are often heavy users of the bot. Barron Roth, a 32-year-old product manager at a tech company, said the bot referred to a flaw in his code as a "classic little goblin." He said he counted more than 20 times it mentioned goblins, without any prompting...

Several users speculated that goblin terminology was how the model characterized itself, in lieu of identifying as a person with a soul. Then OpenAI decided enough was enough. "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query," reads an open source line in ChatGPT's base instructions for its coding assistant.

The Journal calls this "a reminder that even as AI companies tout one advance after another in their technology, they are sometimes baffled by the things their own models do...." While training a "nerdy" personality for their model's customization feature, "We unknowingly gave particularly high rewards for metaphors with creatures," OpenAI explained in a blog post. And "From there, the goblins spread." When we looked, use of "goblin" in ChatGPT had risen by 175% after the launch of GPT-5.1, while "gremlin" had risen by 52%... With GPT-5.4, we and our users noticed an even bigger uptick in references to these creatures... Nerdy accounted for only 2.5% of all ChatGPT responses, but 66.7% of all "goblin" mentions in ChatGPT responses... The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them. Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.
It all started because the "nerdy" personality's prompt had said "You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed..." Now OpenAI calls this "a powerful example of how reward signals can shape model behavior in unexpected ways, and how models can learn to generalize rewards in certain situations to unrelated ones."
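The concentration figures above lend themselves to a quick back-of-the-envelope check. A minimal sketch (assuming the two percentages describe shares of the same pool of responses, which the reporting does not spell out):

```python
# Reported figures: the "Nerdy" personality produced 2.5% of all ChatGPT
# responses but accounted for 66.7% of all "goblin" mentions.
nerdy_share_of_responses = 0.025
nerdy_share_of_goblins = 0.667

# If goblin mentions were spread evenly across personalities, Nerdy would
# account for only 2.5% of them. The ratio of actual to expected share is
# the over-representation factor.
lift = nerdy_share_of_goblins / nerdy_share_of_responses
print(f"Nerdy responses mention goblins ~{lift:.0f}x more often than average")
```

Under those assumptions, Nerdy responses were roughly 27 times more goblin-prone than the average response, which is consistent with the reward signal having been applied only in that condition.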

But "fans of goblins don't have to fear," notes the Wall Street Journal. "OpenAI provided a command in its blog post that would remove its creature-suppressing instructions."
Science

Former NASA Engineers Create Ingenious Way To Save Homes From Wildfires Using Noise (nypost.com) 77

"Scientists have created a miraculous new way to stop fires from spreading through neighborhoods using nothing but sound," reports the New York Post: Former NASA engineers with California-based Sonic Fire Tech found that using sound waves can snuff out blazes and potentially be used to stop another Pacific Palisades inferno... The technology works by targeting oxygen molecules using low-frequency sound waves that vibrate them, stopping the fire from growing. "Sound waves vibrate the oxygen faster than the fuel can use it, and break the chemical reaction of the flame," Remington Hotchkis, Chief Commercialization Officer at Sonic Fire Tech, told The Post.

The San Bernardino County Fire Department recently tested out the equipment using a backpack version, and the results were incredible. Video shows firefighters using the technology to put out small blazes on a shrub and a stovetop... In the home application, the system would be alerted and activated if there was a fire, sending the sound waves through a home duct system and essentially snuffing out the blaze. The sound waves can reach as far as 30 feet from a home, the report noted. The sound is also harmless to pets and humans.

The article includes this quote that an executive at the company gave local news station KMPH. "Our former NASA engineers are rocket scientists, and they say it seems like magic, but it's just physics."
AI

What if Tech Company Layoffs Aren't All About AI? (yahoo.com) 31

"Running a Big Tech company during Silicon Valley's AI mania may not necessarily require fewer workers or cost less," writes the Washington Post: Amazon, Google and Meta together have roughly the same number of employees now as they did during an industry-wide hiring binge in 2022, company disclosures show. Growing costs for technical workers and related expenses have often outpaced sales recently. The tech giants' big AI bet hasn't yet paid for itself.

That means AI might be killing jobs not through its labor-saving wizardry but by increasing spending so much that CEOs are pressured to find savings, giving them cover to consciously uncouple from their workforces. Marc Andreessen, a prominent start-up investor and a Meta board director, put it bluntly on a recent podcast. Big company layoffs are a fix for overstaffing and changing economic conditions, he said, but AI provides a convenient scapegoat. "Now they all have the silver bullet excuse: 'Ah, it's AI,'" he said...

"Almost every company that does layoffs is blaming AI, whether or not it really is about AI," Sam Altman, CEO of ChatGPT owner OpenAI, said at a March conference when he listed explanations for AI's unpopularity in the United States.

"Recent history suggests Big Tech companies might not be moving toward a future with fewer workers," the article concludes, "but recalibrating to spend the same, or more, on different people and projects."

So in the end, "AI might soon reduce hiring," the article acknowledges, "But the reluctance or inability of the largest tech firms to cut too deeply so far could also show that the path to making a workforce AI-ready — whatever that means — isn't a predictable straight line charting declining headcount."
Cloud

Amazon Stuck With Months of Repairs After Drone Strikes On Data Centers (arstechnica.com) 191

An anonymous reader quotes a report from Ars Technica: Amazon's cloud customers will need to wait several more months before the US tech company can repair war-damaged data centers and restore normal operations in the Middle East. The announcement comes two months after Iranian drone strikes targeted three Amazon data centers in the United Arab Emirates and Bahrain -- meaning that full recovery from the cloud disruption could take nearly half a year in all. The Amazon Web Services (AWS) dashboard posted an April 30 update describing how its UAE and Bahrain cloud regions "suffered damage as a result of the conflict in the Middle East" and are unable to support customer applications. The update also said that "relevant billing operations are currently suspended while we restore normal operations" in a process that "is expected to take several months."

That wording suggests Amazon will continue to avoid billing AWS customers in the affected regions -- ME-CENTRAL-1 and ME-SOUTH-1 -- after it initially waived all usage-related charges for March 2026 at an estimated cost of $150 million. AWS also "strongly" recommended that customers migrate resources to other cloud regions and rely on remote backups to restore any "inaccessible resources." Some customers, such as the Dubai-based super app Careem -- which offers ride-hailing, household services, and food and grocery delivery -- were able to get back online quickly after doing an overnight migration to other data center servers.

Government

Pentagon Reaches Agreements With Top AI Companies, But Not Anthropic 21

The Pentagon says it has reached deals with seven AI companies -- SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and AWS -- to deploy their tools on classified Defense Department networks. The odd one out is Anthropic, which remains excluded after being labeled a supply-chain risk amid a dispute over military-use guardrails. Reuters reports: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services (AWS), several of which already work with the Pentagon, will be integrated into its secret and top-secret network environments, providing more military access to their products for use on sensitive topics, the Pentagon said in a statement. The lesser-known Reflection AI, which raised $2 billion in October, is backed by 1789 Capital, a venture capital firm in which Donald Trump Jr. is a partner and investor.

Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups. Since the blow-up, newer AI entrants have said the military has sped up the process of incorporating them onto secret and top-secret data levels to less than three months. The process previously took 18 months or longer.

By expanding AI services offered to troops, who use it for planning, logistics, targeting and in other ways to streamline huge operations and perform more quickly, the Pentagon said in its statement it will avoid "vendor lock," a likely nod to its overdependence on Anthropic or other dominant service providers. [...] AI has become increasingly important for the U.S. military. The Pentagon's main AI platform, GenAI.mil, has been used by over 1.3 million Defense Department personnel, the agency noted in its release, after five months of operation.
Further reading: Google and Pentagon Reportedly Agree On Deal For 'Any Lawful' Use of AI
Government

DOJ Sues Cloudera For Deliberately Excluding American Workers From Tech Jobs (zerohedge.com) 94

Longtime Slashdot reader schwit1 shares a report from ZeroHedge: The Justice Department on Tuesday sued Cloudera, accusing the enterprise data and artificial intelligence company of deliberately engineering a hiring process that excluded American workers from at least seven lucrative technology positions while the firm pursued permanent residency sponsorship for foreign workers on temporary visas. In a 14-page complaint filed with the Office of the Chief Administrative Hearing Officer, the department's Civil Rights Division alleges that Cloudera, from March 31, 2024, through at least January 28, 2025, instructed job candidates to submit applications to a dedicated email address, amerijobpostings@cloudera.com, that rejected all external messages with an automated bounce-back error. The company did not advertise the roles on its public careers website or accept applications through its standard portal, as it did for non-sponsorship positions.

Cloudera then attested to the Department of Labor that it could not locate any qualified U.S. workers for the roles, which paid between approximately $180,000 and $294,000 annually, according to the filing. The positions included a Product Manager role in Santa Clara, California, with a listed salary range of $170,186 to $190,000. The case marks one of the most detailed enforcement actions under the Justice Department's Protecting U.S. Workers Initiative, which was relaunched last year and has already produced 10 settlements targeting employers accused of discriminating against American workers in favor of temporary visa holders. "Employers cannot use the PERM sponsorship process as a backdoor for discriminating against U.S. workers," Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division said in a statement. "The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs."

Transportation

First Tesla Semi Rolls Off High-Volume Production Line (electrek.co) 134

Tesla has produced the first Semi from its new high-volume production line at Gigafactory Nevada, a milestone for the long-delayed electric Class 8 truck program after years of pilot builds and delays. Electrek reports: The Tesla Semi has had one of the longest gestation periods in Tesla's history. First unveiled in 2017, the truck was originally promised for production in 2019. That target slipped repeatedly -- to 2020, then 2021, then 2022 -- before Tesla finally delivered a handful of units to PepsiCo in late 2022. Those early trucks were essentially hand-built on a pilot line. Tesla spent the next three years refining the design, cutting roughly 1,000 lbs from the truck, and building out a dedicated factory adjacent to Gigafactory Nevada in Sparks. The company revealed the final production specs in February, confirming two trims: a Standard Range with 325 miles at full 82,000-lb gross combination weight, and a Long Range with 500 miles of range.

Tesla is quoting $290,000 for the 500-mile Long Range version and roughly $260,000 for the Standard Range -- making it the lowest-priced Class 8 battery electric tractor on the market. The shift from a pilot line to a high-volume production line is significant. Tesla's Semi factory is designed for an annual capacity of 50,000 trucks, though the company will ramp gradually. Analysts project deliveries between 5,000 and 15,000 units in 2026, but that sounds way too optimistic. [...] Both trims feature an 800-kW tri-motor drivetrain producing 1,072 hp and support 1.2-MW Megacharger speeds, restoring 60% of range in roughly 30 minutes -- conveniently timed around a driver's mandatory rest break. Tesla has opened its first Megacharger station in Ontario, California, and has mapped 66 Megacharger locations across 15 states.
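The quoted charging specs imply a rough battery pack size, worth a quick sanity check. A sketch (assuming the full 1.2 MW is sustained for the entire 30-minute session and ignoring charging losses, so the derived figures, which do not appear in the article, are ballpark estimates at best):

```python
# Quoted specs: 1.2-MW Megacharger restores 60% of range in roughly
# 30 minutes; the Long Range trim is rated at 500 miles.
charge_power_kw = 1200      # 1.2 MW
charge_time_h = 0.5         # 30 minutes
range_fraction = 0.60       # fraction of range restored per session
long_range_miles = 500

energy_delivered_kwh = charge_power_kw * charge_time_h       # 600 kWh
implied_pack_kwh = energy_delivered_kwh / range_fraction     # ~1000 kWh
implied_consumption = implied_pack_kwh / long_range_miles    # ~2 kWh/mile

print(f"Energy delivered per session: {energy_delivered_kwh:.0f} kWh")
print(f"Implied pack size: {implied_pack_kwh:.0f} kWh")
print(f"Implied consumption: {implied_consumption:.1f} kWh/mile")
```

Real charge curves taper well below peak power, so the actual pack is likely smaller than this naive estimate suggests; the point is only that the quoted figures are mutually consistent with a roughly megawatt-hour-class battery.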

The Courts

Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (sfchronicle.com) 51

An anonymous reader quotes a report from the San Francisco Chronicle: Elon Musk returned to the witness stand Wednesday in Oakland federal court for a second day of testimony in his case against OpenAI, detailing his shift from being an enthusiastic supporter of the nonprofit to feeling betrayed. He also clashed repeatedly with OpenAI's attorney over questions that Musk believed were unfair. He said his feelings towards OpenAI CEO Sam Altman and President Greg Brockman shifted from a "phase one" of support, to a "phase two" of doubts, and finally to "phase three, where I'm sure they're looting the nonprofit. We're currently in phase three," Musk said with a chuckle. Musk said he was a "fool" for giving OpenAI "$38 million of essentially free funding to create what would become an $800 billion company," of which he has no equity stake.

In his 2024 lawsuit, Musk alleged breach of charitable trust and unjust enrichment, arguing OpenAI abandoned its original nonprofit mission to benefit humanity to pursue financial gain. OpenAI's lawyer William Savitt argued Tuesday during his opening statement that the nonprofit entity remains in control of the for-profit public benefit corporation and is now one of the most well-funded nonprofits in the world. Musk is seeking to oust Altman from OpenAI's board and upwards of $134 billion in damages, which he said would be used to fund OpenAI's nonprofit mission. During cross-examination, Savitt clashed with Musk over questioning. Savitt asked whether Musk had contributed $38 million to OpenAI, rather than the $100 million that he later claimed to have invested on X. Musk said he also contributed his reputation to the company and came up with the idea for the name, leading Savitt to ask Musk to respond yes or no to "simple" questions.

"Your questions are not simple. They're designed to trick me, essentially," Musk said, adding that he had to elaborate or it would mislead the jury. He compared Savitt's questions to asking, "have you stopped beating your wife?" Judge Yvonne Gonzalez Rogers intervened, leading Musk to answer yes to the $38 million investment amount. The world's richest man said his doubts grew and by late 2022, he thought "wait a second, these guys are betraying their promise. They're breaking the deal." "I started to lose confidence that they were telling me the truth," Musk said. A turning point was co-defendant Microsoft's investment of billions of dollars into OpenAI, Musk said. On October 23, 2022, Musk texted Altman that he was "disturbed" to see OpenAI's valuation of $20 billion in the wake of the Microsoft deal. Musk called the deal a "bait and switch," since a nonprofit doesn't have a valuation. OpenAI had "for all intents and purposes" become primarily a for-profit company, Musk argued. Altman responded to Musk by text that "I agree this feels bad," saying that OpenAI had previously offered equity in the company but Musk hadn't wanted it at the time. Altman said the company was happy to offer equity in the future. Musk said it "didn't seem to make sense to me" to hold equity in what should be a nonprofit.
Musk also testified about former OpenAI board member Shivon Zilis, who lives with him, is the mother of four of his children, and served as a senior advisor at Neuralink. He denied that she shared sensitive OpenAI information with him. Court evidence showed Musk had encouraged her to stay close to OpenAI to "keep info flowing" and had approved Neuralink recruiting OpenAI employees, which he defended by saying workers are free to change jobs. "It's a free country," Musk said.

Recap:
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
