Submission + - Trump Administration To Take Equity Stake in Former Intel CEO's Chip Startup (wsj.com)

An anonymous reader writes: The Trump administration has agreed to inject up to $150 million into a startup trying to develop more advanced semiconductor manufacturing techniques in the U.S., its latest bid to support strategically important domestic industries with government incentives. Under the arrangement, the Commerce Department would give the incentives to xLight, a startup trying to improve the critical chip-making process known as extreme ultraviolet lithography, the agency said in a Monday release. In return, the government would get an equity stake that would likely make it xLight’s largest shareholder.

The Dutch firm ASML is currently the only global producer of EUV machines, which can cost hundreds of millions of dollars each. XLight is seeking to improve on just one component of the EUV process: the crucially important lasers that etch complex microscopic patterns onto chemically treated silicon wafers. The startup is hoping to integrate its light sources into ASML’s machines. XLight represents a second act for Pat Gelsinger, the former chief executive of Intel who was fired by the board late last year after the chip maker suffered from weak financial performance and a stalled manufacturing expansion. Gelsinger serves as executive chairman of xLight’s board.

[...] The xLight deal uses funding from the 2022 Chips and Science Act allocated for earlier stage companies with promising technologies. It is the first Chips Act award in President Trump’s second term and is a preliminary agreement, meaning it isn’t finalized and could change. “This partnership would back a technology that can fundamentally rewrite the limits of chipmaking,” Commerce Secretary Howard Lutnick said in the release.

Submission + - Fidelity sues Broadcom, says cutoff of VMware software threatens major system failures (msn.com)

Joe_Dragon writes: Fidelity Technology Group, the tech arm of investment manager Fidelity, told a court in Suffolk County on Friday that Broadcom is about to pull the plug on software the company has used for years, causing huge system failures across all of its platforms.

The filing said the conflict began when Broadcom told Fidelity it would end its access to the VMware tools after January 21, a move Fidelity said could shut down trading, block customers from their accounts, and break the systems its workers use each day.

Fidelity said it filed the action because it believes Broadcom is ignoring a contract that came with VMware long before Broadcom bought the company.

The lawsuit said VMware’s virtualization software has powered Fidelity’s virtual servers since 2005, and the company said it built most of its internal and customer-facing systems on top of that setup.

Fidelity said the software became central to how it handles account access, trade execution, and everyday service for its nearly 50 million customers.

Fidelity explained that this fight began in 2023 when Broadcom completed its purchase of VMware and changed the entire product lineup.

The filing said Broadcom took the older VMware tools and rebuilt them into new bundles that cost far more than the separate products Fidelity used for years.

Fidelity said that when it tried to renew its old subscription, Broadcom refused to honor the VMware contract. Fidelity said Broadcom pushed it to buy the new bundle instead of the tools it already used, which the company said would change its tech setup in a way that made no sense for its systems.

Fidelity argued that losing access on the date Broadcom first gave, December 22, would have made it impossible to keep its platforms running.

Fidelity told the court it would need at least 18 to 24 months to migrate to a new setup because of how deeply VMware runs through its servers.

The filing said Broadcom later agreed to extend the cutoff to January 21, giving the judge time to hear the case. Fidelity said this delay helps only for now, because the threat to its operations still stands if access ends.

Submission + - OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide (arstechnica.com)

An anonymous reader writes: Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot. The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot. “A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI’s filing argued. [...] All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of “sensitive evidence” made available to the public, due to its intention to handle mental health-related cases with “care, transparency, and respect.”

Submission + - US Banks Scramble To Assess Data Theft After Hackers Breach Financial Tech Firm (techcrunch.com)

An anonymous reader writes: Several U.S. banking giants and mortgage lenders are reportedly scrambling to assess how much of their customers’ data was stolen during a cyberattack on a New York financial technology company earlier this month. SitusAMC, which provides technology for over a thousand commercial and real estate financiers, confirmed in a statement over the weekend that it had identified a data breach on November 12. The company said that unspecified hackers had stolen corporate data associated with its banking customers’ relationship with SitusAMC, as well as “accounting records and legal agreements” during the cyberattack.

The statement added that the scope and nature of the cyberattack “remains under investigation.” SitusAMC said that the incident is “now contained,” and that its systems are operational. The company said that no encrypting malware was used, suggesting that the hackers were focused on exfiltrating data from the company’s systems rather than causing destruction. According to Bloomberg and CNN, citing sources, SitusAMC sent data breach notifications to several financial giants, including JPMorgan Chase, Citigroup, and Morgan Stanley. SitusAMC also counts pension funds and state governments as customers, according to its website.

It’s unclear how much data was taken, or how many U.S. banking consumers may be affected by the breach. Companies like SitusAMC may not be widely known outside the financial world, but they provide the mechanisms and technologies that their banking and real estate customers use to comply with state and federal rules and regulations. In its role as a middleman for financial clients, the company handles vast amounts of non-public banking information on behalf of its customers. According to SitusAMC’s website, the company processes billions of loan-related documents annually.

Submission + - DOJ Arrests U.S. Citizens and Chinese Nationals for Exporting AI Tech to China (pjmedia.com)

schwit1 writes: The U.S. Department of Justice (DOJ) announced in a statement that it has arrested two U.S. citizens and two Chinese nationals and charged them with conspiracy to illegally export to China advanced NVIDIA microchips called Graphics Processing Units (GPUs). GPUs are used in a wide range of critical artificial intelligence (AI) applications.

The two American citizens who were arrested are Hon Ning Ho, also known as “Mathew Ho,” a Tampa resident who was born in Hong Kong, and Brian Curtis Raymond from Huntsville, Alabama. The two Chinese nationals arrested by the DOJ are Cham Li, also known as “Tony Li,” a resident of San Leandro, California, and Jing Chen, also known as “Harry Chen,” a 45-year-old who was living in Tampa under an F-1 nonimmigrant student visa.

All four were arrested and appeared in courtrooms in their respective jurisdictions on Nov. 19.

“The indictment unsealed yesterday alleges a deliberate and deceptive effort to transship controlled NVIDIA GPUs to China by falsifying paperwork, creating fake contracts, and misleading U.S. authorities,” said Assistant Attorney General for National Security John A. Eisenberg. “The National Security Division is committed to disrupting these kinds of black markets of sensitive U.S. technologies and holding accountable those who participate in this illicit trade.”

The charges the defendants face include multiple counts of conspiracy to violate the Export Control Reform Act (ECRA); ECRA violations; smuggling; conspiracy to commit money laundering, and money laundering. Each defendant faces a possible 20-year prison sentence for each ECRA violation, 10 years per smuggling count, and 20 years per money laundering count. Given the number of counts they face, it’s possible they could spend the rest of their lives in prison.

The defendants will be tried in federal court in Florida.

Earlier this year, a report from the Financial Times revealed that at least $1 billion worth of Nvidia’s chips were shipped to China after the Trump administration intensified its restrictions on chip exports to China.

Submission + - Nano Banana Pro Uses Gemini 3 Power To Generate More Realistic AI Images (arstechnica.com)

An anonymous reader writes: Google’s meme-friendly Nano Banana image-generation model is getting an upgrade. The new Nano Banana Pro is rolling out with improved reasoning and instruction following, giving users the ability to create more accurate images with legible text and make precise edits to existing images. It’s available to everyone in the Gemini app, but free users will find themselves up against the usage limits pretty quickly. Nano Banana Pro is part of the newly launched Gemini 3 Pro—it’s actually called Gemini 3 Pro Image in the same way the original is Gemini 2.5 Flash Image, but Google is sticking with the meme-y name. You can access it by selecting Gemini 3 Pro and then turning on the “Create images” option.

Google says the new model can follow complex prompts to create more accurate images. The model is apparently so capable that it can generate an entire usable infographic in a single shot with no weird AI squiggles in place of words. Nano Banana Pro is also better at maintaining consistency in images. You can blend up to 14 images with this tool, and it can maintain the appearance of up to five people in outputs. Google also promises better editing. You can refine your AI images or provide Nano Banana Pro with a photo and make localized edits without as many AI glitches. It can even change core elements of the image like camera angles, color grading, and lighting without altering other elements. Google is pushing the professional use angle with its new model, which has much-improved resolution options. Your creations in Nano Banana Pro can be rendered at up to 4K.

Submission + - White House Prepares Executive Order To Block State AI Laws (politico.com)

An anonymous reader writes: The White House is preparing to issue an executive order as soon as Friday that tells the Department of Justice and other federal agencies to prevent states from regulating artificial intelligence, according to four people familiar with the matter and a leaked draft of the order obtained by POLITICO. The draft document, confirmed as authentic by three people familiar with the matter, would create an “AI Litigation Task Force” at the DOJ whose “sole responsibility” would be to challenge state AI laws.

Government lawyers would be directed to challenge state laws on the grounds that they unconstitutionally regulate interstate commerce, are preempted by existing federal regulations or otherwise at the attorney general’s discretion. The task force would consult with administration officials, including the special adviser for AI and crypto — a role currently occupied by tech investor David Sacks.

The executive order, in the draft obtained by POLITICO, would also empower Commerce Secretary Howard Lutnick to publish a review of “onerous” state AI laws within 90 days and restrict federal broadband funds to states whose AI laws are found to be objectionable. It would direct the Federal Trade Commission to investigate whether state AI laws that “require alterations to the truthful outputs of AI models” are blocked by the FTC Act. And it would order the Federal Communications Commission to begin work on a reporting and disclosure standard for AI models that would preempt conflicting state laws.

Submission + - In the AI Race, Chinese Talent Still Drives American Research (nytimes.com)

An anonymous reader writes: When Mark Zuckerberg, Meta’s chief executive, unveiled the company’s Superintelligence Lab in June, he named 11 artificial intelligence researchers who were joining his ambitious effort to build a machine more powerful than the human brain. All 11 were immigrants educated in other countries. Seven were born in China, according to a memo viewed by The New York Times. Although many American executives, government officials and pundits have spent months painting China as the enemy of America’s rapid push into A.I., much of the groundbreaking research emerging from the United States is driven by Chinese talent.

Two new studies show that researchers born and educated in China have for years played major roles inside leading U.S. artificial intelligence labs. They also continue to drive important A.I. research in industry and academia, despite the Trump administration’s crackdown on immigration and growing anti-China sentiment in Silicon Valley. The research, from two organizations, provides a detailed look at how much the American tech industry continues to rely on engineers from China, particularly in A.I. The findings also offer a more nuanced understanding of how researchers in the two countries continue to collaborate, despite increasingly heated language from Washington and Beijing.

Submission + - US Backs Three Mile Island Nuclear Restart With $1 Billion Loan To Constellation (cnbc.com)

An anonymous reader writes: The Trump administration will provide Constellation Energy with a $1 billion loan to restart the Crane Clean Energy Center nuclear plant in Pennsylvania, Department of Energy officials said Tuesday. Previously known as Three Mile Island Unit 1, the plant is expected to start generating power again in 2027. Constellation unveiled plans to rename and restart the reactor in Sept. 2024 through a power purchase agreement with Microsoft to support the tech company’s data center demand in the region.

Three Mile Island Unit 1 ceased operations in 2019, one of a dozen reactors that closed in recent years as nuclear struggled to compete against cheap natural gas. It sits on the same site as Three Mile Island Unit 2, the reactor that partially melted down in 1979 in the worst nuclear accident in U.S. history. The loan would cover the majority of the project’s estimated cost of $1.6 billion. The first advance to Constellation is expected in the first quarter of 2026, said Greg Beard, senior advisor to the Energy Department’s Loan Programs Office, in a call with reporters. The loan comes with a guarantee from Constellation that it will protect taxpayer money, Beard said.

Submission + - Logitech Hacked (nerds.xyz)

BrianFagioli writes: Logitech has confirmed a cybersecurity breach after an intruder exploited a zero-day in a third-party software platform and copied internal data. The company says the incident did not affect its products, manufacturing or business operations, and it does not believe sensitive personal information like national ID numbers or credit card data were stored in the impacted system. The attacker still managed to pull limited information tied to employees, consumers, customers and suppliers, raising fair questions about how long the zero-day existed before being patched.

Logitech brought in outside cybersecurity firms, notified regulators and says the incident will not materially affect its financial results. The company expects its cybersecurity insurance policy to cover investigation costs and any potential legal or regulatory issues. Still, with zero-day attacks increasing across the tech world, even established hardware brands are being forced to acknowledge uncomfortable weaknesses in their internal systems.

Submission + - Hyundai Data Breach May Have Leaked Drivers' Personal Information (caranddriver.com)

sinij writes: Hyundai is warning customers of a data breach that resulted in the personal data of up to 2.7 million customers being leaked, the brand confirmed to Car and Driver.

Thanks to the tracking modules plaguing most modern cars, that data likely includes the times and locations of customers’ vehicles. These repeated breaches make it clear that, unlike smartphone manufacturers, which are inherently tech companies, car manufacturers that collect your data are going to keep getting breached and leaking it.

Submission + - Code.org Unveils Activities for Inaugural Hour of AI

theodp writes: Twelve years after it unveiled activities for the inaugural Hour of Code in 2013, tech-backed nonprofit Code.org has unveiled activities for next month's inaugural Hour of AI. From the press release, Hour of AI Unveils 100+ Free Activities to Help Demystify AI for Educators, Families, and Kids:

Today, Code.org and CSforALL unveiled the activity catalog for the first annual Hour of AI, which takes place during Computer Science Education Week (December 8–14, 2025). More than 50 leading tech companies, nonprofits, and foundations are contributing to a suite of activities that will help learners around the world explore the power and possibilities of AI through creativity, play, and problem-solving.

"The next generation can't afford to be passive users of AI – they must be active shapers of it," said Hadi Partovi, CEO and co-founder of Code.org. "The Hour of AI and its roster of incredible partners are empowering students to explore, create, and take ownership of the technology that is shaping their future."

Building on more than a decade of global excitement around the Hour of Code, the Hour of AI marks a new chapter that helps students move from consuming AI to creating with it. With engaging activities from partners like Google, [Microsoft-owned] Minecraft Education, LEGO Education, Scratch Foundation, and Khan Academy, students will have the opportunity to see how AI and computer science work hand-in-hand to fuel imagination, innovation, and impact.

Submission + - Europe's cookie law messed up the internet. Brussels wants to fix it. (politico.eu)

AmiMoJo writes: In a bid to slash red tape, the European Commission wants to eliminate one of its peskiest laws: a 2009 tech rule that plastered the online world with pop-ups requesting consent to cookies. European rulemakers in 2009 revised a law called the e-Privacy Directive to require websites to get consent from users before loading cookies on their devices, unless the cookies are “strictly necessary” to provide a service. Fast forward to 2025 and the internet is full of consent banners that users have long learned to click away without thinking twice.

A note sent to industry and civil society attending a focus group on Sept. 15, seen by POLITICO, showed the Commission is pondering how to tweak the rules to include more exceptions or make sure users can set their preferences on cookies once (for example, in their browser settings) instead of every time they visit a website.

Submission + - UK Secondary Schools Pivoting from Narrowly Focused CS Curriculum to AI Literacy

theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct unintended consequences of a shift made more than a decade ago from the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, to a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups to address concerns about the UK’s programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" with the support of its partners, including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.

Submission + - A jailed hacking kingpin reveals all about cybercrime gang (bbc.com)

alternative_right writes: Vyacheslav Penchukov and the gangs he either led or was a part of stole tens of millions of pounds from their victims.

In the late 2000s, he and the infamous Jabber Zeus crew used revolutionary cyber-crime tech to steal directly from the bank accounts of small businesses, local authorities and even charities. Victims saw their savings wiped out and balance sheets upended. In the UK alone, there were more than 600 victims, who lost more than £4m ($5.2m) in just three months.

Between 2018 and 2022, Penchukov set his sights higher, joining the thriving ransomware ecosystem with gangs that targeted international corporations and even a hospital.

Submission + - Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com)

An anonymous reader writes: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues which have in turn faced pushback from companies and the U.S. government.

EU antitrust chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.

Submission + - UK Replacing Narrowly Focused CS GCSE in Pivot to AI Literacy for Schoolkids

theodp writes: The UK Department for Education announced this week that it is "replacing the narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct the unintended consequences of a shift made more than a decade ago from the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, to a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups like Google, Microsoft, and the British Computer Society, who pushed for a curriculum overhaul to address concerns about the UK’s programming talent pipeline (a similar U.S. talent pipeline crisis was also declared around the same time).

From the Government Response to the Curriculum and Assessment Review: "We will rebalance the computing curriculum as the Review suggests, to ensure pupils develop essential digital literacy whilst retaining important computer science content. Through the reformed curriculum, pupils will know from a young age how computers can be trained using data and they will learn essential digital skills such as AI literacy."

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org is orchestrating a similar move in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" that's supported by AI giants and Code.org donors Microsoft, Google, and Amazon. In September, Code.org pledged to the White House at an AI Education Task Force meeting led by First Lady Melania Trump and attended by U.S. Secretary of Education Linda McMahon and Google CEO Sundar Pichai (OpenAI CEO Sam Altman was spotted in the audience) that it will engage 25 million learners in the new Hour of AI this school year, build AI pathways in 25 states, and launch a free high school AI course for 400,000 students by 2028.

Submission + - Ask Slashdot: How To Get People To Our 2600 Meeting?

alternative_right writes: Years ago, we had a large and exciting group at Houston 2600: hobbyists of all sorts, each with their own interests and active projects, or at least fascinations. Then COVID-19 hit and people stopped coming. Now it seems the audience is staying home, and the only people sporadically showing up are either interested in talking about the latest hacking tools for their future careers in computer security, or wannabe hackers who seem to have no curiosity about anything other than money. Where are the hobbyists, and how do we get them to join us and share some excitement about technology? Or did Big Tech and social media finally manage to kill that?

Submission + - Netflix is way worse for the environment than ChatGPT (nerds.xyz)

BrianFagioli writes: Netflix and YouTube streaming produce far more CO2 than asking ChatGPT a question, according to a new analysis of digital energy use. An hour of HD video streaming generates about 42 grams of CO2, while a chatbot prompt is around 0.1 grams. Even AI image generation (about 1 gram per image) comes in well below binge-watching. The study also found that Zoom calls and text-to-video AI generation sit in the middle, but streaming is still the standout energy hog because it requires continuous data transfer and processing.

Researchers say the bigger problem isn't individual behavior but the energy sources that power data centers. The tech sector produced an estimated 900 million tons of CO2 last year, with only about 30 percent powered by renewables. If that shifted to 80 or 90 percent, emissions from all digital activities would drop significantly without people changing their habits at all.
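The cited figures make for easy back-of-the-envelope math. A minimal sketch using the article's numbers; note the sector-wide projection uses a naive "emissions scale with the non-renewable share" assumption of mine, not the study's actual model:

```python
# Per-activity figures cited in the article (grams of CO2).
STREAMING_G_PER_HOUR = 42.0    # one hour of HD video streaming
CHATBOT_G_PER_PROMPT = 0.1     # one chatbot prompt
IMAGE_G_PER_IMAGE = 1.0        # one AI-generated image

# One hour of streaming is roughly equivalent to this many prompts.
prompts_per_hour = STREAMING_G_PER_HOUR / CHATBOT_G_PER_PROMPT
print(f"1 hour of HD streaming ~ {prompts_per_hour:.0f} chatbot prompts")

# Naive sector-wide scaling (my assumption): emissions track the
# fossil-powered share of data-center energy.
SECTOR_MTONS = 900                      # tech sector, million tons CO2 last year
renewable_now, renewable_goal = 0.30, 0.90
projected_mtons = SECTOR_MTONS * (1 - renewable_goal) / (1 - renewable_now)
print(f"At 90% renewables: ~{projected_mtons:.0f} Mt CO2 (vs {SECTOR_MTONS} Mt)")
```

By this rough accounting, one hour of streaming equals about 420 chatbot prompts, and the sector-wide figure would fall to roughly a seventh of today's total.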

Submission + - Gemini AI To Transform Google Maps Into a More Conversational Experience (apnews.com)

An anonymous reader writes: Google Maps is heading in a new direction with artificial intelligence sitting in the passenger’s seat. Fueled by Google’s Gemini AI technology, the world’s most popular navigation app will become a more conversational companion as part of a redesign announced Wednesday. The hands-free experience is meant to turn Google Maps into something more like an insightful passenger able to direct a driver to a destination while also providing nearby recommendations on places to eat, shop or sightsee, when asked for the advice. “No fumbling required — now you can just ask,” Google promised in a blog post about the app makeover.

The AI features are also supposed to enable Google Maps to be more precise by calling out landmarks to denote the place to make a turn instead of relying on distance notifications. AI chatbots, like Gemini and OpenAI’s ChatGPT, have sometimes lapsed into periods of making things up — known as “hallucinations” in tech speak — but Google is promising that built-in safeguards will prevent Maps from accidentally sending drivers down the wrong road. All the information that Gemini is drawing upon will be culled from the roughly 250 million places stored in Google Maps’ database of reviews accumulated during the past 20 years. Google Maps’ new AI capabilities will be rolling out to both Apple’s iPhone and Android mobile devices.
