Submission + - Nvidia Details New AI Chips and Autonomous Car Project With Mercedes (nytimes.com)

An anonymous reader writes: On Monday, [Jensen Huang, the chief executive of the chip-making giant Nvidia] said the company would begin shipping a new A.I. chip later this year, one that can do more computing with less power than previous generations of chips could. Known as the Vera Rubin, the chip has been in development for three years and is designed to fulfill A.I. requests more quickly and cheaply than its predecessors. Mr. Huang, who spoke during CES, an annual tech conference in Las Vegas, also discussed Nvidia’s surprisingly ambitious work around autonomous vehicles. This year, Mercedes-Benz will begin shipping cars equipped with Nvidia self-driving technology comparable to Tesla’s Autopilot.

Nvidia’s new Rubin chips are being manufactured and will be shipped to customers, including Microsoft and Amazon, in the second half of the year, fulfilling a promise Mr. Huang made last March when he first described the chip at the company’s annual conference in San Jose, Calif. Companies will be able to train A.I. models with one-quarter as many Rubin chips as they needed with its predecessor, the Blackwell, and serve chatbots and other A.I. products at one-tenth the cost. They will also be able to install the chips in data centers more quickly, thanks to redesigned supercomputers that feature fewer cables. If the new chips live up to their promise, they could allow companies to develop A.I. at a lower cost and at least begin to respond to the soaring electrical demands of data centers being built around the world.

[...] On Monday, he said Nvidia had developed new A.I. software that would allow customers like Uber and Lucid to develop cars that navigate roads autonomously. It will share the system, called Alpamayo, to spread its influence and the appeal of Nvidia’s chip technology. Since 2020, Nvidia has been working with Mercedes to develop a class of self-driving cars. They will begin shipping an early example of their collaboration when Mercedes CLA cars become available in the first half of the year in Europe and the United States. Mr. Huang said the company started working on self-driving technology eight years ago. It has more than a thousand people working on the project. "Our vision is that someday, every single car, every single truck, will be autonomous," Mr. Huang said.

Submission + - The Nation's Strictest Privacy Law Goes Into Effect (arstechnica.com)

An anonymous reader writes: Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that’s among the strictest in the nation took effect at the beginning of the year. [...] Two years ago, California’s Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. The state privacy agency, CalPrivacy, then forwards it to all registered brokers. Starting in August, brokers will have 45 days after receiving the notice to report the status of each deletion request. If any of the brokers’ records match the information in the demand, all associated data—including inferences—must be deleted unless a legal exemption applies, such as for information provided during one-to-one interactions between the individual and the broker. To use DROP, individuals must first prove they’re a California resident.
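In code terms, the broker-side obligation described above is a simple match-and-purge step. Here is a minimal sketch of that logic, assuming hypothetical record fields and matching on email alone (the real platform's matching criteria and interfaces are not specified in this form):

```python
from dataclasses import dataclass, field

@dataclass
class BrokerRecord:
    email: str
    profile: dict                                    # directly collected data
    inferences: list = field(default_factory=list)   # derived attributes, also subject to deletion
    exempt: bool = False                             # e.g., data from one-to-one interactions

def process_drop_request(records: list[BrokerRecord], demand_email: str) -> list[BrokerRecord]:
    """Delete every matching record, inferences included, unless an exemption applies."""
    kept = []
    for rec in records:
        if rec.email == demand_email and not rec.exempt:
            continue  # matched and non-exempt: the whole record is dropped
        kept.append(rec)
    return kept

# Example: a broker holding two records for the same resident.
records = [
    BrokerRecord("resident@example.com", {"zip": "94103"}, ["likely homeowner"]),
    BrokerRecord("resident@example.com", {"order": "#123"}, exempt=True),
]
records = process_drop_request(records, "resident@example.com")
print(len(records))  # 1 -- only the exempt one-to-one record survives
```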

Submission + - Lego's Smart Brick Gives the Iconic Analog Toy a New Digital Brain (wired.com)

An anonymous reader writes: At CES in Las Vegas today, Lego has unveiled its new Smart Play platform, aimed at taking its distinctly analog plastic blocks and figures into a new world of tech-powered interactive play—but crucially one without any reliance on screens. Smart Play revolves around Lego's patented sensor- and tech-packed brick. It's the same size as a standard 2 x 4 Lego brick, but it is capable of connecting to compatible Smart Minifigures and Smart Tags and interacting with them in real time. By pairing these components, kids big and small can create context-appropriate sounds and light effects as they play with the Danish company's toys.

[...] Lego claims this Smart Play platform, developed in-house by the company’s Creative Play Lab team in collaboration with Capgemini's Cambridge Consultants, “features more than 20 patented world-firsts within its technology.” The heart of the system is the Smart Brick's custom-made chip, which measures smaller than a standard Lego stud. Also crammed into the eight-stud brick are an LED light array, accelerometers, light sensors, a sound sensor, and even a miniature speaker. The internal battery will supposedly work even after years of inactivity, and to avoid any need for cable access to the Smart Brick once it's built into a beloved creation, Lego has also added wireless charging. Indeed, Lego has made a charging pad that will power up several Smart Bricks simultaneously.

That all-important brain chip is a 4.1-millimeter custom mixed-signal ASIC running a bespoke Play Engine, which interprets motion, orientation, and magnetic fields. A copper coil assembly enables the brick’s tag recognition, while a proprietary “Brick-to-Brick position system” uses these coils to sense distance, direction, and orientation between multiple Smart Bricks. Moreover, Lego claims that multiple Smart Bricks form a “self-organizing network” that requires no setup, no app, no central hub, and no external controllers—and so no screens. A Bluetooth-based “BrickNet” protocol shares data between the Smart Bricks.
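Lego hasn't published how BrickNet actually works, but a "self-organizing network" with no hub and no setup suggests each brick simply broadcasts events to whatever peers it can hear and reacts locally, using the coil-sensed distance. A toy model of that idea, with every class and method name hypothetical rather than Lego's actual API:

```python
import math

class SmartBrick:
    """Toy model of a self-organizing brick: no hub, no pairing step --
    every brick just broadcasts events and its peers react locally."""

    def __init__(self, brick_id: str, position: tuple[float, float]):
        self.brick_id = brick_id
        self.position = position          # stud coordinates, for the distance stand-in
        self.peers: list["SmartBrick"] = []

    def broadcast(self, event: str) -> None:
        # BrickNet-style flooding: every peer in radio range hears the event.
        for peer in self.peers:
            peer.on_event(self, event)

    def distance_to(self, other: "SmartBrick") -> float:
        # Stand-in for the coil-based Brick-to-Brick position sensing.
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return math.hypot(dx, dy)

    def on_event(self, sender: "SmartBrick", event: str) -> None:
        # React locally; a real brick might scale light/sound by distance.
        d = self.distance_to(sender)
        print(f"{self.brick_id}: heard '{event}' from {sender.brick_id} at {d:.1f} studs")

# Two bricks cooperate with no central hub, app, or controller.
a = SmartBrick("brick-A", (0.0, 0.0))
b = SmartBrick("brick-B", (6.0, 8.0))
a.peers.append(b); b.peers.append(a)
a.broadcast("shaken")   # brick-B reacts based on sensed distance (10.0 studs)
```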

Sounds are handled by a tiny analog synthesizer putting out real-time audio (thus minimizing memory load) via the brick's miniature speaker, which uses the brick's internal air spaces to amplify sound. As a result, the audio effects are apparently immediate and can be used to enhance play with real-time sound. Lego insists there are no prerecorded clips of lightsabers or other pieces of audio being used as a cheat. Just like the Smart Minifigs, the 2 x 2 studless tile tags trigger sounds, lights, or behaviors tied to where they are placed or how they are played with. They communicate with other components through near-field magnetic connections. Each tile has a unique digital ID, which is read by the brain brick, while the minifigures—outwardly identical to standard minifigs—carry their unique digital ID on an internal chip.

Submission + - The GeekWire Stories that Defined 2025 (Spoiler Alert: AI Dominated)

theodp writes: In a year-end podcast, GeekWire looks back at the stories that defined 2025, with the "Most Popular" award going to "Coding is dead: UW computer science program rethinks curriculum for the AI era."

Not too surprisingly, AI dominated 2025's headlines. Mandates from tech company leaders to use AI — but with no playbook on how — are creating worker stress, prompting one tech veteran to comment on the brutality of tech cycles: "The challenge, and opportunity for leadership, is whether the [AI] bets actually compound into something durable, or just become another slide deck for next year’s reorg."

GeekWire notes that Microsoft President Brad Smith offered investors his own evidence that AI is real at Microsoft's Annual Shareholder Meeting in December. Smith explained that he had asked Copilot’s Researcher Agent earlier that day to produce a report on an issue from seven or eight years ago, and it generated a 25-page report with 100 citations that so wowed his colleagues that they clamored for him to share the prompt so they could all learn how to use AI more effectively. While Smith shared neither the report nor the prompt in the webcast, the anecdote alone had his fellow Microsoft execs nodding and smiling in amazement (GeekWire couldn't resist wondering aloud how many of the recipients used their AI agents to summarize the 25-page report rather than actually reading it).

Submission + - Ready, Fire, Aim: As Schools Embrace AI, Skeptics Raise Concerns

theodp writes: "Fueled partly by American tech companies, governments around the globe are racing to deploy generative A.I. systems and training in schools and universities," reports the NY Times. "In early November, Microsoft said it would supply artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates. Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to provide ChatGPT Edu, a service for schools and universities, for 165,000 educators in Kazakhstan. Last month, xAI, Elon Musk’s artificial intelligence company, announced an even bigger project with El Salvador: developing an A.I. tutoring system, using the company’s Grok chatbot, for more than a million students in thousands of schools there."

"In the United States, where states and school districts typically decide what to teach, some prominent school systems recently introduced popular chatbots for teaching and learning. In Florida alone, Miami-Dade County Public Schools, the nation’s third-largest school system, rolled out Google’s Gemini chatbot for more than 100,000 high school students. And Broward County Public Schools, the nation’s sixth-biggest school district, introduced Microsoft’s Copilot chatbot for thousands of teachers and staff members."

"Teachers currently have few rigorous studies to guide generative A.I. use in schools. Researchers are just beginning to follow the long-term effects of A.I. chatbots on teenagers and schoolchildren. 'Lots of institutions are trying A.I.,' said Drew Bent, the education lead at Anthropic. 'We’re at a point now where we need to make sure that these things are backed by outcomes and figure out what’s working and what’s not working.'"

Submission + - France Targets Australia-Style Social Media Ban For Children Next Year (theguardian.com)

An anonymous reader writes: France intends to follow Australia and ban social media platforms for children from the start of the 2026 academic year. A draft bill preventing under-15s from using social media will be submitted for legal checks and is expected to be debated in parliament early in the new year. The French president, Emmanuel Macron, has made it clear in recent weeks that he wants France to swiftly follow Australia’s world-first ban on social media platforms for under-16s, which came into force in December. The Australian ban covers Facebook, Snapchat, TikTok and YouTube.

Le Monde and France Info reported on Wednesday that a draft bill was now complete and contained two measures: a ban on social media for under-15s and a ban on mobile phones in high schools, where 15- to 18-year-olds study. Phones have already been banned in primary and middle schools. The bill will be submitted to France’s Conseil d'Etat for legal review in the coming days. Education unions will also look at the proposed high-school ban on phones. The government wants the social media ban to come into force from September 2026.

Le Monde reported that the text of the draft bill cited “the risks of excessive screen use by teenagers,” including the dangers of exposure to inappropriate social media content, online bullying, and altered sleep patterns. The bill states the need to “protect future generations” from dangers that threaten their ability to thrive and live together in a society with shared values. Earlier this month, Macron confirmed at a public debate in Saint-Malo that he wanted a social media ban for young teenagers. He said there was “consensus being shaped” on the issue after Australia introduced its ban. “The more screen time there is, the more school achievement drops and the more mental health problems go up,” he said. He used the analogy of a teenager getting into a Formula One racing car before they had learned to drive. “If a child is in a Formula One car and they turn on the engine, I don’t want them to win the race, I just want them to get out of the car. I want them to learn the highway code first, and to ensure the car works, and to teach them to drive in a different car.”

Submission + - China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence (arstechnica.com)

An anonymous reader writes: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China’s Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or “other means” to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.

[...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register—the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, and from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what the rules term “emotional traps”: chatbots would additionally be prevented from misleading users into making “unreasonable decisions,” a translation of the rules indicates.
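Stripped of legal language, the intervention requirement amounts to a gate in front of the model: detect the topic, notify the registered guardian where required, and hand the conversation to a human. A minimal sketch of that flow, assuming a naive keyword check and invented helper names (a real system would use a classifier and the rules' actual criteria):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in keyword check; a production system would use a trained classifier.
SELF_HARM_TERMS = {"suicide", "self-harm"}

@dataclass
class User:
    name: str
    age: int
    guardian_contact: Optional[str] = None  # collected at registration for minors and the elderly

def requires_guardian(user: User) -> bool:
    # The draft rules single out minor and elderly users.
    return user.age < 18 or user.age >= 65

def notify_guardian(contact: str) -> None:
    print(f"[notify] alerting guardian at {contact}")

def escalate_to_human(user: User, message: str) -> str:
    print(f"[escalate] routing {user.name}'s conversation to a human operator")
    return "A human counselor is joining this conversation."

def handle_message(user: User, message: str) -> str:
    """Gate in front of the model: human intervention as soon as
    self-harm is mentioned, plus guardian notification where required."""
    if any(term in message.lower() for term in SELF_HARM_TERMS):
        if requires_guardian(user) and user.guardian_contact:
            notify_guardian(user.guardian_contact)
        return escalate_to_human(user, message)
    return "(normal chatbot reply)"

print(handle_message(User("teen", 15, "parent@example.com"), "I keep thinking about suicide"))
```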

Perhaps most troubling to AI developers, China’s rules would also put an end to building chatbots that “induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms’ hopes for global dominance, as China’s market is key to promoting companion bots, Business Research Insights reported earlier this month.

Submission + - Ask Slashdot: What's the Stupidest Use of AI You Saw in 2025?

destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at an LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Submission + - Digital Sovereignty in Europe (theregister.com)

mspohr writes: Europe’s quest for digital sovereignty is hampered by a 90 per cent dependency on US cloud infrastructure, claims Cristina Caffarra, a competition expert and a driving force behind the Eurostack initiative.

While Brussels champions policy initiatives and American tech giants market their own ‘sovereign’ solutions, a handful of public authorities in Austria, Germany, and France, alongside the International Criminal Court in The Hague, are taking concrete steps to regain control over their IT.
These cases provide a potential blueprint for a continent grappling with its technological autonomy, while simultaneously revealing the deep-seated legal and commercial challenges that make true independence so difficult to achieve.

The core of the problem lies in a direct and irreconcilable legal conflict. The US CLOUD Act of 2018 allows American authorities to compel US-based technology companies to provide requested data, regardless of where that data is stored globally. This places European organizations in a precarious position, as it directly clashes with Europe's own stringent privacy regulation, the General Data Protection Regulation (GDPR).

Austria's Federal Ministry for Economy, Energy and Tourism is a case in point. The ministry recently completed a migration of 1,200 employees to the European open-source collaboration platform Nextcloud, but the project was not a migration away from an existing US cloud provider. It was a deliberate choice not to adopt one.

The primary driver was not cost, but sovereignty. "It was never about saving money," Zinnagl adds. "It was about maintaining control over our own data and our own systems."

The decision has triggered a ripple effect, as several other Austrian ministries have since begun implementing Nextcloud. For Zinnagl and Ollrom, this proves that one organization willing to take the first step can inspire others to follow.

Their advice to other European governments is clear: be brave, involve management, and start. "You don't achieve digital sovereignty overnight," Ollrom tells The Register. "You have to do this in many steps, but you have to start with the first step. Don't just talk about it, but execute it."

Submission + - Apple's App Course Runs $20,000 a Student. Is It Really Worth It? (wired.com)

An anonymous reader writes: Two years ago, Lizmary Fernandez took a detour from studying to be an immigration attorney to join a free Apple course for making iPhone apps. The Apple Developer Academy in Detroit launched as part of the company’s $200 million response to the Black Lives Matter protests and aims to expand opportunities for people of color in the country’s poorest big city. But Fernandez found the program’s cost-of-living stipend lacking—“A lot of us got on food stamps,” she says—and the coursework insufficient for landing a coding job. “I didn’t have the experience or portfolio,” says the 25-year-old, who is now a flight attendant and preparing to apply to law school. “Coding is not something I got back to.”

Since 2021, the academy has welcomed over 1,700 students, a racially diverse mix with varying levels of tech literacy and financial flexibility. About 600 students, including Fernandez, have completed its 10-month course of half-days at Michigan State University, which cosponsors the Apple-branded and Apple-focused program. WIRED reviewed contracts and budgets and spoke with officials and graduates for the first in-depth examination of the nearly $30 million invested in the academy over the past four years—almost 30 percent of which came from Michigan taxpayers and the university’s regular students. As tech giants begin pouring billions of dollars into AI-related job training courses across the country, the Apple academy offers lessons on the challenges of uplifting diverse communities.

[...] The program gives out iPhones and MacBooks and spends an estimated $20,000 per student, nearly twice as much as state and local governments budget for community colleges. [...] About 70 percent of students graduate, which [Sarah Gretter, the academy leader for Michigan State] describes as higher than typical for adult education. She says the goal is for them to take “a next step,” whether a job or more courses. Roughly a third of participants are under 25, and virtually all of them pursue further schooling. [...] About 71 percent of graduates from the last two years went on to full-time jobs across a variety of industries, according to academy officials. Amy J. Ko, a University of Washington computer scientist who researches computing education, calls under 80 percent typical for the coding schools she has studied but notes that one of her department’s own undergraduate programs has a 95 percent job placement rate.

Submission + - China Is Worried AI Threatens Party Rule (wsj.com)

An anonymous reader writes: Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control. Although China’s government sees AI as crucial to the country’s economic and military future, regulations and recent purges of online content show it also fears AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content. Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during three months of an enforcement campaign. Authorities have officially classified AI as a major potential threat, adding it alongside earthquakes and epidemics to its National Emergency Response Plan.

Chinese authorities don’t want to regulate too much, people familiar with the government’s thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI. But Beijing also can’t afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought “unprecedented risks,” according to state media. One of his lieutenants likened AI without safety controls to driving on a highway without brakes. There are signs that China is, for now, finding a way to thread the needle.

Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human-rights concerns and other sensitive topics. Major American AI models are for the most part unavailable in China. It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated. Researchers outside of China who have reviewed both Chinese and American models also say that China’s regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and are less likely to steer people toward self-harm.

Submission + - US Bars Five Europeans It Says Pressured Tech Firms To Censor American Viewpoints (apnews.com)

An anonymous reader writes: The State Department announced Tuesday it was barring five Europeans it accused of leading efforts to pressure U.S. tech firms to censor or suppress American viewpoints. The Europeans, characterized by Secretary of State Marco Rubio as “radical” activists and “weaponized” nongovernmental organizations, fell afoul of a new visa policy announced in May to restrict the entry of foreigners deemed responsible for censorship of protected speech in the United States.

“For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose,” Rubio posted on X. “The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship." The five Europeans were identified by Sarah Rogers, the under secretary of state for public diplomacy, in a series of posts on social media. They include the leaders of organizations that address digital hate and a former European Union commissioner who clashed with tech billionaire Elon Musk over broadcasting an online interview with Donald Trump. Rubio’s statement said they advanced foreign government censorship campaigns against Americans and U.S. companies, which he said created “potentially serious adverse foreign policy consequences” for the U.S. The action to bar them from the U.S. is part of a Trump administration campaign against foreign influence over online speech, using immigration law rather than platform regulations or sanctions.

The five Europeans named by Rogers are: Imran Ahmed, chief executive of the Centre for Countering Digital Hate; Josephine Ballon and Anna-Lena von Hodenberg, leaders of HateAid, a German organization; Clare Melford, who runs the Global Disinformation Index; and former EU Commissioner Thierry Breton, who was responsible for digital affairs. Rogers in her post on X called Breton, a French business executive and former finance minister, the “mastermind” behind the EU’s Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. This includes flagging harmful or illegal content like hate speech. She referred to Breton warning Musk of a possible “amplification of harmful content” by broadcasting his livestream interview with Trump in August 2024 when he was running for president.

Submission + - LimeWire Re-Emerges In Online Rush To Share Pulled '60 Minutes' Segment (arstechnica.com)

An anonymous reader writes: CBS cannot contain the online spread of a “60 Minutes” segment that its editor-in-chief, Bari Weiss, tried to block from airing. The episode, “Inside CECOT,” featured testimonies from US deportees who were tortured or suffered physical or sexual abuse at a notorious Salvadoran prison, the Center for the Confinement of Terrorism. “Welcome to hell,” one former inmate was told upon arriving, the segment reported, while also highlighting a clip of Donald Trump praising CECOT and its leadership for “great facilities, very strong facilities, and they don’t play games.”

Weiss controversially pulled the segment on Monday, claiming it could not air in the US because it lacked critical voices, as no Trump officials were interviewed. She claimed that the segment “did not advance the ball” and merely echoed others’ reporting, NBC News reported. Her plan was to air the segment when it was “ready,” insisting that holding stories “for whatever reason” happens “every day in every newsroom.” But Weiss apparently did not realize that “Inside CECOT” would still stream in Canada, giving the public a chance to view the segment as its reporters had intended.

Critics accusing CBS of censoring the story quickly shared the segment online Monday after discovering that it was available on the Global TV app. Using a VPN to connect to the app with a Canadian IP address was all it took to override Weiss’ block in the US, and 404 Media reported the segment was uploaded “to a variety of file sharing sites and services, including iCloud, Mega, and as a torrent,” among them the recently revived file-sharing service LimeWire. It’s currently also available to stream on the Internet Archive, where one reviewer largely summed up the public’s response so far, writing, “cannot believe this was pulled, not a dang thing wrong with this segment except it shows truth.”

Submission + - 13.1 Million K-12 Schoolkids Participated in Inaugural 'Hour of AI'

theodp writes: At a high-profile White House gathering of AI tech leaders last September, tech-backed nonprofit Code.org pledged to engage 25 million K-12 schoolchildren in an "Hour of AI" this school year.

Preliminary numbers released this week by the Code.org Advocacy Coalition showed that 13.1 million users had participated in the inaugural Hour of AI, attaining 52.4% of its goal of 25 million participants.

In a pivot from coding to AI literacy, the Hour of AI replaced Code.org's hugely-popular Hour of Code this December as the flagship event of Computer Science Education Week (Dec. 8-14). According to Code.org's 2024-25 Impact Report, "in 2024–25 alone, students logged over 100 million Hours of Code, including more than 43 million in the four months leading up to and including CS Education Week."

Submission + - UK government launches Women in Tech Taskforce (computerweekly.com)

An anonymous reader writes: To help increase the number of women in technology, and to prevent those already in the industry from leaving, the UK government has launched a Women in Tech Taskforce.

Submission + - Young People's Mental Health Is Improving. Tech Alarmists Take Note. (reason.com)

fjo3 writes: When you're motivated to find evidence that today's tech is dooming young people, it's certainly easy to do so. But when you consider the totality of the data, the picture becomes much, much more complicated. Suddenly we see evidence that tech may have both negative and positive effects on young people—sometimes simultaneously; that its effects may differ greatly based on individuals' pre-existing circumstances and psychological makeups; that there are at least other plausible explanations for negative developments that many attribute only to technology; and that even where tech usage could credibly be causing damage, the effect sizes are often much smaller than folks make it seem.

Submission + - Tech Giant-Supported Study Chastises K-12 Schools for Lack of AI + CS Education

theodp writes: Coinciding with Computer Science Education Week and its flagship event the Hour of AI, tech-backed nonprofit Code.org this week released the 2025 State of AI & Computer Science Education report, chastising K-12 schools for the lack of access to AI and CS education and thanking its funders Microsoft, Amazon, and Google for supporting the report's creation.

"For the first time ever," Code.org explains, "the State of AI + CS Education features a state-by-state analysis of AI education policies, including whether standards and graduation requirements emphasize AI. The report continues to track the CS access, participation, and fundamental policies that have made it a trusted benchmark for policymakers, educators, and advocates."

The report laments that "0 out of 50 states require AI+CS for graduation," adding that "access to CS has plateaued" at 60% nationwide, with Minnesota and Alaska bringing up the rear at a woeful 34%. However, flaws with the statistic on which the K-12 CS education crisis movement was built — the "Percentage of Public High Schools Offering Foundational Computer Science" — become apparent with just a casual glance at the data underlying Minnesota's failing 34% grade. Because that metric neglects to take school sizes into account — and those of course vary widely — the percentage of schools offering access to CS can be vastly different from the percentage of students attending schools that offer access to CS. So, when Code.org reports that only 33% of the three Prior Lake-Savage Area Schools offer access to CS, keep in mind that left unreported is that more than 95% of students in the district attend the one Prior Lake-Savage Area School that does offer access to CS, a far less alarming metric. Code.org reports that Prior Lake High School (2,854 students, per NCES records) offers access to CS, while Prior Lake-Savage Area ALC (93 students) and Laker Online (45 students) do not. And that, kids, is today's lesson in K-12 CS education access crisis math, where 95% (2,854 students/2,992 students) can equal 33% (1 school/3 schools)!
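theodp's arithmetic is easy to check: weight the same NCES figures by enrollment instead of counting schools, and the district's number flips from failing to near-universal:

```python
# Enrollment and CS access for the three Prior Lake-Savage Area Schools,
# per the NCES figures cited above.
schools = [
    ("Prior Lake High School", 2854, True),     # offers CS
    ("Prior Lake-Savage Area ALC", 93, False),  # does not
    ("Laker Online", 45, False),                # does not
]

# Code.org's metric: fraction of *schools* offering CS.
school_pct = sum(offers for _, _, offers in schools) / len(schools)

# The enrollment-weighted metric: fraction of *students* attending a school that offers CS.
total_students = sum(n for _, n, _ in schools)
student_pct = sum(n for _, n, offers in schools if offers) / total_students

print(f"Schools offering CS:  {school_pct:.0%}")   # 33%
print(f"Students with access: {student_pct:.1%}")  # 95.4%
```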

Submission + - Elon Musk admits DOGE was a waste of time (and money) (yahoo.com)

echo123 writes: Elon Musk appeared to admit for the first time that his work at the so-called Department of Government Efficiency was a total waste of time—which also destroyed his reputation.

He told Katie Miller, who is married to Donald Trump’s deputy chief of staff Stephen Miller, that he would not take the controversial post in Washington, D.C., if he had his time over again.

“I think instead of doing DOGE, I would have basically built—worked on my companies, essentially," he told The Katie Miller Podcast.

Miller had asked: “If you could go back and start from scratch like it’s January 20th all again, would you go back and do it differently? And, knowing what you know now, do you think there’s ever a place to restart?”

After a deep sigh, Elon Musk, 54, replied, “I mean, no, I don’t think so.”

“You gave up a lot to DOGE,” she said.

“Yeah,” he conceded, sadly.

DOGE oversaw a $220 billion jump in federal spending—not including interest—in the fiscal year, according to The Wall Street Journal.


Submission + - OpenAI joins the Linux Foundation’s new Agentic AI Foundation and the o (nerds.xyz)

BrianFagioli writes: OpenAI and several other AI giants have launched the Agentic AI Foundation under the Linux Foundation, describing it as a neutral home for standards as agentic systems move into real production. But I’m not buying the narrative. Instead of opening models, training data, or anything that would meaningfully shift power toward the community, the companies involved are donating lightweight artifacts like AGENTS.md, MCP, and goose. They’re useful, but they’re also the safest, least threatening pieces of their ecosystem to “open.” From where I sit, it looks like a strategic attempt to lock in influence over emerging standards before truly open projects get a chance to define the space.

I see the entire move as smoke and mirrors. With regulators paying closer attention and developer trust slipping, creating a Linux Foundation directed fund gives these companies convenient cover to say they’re being transparent and collaborative. But nothing about this structure forces them to share anything substantial, and nothing about it changes the closed nature of their core technology. To me, it looks like big tech trying to set the rules of the game early, using the language of openness without actually embracing it. Slashdot readers have seen this pattern before, and this one feels no different.
