Government

DOJ Sues Cloudera For Deliberately Excluding American Workers From Tech Jobs (zerohedge.com) 60

Longtime Slashdot reader schwit1 shares a report from ZeroHedge: The Justice Department on Tuesday sued Cloudera, accusing the enterprise data and artificial intelligence company of deliberately engineering a hiring process that excluded American workers from at least seven lucrative technology positions while the firm pursued permanent residency sponsorship for foreign workers on temporary visas. In a 14-page complaint filed with the Office of the Chief Administrative Hearing Officer, the department's Civil Rights Division alleges that Cloudera, from March 31, 2024, through at least January 28, 2025, instructed job candidates to submit applications to a dedicated email address, amerijobpostings@cloudera.com, that rejected all external messages with an automated bounce-back error. The company did not advertise the roles on its public careers website or accept applications through its standard portal, as it did for non-sponsorship positions.

Cloudera then attested to the Department of Labor that it could not locate any qualified U.S. workers for the roles, which paid between approximately $180,000 and $294,000 annually, according to the filing. The positions included a Product Manager role in Santa Clara, California, with a listed salary range of $170,186 to $190,000. The case marks one of the most detailed enforcement actions under the Justice Department's Protecting U.S. Workers Initiative, which was relaunched last year and has already produced 10 settlements targeting employers accused of discriminating against American workers in favor of temporary visa holders. "Employers cannot use the PERM sponsorship process as a backdoor for discriminating against U.S. workers," Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division said in a statement. "The Division will not hesitate to sue companies who intentionally deter U.S. workers from applying to American jobs."

Transportation

First Tesla Semi Rolls Off High-Volume Production Line (electrek.co) 107

Tesla has produced the first Semi from its new high-volume production line at Gigafactory Nevada, a milestone for the long-delayed electric Class 8 truck program after years of pilot builds and delays. Electrek reports: The Tesla Semi has had one of the longest gestation periods in Tesla's history. First unveiled in 2017, the truck was originally promised for production in 2019. That target slipped repeatedly -- to 2020, then 2021, then 2022 -- before Tesla finally delivered a handful of units to PepsiCo in late 2022. Those early trucks were essentially hand-built on a pilot line. Tesla spent the next three years refining the design, cutting roughly 1,000 lbs from the truck, and building out a dedicated factory adjacent to Gigafactory Nevada in Sparks. The company revealed the final production specs in February, confirming two trims: a Standard Range with 325 miles at full 82,000-lb gross combination weight, and a Long Range with 500 miles of range.

Tesla is quoting $290,000 for the 500-mile Long Range version and roughly $260,000 for the Standard Range -- making it the lowest-priced Class 8 battery electric tractor on the market. The shift from a pilot line to a high-volume production line is significant. Tesla's Semi factory is designed for an annual capacity of 50,000 trucks, though the company will ramp gradually. Analysts project deliveries between 5,000 and 15,000 units in 2026, but that sounds way too optimistic. [...] Both trims feature an 800-kW tri-motor drivetrain producing 1,072 hp and support 1.2-MW Megacharger speeds, restoring 60% of range in roughly 30 minutes -- conveniently timed around a driver's mandatory rest break. Tesla has opened its first Megacharger station in Ontario, California, and has mapped 66 Megacharger locations across 15 states.
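The charging claim implies a rough energy budget that is easy to sanity-check. Below is a back-of-envelope sketch in Python; it assumes the truck sustains the full 1.2-MW rate for the entire 30-minute session (real charge curves taper, so these figures are upper bounds, not published Tesla specs):

```python
# Back-of-envelope check of the Megacharger claim. Assumptions (not
# published specs): a sustained 1.2 MW for the whole 30-minute session,
# applied to the 500-mile Long Range trim.
peak_power_mw = 1.2          # Megacharger rate (MW)
session_hours = 0.5          # ~30-minute charging stop
range_restored = 0.60        # fraction of range recovered per session
full_range_miles = 500       # Long Range trim

energy_delivered_kwh = peak_power_mw * 1000 * session_hours   # 600 kWh
miles_recovered = range_restored * full_range_miles           # 300 miles
implied_efficiency = energy_delivered_kwh / miles_recovered   # kWh per mile
implied_pack_kwh = energy_delivered_kwh / range_restored      # rough pack size

print(f"{implied_efficiency:.1f} kWh/mile, ~{implied_pack_kwh:.0f} kWh pack")
# If the truck can't hold peak power for the full session, the actual
# energy delivered (and the implied pack size) would be smaller.
```

The ~2 kWh/mile figure this yields is consistent with Tesla's long-standing claim of under 2 kWh per mile for the Semi, which suggests the 60%-in-30-minutes math holds together at least roughly.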

The Courts

Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (sfchronicle.com) 48

An anonymous reader quotes a report from the San Francisco Chronicle: Elon Musk returned to the witness stand Wednesday in Oakland federal court for a second day of testimony in his case against OpenAI, detailing his shift from being an enthusiastic supporter of the nonprofit to feeling betrayed. He also clashed repeatedly with OpenAI's attorney over questions that Musk believed were unfair. He said his feelings towards OpenAI CEO Sam Altman and President Greg Brockman moved through a "phase one" of support, a "phase two" of doubts, and finally "phase three, where I'm sure they're looting the nonprofit. We're currently in phase three," Musk said with a chuckle. Musk said he was a "fool" for giving OpenAI "$38 million of essentially free funding to create what would become an $800 billion company," in which he has no equity stake.

In his 2024 lawsuit, Musk alleged breach of charitable trust and unjust enrichment, arguing OpenAI abandoned its original nonprofit mission to benefit humanity to pursue financial gain. OpenAI's lawyer William Savitt argued Tuesday during his opening statement that the nonprofit entity remains in control of the for-profit public benefit corporation and is now one of the most well-funded nonprofits in the world. Musk is seeking to oust Altman from OpenAI's board and upwards of $134 billion in damages, which he said would be used to fund OpenAI's nonprofit mission. During cross-examination, Savitt clashed with Musk over questioning. Savitt asked whether Musk had contributed $38 million to OpenAI, rather than the $100 million that he later claimed to have invested on X. Musk said he also contributed his reputation to the company and came up with the idea for the name, leading Savitt to ask Musk to respond yes or no to "simple" questions.

"Your questions are not simple. They're designed to trick me, essentially," Musk said, adding that he had to elaborate or it would mislead the jury. He compared Savitt's questions to asking, "have you stopped beating your wife?" Judge Yvonne Gonzalez Rogers intervened, leading Musk to answer yes to the $38 million investment amount. The world's richest man said his doubts grew, and by late 2022 he thought "wait a second, these guys are betraying their promise. They're breaking the deal." "I started to lose confidence that they were telling me the truth," Musk said. A turning point was co-defendant Microsoft's investment of billions of dollars into OpenAI, Musk said. On October 23, 2022, Musk texted Altman that he was "disturbed" to see OpenAI's valuation of $20 billion in the wake of the Microsoft deal. Musk called the deal a "bait and switch," since a nonprofit doesn't have a valuation. OpenAI had "for all intents and purposes" become primarily a for-profit company, Musk argued. Altman responded to Musk by text that "I agree this feels bad," saying that OpenAI had previously offered equity in the company but Musk hadn't wanted it at the time. Altman said the company was happy to offer equity in the future. Musk said it "didn't seem to make sense to me" to hold equity in what should be a nonprofit.
Musk also testified about former OpenAI board member Shivon Zilis, who lives with him, is the mother of four of his children, and served as a senior advisor at Neuralink. He denied that she shared sensitive OpenAI information with him. Court evidence showed Musk had encouraged her to stay close to OpenAI to "keep info flowing" and had approved Neuralink recruiting OpenAI employees, which he defended by saying workers are free to change jobs. "It's a free country," Musk said.

Recap:
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
The Courts

New Sam Bankman-Fried Trial Would Be Huge Waste of Court's Time, Judge Says (arstechnica.com) 31

A federal judge denied Sam Bankman-Fried's request for a new trial, calling his claims of DOJ witness intimidation "wildly conspiratorial" and unsupported by the record. Judge Lewis Kaplan said (PDF) the FTX founder's motion appeared tied to a pre-indictment plan to recast himself as a Republican victim of Biden's DOJ in hopes of gaining sympathy, leniency, or even a Trump pardon. Ars Technica reports: Bankman-Fried was sentenced to 25 years in prison in 2024 for "masterminding one of the largest financial frauds in American history," US District Judge Lewis Kaplan wrote in his order. He was convicted on all charges, including wire fraud, conspiracy to commit securities fraud, commodities fraud, and money laundering. There is already an appeal pending in another court, the judge noted. But Bankman-Fried filed a separate motion for a new trial, claiming that there were "newly discovered" witnesses and evidence that might have helped his defense, if Joe Biden's Department of Justice hadn't intimidated them into refusing to testify or, in one case, lying on the stand.

He also asked for a new judge, wanting Kaplan to recuse himself. However, Kaplan pointed out that "none of the witnesses" were "newly discovered." And more concerningly, Bankman-Fried offered no evidence that the witnesses could prove the "wildly conspiratorial" theory the FTX founder raised, claiming that their absence at the trial was a "product of government threats and retaliation," the judge wrote. Bankman-Fried's theory is "entirely contradicted by the record," Kaplan said. He emphasized that granting Bankman-Fried's request "would be a large waste of judicial resources as it could require another judge to familiarize himself or herself with an extensive and complicated record."

Additionally, all three witnesses that Bankman-Fried claimed could give crucial testimony in his defense were known to him throughout the trial, and he never sought to compel their testimony. And the "self-serving social-media posts" of one witness who now claims that he lied when testifying against Bankman-Fried -- "Ryan Salame, who pleaded guilty" -- must be met with "utmost suspicion," Kaplan said. "If one were to take Salame at his current word, he lied under oath when pleading guilty before this Court," Kaplan wrote. Even if taken seriously, "his out-of-court, unsworn statements could not come anywhere close to clearing the bar to warrant a new trial," Kaplan said, deeming Salame's credibility "highly questionable." Further, "even if these individuals had testified for Bankman-Fried, his protestations that one or more of them would have supported his claims that FTX was not insolvent and that his victims all were compensated fully in the bankruptcy proceedings are inaccurate or misleading," Kaplan concluded.

In the order, Kaplan's frustration seems palpable, as there may have been no need for him to rule on the motion at all after Bankman-Fried requested to withdraw it. But the judge said a ruling was needed because Bankman-Fried waited to file his withdrawal request until after the DOJ and the court had already spent time responding to and reviewing filings. Troublingly, Bankman-Fried's request to withdraw the motion without prejudice would have allowed him to potentially request a new trial after the appeal ended. Based on the substance of the filing, that risked wasting future court resources, Kaplan determined. To prevent overburdening the justice system, Kaplan deemed it necessary to deny Bankman-Fried's motion and request for recusal, rather than allow him to withdraw the filing without prejudice.

Ubuntu

Ubuntu's AI Plans Have Linux Users Looking For a 'Kill Switch' (theverge.com) 120

Canonical's plan to add AI features to Ubuntu has sparked pushback from users who are concerned it could follow Windows 11's AI-heavy direction. "After Canonical's announcement earlier this week that it's bringing AI features to Ubuntu, replies included requests for an AI 'kill switch' or a way to disable the upcoming features," reports The Verge. Canonical says it has no plans for a "global AI kill switch" but it will allow users to remove any AI features they don't want. From the report: In his original post, [Canonical's VP of engineering, Jon Seager] said the upcoming AI features will include accessibility tools like AI speech-to-text and text-to-speech, along with agentic AI features for tasks like troubleshooting and automation. Canonical is also encouraging its engineers to use AI more and plans to begin introducing AI features in Ubuntu "throughout the next year."

In a follow-up comment, Seager clarified that, "my plan is to introduce AI-backed features as a 'preview' on a strictly opt-in basis in [Ubuntu version] 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they'd like the AI-native features enabled." Ultimately, he said, "All of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps."
Users who prefer to avoid AI entirely could switch to other distros like Linux Mint, Pop!_OS, or Zorin OS. "These distros have some similarities to Ubuntu, but may not necessarily adopt the new AI features Canonical is rolling out," adds The Verge.
Transportation

California High-Speed Rail Price Tag Jumps To $231 Billion (kmph.com) 187

Longtime Slashdot reader schwit1 writes: California's long-delayed high-speed rail project is now facing renewed scrutiny after state leaders revealed a dramatically higher price tag, now estimated at roughly $231 billion, nearly seven times the original $33 billion projection approved by voters in 2008. The revised figures have reignited talks in Sacramento over whether the project can realistically be completed, how long it will take, and whether the state can continue to fund it at this scale.

Senator Strickland pointed to comments from Lou Thompson, former chair of the California High-Speed Rail Authority peer review group, who recently criticized the latest draft business plan. Thompson wrote that the 2026 draft plan "has reached a dead end," arguing that the project has drifted far from its original vision due to escalating costs, delays, and unfunded gaps. Under current projections, assuming funding and construction proceed as planned, service between San Francisco and Bakersfield could begin around 2033, while the full Los Angeles to San Francisco connection could extend to 2040.

United States

Colorado's Anti-Repair Bill Is Dead (wired.com) 11

An anonymous reader quotes a report from Wired: A controversial bill in Colorado that would have undone some repair protections in the state has failed. The bill had been the target of right-to-repair advocates, who saw it as a bellwether for how tech companies might try to undo repair legislation more broadly in the US. Colorado's landmark 2024 repair law, the Consumer Right to Repair Digital Electronic Equipment, went into effect in January 2026 and ensured access to tools and documentation people needed to modify and fix digital electronics such as phones, computers, and Wi-Fi routers. The new bill, SB26-090, would have carved out an exception to those repair protections for "critical infrastructure," a loosely defined term that repair advocates worried could be applied to just about any technology.

SB26-090 was introduced during a Colorado Senate hearing on April 2 and was supported by lobbying efforts from companies such as Cisco and IBM. It passed that hearing unanimously. The bill then passed in the Colorado Senate on April 16. On Monday evening, the bill was discussed in a long, delayed hearing in the Colorado House's State, Civic, Military, and Veterans Affairs Committee. Dozens of supporters and detractors gave public comments. Finally, the bill was shot down in a 7-to-4 vote and classified as postponed indefinitely.
"While we were making progress at chipping away at the momentum for it, we had still been losing," said Danny Katz, executive director of the local nonprofit consumer advocacy group CoPIRG. "So, we took nothing for granted, and I believe the incredible testimony from the broad range of cybersecurity experts, businesses, repair advocates, recyclers, and people who want the freedom to fix their stuff made a big difference."
Android

EU Tells Google To Open Up AI On Android; Google Says That's 'Unwarranted Intervention' (arstechnica.com) 50

An anonymous reader quotes a report from Ars Technica: In January, the European Commission began an initial investigation, known as a specification proceeding, into how Google has implemented AI in the Android operating system. The results are in, and the EU says Android needs to be more open, which is not surprising. Meanwhile, Google says this amounts to "unwarranted intervention," which is equally unsurprising. Regardless of Google's characterization of the investigation, the commission may force Google to make Android AI changes this summer. This action stems from the continent's Digital Markets Act (DMA), a sweeping law that designates seven dominant technology companies as "gatekeepers" that are subject to greater regulation to ensure fair competition. Google has consistently spoken against the regulations imposed under the DMA, but it and the other gatekeepers have been subject to the law for several years now, and there's little chance the commission backs away from it.

The issue before the commission currently is the built-in advantage for Gemini on Android. When you turn on any Google-powered Android phone, Gemini is already there and gets special treatment at the system level. The European Commission is taking aim at the lack of features available to third-party AI services. The commission believes that there are too many experiences on Android that only work with Google's Gemini AI, and as a gatekeeper, Google must change that. "As we navigate the rapidly evolving landscape of AI, it is clear that interoperability is key to unlocking the full potential of these technologies," said Commission VP for Tech Sovereignty Henna Virkkunen in a statement. "These measures will open up Android devices to a wider range of AI services, so that users will have the freedom to choose the AI services that best meet their needs and values, without sacrificing functionality."

The commission does have a solid track record pushing for openness so far. Since the DMA came into force, Google has been required to make numerous changes to its business in Europe, like implementing search choice screens on Android, allowing alternative payment methods in the Play Store, and limiting data sharing across services. Now, the EU wants Google to make the Android platform more hospitable to third-party AI services. Google's objection focuses on preserving the autonomy for device makers (including Google) to customize AI services. "This unwarranted intervention would strip away that autonomy, mandate access to sensitive hardware and device permissions; unnecessarily driving up costs while undermining critical privacy and security protections for European users," said Google senior competition counsel Claire Kelly.
The problem isn't that you can't install ChatGPT or Grok; it's that these chatbots don't have the same access to data and features as Gemini.

To address that imbalance, the EU is considering several requirements that would force Google to give third-party AI assistants deeper access to Android, closer to what Gemini currently enjoys. The proposed requirements include:
- Letting alternative AI tools be launched system-wide through hot words, gestures, or button presses.
- Allowing third-party assistants to see screen context when users invoke them.
- Giving non-Gemini AI tools access to local device data, with user permission, so they can generate proactive suggestions, summaries, and contextual help.
- Allowing other AI services to control installed apps and Android system features on the user's behalf.
- Ensuring third-party developers can access the necessary device hardware to run local AI models with strong performance, availability, and responsiveness.
- Requiring Google to create APIs that let outside AI providers plug into Android more deeply.
- Requiring Google to provide technical assistance to those AI providers.
- Making those APIs and support available free of charge.
AI

China Blocks Meta's $2 Billion Takeover of AI Startup Manus 21

China has blocked Meta's planned $2 billion acquisition of AI startup Manus, ordering the deal withdrawn after months of scrutiny from both Beijing and Washington. "The decision to prohibit foreign investment in Manus was made in accordance with laws and regulations," reports CNBC, citing the National Development and Reform Commission. "It added that it has asked the parties involved to withdraw the acquisition transaction." From the report: The deal had attracted scrutiny from both China and Washington, as lawmakers in the U.S. have prohibited American investors from backing Chinese AI companies directly. Meanwhile, Beijing has increased efforts to discourage Chinese AI founders from moving business offshore. The Chinese government's intervention in the transaction drew alarm among tech founders and venture capitalists in the country who were hoping to take advantage of the so-called Singapore-washing model, where companies relocate from China to the city-state to avoid scrutiny from Beijing and Washington.

Manus was founded in China before relocating to Singapore. The company develops general purpose AI agents and launched its first general AI agent in March last year, which can execute complex tasks such as market research, coding and data analysis. The release saw the startup lauded as the next DeepSeek. Manus said it had passed $100 million in annual recurring revenue, or ARR, in December, eight months on from launching a product, which it claimed made it the fastest startup in the world at the time to hit the milestone from $0. The company raised $75 million in a round led by U.S. VC Benchmark in April last year.
Earth

Two Hot Climate Tech Startups Just Raised $1 Billion+ in IPOs (techcrunch.com) 37

Public stock exchanges "appear to be warming to climate tech startups," reports TechCrunch. "Or at least some of them." This week, nuclear startup X-energy went public, raising $1 billion in an upsized share offering that appears to have delivered a windfall for its investors, including Amazon [and Google]. Retail investors apparently can't get enough, with the stock popping 25% in its first hour of trading. Also this week, geothermal startup Fervo said it filed for an initial public offering. The size of the Fervo IPO has yet to be disclosed, but private investors have valued the company at around $3 billion, according to PitchBook.

The move to go public aligns with what investors told TechCrunch at the end of last year. After years of tepid attitudes toward climate tech companies, they expected public markets to start welcoming energy-related startups. Nearly every investor that weighed in on the question said the startups with the best chances of going public specialize in either nuclear fission or enhanced geothermal. Fervo, specifically, was mentioned several times. Thank data centers for that. The AI craze has taken a trend of rising demand for electricity and made it sexy and salable.

The Almighty Buck

Elon Musk Vies to Turn X Into Super App With Banking Tool Near Launch (theedgesingapore.com) 132

An anonymous reader shared this report from Bloomberg: More than three years after acquiring Twitter, Elon Musk says he's nearing his long-stated goal of turning it into an "everything app" with a new financial services tool that he pledged to launch for the public this month... Early users testing the service have touted competitive perks, including 3% cash back on eligible purchases and a 6% interest rate on cash savings — the latter of which is roughly 15 times the national average. Musk's new product is also expected to offer free peer-to-peer transfers, a metal Visa debit card personalised with a user's X handle, and an AI concierge built by Musk's xAI startup that tracks spending and sorts through past transactions, according to reports from users with early access.

Musk, who first rose to prominence in Silicon Valley by co-founding PayPal Holdings Inc, sees payments as crucial to creating a so-called super app similar to social products that have flourished in China. WeChat, for example, lets users hail a ride, book a flight and pay off their credit card... If it works, X Money would sit at the intersection of social media and finance in a way no American product has attempted at this scale... Creators who currently receive payments from X for engagement will be switched from Stripe to X Money as their payment platform, according to early users — a move that guarantees an initial base of active accounts. Some have already been testing X Money to send payments to one another through the app's chat feature or directly through their profiles, according to early participants in the rollout...

X currently holds licences in 44 states, according to its website, and likely won't be able to operate in states where it hasn't obtained a licence.

Iphone

How Will Apple Change Under Its New CEO? (9to5mac.com) 45

How will Apple change in September under its new CEO — former hardware chief John Ternus? The blog Geeky Gadgets is already expecting "significant updates to the iPhone over the next three years," as well as streamlined internal engineering (plus durability enhancements and high-capacity batteries).

2026: Foldable display
2027: Bezel-less iPhone 20 (celebrating the iPhone's 20th anniversary)

CNET and its sister sites (which include ZDNET, PCMag, Mashable and Lifehacker) are even hosting a contest "to see which of our readers can make the best Apple predictions for 2026. Answer five questions in any of our three rounds of the contest to be entered to win [$applePrize] in September."

But the blog 9to5Mac already has a list of new upcoming Apple products, courtesy of Bloomberg's Mark Gurman (who appeared on the TBPN podcast this week "to talk about Apple's CEO transition, what to expect from John Ternus, and more"). As part of the conversation, Gurman said: "There are six major Apple products in development right now, six major new product categories." Here's the full list he shared:

1. AI AirPods
2. Smart glasses
3. Pendant
4. Smart display
5. Tabletop robot
6. Security camera

[...] Gurman has reported on the Pendant before as a new AI wearable that's an alternative to AI AirPods and Glasses. All three products are expected to rely heavily on a paired iPhone for Siri and other AI features. The smart display ('HomePad'), tabletop robot, and security camera are all brand new Apple Home products.

The AI features arrive "thanks to the revamped Apple Foundation Models trained by Google Gemini," reports the AppleInsider blog (citing Gurman's Power On newsletter at Bloomberg). The smart doorbell camera will include "an Apple Intelligence-upgraded version of the facial recognition already included with HomeKit Secure Video. Today, HSV can utilize the Apple Home admin's tagged faces in their Photos app to label people that are viewed on the camera. When a known person rings the doorbell, Siri will announce them by name over the HomePod chime."
Government

Privacy Advocate Accuses US Government of Investing in AI-Powered Mass Surveillance (theconversation.com) 25

The Conversation published this warning from privacy/tech law/electronic surveillance attorney Anne Toomey McKenna (also an affiliated faculty member at Penn State's Institute for Computational and Data Sciences). The U.S. government "is able to purchase Americans' sensitive data because the information it buys is not subject to the same restrictions as information it collects directly. The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels..." Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion. Disclosure of documents allegedly hacked from Homeland Security reveals a massive surveillance web that has all Americans in its scope. DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents' phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur...

Meanwhile, the Trump administration's national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund "wider deployment of AI tools across American industry" and to allow industry and academia to use federal datasets to train AI. Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information....

The author argues that it's now critical for Americans to know "why the laws you might think are protecting your data do not apply or are ignored." On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans' data from data brokers, including location histories, to track American citizens.... But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach... Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act's Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, to regulate the use of sensitive data by AI systems, or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent. In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans' privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans' data privacy and protects them from AI harms.

Thanks to long-time Slashdot reader sinij for sharing the article.
AI

Is AI Cannibalizing Human Intelligence? A Neuroscientist's Way to Stop It (wsj.com) 22

The AI industry is largely failing to ask a key design question, argues theoretical neuroscientist/cognitive scientist Vivienne Ming: are its products building human capacity or consuming it?

In the Wall Street Journal, Ming shares an experiment testing which group performed best at predicting real-world events (compared to forecasters on the prediction market Polymarket): AI, human, or human-AI hybrid teams. The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models — ChatGPT and Gemini, in this case — performed considerably better, though still short of the market itself. But when we combined AI with humans, things got more interesting. Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into AI and asked it to come up with supporting evidence. These "validators" had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn't true. They ended up performing worse than an AI working solo.

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument... These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market's accuracy. On certain questions, they even outperformed it...

We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it. What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They're the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, "What's missing?" rather than default to "Great, that's done." To disagree with something that sounds authoritative and to trust your instinct enough to follow it. We don't build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

The author just published a book called "Robot-Proof: When Machines Have All The Answers, Build Better People." They suggest using AI to "explore uncertainty.... before you accept an AI's answer, ask it for the strongest argument against itself."

And they're also urging new performance benchmarks for AI-human hybrid teams.
Australia

Australia's Teen Social Media Ban Isn't Working. Half Their Teens Still Have Access, Survey Finds (yahoo.com) 76

After Australia banned social media for users younger than 16, teenagers "immediately worked to circumvent the restrictions," reports Fortune: One 14-year-old in New South Wales told The Washington Post in December 2025, just before the implementation of the ban, that she planned to use her mother's face ID to log in to Snapchat. In a Reddit thread on ways to bypass the ban, one user suggested using a printed mesh face mask from Temu to outsmart apps' facial recognition tools. Others still have tried VPNs that obscure their locations.

A new report suggests these efforts are working. In a survey of 1,050 Australians ages 12 to 15 conducted last month, the UK-based suicide prevention organization the Molly Rose Foundation found more than 60% of teens who had social media accounts before the ban still had access to at least one of those platforms. Social media sites, including TikTok, YouTube, and Instagram, have retained more than half of their users under 16. About two-thirds of young users say these platforms have taken "no action" to remove or deactivate accounts that existed before the restrictions.

The survey comes on the heels of the Australian internet regulator calling for an investigation into the five largest social media platforms over potential breaches of the ban.

The article points out that "Greece, France, Indonesia, Austria, Spain, and the UK have or are considering similar action, and eight U.S. states are weighing legislation that would put guardrails on or ban social media use for minors."
