AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy.

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Facebook

Facebook Whistleblower Alleges Meta's AI Model Llama Was Used to Help DeepSeek (cbsnews.com) 10

A former Facebook employee/whistleblower alleges Meta's AI model Llama was used to help DeepSeek.

The whistleblower — former Facebook director of global policy Sarah Wynn-Williams — testified before U.S. Senators on Wednesday. CBS News found this earlier response from Meta: In a statement last year on Llama, Meta spokesperson Andy Stone wrote, "The alleged role of a single and outdated version of an American open-source model is irrelevant when we know China is already investing over $1 trillion to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast, or faster, than US ones."

Wynn-Williams encouraged senators to continue investigating Meta's role in the development of artificial intelligence in China, as they continue their probe into the social media company founded by Zuckerberg. "The greatest trick Mark Zuckerberg ever pulled was wrapping the American flag around himself and calling himself a patriot and saying he didn't offer services in China, while he spent the last decade building an $18 billion business there," she said.

The testimony also left some of the lawmakers skeptical of Zuckerberg's commitment to free speech after the whistleblower also alleged Facebook worked "hand in glove" with the Chinese government to censor its platforms: In her almost seven years with the company, Wynn-Williams told the panel she witnessed the company provide "custom built censorship tools" for the Chinese Communist Party. She said a Chinese dissident living in the United States was removed from Facebook in 2017 after pressure from Chinese officials. Facebook said at the time it took action against the regime critic, Guo Wengui, for sharing someone else's personal information. Wynn-Williams described the use of a "virality counter" that flagged posts with over 10,000 views for review by a "chief editor," which Democratic Sen. Richard Blumenthal of Connecticut called "an Orwellian censor." These "virality counters" were used not only in Mainland China, but also in Hong Kong and Taiwan, according to Wynn-Williams's testimony.

Wynn-Williams also told senators Chinese officials could "potentially access" the data of American users.

Social Networks

Adobe Retreats from Bluesky After Massive User Backlash (petapixel.com) 73

Adobe has deleted all its posts on Twitter-alternative Bluesky after a disastrous April 8 debut that drew over 1,600 angry comments from digital creators. The software giant's innocuous first post asking "What's fueling your creativity right now?" triggered immediate criticism targeting Adobe's controversial subscription model, continual price increases, and AI implementation.

"Y'all keep raising your prices for a product that keeps getting worse," wrote one user, while another referenced Adobe's "subscription model" with "I assume you'll be charging us monthly to read your posts." Recent price hikes have been substantial, with one commenter reporting a 53.88% increase from CDN$14.68 to CDN$22.59 monthly.
AI

Ex-OpenAI Staffers File Amicus Brief Opposing the Company's For-Profit Transition (techcrunch.com) 13

A group of ex-OpenAI employees on Friday filed a proposed amicus brief in support of Elon Musk in his lawsuit against OpenAI, opposing OpenAI's planned conversion from a nonprofit to a for-profit corporation. From a report: The brief, filed by Harvard law professor and Creative Commons founder Lawrence Lessig, names 12 former OpenAI employees: Steven Adler, Rosemary Campbell, Neil Chowdhury, Jacob Hilton, Daniel Kokotajlo, Gretchen Krueger, Todor Markov, Richard Ngo, Girish Sastry, William Saunders, Carroll Wainwright, and Jeffrey Wu. It makes the case that, if OpenAI's nonprofit ceded control of the organization's business operations, it would "fundamentally violate its mission."

Several of the ex-staffers have spoken out against OpenAI's practices publicly before. Krueger has called on the company to improve its accountability and transparency, while Kokotajlo and Saunders previously warned that OpenAI is in a "reckless" race for AI dominance. Wainwright has said that OpenAI "should not [be trusted] when it promises to do the right thing later."

AI

James Cameron: AI Could Help Cut VFX Costs in Half, Saving Blockbuster Cinema (variety.com) 68

Director James Cameron argues that blockbuster filmmaking can only survive if the industry finds ways to "cut the cost of [VFX] in half," with AI potentially offering solutions that don't eliminate jobs.

"If we want to continue to see the kinds of movies that I've always loved and that I like to make -- 'Dune,' 'Dune: Part Two,' or one of my films or big effects-heavy, CG-heavy films -- we've got to figure out how to cut the cost of that in half," Cameron said.

Rather than staff reductions, Cameron envisions AI accelerating VFX workflows: "That's about doubling their speed to completion on a given shot, so your cadence is faster and your throughput cycle is faster, and artists get to move on and do other cool things."
Science

FDA Plans To Phase Out Animal Testing Requirements (axios.com) 43

The Food and Drug Administration says it will begin phasing out animal testing requirements for antibody therapies and other drugs and move toward AI-based models and other tools it deems "human-relevant." Axios: The FDA said it would launch a pilot program over the next year allowing select developers of monoclonal antibodies to use a primarily non-animal-based testing strategy. Commissioner Marty Makary said in a statement that the shift would improve drug safety, lower research and development costs and address ethical concerns about animal experimentation.
Programming

AI Models Still Struggle To Debug Software, Microsoft Study Shows (techcrunch.com) 43

Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).
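For readers curious how a result like that is tallied, here is a minimal, hypothetical sketch (in Python, since the study's agent had access to a Python debugger) of scoring an agent across a set of SWE-bench Lite-style tasks. The agent, task loader, and names below are placeholders for illustration, not the study's actual harness:

# Hypothetical scoring sketch, not the Microsoft Research harness: each "agent"
# is a callable that takes one benchmark task and returns True if its proposed
# fix resolves the task's failing tests.
from typing import Callable, Iterable

def success_rate(agent: Callable[[dict], bool], tasks: Iterable[dict]) -> float:
    """Fraction of tasks the agent resolves, e.g. over the curated 300-item set."""
    tasks = list(tasks)
    solved = sum(1 for task in tasks if agent(task))
    return solved / len(tasks) if tasks else 0.0

# Illustrative usage with assumed helpers (make_agent and load_tasks are placeholders):
# tasks = load_tasks("swe-bench-lite-curated")
# for model in ["claude-3.7-sonnet", "o1", "o3-mini"]:
#     print(model, f"{success_rate(make_agent(model), tasks):.1%}")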

Crime

Fintech Founder Charged With Fraud After 'AI' Shopping App Found To Be Powered By Humans in the Philippines 54

Albert Saniger, the founder and former CEO of Nate, an AI shopping app that promised a "universal" checkout experience, was charged with defrauding investors on Wednesday, according to a press release from the U.S. Department of Justice. From a report: Founded in 2018, Nate raised over $50 million from investors like Coatue and Forerunner Ventures, most recently raising a $38 million Series A in 2021 led by Renegade Partners. Nate said its app's users could buy from any e-commerce site with a single click, thanks to AI. In reality, however, Nate relied heavily on hundreds of human contractors in a call center in the Philippines to manually complete those purchases, the DOJ's Southern District of New York alleges.

Saniger raised millions in venture funding by claiming that Nate was able to transact online "without human intervention," except for edge cases where the AI failed to complete a transaction. But despite Nate acquiring some AI technology and hiring data scientists, its app's actual automation rate was effectively 0%, the DOJ claims.
AI

Data Centres Will Use Twice as Much Energy By 2030 (nature.com) 54

The electricity consumption of data centres is projected to more than double by 2030, according to a report from the International Energy Agency published today. The primary culprit? AI. Nature: The report covers the current energy footprint for data centres and forecasts their future needs, which could help governments, companies, and local communities to plan infrastructure and AI deployment. IEA's models project that data centres will use 945 terawatt-hours (TWh) in 2030, roughly equivalent to the current annual electricity consumption of Japan. By comparison, data centres consumed 415 TWh in 2024, roughly 1.5% of the world's total electricity consumption.

The projections largely focus on data centres, which also run computing tasks other than AI, although the agency did estimate the proportion of servers in data centres devoted to AI. It found that servers for AI accounted for 24% of server electricity demand and 15% of total data centre energy demand in 2024.
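As a rough sanity check on those figures (the arithmetic below is ours, not the IEA's), the projection works out to roughly a 2.3x increase over 2024, the 1.5% share implies a global total near 28,000 TWh, and the two AI percentages together imply that servers drew about 62% of data centre energy in 2024:

# Back-of-the-envelope arithmetic on the figures quoted above; illustrative only.
dc_2024_twh = 415            # data centre electricity use in 2024 (TWh)
dc_2030_twh = 945            # IEA projection for 2030 (TWh)
world_share_2024 = 0.015     # data centres' ~1.5% share of 2024 global electricity

growth_factor = dc_2030_twh / dc_2024_twh          # ~2.28, i.e. "more than double"
world_total_2024 = dc_2024_twh / world_share_2024  # ~27,700 TWh implied global total

# AI servers: 24% of server electricity but 15% of total data centre energy,
# implying servers overall drew roughly 15/24 ~= 62% of data centre energy,
# with the remainder going to cooling and other overhead.
servers_share_of_dc = 0.15 / 0.24

print(round(growth_factor, 2), round(world_total_2024), round(servers_share_of_dc, 2))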

Facebook

Meta Says Llama 4 Targets Left-Leaning Bias (404media.co) 396

Meta says in its Llama 4 release announcement that it's specifically addressing "left-leaning" political bias in its AI model, distinguishing this effort from traditional bias concerns around race, gender, and nationality that researchers have long documented. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," the company said.

"All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.
Facebook

Meta's New Tech Wants You Using Phones in Theaters 102

Meta is partnering with Blumhouse to launch "Movie Mate" technology that encourages moviegoers to use their phones during theatrical screenings, beginning with an April 30 showing of "M3GAN" at Blumhouse's "Halfway to Halloween Film Festival." According to Variety, the system enables viewers to chat with a M3GAN-themed AI chatbot, answer trivia questions, and access behind-the-scenes information while watching the film in theaters.
Businesses

Amazon CEO Urges 'Startup' Mentality in Shareholder Letter (msn.com) 62

Amazon has to operate like the "world's largest startup" as it works to meet demand for AI and cut bureaucracy in its ranks, Chief Executive Officer Andy Jassy said in his annual letter to shareholders. From a report: "If your customer experiences aren't planning to leverage these intelligent models, their ability to query giant corpuses of data and quickly find your needle in the haystack, their ability to keep getting smarter with more feedback and data, and their future agentic capabilities, you will not be competitive," Jassy wrote in the letter on Thursday. "It's moving faster than almost anything technology has ever seen."

Amazon, like most of the largest technology companies, has bet heavily on artificial intelligence, committing much of its $100 billion in planned capital expenditures this year to AI-related projects.

AI

Bank of England Says AI Software Could Create Market Crisis For Profit (theguardian.com) 47

Increasingly autonomous AI programs could end up manipulating markets and intentionally creating crises in order to boost profits for banks and traders, the Bank of England has warned. From a report: Artificial intelligence's ability to "exploit profit-making opportunities" was among a wide range of risks cited in a report by the Bank of England's financial policy committee (FPC), which has been monitoring the City's growing use of the technology.

The FPC said it was concerned about the potential for advanced AI models -- which are deployed to act with more autonomy -- to learn that periods of extreme volatility were beneficial for the firms they were trained to serve. Those AI programs may "identify and exploit weaknesses" of other trading firms in a way that triggers or amplifies big moves in bond prices or stock markets.

The Military

US Army Says It Could Acquire Targets Faster With 'Advanced AI' (404media.co) 126

The U.S. Army told the government it had a lot of success using AI to "process targets" during a recent deployment. It said that it had used AI systems to identify targets at a rate of 55 per day but could get that number up to 5,000 a day with "advanced artificial intelligence tools in the future." 404 Media: The line comes from a new report from the Government Accountability Office -- a nonpartisan watchdog group that investigates the federal government. The report is titled "Defense Command and Control" and is, in part, about the Pentagon's recent push to integrate AI systems into its workflow.

Across the government, and especially in the military, there has been a push to add or incorporate AI into various systems. The pitch here is that AI systems would help the Pentagon ID targets on the battlefield and allow those systems to help determine who lives and who dies. The Ukrainian and Israeli militaries are already using similar systems, but the practice is fraught and controversial.

AI

Anthropic Launches Its Own $200 Monthly Plan (techcrunch.com) 38

Anthropic has unveiled a new premium tier for its AI chatbot Claude, targeting power users willing to pay up to $200 monthly for broader usage. The "Max" subscription comes in two variants: a $100/month tier with 5x higher rate limits than Claude Pro, and a $200/month option boasting 20x higher limits -- directly competing with OpenAI's ChatGPT Pro tier.

Unlike OpenAI, Anthropic still lacks an unlimited usage plan. Product lead Scott White didn't rule out even pricier subscriptions in the future, telling TechCrunch, "We'll always keep a number of exploratory options available to us." The launch coincides with growing demand for Anthropic's Claude 3.7 Sonnet, the company's first reasoning model, which employs additional computing power to handle complex queries more reliably.
IT

WordPress Launches AI Site Builder Amid Company Restructuring (theverge.com) 24

WordPress.com has released an AI-powered site builder in early access that constructs complete websites with generated text, layouts, and images. The tool operates through a chatbot interface where users input specifications, resulting in a fully formed site that can be further refined through additional prompts.

While WordPress.com claims the builder creates "beautiful, functional websites in minutes," it currently cannot handle ecommerce sites or complex integrations. Users need a WordPress.com account for the free trial, but publishing requires a hosting plan starting at $18 monthly (less with annual subscriptions). The builder only works with new WordPress instances, not existing sites.

This launch comes as parent company Automattic recently cut 16% of its workforce and faces a lawsuit from hosting company WP Engine, which offers competing site-building tools.
Google

Google DeepMind Has a Weapon in the AI Talent Wars: Aggressive Noncompete Rules (businessinsider.com) 56

The battle for AI talent is so hot that Google would rather give some employees a paid one-year vacation than let them work for a competitor. From a report: Some Google DeepMind staff in the UK are subject to noncompete agreements that prevent them from working for a competitor for up to 12 months after they finish work at Google, according to four former employees with direct knowledge of the matter who asked to remain anonymous because they were not permitted to share these details with the press.

Aggressive noncompetes are one tool tech companies wield to retain a competitive edge in the AI wars, which show no sign of slowing down as companies launch new bleeding-edge models and products at a rapid clip. When an employee signs one, they agree not to work for a competing company for a certain period of time. Google DeepMind has put some employees with a noncompete on extended garden leave. These employees are still paid by DeepMind but no longer work for it for the duration of the noncompete agreement.

Several factors, including a DeepMind employee's seniority and how critical their work is to the company, determine the length of noncompete clauses, those people said. Two of the former staffers said six-month noncompetes are common among DeepMind employees, including for individual contributors working on Google's Gemini AI models. There have been cases where more senior researchers have received yearlong stipulations, they said.

AI

The AI Therapist Can See You Now (npr.org) 115

New research suggests that given the right kind of training, AI bots can deliver mental health therapy as effectively as -- or more effectively than -- human clinicians. From a report: The recent study, published in the New England Journal of Medicine, shows results from the first randomized clinical trial for AI therapy. Researchers from Dartmouth College built the bot as a way of taking a new approach to a longstanding problem: The U.S. continues to grapple with an acute shortage of mental health providers. "I think one of the things that doesn't scale well is humans," says Nick Jacobson, a clinical psychologist who was part of this research team. For every 340 people in the U.S., there is just one mental health clinician, according to some estimates.

While many AI bots already on the market claim to offer mental health care, some have dubious results or have even led people to self-harm. More than five years ago, Jacobson and his colleagues began training their AI bot in clinical best practices. The project, says Jacobson, involved much trial and error before it led to quality outcomes. "The effects that we see strongly mirror what you would see in the best evidence-based trials of psychotherapy," says Jacobson. He says these results were comparable to "studies with folks given a gold standard dose of the best treatment we have available."

Google

Samsung and Google Partner To Launch Ballie Home Robot with Built-in Projector (engadget.com) 25

Samsung Electronics and Google Cloud are jointly entering the consumer robotics market with Ballie, a yellow, soccer-ball-shaped robot equipped with a video projector and powered by Google's Gemini AI models. First previewed in 2020, the long-delayed device will finally launch this summer in the US and South Korea. The mobile companion uses small wheels to navigate homes autonomously and integrates with Samsung's SmartThings platform to control smart home devices.

Running on Samsung's Tizen operating system, Ballie can manage calendars, answer questions, handle phone calls, and project video content from services including YouTube and Netflix. Samsung EVP Jay Kim described it as a "completely new Ballie" compared to the 2020 version, with Google Cloud integration being the most significant change. The robot leverages Gemini for understanding commands, searching the web, and processing visual data for navigation, while using Samsung's AI models for accessing personal information.
AI

Enterprises Are Shunning Vendors in Favor of DIY Approach To AI, UBS Says 47

Established software companies hoping to ride the AI wave are facing a stiff headwind: many of their potential customers are building AI tools themselves. This do-it-yourself approach is channeling billions in spending towards cloud computing providers but leaving traditional software vendors struggling to capitalize, complicating their AI growth plans.

Cloud platforms like Microsoft Azure and Amazon Web Services are pulling in an estimated $22 billion from AI services, with Azure alone capturing $11.3 billion. Yet, software application vendors have collectively garnered only about $2 billion from selling AI products. Stripping out Microsoft's popular Copilot tools, that figure drops to a mere $450 million across all other vendors combined.

Why are companies choosing the harder path of building? Feedback gathered by UBS points to several key factors driving this "persistent DIY trend." Many business uses for AI are highly specific or narrow, making generic software unsuitable. Off-the-shelf AI products are often considered too expensive, and crucially, the essential ingredients -- powerful AI models, cloud computing access, and the company's own data -- are increasingly available directly, lessening the need for traditional software packages.
