AI

Wearable AI Startup Humane Explores Potential Sale 18

AI startup Humane has been seeking a buyer for its business, Bloomberg News reported, citing people familiar with the matter, just weeks after the company's closely watched wearable AI device had a rocky public launch. From the report: The company is working with a financial adviser to assist it, said the people, who asked not to be identified because the matter is private. Humane is seeking a price of between $750 million and $1 billion in a sale, one person said. The process is still early and may not result in a deal. Humane was founded in 2018 by two longtime Apple veterans, the married couple Imran Chaudhri and Bethany Bongiorno, in an attempt to come up with a new, AI-powered device that could potentially rival the iPhone. Last year it was valued by investors at $850 million, according to tech news site the Information.
AI

Meta AI Chief Says Large Language Models Will Not Reach Human Intelligence (ft.com) 78

Meta's AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create "superintelligence" in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had "very limited understanding of logic... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan... hierarchically."

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, "intrinsically unsafe." Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet's Google.

AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com) 98

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's rules aim to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in full from 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

Windows

Windows Now Has AI-Powered Copy and Paste 59

Umar Shakir reports via The Verge: Microsoft is adding a new Advanced Paste feature to PowerToys for Windows 11 that can convert your clipboard content on the fly with the power of AI. The new feature can help people speed up their workflows by doing things like copying code in one language and pasting it in another, although its best tricks require OpenAI API credits.

Advanced Paste is included in PowerToys version 0.81 and, once enabled, can be activated with a special key command: Windows Key + Shift + V. That opens an Advanced Paste text window that offers paste conversion options including plaintext, markdown, and JSON. If you enable Paste with AI in the Advanced Paste settings, you'll also see an OpenAI prompt where you can enter the conversion you want -- summarized text, translations, generated code, a rewrite from casual to professional style, Yoda syntax, or whatever you can think to ask for.
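
To make the idea concrete, here is a minimal sketch, not Microsoft's actual implementation, of the kind of AI paste conversion Advanced Paste performs, using the openai and pyperclip Python packages (the model choice and prompt wording are assumptions):

```python
import pyperclip           # cross-platform clipboard access
from openai import OpenAI  # reads OPENAI_API_KEY from the environment

client = OpenAI()

def advanced_paste(instruction: str) -> str:
    """Convert the current clipboard text as instructed, then put it back."""
    source = pyperclip.paste()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "Rewrite the user's clipboard content exactly as "
                        "instructed. Return only the converted text."},
            {"role": "user",
             "content": f"Instruction: {instruction}\n\nClipboard:\n{source}"},
        ],
    )
    converted = response.choices[0].message.content
    pyperclip.copy(converted)  # the next Ctrl+V pastes the converted text
    return converted

# Example: advanced_paste("Convert this CSV to JSON")
```

Each AI conversion is one API round trip, which lines up with the article's note that only the AI options require OpenAI API credits while the plaintext, markdown, and JSON conversions run locally.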
Google

Google's Moonshot Factory Falls Back Down to Earth 25

Alphabet's moonshot factory, X, is scaling back its ambitious projects amid concerns over Google's core search business facing competition from AI chatbots like ChatGPT. The lab, once a symbol of Google's commitment to innovation, is now spinning off projects as startups rather than integrating them into Alphabet. The shift reflects a broader trend among tech giants, who are cutting costs and focusing on their core businesses in response to the rapidly evolving AI landscape.
Microsoft

Microsoft Edge Will Dub Streamed Video With AI-Translated Audio (pcworld.com) 19

Microsoft is planning to add subtitles to, and even dub, video produced by major video sites, using AI to translate the audio into other languages within Microsoft Edge in real time. From a report: At its Microsoft Build developer conference, Microsoft named several sites that would benefit from the new real-time translation capabilities within Edge, including Reuters, CNBC News, Bloomberg, and Coursera, plus Microsoft's own LinkedIn. Interestingly, Microsoft also named Google's YouTube as a beneficiary of the translation capabilities. Microsoft plans to translate video from Spanish to English and from English to German, Hindi, Italian, Russian, and Spanish. There are plans to add additional languages and video platforms in the future, Microsoft said.
Education

Microsoft Launches Free AI Assistant For All Educators in US in Deal With Khan Academy (nbcnewyork.com) 35

Microsoft is partnering with tutoring organization Khan Academy to provide a generative AI assistant to all teachers in the U.S. for free. From a report: Khanmigo for Teachers, which helps teachers prepare lessons for class, is free to all educators in the U.S. as of Tuesday. The program can help create lessons, analyze student performance, plan assignments, and provide teachers with opportunities to enhance their own learning.

"Unlike most things in technology and education in the past where this is a 'nice-to-have,' this is a 'must-have' for a lot of teachers," Sal Khan, founder and CEO of Khan Academy, said in a CNBC "Squawk Box" interview last Friday ahead of the deal. Khan Academy has roughly 170 million registered users in over 50 languages around the world, and while its videos are best known, its interactive exercise platform was one which Microsoft-funded artificial intelligence company OpenAI's top executives, Sam Altman and Greg Brockman, zeroed in on early when they were looking for a partner to pilot GPT with that offered socially positive use cases.

Technology

Match Group, Meta, Coinbase And More Form Anti-Scam Coalition (engadget.com) 23

An anonymous reader shares a report: Scams are all over the internet, and AI is making matters worse (no, Taylor Swift didn't give away Le Creuset pans, and Tom Hanks didn't promote a dental plan). Now, companies such as Match Group, Meta and Coinbase are launching Tech Against Scams, a new coalition focused on collaboration to prevent online fraud and financial schemes. They will "collaborate on ways to take action against the tools used by scammers, educate and protect consumers and disrupt rapidly evolving financial scams."

Meta, Coinbase and Match Group -- which owns Hinge and Tinder -- first joined forces on this issue last summer but are now teaming up with additional digital, social media and crypto companies, along with the Global Anti-Scam Organization. A major focus of this coalition is pig butchering scams, a type of fraud in which a scammer builds a trusted digital relationship with the victim, romantic or platonic, and gradually tricks them into handing over more and more money.

AI

Scarlett Johansson Warned OpenAI To Not Use Her Voice 241

Actress Scarlett Johansson's legal team has sent two letters to OpenAI, demanding the company disclose how it developed an AI personal assistant voice that the actress claims sounds uncannily similar to her own. The controversy was prompted after OpenAI held a live demonstration of the voice, dubbed "Sky," which many observers compared to Johansson's voice in the 2013 film "Her."

OpenAI CEO Sam Altman had approached Johansson months prior and as recently as two days before the event, proposing to license her voice for the new ChatGPT voice assistant, but she declined the offer, she said. Johansson said she was "shocked and angered" at the similarity between the AI voice and her own, stating, "in a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity." OpenAI denied any connection between Johansson and the "Sky" voice, claiming it was developed from the voice of another actress. The company paused using the voice in its products on Monday.
HP

HP Resurrects '90s OmniBook Branding, Kills Spectre and Dragonfly (arstechnica.com) 53

HP announced today that it will resurrect the "Omni" branding it first coined for its business-oriented laptops introduced in 1993. The vintage branding will now be used for the company's new consumer-facing laptops, with HP retiring the Spectre and Dragonfly brands in the process. Furthermore, computers under consumer PC series names like Pavilion will also no longer be released. "Instead, every consumer computer from HP will be called either an OmniBook for laptops, an OmniDesk for desktops, or an OmniStudio for AIOs," reports Ars Technica. From the report: The computers will also have a modifier, ranging from 3 up to 5, 7, X, or Ultra to denote computers that are entry-level all the way up to advanced. For instance, an HP OmniBook Ultra would represent HP's highest-grade consumer laptop. "For example, an HP OmniBook 3 will appeal to customers who prioritize entertainment and personal use, while the OmniBook X will be designed for those with higher creative and technical demands," Stacy Wolff, SVP of design and sustainability at HP, said via a press announcement today. [...] So far, HP has announced one new Omni computer, the OmniBook X. It has a 12-core Snapdragon X Elite X1E-78-100, 16GB or 32GB of LPDDR5x-8448 memory, up to 2TB of storage, and a 14-inch, 2240x1400 IPS display. HP is pointing to the Latin translation of omni, meaning "all" (or everything), as the rationale behind the naming update. The new name should give shoppers confidence that the computers will provide all the things that they need.

HP is also getting rid of some of its commercial series names, like Pro. From now on, new, lower-end commercial laptops will be ProBooks. There will also be ProDesktop desktops and ProStudio AIOs. These computers will have either a 2 modifier for entry-level designs or a 4 modifier for ones with a little more power. For example, an HP ProDesk 2 is less powerful than an HP ProDesk 4. Anything more powerful will be considered either an EliteBook (laptops), EliteDesk (desktops), or EliteStudio (AIOs). For the Elite computers, the modifiers go from 6 to 8, X, and then Ultra. A Dragonfly laptop today would fall into the Ultra category. HP did less overhauling of its commercial lineup because it "recognized a need to preserve the brand equity and familiarity with our current sub-brands," Wolff said, adding that HP "acknowledged the creation of additional product names like Dragonfly made those products stand out, rather than be seen as part of a holistic portfolio." [...]

As you might now expect of any tech rebranding, marketing push, or product release these days, HP is also announcing a new emblem that will appear on its computers, as well as other products or services, that substantially incorporate AI. The two laptops announced today carry the logo. According to Wolff, on computers, the logo means that the systems have an integrated NPU "at 40+ trillions of operations per second." They also come with a chatbot based on GPT-4, an HP spokesperson told me.

Graphics

Microsoft Paint Is Getting an AI-Powered Image Generator (engadget.com) 41

Microsoft Paint is getting a new image generator tool called Cocreator that can generate images based on text prompts and doodles. Engadget reports: During a demo at its Surface event, the company showed off how Cocreator combines your own drawings with text prompts to create an image. There's also a "creativity slider" that allows you to control how much you want AI to take over compared with your original art. As Microsoft pointed out, the combination of text prompts and your own brush strokes enables faster edits. It could also help provide a more precise rendering than what you'd be able to achieve with DALL-E or another text-to-image generator alone.
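
Microsoft hasn't published Cocreator's internals, but a "creativity slider" maps naturally onto the denoising-strength parameter of a standard image-to-image diffusion pipeline: low strength stays close to your doodle, high strength lets the model reimagine it. A rough sketch of that idea with Hugging Face's diffusers library (the checkpoint and settings are illustrative assumptions, not Paint's actual stack):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Any img2img-capable checkpoint works; this public one is just an example.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

doodle = Image.open("doodle.png").convert("RGB").resize((512, 512))

# `strength` acts like the creativity slider: ~0.2 keeps your brush strokes,
# ~0.9 mostly hands the image over to the model.
result = pipe(
    prompt="a watercolor castle on a hill at sunset",
    image=doodle,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("cocreated.png")
```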
Microsoft

Microsoft Launches Arm-Powered Surface Laptop (theverge.com) 28

Microsoft today launched its new Surface Laptop, featuring Qualcomm's Snapdragon X Elite or Plus chips, aiming to compete with Apple's powerful and efficient MacBook laptops. The Surface Laptop, available for preorder starting at $999.99, boasts up to 22 hours of battery life, a haptic touchpad, and support for three external 4K monitors. Microsoft claims the device is 80% faster than its predecessor and comes with AI features powered by its Copilot technology.
AI

With Recall, Microsoft is Using AI To Fix Windows' Eternally Broken Search 102

Microsoft today unveiled Recall, a new AI-powered feature for Windows 11 PCs, at its Build 2024 conference. Recall aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides.

The feature uses semantic associations to make connections, as demonstrated by Microsoft Product Manager Caroline Hernandez, who searched for a blue dress and refined the query with specific details. Microsoft said that Recall's processing is done locally, ensuring data privacy and security. The feature utilizes over 40 local multi-modal small language models to recognize text, images, and video.
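
Microsoft hasn't detailed the models involved, but "semantic associations" of this kind are typically implemented with text embeddings: the text extracted from each snapshot and the user's query are mapped into a shared vector space and ranked by similarity, so a match doesn't require exact keywords. A minimal sketch using the sentence-transformers library (the model choice and sample data are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

# A small embedding model, in the spirit of Recall's local, on-device approach.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these snippets were extracted from periodic screen snapshots.
snapshots = [
    "Email from Anna: Q3 budget review moved to Friday",
    "PowerPoint slide: spring campaign, model wearing a blue dress",
    "Chat with Dev: the build pipeline is failing on Windows runners",
]

query = "that presentation with the blue dress"

snapshot_vecs = model.encode(snapshots, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks the PowerPoint snapshot first even though the
# query and the snapshot share only a couple of literal words.
scores = util.cos_sim(query_vec, snapshot_vecs)[0]
best = int(scores.argmax())
print(snapshots[best], float(scores[best]))
```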
AI

OpenAI Says Sky Voice in ChatGPT Will Be Paused After Concerns It Sounds Too Much Like Scarlett Johansson (tomsguide.com) 54

OpenAI is pausing the use of the popular Sky voice in ChatGPT over concerns it sounds too much like the "Her" actress Scarlett Johansson. From a report: The company says the voices in ChatGPT came from paid voice actors: a final five were selected from an initial pool of 400, and it is purely a coincidence, OpenAI maintains, that the unnamed actress behind the Sky voice has a similar tone to Johansson. Voice is about to become more prominent for OpenAI as it begins to roll out a new GPT-4o model into ChatGPT. With it will come an entirely new conversational interface where users can talk in real time to a natural-sounding and emotion-mimicking AI.

While the Sky voice and a version of ChatGPT Voice have been around for some time, the comparison to Johansson became more pointed after OpenAI CEO Sam Altman, among many others, drew parallels between the new AI model and the movie "Her." In "Her," Scarlett Johansson voices an advanced AI operating system named Samantha, who develops a romantic relationship with a lonely writer played by Joaquin Phoenix. With GPT-4o's ability to mimic emotional responses, the parallels were obvious.

Supercomputing

Linux Foundation Announces Launch of 'High Performance Software Foundation' (linuxfoundation.org) 4

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts."

It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

- Spack: the HPC package manager (see the package-definition sketch after this list).

- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.

- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.

- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.

- Apptainer: Formerly known as Singularity, Apptainer is a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.

- E4S: a curated, hardened distribution of scientific software packages.
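
Of the projects above, Spack gives a good flavor of the stack: each package is a small Python recipe telling Spack how to fetch, configure, and build the software. Here is a minimal sketch of such a recipe; the project name, URL, and checksum are hypothetical placeholders, not a real package:

```python
# Hypothetical Spack recipe -- illustrative only; FastSolver is not a real package.
from spack.package import *


class Fastsolver(CMakePackage):
    """Example sparse linear solver, used here to show the shape of a recipe."""

    homepage = "https://example.com/fastsolver"          # placeholder
    url = "https://example.com/fastsolver-1.2.0.tar.gz"  # placeholder

    version("1.2.0", sha256="0" * 64)  # placeholder checksum

    variant("cuda", default=False, description="Build with CUDA acceleration")

    depends_on("cmake@3.20:", type="build")
    depends_on("mpi")
    depends_on("cuda", when="+cuda")

    def cmake_args(self):
        # Turn the Spack variant into the corresponding CMake option.
        return [self.define_from_variant("ENABLE_CUDA", "cuda")]
```

A user would then build the CUDA-enabled variant with `spack install fastsolver +cuda`, and Spack resolves and builds the MPI and CUDA dependencies to match.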

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit."

The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
China

China Uses Giant Rail Gun to Shoot a Smart Bomb Nine Miles Into the Sky (futurism.com) 134

"China's navy has apparently tested out a hypersonic rail gun," reports Futurism, describing it as "basically a device that uses a series of electromagnets to accelerate a projectile to incredible speeds."

But "during a demonstration of its power, things didn't go quite as planned." As the South China Morning Post reports, the rail gun test lobbed a precision-guided projectile — or smart bomb — nine miles into the stratosphere. But because it apparently didn't go up as high as it was supposed to, the test was ultimately declared unsuccessful. This conclusion came after an analysis led by Naval Engineering University professor Lu Junyong, whose team found with the help of AI that even though the winged smart bomb exceeded Mach 5 speeds, it didn't perform as well as it could have. This occurred, as Lu's team found, because the projectile was spinning too fast during its ascent, resulting in an "undesirable tilt."
But what's more interesting is the project itself. "Successful or not, news of the test is a pretty big deal given that it was just a few months ago that reports emerged about China's other proposed super-powered rail gun, which is intended to send astronauts on a Boeing 737-size ship into space.... which for the record did not make it all the way to space..." Chinese officials, meanwhile, are paying lip service to the hypersonic rail gun technology's potential to revolutionize civilian travel by creating even faster railways and consumer space launches, too.
Japan and France also have railgun projects, according to a recent article from Defense One. "Yet the nation that has demonstrated the most continuing interest is China," with records of railgun work dating back as far as 2011: The Chinese team claimed that their railgun can fire a projectile 100 to 200 kilometers at Mach 6. Perhaps most importantly, it uses up to 100,000 AI-enabled sensors to identify and fix any problems before critical failure, and can slowly improve itself over time. This, they said, had enabled them to test-fire 120 rounds in a row without failure, which, if true, suggests that they solved a longstanding problem that reportedly bedeviled U.S. researchers. However, the team still has a ways to go before mounting an operational railgun on a ship; according to one Chinese article, the projectiles fired were only 25mm caliber, well below the size of even lightweight naval artillery.

As with many other Chinese defense technology programs, much remains opaque about the program...

While railguns tend to get the headlines, this lab has made advances in a wide range of electric and electromagnetic applications for the PLA Navy's warships. For example, the lab's research on electromagnetic launch technology has also been applied to the development of electromagnetic catapults for the PLAN's growing aircraft carrier fleet...

While it remains to be seen whether the Chinese navy can develop a full-scale railgun, produce it at scale, and integrate it onto its warships, it is obvious that it has made steady advances in recent years on a technology of immense military significance that the US has abandoned.

Thanks to long-time Slashdot reader Tangential for sharing the news.
AI

AI 'Godfather' Geoffrey Hinton: If AI Takes Jobs We'll Need Universal Basic Income (bbc.com) 250

"The computer scientist regarded as the 'godfather of artificial intelligence' says the government will have to establish a universal basic income to deal with the impact of AI on inequality," reports the BBC: Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was "very worried about AI taking lots of mundane jobs".

"I was consulted by people in Downing Street and I advised them that universal basic income was a good idea," he said. He said while he felt AI would increase productivity and wealth, the money would go to the rich "and not the people whose jobs get lost and that's going to be very bad for society".

"Until last year he worked at Google, but left the tech giant so he could talk more freely about the dangers from unregulated AI," according to the article. Professor Hinton also made this predicction to the BBC. "My guess is in between five and 20 years from now there's a probability of half that we'll have to confront the problem of AI trying to take over".

He recommended a prohibition on the military use of AI, warning that currently "in terms of military uses I think there's going to be a race".
The Military

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation (arstechnica.com) 74

Long-time Slashdot reader SonicSpike shared this report from Ars Technica: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs "should not be construed as a capability or a singular interest in one of many use cases during an evaluation."

Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 63

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also notes that several U.S. Congressional committees are considering "a bevy" of AI bills...
