AI

OpenAI Releases Former Employees From Controversial Exit Agreements (cnbc.com) 11

OpenAI has reversed its decision requiring former employees to sign a perpetual non-disparagement agreement to retain their vested equity, stating that it will not cancel any vested units and will remove non-disparagement clauses from departure documents. CNBC reports: The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones. The memo, addressed to each former employee, said that at the time of the person's departure from OpenAI, "you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity]." "Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units," the memo stated.

The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed. "As we shared with employees, we are making important updates to our departure process," an OpenAI spokesperson told CNBC in a statement. "We have not and never will take away vested equity, even when people didn't sign the departure documents. We'll remove non-disparagement clauses from our standard departure paperwork, and we'll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual," said the statement, adding that former employees would be informed of this as well. "We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be," the OpenAI spokesperson added.

Google

Google Search's 'udm=14' Trick Lets You Kill AI Search For Good (arstechnica.com) 40

An anonymous reader quotes a report from Ars Technica: If you're tired of Google's AI Overview extracting all value from the web while also telling people to eat glue or run with scissors, you can turn it off -- sort of. Google has been telling people its AI box at the top of search results is the future, and you can't turn it off, but that ignores how Google search works: A lot of options are powered by URL parameters. That means you can turn off AI search with this one simple trick! (Sorry.) Our method for killing AI search is defaulting to the new "web" search filter, which Google recently launched as a way to search the web without Google's alpha-quality AI junk. It's actually pretty nice, showing only the traditional 10 blue links, giving you a clean (well, other than the ads), uncluttered results page that looks like it's from 2011. Sadly, Google's UI doesn't have a way to make "web" search the default, and switching to it means digging through the "more" options drop-down after you do a search, so it's a few clicks deep.

Check out the URL after you do a search, and you'll see a mile-long URL full of esoteric tracking information and mode information. We'll put each search result URL parameter on a new line so the URL is somewhat readable [...]. Most of these only mean something to Google's internal tracking system, but that "&udm=14" line is the one that will put you in a web search. Tack it on to the end of a normal search, and you'll be booted into the clean 10 blue links interface. While Google might not let you set this as a default, if you have a way to automatically edit the Google search URL, you can create your own defaults. One way to edit the search URL is a proxy site like udm14.com, which is probably the biggest site out there popularizing this technique. A proxy site could, if it wanted to, read all your search result queries, though (your query is also in the URL), so whether you trust this site is up to you.
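For readers who would rather not route queries through a third-party proxy, the parameter is easy to apply yourself. Below is a minimal sketch in Python (the function name is just illustrative) that rewrites a Google search URL into the clean "web" form by keeping only the query and appending udm=14, as described above:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def force_web_search(url: str) -> str:
    """Rewrite a Google search URL so it opens the '10 blue links' web view.

    Keeps only the q= query, drops the esoteric tracking parameters, and
    appends udm=14 -- the parameter that selects the "Web" filter.
    """
    parts = urlparse(url)
    query = parse_qs(parts.query).get("q", [""])[0]
    new_query = urlencode({"q": query, "udm": "14"})
    return urlunparse((parts.scheme, parts.netloc, parts.path, "", new_query, ""))

# Example: a (shortened) search URL becomes a web-only search.
print(force_web_search("https://www.google.com/search?q=run+with+scissors&sca_esv=abc123"))
# -> https://www.google.com/search?q=run+with+scissors&udm=14
```

Browser extensions or search-engine settings that let you define a custom search URL template (for example, https://www.google.com/search?q=%s&udm=14) achieve the same default without handing your queries to a proxy.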

The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com) 49

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

Facebook

Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council (qz.com) 17

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta's management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

"I've come to deeply respect this group of people and their achievements in their respective areas, and I'm grateful that they're willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse," Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta's 11-person board of directors because its members are not elected by shareholders, nor do they have fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council.
TechCrunch notes that the council features "only white men on it," which "differs from Meta's actual board of directors and its Oversight Board, which is more diverse in gender and racial representation."

"It's telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. ... it's been proven time and time again that AI isn't like other products. It's a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups."
AI

AI Software Engineers Make $100,000 More Than Their Colleagues (qz.com) 43

The AI boom and a growing talent shortage have resulted in companies paying AI software engineers a whole lot more than their non-AI counterparts. From a report: As of April 2024, AI software engineers in the U.S. were paid a median salary of nearly $300,000, while other software engineers made about $100,000 less, according to data compiled by salary data website Levels.fyi. The pay gap, which was already about 30% in mid-2022, has grown to almost 50%.

"It's clear that companies value AI skills and are willing to pay a premium for them, no matter what job level you're at," wrote data scientist Alina Kolesnikova in the Levels.fyi report. That disparity is more pronounced at some companies. The robotaxi company Cruise, for example, pays AI engineers at the staff level a median of $680,500 -- while their non-AI colleagues make $185,500 less, according to Levels.fyi.

AI

US Lawmakers Advance Bill To Make It Easier To Curb Exports of AI Models (reuters.com) 30

The House Foreign Affairs Committee on Wednesday voted overwhelmingly to advance a bill that would make it easier for the Biden administration to restrict the export of AI systems, citing concerns China could exploit them to bolster its military capabilities. From a report: The bill, sponsored by House Republicans Michael McCaul and John Molenaar and Democrats Raja Krishnamoorthi and Susan Wild, also would give the Commerce Department express authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security. Without this legislation "our top AI companies could inadvertently fuel China's technological ascent, empowering their military and malign ambitions," McCaul, who chairs the committee, warned on Wednesday.

"As the (Chinese Communist Party) looks to expand their technological advancements to enhance their surveillance state and war machine, it is critical we protect our sensitive technology from falling into their hands," McCaul added. The Chinese Embassy in Washington did not immediately respond to a request for comment. The bill is the latest sign Washington is gearing up to beat back China's AI ambitions over fears Beijing could harness the technology to meddle in other countries' elections, create bioweapons or launch cyberattacks.

AI

FCC Chair Proposes Disclosure Rules For AI-Generated Content In Political Ads (qz.com) 37

FCC Chairwoman Jessica Rosenworcel has proposed (PDF) disclosure rules for AI-generated content used in political ads. "If adopted, the proposal would look into whether the FCC should require political ads on radio and TV to disclose when there is AI-generated content," reports Quartz. From the report: The proposal seeks comment on whether on-air and written disclosure should be required in broadcasters' political files when AI-generated content is used in political ads; proposes that the rules apply to both candidate and issue advertisements; requests comment on what a specific definition of AI-generated content should look like; and proposes that the disclosure rules apply to broadcasters and entities involved in programming, such as cable operators and radio providers.

The proposed disclosure rules do not prohibit the use of AI-generated content in political ads. The FCC has authority through the Bipartisan Campaign Reform Act to make rules around political advertising. If the proposal is adopted, the FCC will take public comment on the rules.
"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," Rosenworcel said in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."
Businesses

Nvidia Reports a 262% Jump In Sales, 10-1 Stock Split (cnbc.com) 11

Nvidia reported fiscal first-quarter earnings surpassing expectations with strong forecasts, indicating sustained demand for its AI chips. Following the news, the company's stock rose over 6% in extended trading. Nvidia also said it was splitting its stock 10 to 1. CNBC reports: Nvidia said it expected sales of $28 billion in the current quarter. Wall Street was expecting earnings per share of $5.95 on sales of $26.61 billion, according to LSEG. Nvidia reported net income for the quarter of $14.88 billion, or $5.98 per share, compared with $2.04 billion, or 82 cents, in the year-ago period. [...] Nvidia said its data center category rose 427% from the year-ago quarter to $22.6 billion in revenue. Nvidia CFO Colette Kress said in a statement that it was due to shipments of the company's "Hopper" graphics processors, which include the company's H100 GPU.

Nvidia also highlighted strong sales of its networking parts, which are increasingly important as companies build clusters of tens of thousands of chips that need to be connected. Nvidia said it had $3.2 billion in networking revenue, primarily from its InfiniBand products, more than three times last year's figure. Before it became the top supplier to big companies building AI, Nvidia was known primarily as a maker of hardware for 3D gaming. The company's gaming revenue was up 18% during the quarter to $2.65 billion, which Nvidia attributed to strong demand.

The company also sells chips for cars and chips for advanced graphics workstations, which remain much smaller than its data center business. The company reported $427 million in professional visualization sales, and $329 million in automotive sales. Nvidia said it bought back $7.7 billion worth of its shares and paid $98 million in dividends during the quarter. Nvidia also said that it's increasing its quarterly cash dividend from 4 cents per share to 10 cents on a pre-split basis. After the split, the dividend will be a penny a share.
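The split and dividend figures reconcile with simple arithmetic: a 10-for-1 split divides the per-share dividend by ten, and the 427% data center growth implies a year-ago quarter of roughly $4.3 billion. A back-of-the-envelope check using the rounded figures reported above:

```python
# Post-split dividend: each pre-split share's 10-cent payout is spread
# across ten post-split shares.
pre_split_dividend = 0.10
print(f"Post-split dividend: ${pre_split_dividend / 10:.2f} per share")  # -> $0.01, a penny

# Implied year-ago data center revenue from "rose 427% ... to $22.6 billion".
current_quarter = 22.6                      # billions of dollars
year_ago = current_quarter / (1 + 4.27)     # 427% growth means 5.27x the base
print(f"Implied year-ago data center revenue: ~${year_ago:.1f}B")  # -> ~$4.3B
```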

Mozilla

Mozilla Says It's Concerned About Windows Recall (theregister.com) 67

Microsoft's Windows Recall feature is attracting controversy before even venturing out of preview. From a report: The principle is simple. Windows takes a snapshot of a user's active screen every few seconds and dumps it to disk. The user can then scroll through the snapshots and, when something is selected, the user is given options to interact with the content.
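For readers unfamiliar with the mechanism, the loop Microsoft describes is conceptually simple. The sketch below is purely illustrative, not Microsoft's implementation: it captures the screen at a fixed interval with Pillow's ImageGrab and writes timestamped snapshots to a local folder (the interval and folder name are assumptions for the example). Recall additionally analyzes and indexes the snapshots so their content can be searched and interacted with, which is omitted here.

```python
# Conceptual sketch only -- NOT Microsoft's Recall code. Illustrates the
# capture-and-store loop described above: snapshot the screen every few
# seconds and save it to disk so it can be browsed later.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # Pillow; screen capture works on Windows and macOS

SNAPSHOT_DIR = Path("snapshots")   # assumed local folder for this sketch
INTERVAL_SECONDS = 5               # "every few seconds"

def capture_loop() -> None:
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    while True:
        image = ImageGrab.grab()  # grab the current screen contents
        name = datetime.now().strftime("%Y%m%d-%H%M%S") + ".png"
        image.save(SNAPSHOT_DIR / name)
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    capture_loop()
```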

Mozilla's Chief Product Officer, Steve Teixeira, told The Register: "Mozilla is concerned about Windows Recall. From a browser perspective, some data should be saved, and some shouldn't. Recall stores not just browser history, but also data that users type into the browser with only very coarse control over what gets stored. While the data is stored in encrypted format, this stored data represents a new vector of attack for cybercriminals and a new privacy worry for shared computers.

"Microsoft is also once again playing gatekeeper and picking which browsers get to win and lose on Windows -- favoring, of course, Microsoft Edge. Microsoft's Edge allows users to block specific websites and private browsing activity from being seen by Recall. Other Chromium-based browsers can filter out private browsing activity but lose the ability to block sensitive websites (such as financial sites) from Recall. "Right now, there's no documentation on how a non-Chromium based, third-party browser, such as Firefox, can protect user privacy from Recall. Microsoft did not engage our cooperation on Recall, but we would have loved for that to be the case, which would have enabled us to partner on giving users true agency over their privacy, regardless of the browser they choose."

AI

Amazon Plans To Give Alexa an AI Overhaul - and a Monthly Subscription Price (cnbc.com) 36

Amazon is upgrading its decade-old Alexa voice assistant with generative AI and plans to charge a monthly subscription fee to offset the cost of the technology, CNBC reported Wednesday, citing people with knowledge of Amazon's plans. From the report: The Seattle-based tech and retail giant will launch a more conversational version of Alexa later this year, potentially positioning it to better compete with new generative AI-powered chatbots from companies including Google and OpenAI, according to two sources familiar with the matter, who asked not to be named because the discussions were private. Amazon's subscription for Alexa will not be included in the $139-per-year Prime offering, and Amazon has not yet nailed down the price point, one source said.

While Amazon wowed consumers with Alexa's voice-driven tasks in 2014, its capabilities could seem old-fashioned amid recent leaps in artificial intelligence. Last week, OpenAI announced GPT-4o, with the capability for two-way conversations that can go significantly deeper than Alexa. For example, it can translate conversations into different languages in real time. Google launched a similar generative-AI-powered voice feature for Gemini.

AI

Wearable AI Startup Humane Explores Potential Sale 18

AI startup Humane has been seeking a buyer for its business, Bloomberg News reported, citing people familiar with the matter, just weeks after the company's closely watched wearable AI device had a rocky public launch. From the report: The company is working with a financial adviser to assist it, said the people, who asked not to be identified because the matter is private. Humane is seeking a price of between $750 million and $1 billion in a sale [non-paywalled link], one person said. The process is still early and may not result in a deal. Humane was founded in 2018 by two longtime Apple veterans, the married couple Imran Chaudhri and Bethany Bongiorno, in an attempt to come up with a new, AI-powered device that could potentially rival the iPhone. Last year it was valued by investors at $850 million, according to tech news site the Information.
AI

Meta AI Chief Says Large Language Models Will Not Reach Human Intelligence (ft.com) 78

Meta's AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create "superintelligence" in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had "very limited understanding of logic ... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically."

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, "intrinsically unsafe." Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet's Google.

AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com) 98

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that the fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's rules aim to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence for social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will take effect six months after the regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

Windows

Windows Now Has AI-Powered Copy and Paste 59

Umar Shakir reports via The Verge: Microsoft is adding a new Advanced Paste feature to PowerToys for Windows 11 that can convert your clipboard content on the fly with the power of AI. The new feature can help people speed up their workflows by doing things like copying code in one language and pasting it in another, although its best tricks require OpenAI API credits.

Advanced Paste is included in PowerToys version 0.81 and, once enabled, can be activated with a special key command: Windows Key + Shift + V. That opens an Advanced Paste text window that offers paste conversion options including plaintext, markdown, and JSON. If you enable Paste with AI in the Advanced Paste settings, you'll also see an OpenAI prompt where you can enter the conversion you want -- summarized text, translations, generated code, a rewrite from casual to professional style, Yoda syntax, or whatever you can think to ask for.
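PowerToys' own code isn't shown here, but the "Paste with AI" flow described above amounts to: read the clipboard, send its contents plus your instruction to the OpenAI API, and copy the result back. A minimal sketch, assuming `pip install openai pyperclip`, an OPENAI_API_KEY environment variable, and an illustrative model name (not necessarily what PowerToys uses):

```python
# Illustrative sketch of the "Paste with AI" idea -- not the PowerToys code.
import pyperclip
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def advanced_paste(instruction: str) -> str:
    """Transform the current clipboard contents according to an instruction."""
    original = pyperclip.paste()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite the user's clipboard text as requested. Return only the result."},
            {"role": "user",
             "content": f"Instruction: {instruction}\n\nClipboard:\n{original}"},
        ],
    )
    converted = response.choices[0].message.content
    pyperclip.copy(converted)  # put the converted text back on the clipboard
    return converted

# Example: the kind of request the article mentions.
# advanced_paste("Rewrite this casual note in a professional tone")
```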
Google

Google's Moonshot Factory Falls Back Down to Earth 25

Alphabet's moonshot factory, X, is scaling back its ambitious projects amid concerns over Google's core search business facing competition from AI chatbots like ChatGPT. The lab, once a symbol of Google's commitment to innovation, is now spinning off projects as startups rather than integrating them into Alphabet. The shift reflects a broader trend among tech giants, who are cutting costs and focusing on their core businesses in response to the rapidly evolving AI landscape.
Microsoft

Microsoft Edge Will Dub Streamed Video With AI-Translated Audio (pcworld.com) 19

Microsoft is planning to add subtitles to -- or even dub -- video from major video sites, using AI within Microsoft Edge to translate the audio into other languages in real time. From a report: At its Microsoft Build developer conference, Microsoft named several sites that would benefit from the new real-time translation capabilities within Edge, including Reuters, CNBC News, Bloomberg, and Coursera, plus Microsoft's own LinkedIn. Interestingly, Microsoft also named Google's YouTube as a beneficiary of the translation capabilities. Microsoft plans to translate the video from Spanish to English and from English to German, Hindi, Italian, Russian, and Spanish. There are plans to add additional languages and video platforms in the future, Microsoft said.
Education

Microsoft Launches Free AI Assistant For All Educators in US in Deal With Khan Academy (nbcnewyork.com) 35

Microsoft is partnering with tutoring organization Khan Academy to provide a generative AI assistant to all teachers in the U.S. for free. From a report: Khanmigo for Teachers, which helps teachers prepare lessons for class, is free to all educators in the U.S. as of Tuesday. The program can help create lessons, analyze student performance, plan assignments, and provide teachers with opportunities to enhance their own learning.

"Unlike most things in technology and education in the past where this is a 'nice-to-have,' this is a 'must-have' for a lot of teachers," Sal Khan, founder and CEO of Khan Academy, said in a CNBC "Squawk Box" interview last Friday ahead of the deal. Khan Academy has roughly 170 million registered users in over 50 languages around the world, and while its videos are best known, its interactive exercise platform was one which Microsoft-funded artificial intelligence company OpenAI's top executives, Sam Altman and Greg Brockman, zeroed in on early when they were looking for a partner to pilot GPT with that offered socially positive use cases.

Technology

Match Group, Meta, Coinbase And More Form Anti-Scam Coalition (engadget.com) 23

An anonymous reader shares a report: Scams are all over the internet, and AI is making matters worse (no, Taylor Swift didn't give away Le Creuset pans, and Tom Hanks didn't promote a dental plan). Now, companies such as Match Group, Meta and Coinbase are launching Tech Against Scams, a new coalition focused on collaboration to prevent online fraud and financial schemes. They will "collaborate on ways to take action against the tools used by scammers, educate and protect consumers and disrupt rapidly evolving financial scams."

Meta, Coinbase and Match Group -- which owns Hinge and Tinder -- first joined forces on this issue last summer but are now teaming up with additional digital, social media and crypto companies, along with the Global Anti-Scam Organization. A major focus of this coalition is pig butchering scams, a type of fraud in which a scammer tricks someone into giving them more and more money through trusting digital relationships, both romantic and platonic in nature.
