AI

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT (wired.com) 9

At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."

Movies

Google Targets Filmmakers With Veo, Its New Generative AI Video Model (theverge.com) 12

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future."
Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.
Businesses

OpenAI's Chief Scientist and Co-Founder Is Leaving the Company (nytimes.com) 19

OpenAI's co-founder and Chief Scientist, Ilya Sutskever, is leaving the company to work on "something personally meaningful," wrote CEO Sam Altman in a post on X. "This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. [...] I am forever grateful for what he did here and committed to finishing the mission we started together." He will be replaced by OpenAI researcher Jakub Pachocki. Here's Altman's full X post announcing the departure: Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important.

OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity.

Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.
The New York Times notes that Sutskever joined three other board members to force out Altman during a chaotic weekend last November. Ultimately, Altman returned as CEO five days later, and Sutskever said he regretted the move.
AI

AI in Gmail Will Sift Through Emails, Provide Search Summaries, Send Emails (arstechnica.com) 43

An anonymous reader shares a report: Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar. That's simple to describe but solves a huge problem with email: even a search only brings up a list of email subjects, and you have to click through to each one just to read it.

Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface. Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.
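The demo amounts to a retrieval-and-summarize loop: gather the matching emails, number them, and ask the model for a cited bullet list. Below is a minimal sketch of how a similar summarize-with-sources flow might look using Google's public generative AI SDK; the model name comes from the public Gemini API, the fetch_from_gmail helper is hypothetical, and none of this is Gmail's actual server-side integration.

```python
# Illustrative sketch only: Gmail's Gemini sidebar is a server-side product,
# not this API. This approximates the "summarize with sources" flow using
# the public google-generativeai SDK (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # public Gemini API model

def summarize_emails(emails):
    """Summarize (sender, body) pairs as bullet points with cited sources."""
    numbered = "\n\n".join(
        f"[{i}] From {sender}: {body}" for i, (sender, body) in enumerate(emails)
    )
    prompt = (
        "Summarize what has been happening in these emails as a bullet-point "
        "list. Cite the bracketed email numbers you used as sources.\n\n"
        + numbered
    )
    return model.generate_content(prompt).text

# emails = fetch_from_gmail(contact="parents-group")  # hypothetical helper
# print(summarize_emails(emails))
```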

AI

Google's Invisible AI Watermark Will Help Identify Generative Text and Video 17

Among Google's swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums. The Verge: Google's DeepMind CEO, Demis Hassabis, took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team's new AI tools, like the Veo video generator, but also about the new upgraded SynthID watermark imprinting system. It can now mark video that was digitally generated, as well as AI-generated text.

[...] Google previously enabled SynthID to inject inaudible watermarks into AI-generated music made using DeepMind's Lyria model. SynthID is just one of several AI safeguards in development to combat misuse of the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
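Neither Google nor The Verge spells out SynthID Text's internals here, but statistical text watermarks in the research literature generally share one shape: a keyed hash quietly biases token choices at generation time, and a detector later tests whether that bias is present. The toy below illustrates that general class at the word level for readability; it is emphatically not SynthID itself.

```python
# Toy statistical text watermark, NOT SynthID. A keyed hash splits the
# vocabulary into "green" and "red" halves per context; a watermarking
# generator prefers green words, and the detector measures the green rate.
import hashlib

KEY = b"secret-watermark-key"

def is_green(prev_word: str, word: str) -> bool:
    """Keyed, context-seeded coin flip: ~half of all words are green."""
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Human text hovers near 0.5; a generator that favors green words pushes
# the fraction well above it, so a threshold (or z-test) flags the output.
# print(green_fraction(suspect_text) > 0.7)
```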
Google

Google Search Will Now Show AI-Generated Answers To Millions By Default (engadget.com) 59

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world's dominant search engine at I/O, Google's annual conference for developers. From a report: With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas. "[With] generative AI, Search can do more than you ever imagined," wrote Liz Reid, vice president and head of Google Search, in a blog post. "So you can ask whatever's on your mind or whatever you need to get done -- from researching to planning to brainstorming -- and Google will take care of the legwork."

Google's changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI ever since OpenAI's ChatGPT was released at the end of 2022. [...] Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far, anyone who wanted to use it had to sign up as part of the company's Search Labs platform that lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to be available to over a billion people in more countries by the end of the year.

Facebook

Meta Will Shut Down Workplace, Its Business Chat Tool (axios.com) 21

Meta is shutting down Workplace, the tool it sold to businesses that combined social and productivity features, according to messages to customers obtained by Axios and confirmed by Meta. From the report: Meta has been cutting jobs and winnowing its product line for the last few years while investing billions first in the metaverse and now in AI. Micah Collins, Meta's senior director of product management, sent a message to customers alerting them of the shutdown.

Collins said customers can use Workplace through September 2025, when it will become available only to download or read existing data. The service will shut down completely in 2026. Workplace was formerly Facebook at Work, and launched in its current form in 2016. In 2021 the company reported it had 7 million paid subscribers.

AI

Slashdot Asks: How Do You Protest AI Development? (wired.com) 170

An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of Pause AI, a group of activists petitioning for companies to pause development of large AI models, which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message.

"The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protests gathered in front of OpenAI'sSan Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protests staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. "Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...]

Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk by AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says.
According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy."

Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal."

Slashdotters, what is the most effective way to protest AI development? Is the AI genie out of the bottle? Curious to hear your thoughts.
Supercomputing

Intel Aurora Supercomputer Breaks Exascale Barrier 28

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference (ISC 2024), Intel's newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD's Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world's best performance for AI, at 10.61 "AI exaflops." Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), which built and houses the system; Intel says the machine was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial "partial run" in late 2023, where it hit just 585.34 petaflops. The company said then that it expected to cross the exascale barrier with Aurora eventually, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in LINPACK, meaning it's still unable to dethrone AMD's Frontier system, which is also an HPE supercomputer. AMD's Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the Linpack-mixed benchmark, reportedly highlighting its unparalleled AI performance. Intel's Aurora supercomputer uses the company's latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it's fully operational later this year, Intel believes the system will eventually be capable of crossing the 2-exaflop barrier.
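A quick back-of-the-envelope pass over the figures above gives a sense of per-node and per-GPU throughput. It assumes the 87% functionality figure applies evenly across nodes and GPUs, which the report doesn't state:

```python
# Back-of-the-envelope check using only the figures reported above.
hpl_exaflops = 1.012     # Aurora's HPL (FP64) score at ISC 2024
run_nodes = 9_234        # nodes active for that run (~87% of the machine)
total_gpus = 63_744      # Ponte Vecchio GPUs in the full system

per_node_tflops = hpl_exaflops * 1e6 / run_nodes      # exaflops -> teraflops
gpus_per_node = round(total_gpus * 0.87 / run_nodes)  # assumption: ~6
per_gpu_tflops = per_node_tflops / gpus_per_node

print(f"~{per_node_tflops:.0f} FP64 TFLOPS per node")  # ~110
print(f"~{per_gpu_tflops:.0f} FP64 TFLOPS per GPU")    # ~18
```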
AI

ChatGPT Is Getting a Mac App 9

OpenAI has launched an official macOS app for ChatGPT, with a Windows version coming "later this year." "Both free and paid users will be able to access the new app, but it will only be available to ChatGPT Plus users starting today before a broader rollout in 'the coming weeks,'" reports The Verge. From the report: In the demo shown by OpenAI, users could open the ChatGPT desktop app in a small window, alongside another program. They asked ChatGPT questions about what's on their screen -- whether by typing or saying it. ChatGPT could then respond based on what it "sees." OpenAI says users can ask ChatGPT a question by using the Option + Space keyboard shortcut, as well as take and discuss screenshots within the app. Further reading: OpenAI Launches New Free Model GPT-4o
IBM

IBM Open-Sources Its Granite AI Models (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: IBM managed the open sourcing of Granite code by using pretraining data from publicly available datasets, such as GitHub Code Clean, StarCoder data, public code repositories, and GitHub issues. In short, IBM has gone to great lengths to avoid copyright or legal issues. The Granite Code Base models are trained on 3 to 4 trillion tokens of code data and natural-language code-related datasets. All these models are licensed under the Apache 2.0 license for research and commercial use. It's that last word -- commercial -- that stopped the other major LLMs from being open-sourced. No one else wanted to share their LLM goodies.

But, as IBM Research chief scientist Ruchir Puri said, "We are transforming the generative AI landscape for software by releasing the highest performing, cost-efficient code LLMs, empowering the open community to innovate without restrictions." Without restrictions, perhaps, but not without specific applications in mind. The Granite models, as IBM ecosystem general manager Kate Woolley said last year, are not "about trying to be everything to everybody. This is not about writing poems about your dog. This is about curated models that can be tuned and are very targeted for the business use cases we want the enterprise to use. Specifically, they're for programming."

These decoder-only models, trained on code from 116 programming languages, range from 3 to 34 billion parameters. They support many developer uses, from complex application modernization to on-device memory-constrained tasks. IBM has already used these LLMs internally in IBM Watsonx Code Assistant (WCA) products, such as WCA for Ansible Lightspeed for IT Automation and WCA for IBM Z for modernizing COBOL applications. Not everyone can afford Watsonx, but now, anyone can work with the Granite LLMs using IBM and Red Hat's InstructLab.
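Because the weights are published under Apache 2.0, trying a Granite model locally is a few lines of standard Hugging Face transformers code. A minimal sketch follows; the repository id matches the naming IBM used at launch but should be treated as an assumption, as should the hardware setup (device_map="auto" needs the accelerate package and ideally a GPU).

```python
# Minimal sketch: code completion with an open Granite model via Hugging
# Face transformers. Repo id assumed from IBM's launch naming; requires
# torch and accelerate (pip install transformers torch accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```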

AI

AI Hitting Labour Forces Like a 'Tsunami', IMF Chief Says (yahoo.com) 90

AI is hitting the global labour market "like a tsunami," International Monetary Fund Managing Director Kristalina Georgieva said on Monday. AI is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years, Georgieva told an event in Zurich. From a report: "We have very little time to get people ready for it, businesses ready for it," she told the event organised by the Swiss Institute of International Studies, associated with the University of Zurich. "It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society."
Microsoft

Microsoft Places Uses AI To Find the Best Time For Your Next Office Day 55

An anonymous reader shares a report: Microsoft is attempting to solve the hassle of coordinating with colleagues on when everyone will be in the office. It's a problem that emerged with the increase in hybrid and flexible work after the recent COVID-19 pandemic, with workers spending less time in the office. Microsoft Places is an AI-powered app that goes into preview today and should help businesses that rely on Outlook and Microsoft Teams to better coordinate in-office time together.

"When employees get to the office, they don't want to be greeted by a sea of empty desks -- they want face-time with their manager and the coworkers they collaborate with most frequently," says Microsoft's corporate vice president of AI at work, Jared Spataro, in a blog post. "With Places, you can more easily coordinate across coworkers and spaces in the office."
Facebook

Meta Explores AI-Assisted Earphones With Cameras (theinformation.com) 23

An anonymous reader shares a report: Meta Platforms is exploring developing AI-powered earphones with cameras, which the company hopes could be used to identify objects and translate foreign languages, according to three current employees. Meta's work on a new AI device comes as several tech companies look to develop AI wearables, and after Meta added an AI assistant to its Ray-Ban smart glasses.

Meta CEO Mark Zuckerberg has seen several possible designs for the device but has not been satisfied with them, one of the employees said. It's unclear if the final design will be in-ear earbuds or over-the-ear headphones. Internally, the project goes by the name Camerabuds. The timeline is also unclear. Company leaders had expected a design to be approved in the first quarter, one of the people said. But employees have identified multiple potential problems with the project, including that long hair may cover the cameras on the earbuds. Also, putting a camera and batteries into tiny devices could make the earbuds bulky and risk making them uncomfortably hot. Attaching discreet cameras to a wearable device may also raise privacy concerns, as Google learned with Google Glass.

AI

OpenAI Launches New Free Model GPT-4o 28

OpenAI unveiled its latest foundation model, GPT-4o, and a ChatGPT desktop app at its Spring Updates event on Monday. GPT-4o, which will be available to all free users, boasts the ability to reason across voice, text, and vision, according to OpenAI's chief technology officer Mira Murati. The model can respond in real-time, detect emotion, and adjust its voice accordingly. Developers will have access to GPT-4o through OpenAI's API at half the price and twice the speed of its predecessor, GPT-4 Turbo.
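On the developer side, GPT-4o slots into the same chat completions endpoint as its predecessors; only the model name changes. A minimal sketch with the official openai Python SDK (the key is read from the OPENAI_API_KEY environment variable):

```python
# Minimal sketch: calling GPT-4o through OpenAI's standard chat completions
# API with the official SDK (pip install openai). Only the model name is
# new relative to GPT-4 Turbo.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize GPT-4o in one sentence."}],
)
print(response.choices[0].message.content)
```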

Further reading: VentureBeat.
AI

US Kicks Off AI Safety Talks With China (axios.com) 20

The United States is heading to Geneva this week to start a series of diplomatic talks with the Chinese government about artificial intelligence safety and risk standards. From a report: The U.S. and China are in tight competition to dominate the AI market, both in the private sector and within their own governments. However, the two world powers have yet to agree on what it means to safely use the technologies they're developing.

The United States and China will meet in Switzerland on Tuesday, senior administration officials told reporters during a briefing Friday. Officials from the White House and State Department will lead the U.S. delegation in the talks, while China will bring a delegation co-led by its Ministry of Foreign Affairs and National Development and Reform Commission. The talks will primarily focus on AI risk and safety "with an emphasis on advanced systems," one official said. Officials from the U.S. and China also plan to discuss the work they're doing in their own countries domestically to address AI risks.

Microsoft

How Microsoft Employees Pressured the Company Over Its Oil Industry Ties (grist.org) 144

The non-profit environmental site Grist reports on "an internal, employee-led effort to raise ethical concerns about Microsoft's work helping oil and gas producers boost their profits by providing them with cloud computing resources and AI software tools." There have been some disappointments — but also some successes, starting with the founding of an internal sustainability group within Microsoft that grew to nearly 10,000 employees: Former Microsoft employees and sources familiar with tech industry advocacy say that, broadly speaking, employee pressure has had an enormous impact on sustainability at Microsoft, encouraging it to announce industry-leading climate goals in 2020 and support key federal climate policies.

But convincing the world's most valuable company to forgo lucrative oil industry contracts proved far more difficult... Over the past seven years, Microsoft has announced dozens of new deals with oil and gas producers and oil field services companies, many explicitly aimed at unlocking new reserves, increasing production, and driving up oil industry profits...

As concerns over the company's fossil fuel work mounted, Microsoft was gearing up to make a big sustainability announcement. In January 2020, the company pledged to become "carbon negative" by 2030, meaning that in 10 years, the tech giant would pull more carbon out of the air than it emitted on an annual basis... For nearly two years, employees watched and waited. Following its carbon negative announcement, Microsoft quickly expanded its internal carbon tax, which charges the company's business groups a fee for the carbon they emit via electricity use, employee travel, and more. It also invested in new technologies like direct air capture and purchased carbon removal contracts from dozens of projects worldwide.

But Microsoft's work with the oil industry continued unabated, with the company announcing a slew of new partnerships in 2020 and 2021 aimed at cutting fossil fuel producers' costs and boosting production.

The last straw for one technical account manager was a 2023 LinkedIn post by a Microsoft technical architect about the company's work on oil and gas industry automation. The post said Microsoft's cloud service was "unlocking previously inaccessible reserves" for the fossil fuel industry, promising that with Microsoft's Azure service, "the future of oil and gas exploration and production is brighter than ever."

The technical account manager resigned from the position they'd held for nearly a decade, citing the blog post in a resignation letter which accused Microsoft of "extending the age of fossil fuels, and enabling untold emissions."

Thanks to Slashdot reader joshuark for sharing the news.
AI

OpenAI's Sam Altman Wants AI in the Hands of the People - and Universal Basic Compute? (youtube.com) 79

OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg).

And when asked about this summer's launch of the next version of ChatGPT, Altman said they hoped to "be thoughtful about how we do it, like we may release it in a different way than we've released previous models..."

Altman: One of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super-important part of our mission, and this idea that we build AI tools and make them super-widely available — free or, you know, not-that-expensive, whatever that is — so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future, and showering it down upon us. That seems like a much better path. It seems like a more inspiring path.

I also think it's where things are actually heading. So it makes me sad that we have not figured out how to make GPT4-level technology available to free users. It's something we really want to do...

Q: It's just very expensive, I take it?

Altman: It's very expensive.

But Altman said later he's confident they'll be able to reduce cost. Altman: I don't know, like, when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for, you know, a pretty high level of intelligence. It's important to us, it's clearly important to users, and it'll unlock a lot of stuff.
Altman also thinks there are "great roles for both" open-source and closed-source models, saying, "We've open-sourced some stuff, we'll open-source more stuff in the future.

"But really, our mission is to build toward AGI, and to figure out how to broadly distribute its benefits... " Altman even said later that "A huge part of what we try to do is put the technology in the hands of people..." Altman: The fact that we have so many people using a free version of ChatGPT that we don't — you know, we don't run ads on, we don't try to make money on it, we just put it out there because we want people to have these tools — I think has done a lot to provide a lot of value... But also to get the world really thoughtful about what's happening here. It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I am sure, like any other industry, I would expect there to be multiple approaches and different peoiple like different ones.
Later Altman said he was "super-excited" about the possibility of an AI tutor that could reinvent how people learn, and "doing faster and better scientific discovery... that will be a triumph."

But at some point the discussion led him to where the power of AI intersects with the concept of a universal basic income: Altman: Giving people money is not going to go solve all the problems. It is certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves.

Now that we see some of the ways that AI is developing, I wonder if there's better things to do than the traditional conceptualization of UBI. Like, I wonder — I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income, and everybody gets like a slice of GPT-7's compute, and they can use it, they can re-sell it, they can donate it to somebody to use for cancer research. But what you get is not dollars but this like slice — you own part of the productivity.

Altman was also asked about the "ouster" period where he was briefly fired from OpenAI — to which he gave a careful response: Altman: I think there's always been culture clashes at — look, obviously not all of those board members are my favorite people in the world. But I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI...

I think a lot of the world is, understandably, very afraid of AGI, or very afraid of even current AI, and very excited about it — and even more afraid, and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. And, like a lot of stuff is going to change. And change is pretty uncomfortable for people. So there's a lot of pieces that we've got to get right...

I really care about AGI and think this is like the most interesting work in the world.

Social Networks

Reddit Grows, Seeks More AI Deals, Plans 'Award' Shops, and Gets Sued (yahoo.com) 45

Reddit reported its first results since going public in late March. Yahoo Finance reports: Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it's a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.
Bloomberg notes that already Reddit "has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year."
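Those figures are roughly internally consistent, as a quick check shows: the signed contracts annualize to about the same rate as last quarter's revenue (the 2.5-year average term is an assumed midpoint of the reported range).

```python
# Consistency check on the licensing figures reported above.
total_contracts = 203e6   # dollars, terms of two to three years
avg_term_years = 2.5      # assumed midpoint of "two to three years"
last_quarter = 20e6       # AI content deal revenue last quarter

per_year_contracted = total_contracts / avg_term_years  # ~$81M/year
per_year_run_rate = last_quarter * 4                    # ~$80M/year
print(f"${per_year_contracted/1e6:.0f}M/yr contracted vs "
      f"${per_year_run_rate/1e6:.0f}M/yr run rate")
```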

And elsewhere Bloomberg writes that Reddit "plans to expand its revenue streams outside of advertising into what Huffman calls the 'user economy' — users making money from others on the platform... " In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products... Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.
Meanwhile, ZDNet notes that this week a Reddit announcement "introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site." The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there's no limit to what it can do with it. Reddit will continue to block "bad actors" that use unauthorized methods to get data, the company says, but it's taking additional steps to keep users safe from the site's partners.... Reddit still supports using its data for research: It's creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, "If you're interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract." To be clear, Reddit is still selling users' data — it's just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there's some court action, according to the Register. Reddit "was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them." The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting in September 2022... That arrangement called for Reddit to use reasonable means to ensure that LevelFields' ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract...

LevelFields argues that Reddit is in a particularly good position to track click fraud because it's serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic... Nonetheless, LevelFields's effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site "provided click logs without IP addresses," the complaint says. "Reddit represented that it was not able to provide IP addresses."

"The plaintiffs aspire to have their claim certified as a class action," the article adds — along with an interesting statistic.

"According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion."
AI

OpenAI's Sam Altman on iPhones, Music, Training Data, and Apple's Controversial iPad Ad (youtube.com) 37

OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And speaking on technology's advance, Altman said "Phones are unbelievably good.... I personally think the iPhone is like the greatest piece of technology humanity has ever made. It's really a wonderful product."


Q: What comes after it?

Altman: I don't know. I mean, that was what I was saying. It's so good, that to get beyond it, I think the bar is quite high.

Q: You've been working with Jony Ive on something, right?

Altman: We've been discussing ideas, but I don't — like, if I knew...


Altman said later he thought voice interaction "feels like a different way to use a computer."

But the conversation turned to Apple in another way. It happened in a larger conversation where Altman said OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."

Altman: Even the world in which — if we went and, let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that — let's say we could still make a great music model, which maybe we could. I was posing that as a thought experiment to musicians, and they were like, "Well, I can't object to that on any principled basis at that point — and yet there's still something I don't like about it." Now, that's not a reason not to do it, um, necessarily, but it is — did you see that ad that Apple put out... of like squishing all of human creativity down into one really thin iPad...?

There's something about — I'm obviously hugely positive on AI — but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, "Great. Bring that on." But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out — it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.

What about creators whose copyrighted materials are used for training data? Altman had a ready answer — but also some predictions for the future. "On fair use, I think we have a very reasonable position under the current law. But I think AI is so different that for things like art, we'll need to think about them in different ways..." Altman: I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable and what the system does accessing information in context, in real-time... what happens at inference time will become more debated, and what the new economic model is there.
Altman gave the example of an AI which was never trained on any Taylor Swift songs — but could still respond to a prompt requesting a song in her style. Altman: And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all — and then there's an economic model.
Altman also wondered if there are lessons in the history and economics of music sampling...
