News

Robert Dennard, Inventor of DRAM, Dies At 91

necro81 writes: Robert Dennard was working at IBM in the 1960s when he invented a way to store one bit using a single transistor and capacitor. The technology became dynamic random access memory (DRAM), which, when implemented using the emerging technology of silicon integrated circuits, helped catapult computing by leaps and bounds. The first commercial DRAM chips in the late 1960s held just 1024 bits; today's DDR5 modules hold hundreds of billions.
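
Dennard's cell is "dynamic" because the capacitor's charge leaks away and must be periodically rewritten. A toy simulation makes the refresh requirement concrete; the leak rate and thresholds below are illustrative values, not physics.

```python
# Toy model of a one-transistor, one-capacitor DRAM cell. The stored
# charge leaks over time, so the bit must be refreshed before it
# decays past the sense threshold. All constants are illustrative.

class DramCell:
    LEAK_PER_TICK = 0.1   # fraction of full charge lost per tick
    READ_THRESHOLD = 0.5  # sense-amplifier decision point

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int) -> None:
        # Writing drives the capacitor to full (1) or empty (0).
        self.charge = 1.0 if bit else 0.0

    def tick(self) -> None:
        # Charge leaks away between accesses.
        self.charge = max(0.0, self.charge - self.LEAK_PER_TICK)

    def read(self) -> int:
        return 1 if self.charge >= self.READ_THRESHOLD else 0

    def refresh(self) -> None:
        # A refresh reads the bit and writes it back at full strength.
        self.write(self.read())

cell = DramCell()
cell.write(1)
for _ in range(4):
    cell.tick()            # charge decays to about 0.6
still_one = cell.read()    # bit survives: still above the threshold
cell.refresh()             # restored to full charge
for _ in range(6):
    cell.tick()            # six more ticks with no refresh: about 0.4
lost = cell.read()         # bit has decayed below the threshold
```

Skipping the `refresh()` call is exactly the failure mode real DRAM controllers exist to prevent: every row must be rewritten within its retention window.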

Dr. Robert H. Dennard passed away last month at age 91.

In the 1970s he helped guide technology roadmaps for the ever-shrinking feature size of lithography, enabling the early years of Moore's Law. He wrote a seminal paper in 1974 relating feature size and power consumption that is now referred to as Dennard Scaling. His technological contributions earned him numerous awards and accolades from the National Academy of Engineering, IEEE, and the National Inventors Hall of Fame.

Programming

Apple Geofences Third-Party Browser Engine Work for EU Devices (theregister.com)

Apple's grudging accommodation of European law -- allowing third-party browser engines on its mobile devices -- apparently comes with a restriction that makes it difficult to develop and support third-party browser engines for the region. From a report: The Register has learned from those involved in the browser trade that Apple has limited the development and testing of third-party browser engines to devices physically located in the EU. That requirement adds a further barrier for anyone planning to develop and support a browser with an alternative engine in the EU.

It effectively geofences the development team. Browser-makers whose dev teams are located in the US will only be able to work on simulators. While some testing can be done in a simulator, there's no substitute for testing on device -- which means developers will have to work within Apple's prescribed geographical boundary. Prior to iOS 17.4, Apple required all web browsers on iOS or iPadOS to use Apple's WebKit rendering engine. Alternatives like Gecko (used by Mozilla Firefox) or Blink (used by Google and other Chromium-based browsers) were not permitted. Whatever brand of browser you thought you were using on your iPhone, under the hood it was basically Safari. Browser makers have objected to this for years, because it limits competitive differentiation and reduces the incentive for Apple owners to use non-Safari browsers.

AI

Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies (theverge.com)

Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...]

Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company.

Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU uses Nvidia A100 GPU devices to power this operation -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of time," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.
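
The usage-based sharing Delangue describes (idle capacity immediately becomes available to someone else) can be sketched as a simple slice pool. This is an illustration of the idea only, not Hugging Face's scheduler; the app names and capacities are invented.

```python
# Illustrative sketch of usage-based GPU sharing in the spirit of
# ZeroGPU: a fixed pool of GPU slices is granted on demand and
# reclaimed when an app goes idle, so no app needs a dedicated GPU.
# Not Hugging Face's actual implementation; capacities are invented.

class SharedGpuPool:
    def __init__(self, total_slices: int):
        self.total = total_slices
        self.in_use = {}  # app name -> slices currently held

    def available(self) -> int:
        return self.total - sum(self.in_use.values())

    def acquire(self, app: str, slices: int) -> bool:
        # Grant only if unused capacity exists; otherwise the app waits.
        if slices <= self.available():
            self.in_use[app] = self.in_use.get(app, 0) + slices
            return True
        return False

    def release(self, app: str) -> None:
        # An idle app's slices immediately become available to others.
        self.in_use.pop(app, None)

pool = SharedGpuPool(total_slices=8)
granted_a = pool.acquire("demo-a", 6)  # 6 of 8 slices taken
blocked_b = pool.acquire("demo-b", 4)  # only 2 free, so demo-b waits
pool.release("demo-a")                 # demo-a goes idle
granted_b = pool.acquire("demo-b", 4)  # freed capacity is reused
```

Contrast this with the reserved-capacity model in the quote above, where each tenant pays for a fixed allocation whether or not it is used.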

Google

Revolutionary New Google Feature Hidden Under 'More' Tab Shows Links To Web Pages (404media.co)

An anonymous reader shares a report: After launching a feature that adds more AI junk than ever to search results, Google is experimenting with a radical new feature that lets users see only the results they were looking for, in the form of normal text links. As in, what most people actually use Google for. "We've launched a new 'Web' filter that shows only text-based links, just like you might filter to show other types of results, such as images or videos," the official Google Search Liaison Twitter account, run by Danny Sullivan, posted on Tuesday. The option will appear at the top of search results, under the "More" option.

"We've added this after hearing from some that there are times when they'd prefer to just see links to web pages in their search results, such as if they're looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features," Sullivan wrote. "If you're in that group, enjoy!" Searching Google has become a bloated, confusing experience for users in the last few years, as it's gradually started prioritizing advertisements and sponsored results, spammy affiliate content, and AI-generated web pages over authentic, human-created websites.
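
The filter is also reachable directly by URL: coverage of the launch reported that the "Web" view corresponds to a `udm=14` query parameter, so a bookmark or browser keyword can default to link-only results. Treat the parameter as reported behavior, not a documented stable interface.

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 is the parameter reported to select the link-only "Web"
    # view at launch; Google publishes no stable contract for it, so
    # treat it as a convenience that may change.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

url = web_only_search_url("dennard scaling")
```

A browser "custom search engine" entry pointing at such a URL template gives you the text-only results by default.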

Android

Smartphones Can Now Last 7 Years (nytimes.com)

Google and Samsung used to update smartphone software for only three years. That has changed. From a report: Every smartphone has an expiration date. That day arrives when the software updates stop coming and you start missing out on new apps and security protections. With most phones, this used to happen after only about three years. But things are finally starting to change. The new number is seven. I first noticed this shift when I reviewed Google's $700 Pixel 8 smartphone in October. Google told me that it had committed to provide software updates for the phone for seven years, up from three years for its previous Pixels, because it was the right thing to do.

I was skeptical that this would become a trend. But this year, Samsung, the most profitable Android phone maker, set a similar software timeline for its $800 Galaxy S24 smartphone. Then Google said it would do the same for its $500 Pixel 8A, the budget version of the Pixel 8, which arrived in stores this week. Both companies said they had expanded their software support to make their phones last longer. This is a change from how companies used to talk about phones. Not long ago, tech giants unveiled new devices that encouraged people to upgrade every two years. But in the last few years, smartphone sales have slowed down worldwide as their improvements have become more marginal. Nowadays, people want their phones to endure.

Samsung and Google, the two most influential Android device makers, are playing catch-up with Apple, which has traditionally provided software updates for iPhones for roughly seven years. These moves will make phones last much longer and give people more flexibility to decide when it's time to upgrade. Google said in a statement that it had expanded its software commitment for the Pixel 8A because it wanted customers to feel confident in Pixel phones. And Samsung said it would deliver seven years of software updates, which increase security and reliability, for all its Galaxy flagship phones from now on.

Communications

AT&T Goes Up Against T-Mobile, Starlink With AST SpaceMobile Satellite Deal (pcmag.com)

Michael Kan reports via PCMag: AT&T has struck a deal to bring satellite internet connectivity to phones through AST SpaceMobile, a potential rival to SpaceX's Starlink. AT&T says the commercial agreement will last until 2030. The goal is "to provide a space-based broadband network to everyday cell phones," a spokesperson tells PCMag, meaning customers can receive a cellular signal in remote areas where traditional cell towers are few and far between. All they'll need to do is ensure their phone has a clear view of the sky.

AT&T has been working with Texas-based AST SpaceMobile since 2018 on the technology, which involves using satellites in space as orbiting cell towers. In January, AT&T was one of several companies (including Google) to invest $110 million in AST. In addition, the carrier created a commercial starring actor Ben Stiller to showcase AST's technology. In today's announcement, AT&T notes that "previously, the companies were working together under a Memorandum of Understanding," which is usually nonbinding. Hence, the new commercial deal suggests AT&T is confident AST can deliver fast and reliable satellite internet service to consumer smartphones -- even though it hasn't launched a production satellite.

AST has only launched one prototype satellite; in tests last year, it delivered download rates of 14Mbps and powered a 5G voice call. Following a supply chain-related delay, the company is now preparing to launch its first batch of "BlueBird" production satellites later this year, possibly in Q3. In Wednesday's announcement, AT&T adds: "This summer, AST SpaceMobile plans to deliver its first commercial satellites to Cape Canaveral for launch into low Earth orbit. These initial five satellites will help enable commercial service that was previously demonstrated with several key milestones." Still, AST needs to launch 45 to 60 BlueBird satellites before it can offer continuous coverage in the U.S., although in an earnings call, the company said it'll still be able to offer "non-continuous coverage" across 5,600 cells in the country.

Advertising

Netflix To Take On Google and Amazon By Building Its Own Ad Server (techcrunch.com)

Lauren Forristal writes via TechCrunch: Netflix announced during its Upfronts presentation on Wednesday that it's launching its own advertising technology platform only a year and a half after entering the ads business. This move pits it against other industry heavyweights with ad servers, like Google, Amazon and Comcast. The announcement marks a significant shake-up in the streaming giant's advertising approach. The company originally partnered with Microsoft to develop its ad tech, letting Netflix enter the ad space quickly and catch up with rivals like Hulu, which has had its own ad server for over a decade.

With the launch of its in-house ad tech, Netflix is poised to take full control of its advertising future. This strategic move will empower the company to create targeted and personalized ad experiences that resonate with its massive user base of 270 million subscribers. [...] Netflix didn't say exactly how its in-house solution will change the way ads are delivered, but it's likely it'll move away from generic advertisements. According to the Financial Times, Netflix wants to experiment with "episodic" campaigns, which involve a series of ads that tell a story rather than delivering repetitive ads. During the presentation, Netflix also noted that it'll expand its buying capabilities this summer, which will now include The Trade Desk, Google's Display & Video 360 and Magnite as partners. Notably, competitor Disney+ also has an advertising agreement with The Trade Desk. Netflix also touted the success of its ad-supported tier, reporting that 40 million global monthly active users opt for the plan. The ad tier had around 5 million users within six months of launching.

Google

Google Opens Up Its Smart Home To Everyone (theverge.com)

Google is opening up API access to its Google Home smart home platform, allowing app developers to access over 600 million connected devices and tap into the Google Home automation engine. In addition, Google announced that it'll be turning Google TVs into Google Home hubs and Matter controllers. The Verge reports: The Home APIs can access any Matter device or Works with Google Home device, and allows developers to build their own experiences using Google Home devices and automations into their apps on both iOS and Android. This is a significant move for Google in opening up its smart home platform, after shutting down its Works with Nest program back in 2019. [...] The Home APIs are already available to Google's early access partners, and Google is opening up a waitlist for any developer to sign up today. "We are opening up access on a rolling basis so they can begin building and testing within their apps," Anish Kattukaran, head of product at Google Home and Nest, told The Verge. "The first apps using the home APIs will be able to publish to the Play and App stores in the fall."

The access is not just limited to smart home developers. In the blog post, Matt Van Der Staay, engineering director at Google Home, said the Home APIs could be used to connect smart home devices to fitness or delivery apps. "You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points -- like turning on the lights automatically before the food delivery driver arrives." The APIs allow access to most devices connected to Google Home and to the Google Home structure, letting apps control and manage devices such as Matter light bulbs or the Nest Learning Thermostat. They also leverage Google Home's automation signals, such as motion from sensors, an appliance's mode changing, or Google's Home and Away mode, which uses various signals to determine if a home is occupied. [...]

What's also interesting here is that developers will be able to use the APIs to access and control any device that works with the new smart home standard Matter and even let people set up Matter devices directly in their app. This should make it easier for them to implement Matter into their apps, as it will add devices to the Google Home fabric, so developers won't have to maintain a fabric of their own. In addition, Google announced that it's vastly expanding its Matter infrastructure by turning Google TVs into Google Home hubs and Matter controllers. Any app using the APIs would need a Google hub in a customer's home in order to control Matter devices locally. Later this year, Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and some LG TVs will be upgraded to become Google Home hubs.

Additionally, Kattukaran said Google will upgrade all of its existing home hubs -- which include Nest Hub (second-gen), Nest Hub Max, and Google Wifi -- with a new ability called Home runtime. "With this update, all hubs for Google Home will be able to directly route commands from any app built with Home APIs (such as the Google Home app) to a customer's Matter device locally, when the phone is on the same Wi-Fi network as the hub," said Kattukaran. This means you should see "significant latency improvements using local control via a hub for Google Home," he added.

Android

Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication."

First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen.
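
The behavior described (a second authentication gate, a separate profile, and paused notifications while the space is locked) can be modeled in a few lines. The structure below is invented for illustration and is not Android's actual implementation.

```python
# Toy model of Android 15's Private Space as described: private apps
# live under a separate profile, the main profile can't see them, and
# locking the space pauses their notifications. The class and method
# names are invented for illustration only.

class Device:
    def __init__(self):
        self.main_apps = set()
        self.private_apps = set()
        self.private_locked = True

    def visible_apps(self) -> set:
        # The hidden part of the app drawer only appears after a
        # second round of lock-screen authentication.
        if self.private_locked:
            return set(self.main_apps)
        return self.main_apps | self.private_apps

    def unlock_private(self, auth_ok: bool) -> None:
        if auth_ok:
            self.private_locked = False

    def deliver_notification(self, app: str) -> bool:
        # A locked private space pauses its profile, so its apps
        # cannot surface notifications.
        if app in self.private_apps:
            return not self.private_locked
        return app in self.main_apps

phone = Device()
phone.main_apps = {"maps", "mail"}
phone.private_apps = {"journal"}
hidden = "journal" not in phone.visible_apps()  # locked: not listed
muted = phone.deliver_notification("journal")   # locked: no notification
phone.unlock_private(auth_ok=True)
visible_after = "journal" in phone.visible_apps()
delivered = phone.deliver_notification("journal")
```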

Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now.

An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.

Google

Google Will Use Gemini To Detect Scams During Calls (techcrunch.com)

At Google I/O on Tuesday, Google previewed a feature that will alert users to potential scams during a phone call. TechCrunch reports: The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can be run entirely on-device. The system effectively listens for "conversation patterns commonly associated with scams" in real time. Google gives the example of someone pretending to be a "bank representative." Common scammer tactics like requests for passwords or gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters.
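
Gemini Nano is a full on-device language model, but the "conversation patterns" idea can be caricatured with a phrase-matching heuristic. The phrase list and threshold below are invented for illustration; this is not how the real feature works.

```python
# Toy stand-in for on-device scam detection: flag a call transcript
# when it matches several patterns commonly associated with scams.
# The real feature uses Gemini Nano, not a phrase list; the phrases
# and threshold here are invented for illustration.

SCAM_PATTERNS = (
    "bank representative",
    "read me your password",
    "one-time code",
    "buy gift cards",
    "wire the money today",
)

def looks_like_scam(transcript: str, threshold: int = 2) -> bool:
    text = transcript.lower()
    hits = sum(1 for phrase in SCAM_PATTERNS if phrase in text)
    return hits >= threshold

call = ("Hello, I'm a bank representative. To secure your account, "
        "please buy gift cards and read me the codes.")
flagged = looks_like_scam(call)  # two patterns match, so it trips
```

A model-based detector generalizes where a phrase list cannot, which is the point of running Gemini Nano on-device instead of shipping keyword rules.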

No specific release date has been set for the feature. Like many of these announcements, Google is previewing what Gemini Nano will eventually be able to do somewhere down the road. We do know, however, that the feature will be opt-in.

AI

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT (wired.com)

At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."

Movies

Google Targets Filmmakers With Veo, Its New Generative AI Video Model (theverge.com)

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future."

Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.

AI

AI in Gmail Will Sift Through Emails, Provide Search Summaries, Send Emails (arstechnica.com)

An anonymous reader shares a report: Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar. That's simple to describe but solves a huge problem with email: even searching brings up only a list of email subjects, and you have to click through to each one just to read it.

Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface. Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.
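
The shape of that output (a bullet list plus "sources" that jump back to specific emails) is easy to mock up. Gemini does abstractive summarization; this sketch only illustrates the summary-plus-sources structure, over an invented mailbox.

```python
# Toy stand-in for "summarize emails from a contact with sources":
# filter a mailbox, emit one bullet per message, and attach indices
# that link each bullet back to the email it came from. The real
# feature summarizes content with a model; this only shows the shape.

def summarize_from(mailbox: list, sender: str):
    matches = [(i, m) for i, m in enumerate(mailbox) if m["from"] == sender]
    bullets = [f"- {m['subject']} [{i}]" for i, m in matches]
    sources = [i for i, _ in matches]  # indices the UI can jump to
    return "\n".join(bullets), sources

mailbox = [
    {"from": "school", "subject": "Bake sale on Friday"},
    {"from": "gym",    "subject": "Membership renewal"},
    {"from": "school", "subject": "Parent group needs volunteers"},
]
summary, sources = summarize_from(mailbox, "school")
```

The "sources" list is what makes the summary auditable: each bullet can be traced to the email it came from, which matters all the more once the assistant can also send mail on your behalf.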

AI

Google's Invisible AI Watermark Will Help Identify Generative Text and Video

Among Google's swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums. The Verge: Google's DeepMind CEO, Demis Hassabis, took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team's new AI tools, like the Veo video generator, but also about the new upgraded SynthID watermark imprinting system. It can now mark video that was digitally generated, as well as AI-generated text.

[...] Google had also enabled SynthID to inject inaudible watermarks into AI-generated music that was made using DeepMind's Lyria model. SynthID is just one of several AI safeguards in development to combat misuse by the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
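
Published text-watermarking schemes in this family work by keying the token sampler so that marked text is statistically distinguishable. A toy "green list" version conveys the idea; SynthID's actual algorithm differs, and everything below (the key, vocabulary, and sampling rate) is invented for illustration.

```python
import hashlib
import random

# Toy text watermark in the "green list" style: a secret key splits
# the vocabulary roughly in half, the generator prefers green words,
# and the detector measures how green a text is. This illustrates
# the statistical idea only; SynthID's real scheme is different.

KEY = b"demo-key"

def is_green(word: str) -> bool:
    digest = hashlib.sha256(KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary

def green_fraction(text: str) -> float:
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)

def generate_marked(vocab, length, seed=0):
    rng = random.Random(seed)
    green = [w for w in vocab if is_green(w)] or vocab
    # The watermarking generator samples a green word 90% of the time.
    return " ".join(
        rng.choice(green) if rng.random() < 0.9 else rng.choice(vocab)
        for _ in range(length)
    )

VOCAB = ["river", "stone", "cloud", "ember", "field", "north", "glass",
         "pine", "birch", "otter", "maple", "cedar", "raven", "slate",
         "fern", "moss"]
marked = generate_marked(VOCAB, length=200)
```

Ordinary text hovers near a 0.5 green fraction under any key, while marked text scores far higher, so detection needs only the key, not the model that generated the text.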

Google

Google Search Will Now Show AI-Generated Answers To Millions By Default (engadget.com)

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world's dominant search engine at I/O, Google's annual conference for developers. From a report: With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas. "[With] generative AI, Search can do more than you ever imagined," wrote Liz Reid, vice president and head of Google Search, in a blog post. "So you can ask whatever's on your mind or whatever you need to get done -- from researching to planning to brainstorming -- and Google will take care of the legwork."

Google's changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI ever since OpenAI's ChatGPT was released at the end of 2022. [...] Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far, anyone who wanted to use the feature had to sign up for it as part of the company's Search Labs platform that lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to reach over a billion people in more countries by the end of the year.

Android

Google is Experimenting With Running Chrome OS on Android (androidauthority.com)

An anonymous reader shares a report: At a privately held event, Google recently demonstrated a special build of Chromium OS -- code-named "ferrochrome" -- running in a virtual machine on a Pixel 8. However, Chromium OS wasn't shown running on the phone's screen itself. Rather, it was projected to an external display, which is possible because Google recently enabled display output on its Pixel 8 series. Time will tell if Google is thinking of positioning Chrome OS as a platform for its desktop mode ambitions and Samsung DeX rival.

Google

Apple and Google Introduce Alerts for Unwanted Bluetooth Tracking

Apple and Google have launched a new industry standard called "Detecting Unwanted Location Trackers" to combat the misuse of Bluetooth trackers for stalking. Starting Monday, iPhone and Android users will receive alerts when an unknown Bluetooth device is detected moving with them. The move comes after numerous cases of trackers like Apple's AirTags being used for malicious purposes.

Several Bluetooth tag companies have committed to making their future products compatible with the new standard. Apple and Google said they will continue collaborating with the Internet Engineering Task Force to further develop this technology and address the issue of unwanted tracking.
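
The standard's core heuristic, an unfamiliar tracker that keeps turning up as its target moves, can be sketched as a sightings log. The thresholds below are invented; the actual specification defines its own timing and separation rules.

```python
from collections import defaultdict

# Toy version of the "Detecting Unwanted Location Trackers" idea:
# alert when a Bluetooth identifier that isn't paired to this user
# is seen at several distinct locations over time. Thresholds are
# illustrative, not the standard's actual rules.

class TrackerScanner:
    def __init__(self, my_devices: set,
                 min_sightings: int = 3, min_distinct_places: int = 2):
        self.my_devices = my_devices
        self.sightings = defaultdict(list)  # device id -> [place, ...]
        self.min_sightings = min_sightings
        self.min_places = min_distinct_places

    def observe(self, device_id: str, place: str) -> bool:
        # Returns True when this observation should raise an alert.
        if device_id in self.my_devices:
            return False  # your own tag travelling with you is fine
        log = self.sightings[device_id]
        log.append(place)
        return (len(log) >= self.min_sightings
                and len(set(log)) >= self.min_places)

scanner = TrackerScanner(my_devices={"my-airtag"})
scanner.observe("unknown-tag", "home")
scanner.observe("unknown-tag", "bus")
alert = scanner.observe("unknown-tag", "office")  # follows across places
own = scanner.observe("my-airtag", "office")      # paired tag never alerts
```

The "paired devices" check is why the alerts target unwanted tracking specifically: your own tags move with you constantly and must be exempt.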

Facebook

Meta Explores AI-Assisted Earphones With Cameras (theinformation.com)

An anonymous reader shares a report: Meta Platforms is exploring developing AI-powered earphones with cameras, which the company hopes could be used to identify objects and translate foreign languages, according to three current employees. Meta's work on a new AI device comes as several tech companies look to develop AI wearables, and after Meta added an AI assistant to its Ray-Ban smart glasses.

Meta CEO Mark Zuckerberg has seen several possible designs for the device but has not been satisfied with them, one of the employees said. It's unclear if the final design will be in-ear earbuds or over-the-ear headphones. Internally, the project goes by the name Camerabuds. The timeline is also unclear. Company leaders had expected a design to be approved in the first quarter, one of the people said. But employees have identified multiple potential problems with the project, including that long hair may cover the cameras on the earbuds. Also, putting a camera and batteries into tiny devices could make the earbuds bulky and risk making them uncomfortably hot. Attaching discreet cameras to a wearable device may also raise privacy concerns, as Google learned with Google Glass.

Google

Google Bringing Project Starline's 'Magic Window' Experience To Real Video Calls

Google announced on Monday that it is preparing to bring its experimental Project Starline videoconferencing technology to the market. The company is collaborating with HP to integrate the system, which creates 3D projections of participants, into existing platforms like Google Meet and Zoom. The move aims to make the technology more accessible for offices and conference rooms, potentially transforming the way people communicate and collaborate remotely.
Social Networks

Reddit Grows, Seeks More AI Deals, Plans 'Award' Shops, and Gets Sued (yahoo.com)

Reddit reported its first results since going public in late March. Yahoo Finance reports: Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it's a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.
Bloomberg notes that Reddit has already "signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year."

And elsewhere Bloomberg writes that Reddit "plans to expand its revenue streams outside of advertising into what Huffman calls the 'user economy' — users making money from others on the platform..." In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products... Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.
Meanwhile, ZDNet notes that this week a Reddit announcement "introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site." The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there's no limit to what it can do with it. Reddit will continue to block "bad actors" that use unauthorized methods to get data, the company says, but it's taking additional steps to keep users safe from the site's partners.... Reddit still supports using its data for research: It's creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, "If you're interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract." To be clear, Reddit is still selling users' data — it's just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there's some court action, according to the Register. Reddit "was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them." The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting in September 2022... That arrangement called for Reddit to use reasonable means to ensure that LevelFields' ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract...

LevelFields argues that Reddit is in a particularly good position to track click fraud because it's serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic... Nonetheless, LevelFields' effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site "provided click logs without IP addresses," the complaint says. "Reddit represented that it was not able to provide IP addresses."

"The plaintiffs aspire to have their claim certified as a class action," the article adds — along with an interesting statistic.

"According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion."
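As a quick sanity check of that statistic, the two figures imply a total ad spend of roughly $382 billion, assuming the 22 percent is a share of total spending (Juniper's exact methodology is not given in the article):

```python
# Juniper Research figures quoted above: 22% of ad spending lost to
# click fraud, amounting to $84 billion in losses.
fraud_share = 0.22          # fraction of ad spend lost to click fraud
fraud_loss_billions = 84.0  # reported loss, in billions of dollars

# Implied total ad spend (billions), under the assumption that the
# 22% figure is measured against total spend:
implied_total_spend = fraud_loss_billions / fraud_share
print(f"Implied total ad spend: ${implied_total_spend:.0f}B")
```

This prints an implied total of about $382 billion, which is in the same ballpark as commonly cited estimates of global digital ad spending, so the two numbers are at least mutually consistent.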
