Intel

Nvidia To Invest $5 Billion in Intel (ft.com) 49

Nvidia has agreed to invest $5 billion in its struggling rival Intel [non-paywalled source] as part of a deal to develop new chips for PCs and data centres, the latest reordering of the tech industry spurred by AI. From a report: The deal comes a month after the US government agreed to take a 10 per cent stake in Intel, as Donald Trump's administration looks to secure the future of American chip manufacturing.

However, the pair's announcement makes no reference to Nvidia using Intel's foundry to produce its chips. Intel, which has struggled to gain a foothold in the booming AI server market, lost its crown as the world's most valuable chipmaker to Nvidia in 2020. On Thursday Jensen Huang, Nvidia's chief executive, hailed a "historic collaboration" and "a fusion of two world-class platforms," combining its graphics processing units, which dominate the market for AI infrastructure, with Intel's general-purpose chips.
Further reading: Intel Weighed $20 Billion Nvidia Takeover in 2005.
AI

DeepSeek Writes Less-Secure Code For Groups China Disfavors 36

Research shows China's top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. "The findings ... underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence," reports the Washington Post. From the report: In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.

Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.

Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.

One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).

A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.
AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her clients' testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Programming

Microsoft Favors Anthropic Over OpenAI For Visual Studio Code (theverge.com) 7

Microsoft is now prioritizing Anthropic's Claude 4 over OpenAI's GPT-5 in Visual Studio Code's auto model feature, signaling a quiet but clear shift in preference. The Verge reports: "Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot," said Julia Liuson, head of Microsoft's developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft's model guidance hasn't changed.

Microsoft is also making "significant investments" in training its own AI models. "We're also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things," said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week.

Microsoft is also reportedly planning to use Anthropic's AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be "partly powered by Anthropic models," after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

AI

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals (arstechnica.com) 75

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking."

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."
Gemini's solutions are available on GitHub.
AI

AI's Ability To Displace Jobs is Advancing Quickly, Anthropic CEO Says (axios.com) 50

The ability of AI displace humans at various tasks is accelerating quickly, Anthropic CEO Dario Amodei said at an Axios event on Wednesday. From the report: Amodei and others have previously warned of the possibility that up to half of white-collar jobs could be wiped out by AI over the next five years. The speed of that displacement could require government intervention to help support the workforce, executives said.

"As with most things, when an exponential is moving very quickly, you can't be sure," Amodei said. "I think it is likely enough to happen that we felt there was a need to warn the world about it and to speak honestly." Amodei said the government may need to step in and support people as AI quickly displaces human work.

AI

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance (theregister.com) 90

AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."

The fundamental problem is that AI models are trained to reward guesswork, rather than the correct answer. Guessing might produce a superficially suitable answer. Telling users your AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the trainers taught the engine to return an answer, rather than admit ignorance. "Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.

AI

Business Insider Reportedly Tells Journalists They Can Use AI To Draft Stories (theverge.com) 17

An anonymous reader shares a report: Business Insider has told journalists they can use AI to create first drafts of stories and suggested it won't notify readers that AI was used, according to Status, a newsletter covering the media industry. The policy makes the outlet one of the first to formally allow such extensive use of the technology.

The AI guidelines were reportedly circulated in an internal memo from editor-in-chief Jamie Heller on Thursday. The policy authorized journalists to deploy AI "like any other tool" for tasks like research and image editing, Status reported.

United States

Anthropic Denies Federal Agencies Use of Claude for Surveillance Tasks (semafor.com) 19

Anthropic has declined requests from federal law enforcement contractors to use its Claude AI models for surveillance activities, deepening tensions with the Trump administration, Semafor reported Wednesday, citing two senior officials. The company's usage policies prohibit domestic surveillance, limiting how agencies including the FBI, Secret Service, and Immigration and Customs Enforcement can deploy its technology. While Anthropic maintains a $1 contract with federal agencies through AWS GovCloud and works with the Department of Defense on non-weapons applications, administration officials said the restrictions amount to making moral judgments about law enforcement operations.
China

China Tells Its Tech Companies To Stop Buying All of Nvidia's AI Chips (ft.com) 52

China's internet regulator has told the country's biggest technology companies to stop buying all of Nvidia's artificial intelligence chips and terminate their existing orders, as Beijing steps up efforts to boost its homegrown semiconductor industry and compete with the US. From a report: The Cyberspace Administration of China (CAC) informed companies including ByteDance and Alibaba this week to terminate their testing and orders of the RTX Pro 6000D, Nvidia's tailor-made product for the country introduced two months ago, according to three people with knowledge of the matter.

Several companies had indicated they would order tens of thousands of the RTX Pro 6000D, and had started testing and verification work with Nvidia's server suppliers before telling them to stop the work after receiving the CAC order, said the people.
Nvidia CEO responds: In response to a question on the FT report, Huang said Wednesday that "we can only be in service of a market if the country wants us to be."

"We probably contributed more to the China market than most countries have. And I'm disappointed with what I see," Huang said. "But they have larger agendas to work out between China and the United States, and I'm understanding of that."

It comes after a tumultuous few years for Nvidia's business in China, which Huang described as "a bit of a rollercoaster."

"We've guided all financial analysts not to include China" in financial forecasts, Huang told reporters Wednesday at a press briefing in London. "The reason for that is because that's largely going to be within the discussions of the United States government and Chinese government."

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, more strict and invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide of self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the users' parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right which sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," Open AI said in its announcement. The position it seemed to have landed on given these recent stories about teen suicide, is that it wants to "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."
Microsoft

Microsoft Announces $30 Billion Investment In AI Infrastructure, Operations In UK 22

Microsoft will invest $30 billion in the U.K. through 2028 to expand AI infrastructure and operations, including building the country's largest supercomputer with 23,000 GPUs in partnership with Nscale. CNBC reports: On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its attempt in 2023 to block the tech giant's $69 billion acquisition of video game developer Activision-Blizzard. The deal was cleared by the U.K.s competition regulator later that year.

"I haven't always been optimistic every single day about the business climate in the U.K.," Smith said. However, he added, "I am very encouraged by the steps that the government has taken over the last few years." "Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn't the need or demand for this kind of large AI investment," Smith said.
Microsoft's announcement comes as President Donald Trump embarks on a state visit to Britain where he's expected to sign a new deal with U.K. Prime Minister Keir Starmer "to unlock investment and collaboration in AI, Quantum, and Nuclear technologies," the government said in a statement late Tuesday.
AI

Another Lawsuit Blames an AI Company of Complicity In a Teenager's Suicide 63

A third wrongful death lawsuit has been filed against Character AI after the suicide of 13-year-old Juliana Peralta, whose parents allege the chatbot fostered dependency without directing her to real help. "This is the third suit of its kind after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy commit suicide," notes Engadget. From the report: The family of 13-year-old Juliana Peralta alleges that their daughter turned to a chatbot inside the app Character AI after feeling isolated by her friends, and began confiding in the chatbot. As originally reported by The Washington Post, the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot.

In one exchange after Juliana shared that her friends take a long time to respond to her, the chatbot replied "hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of "I don't have time for you". But you always take time to be there for me, which I appreciate so much! : ) So don't forget that i'm here for you Kin.

These exchanges took place over the course of months in 2023, at a time when the Character AI app was rated 12+ in Apple's App Store, meaning parental approval was not required. The lawsuit says that Juliana was using the app without her parents' knowledge or permission. [...] The suit asks the court to award damages to Juliana's parents and requires Character to make changes to its app to better protect minors. It alleges that the chatbot did not point Juliana toward any resources, notify her parents or report her suicide plan to authorities. The lawsuit also highlights that it never once stopped chatting with Juliana, prioritizing engagement.
Businesses

Zoom CEO Latest Executive To Forecast Shortened Workweeks From AI Adoption (fortune.com) 52

AI will enable three to four-day workweeks, Zoom CEO Eric Yuan told The New York Times, joining Microsoft's Bill Gates, Nvidia's Jensen Huang and JPMorgan's Jamie Dimon in predicting shorter schedules. Yuan also acknowledged AI will eliminate some positions, particularly entry-level engineering roles where AI can write code, but argued new opportunities will emerge managing AI agents. Gates previously suggested two to three-day weeks within 10 years during a February appearance on The Tonight Show.
Desktops (Apple)

The Mac App Flea Market 40

A search for "AI chat" in the Mac App Store returns dozens of applications sporting black-and-white icons nearly identical to ChatGPT's official logo. OpenAI's ChatGPT desktop application isn't available through the Mac App Store and can only be downloaded from the company's website. The copycat applications use various combinations of "AI," "Chat," and "Bot" in their names, including "AI Chat Bot : Ask Assistant," "AI Chatbot: Chat Ask Assistant," and dozens of similar variations. One application named itself "Al Chatbot" using a lowercase L instead of a capital I in "AI." Additional lookalike icons mimicking Claude, Grok, and Gemini applications also appear in search results.
Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 23

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. No one has bothered to figure out the degree to which that alters the scaling laws of AI models until now. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
The work the team has done here has led to a new Google model called VaultGemma, its first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, which the company says performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.
Businesses

Online Marketplace Fiverr To Lay Off 30% of Workforce In AI Push 41

Fiverr is laying off 250 employees, or about 30% of its workforce, as it restructures to become an "AI-first" company. "We are launching a transformation for Fiverr, to turn Fiverr into an AI-first company that's leaner, faster, with a modern AI-focused tech infrastructure, a smaller team, each with substantially greater productivity, and far fewer management layers," CEO Micha Kaufman said. Reuters reports: While it isn't clear what kinds of jobs will be impacted, Fiverr operates a self-service digital marketplace where freelancers can connect with businesses or individuals requiring digital services like graphic design, editing or programming. Most processes on the platform take place with minimal employee intervention as ordering, delivery and payments are automated.

The company's name comes from most gigs starting at $5 initially, but as the business grew, the firm has introduced subscription services and raised the bar for service prices. Fiverr said it does not expect the job cuts to materially impact business activities across the marketplace in the near term and plans to reinvest part of the savings in the business.
AI

OpenAI's First Study On ChatGPT Usage (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Today, OpenAI's Economic Research Team went a long way toward answering that question, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Denning) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far. After digging through the dense 65-page paper, here are seven of the most interesting and/or surprising things we discovered about how people are using OpenAI today. Here are the seven most interesting and surprising findings from the study:

1. ChatGPT is now used by "nearly 10% of the world's adult population," up from 100 million users in early 2024 to over 700 million users in 2025. Daily traffic is about one-fifth of Google's at 2.6 billion GPT messages per day.

2. Long-term users' daily activity has plateaued since June 2025. Almost all recent growth comes from new sign-ups experimenting with ChatGPT, not from established users increasing their usage.

3. 46% of users are aged 18-25, making ChatGPT especially popular among the youngest adult cohort. Factoring in under-18 users (not counted in the study), the majority of ChatGPT users likely weren't alive in the 20th century.

4. At launch in 2022, ChatGPT was 80% male-dominated. By late 2025, the balance has shifted: 52.4% of users are now female.

5. In 2024, work vs. personal use was close to even. By mid-2025, 72% of usage is non-work related -- people are using ChatGPT more for personal, creative, and casual needs than for productivity.

6. 28% of all conversations involve writing assistance (emails, edits, translations). For work-related queries, that jumps to 42% overall, and 52% among business/management jobs. Furthermore, the report found that editing and critiquing text is more common than generating text from scratch.

7. 14.9% of work-related usage is dealt with "making decisions and solving problems." This shows people don't just use ChatGPT to do tasks -- they use it as an advisor or co-pilot to help weigh options and guide choices.
Facebook

'Meta Ray-Ban Display' Glasses Design, HUD Clips Leak (uploadvr.com) 25

A leaked Meta video revealed upcoming "Meta Ray-Ban Display" smart glasses with a monocular HUD and sEMG wristband control, set to debut at Connect 2025 for around $800. Despite past hesitation, it looks like EssilorLuxottica has agreed to co-brand after Meta invested $3.5 billion in the company, taking a 3% stake. UploadVR reports: Meta's HUD glasses with the sEMG wristband will in fact be Ray-Ban branded, a leaked video which also depicts the HUD and wristband in action reveals. A quickly removed unlisted video on Meta's YouTube channel showed what will soon be Meta and EssilorLuxottica's full lineup:

- The regular Ray-Ban Meta glasses.
- The recently-launched Oakley Meta HSTN glasses.
- The rumored Oakley Meta Sphaera glasses, with eye protection and a centered camera.
- The rumored monocular heads-up display (HUD) glasses controlled by Meta's long-in-development sEMG wristband, which are labeled as "Meta Ray-Ban" with the word "Display" underneath.
The smart glasses are expected to be made official during the Meta Connect 2025 keynote at 5pm PT on Wednesday.
The Almighty Buck

Robinhood Plans To Launch a Startups Fund Open To All Retail Investors (techcrunch.com) 21

Robinhood has filed with the SEC to launch "Robinhood Ventures Fund I," a publicly traded fund designed to give retail investors access to startup shares before IPOs. TechCrunch reports: While the current version of the application is public, Robinhood hasn't filled in the fine-print yet. This means we don't know how many shares it plans to sell, nor other details like the management fee it plans to charge. It's also unclear which startups it hopes this fund will eventually hold. The paperwork says it "expects" to invest in aerospace and defense, AI, fintech, robotics as well as software for consumers and enterprises.

Robinhood's big pitch is that retail investors are being left out of the gains that are amassed by startup investors like VCs. That's true to an extent. "Accredited investors" -- or those with a net worth large enough to handle riskier investments -- already have a variety of ways of buying equity in startups, such as with venture firms like OurCrowd. Retail investors that are not rich enough to be accredited have more limited options. There are funds similar to what Robinhood has proposed, including Cathy Wood's ARK Venture Fund, a mutual fund which holds stakes in companies like Anthropic, Databricks, OpenAI, SpaceX, and others. [...] This new closed-end "Ventures Fund I" is a more classic, mutual fund-style, approach. As to when Robinhood's new fund will be available we don't know that either yet.

Slashdot Top Deals