Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses that continuously listen to conversations and display their insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their TikTok video about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
Python

New Python Documentary Released On YouTube (youtube.com) 46

"From a side project in Amsterdam to powering AI at the world's biggest companies — this is the story of Python," says the description of a new 84-minute documentary.

Long-time Slashdot reader destinyland writes: It traces Python all the way back to its origins in Amsterdam back in 1991. (Although the first time Guido van Rossum showed his new language to a co-worker, they'd typed one line of code just to prove they could crash Python's first interpreter.) The language slowly spread after van Rossum released it on Usenet — split across 21 separate posts — and Robin Friedrich, a NASA aerospace engineer, remembers using Python to build flight simulations for the Space Shuttle. (Friedrich says in the documentary he also attended Guido's first in-person U.S. workshop in 1994, and "I still have the t-shirt...")

Dropbox CEO and founder Drew Houston describes what it was like at one of the first companies to use Python to reach millions of users. (Another success story was YouTube, which a small team built with Python before Google acquired it.) Anaconda co-founder Travis Oliphant remembers Python's popularity increasing even more thanks to the data science/machine learning community. But the documentary also covers the controversial move to Python 3 (which broke compatibility with earlier versions). Ironically, one of the people slogging through a massive code migration ended up being van Rossum himself, at his new job at Dropbox. The documentary also covers van Rossum's resignation as "Benevolent Dictator for Life" after approving the walrus operator. (In van Rossum's words, he essentially "rage-quit over this issue.")

But the focus is on Python's community. At one point, various interviewees even take turns reciting passages from the "Zen of Python" — which to this day is still hidden in Python as an importable module, a kind of Easter egg.
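The Easter egg mentioned above is easy to verify in any standard Python interpreter: importing the built-in `this` module prints the Zen of Python to stdout as a side effect. A minimal sketch that captures and checks the text:

```python
import io
import contextlib

# "import this" prints the Zen of Python as an import side effect,
# so redirect stdout to capture the text (run in a fresh interpreter,
# where the module hasn't already been imported).
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    import this

zen = buf.getvalue()
print(zen.splitlines()[0])  # -> The Zen of Python, by Tim Peters
assert "Beautiful is better than ugly." in zen
```

As a further wink at the reader, the module stores the poem rot13-encoded in `this.s` and decodes it at import time.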

"It was a massive undertaking", the documentary's director explains in a new interview, describing a full year of interviews. (The article features screenshots from the documentary — including a young Guido van Rossum and the original 1991 email that announced Python to the world.) [Director Bechtle] is part of a group that's filmed documentaries on everything from Kubernetes and Prometheus to Angular, Node.js, and Ruby on Rails... Originally part of the job platform Honeypot, the documentary-makers relaunched in April as Cult.Repo, promising they were "100% independent and more committed than ever to telling the human stories behind technology."
Honeypot's founder Emma Tracey bought back its 272,000-subscriber YouTube channel from Honeypot's new owners, New Work SE, and Cult.Repo now bills itself as "The home of Open Source documentaries."

Over in a thread at Python.org, language creator Guido van Rossum has identified the Python community members in the film's Monty Python-esque poster art. And core developer Hugo van Kemenade notes there's also a video from EuroPython with a 55-minute Q&A about the documentary.
AI

Alibaba Creates AI Chip To Help China Fill Nvidia Void 29

Alibaba, China's largest cloud-computing company, has developed a domestically manufactured, versatile inference chip to fill the gap left by U.S. restrictions on Nvidia's sales in China. The Wall Street Journal reports: Previous cloud-computing chips developed by Alibaba have mostly been designed for specific applications. The new chip, now in testing, is meant to serve a broader range of AI inference tasks, said people familiar with it. The chip is manufactured by a Chinese company, they said, in contrast to an earlier Alibaba AI processor that was fabricated by Taiwan Semiconductor Manufacturing. Washington has blocked TSMC from manufacturing AI chips for China that use leading-edge technology.

[...] Private-sector cloud companies including Alibaba have refrained from bulk orders of Huawei's chips, resisting official suggestions that they should help the national champion, because they consider Huawei a direct rival in cloud services, people close to the firms said. China's biggest weakness is training AI models, for which U.S. companies rely on the most powerful Nvidia products. Alibaba's new chip is designed for inference, not training, people familiar with it said. Chinese engineers have complained that homegrown chips including Huawei's run into problems when training AI, such as overheating and breaking down in the middle of training runs. Huawei declined to comment.
AI

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations (cnbc.com) 17

Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission
AI

Vivaldi Browser Doubles Down On Gen AI Ban 17

Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies."

Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about."

Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."
AI

Meta Created Flirty Chatbots of Celebrities Without Permission 19

Reuters has found that Meta appropriated the names and likenesses of celebrities to create dozens of flirty social-media chatbots without their permission. "While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift 'parody' bots." From the report: Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. "Pretty cute, huh?" the avatar wrote beneath the picture. All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups. Some of the AI-generated celebrity content was particularly risque: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.

Meta spokesman Andy Stone told Reuters that Meta's AI tools shouldn't have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," he said. While Meta's rules also prohibit "direct impersonation," Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren't. Meta deleted about a dozen of the bots, both "parody" avatars and unlabeled ones, shortly before this story's publication.
AI

A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich (wsj.com) 41

A 56-year-old tech industry veteran killed his mother and himself in Old Greenwich, Connecticut on August 5 after months of interactions with ChatGPT that encouraged his paranoid delusions.

Greenwich police discovered Stein-Erik Soelberg and his 83-year-old mother Suzanne Eberson Adams dead in their home. Videos posted by Soelberg documented conversations where ChatGPT repeatedly assured him he was sane while validating his beliefs about surveillance campaigns and poisoning attempts by his mother.

The chatbot told him a Chinese food receipt contained demonic symbols and that his mother's anger over a disconnected printer indicated she was "protecting a surveillance asset." OpenAI has contacted Greenwich police and announced plans for updates to help keep users experiencing mental distress grounded in reality.
The Internet

Engineers Send Quantum Signals With Standard Internet Protocol (phys.org) 27

An anonymous reader quotes a report from Phys.org: In a first-of-its-kind experiment, engineers at the University of Pennsylvania brought quantum networking out of the lab and onto commercial fiber-optic cables using the same Internet Protocol (IP) that powers today's web. Reported in Science, the work shows that fragile quantum signals can run on the same infrastructure that carries everyday online traffic. The team tested their approach on Verizon's campus fiber-optic network. The Penn team's tiny "Q-chip" coordinates quantum and classical data and, crucially, speaks the same language as the modern web. That approach could pave the way for a future "quantum internet," which scientists believe may one day be as transformative as the dawn of the online era.

Quantum signals rely on pairs of "entangled" particles, so closely linked that changing one instantly affects the other. Harnessing that property could allow quantum computers to link up and pool their processing power, enabling advances like faster, more energy-efficient AI or designing new drugs and materials beyond the reach of today's supercomputers. Penn's work shows, for the first time on live commercial fiber, that a chip can not only send quantum signals but also automatically correct for noise, bundle quantum and classical data into standard internet-style packets, and route them using the same addressing system and management tools that connect everyday devices online.
"By showing an integrated chip can manage quantum signals on a live commercial network like Verizon's, and do so using the same protocols that run the classical internet, we've taken a key step toward larger-scale experiments and a practical quantum internet," says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the Science paper's senior author.

"This feels like the early days of the classical internet in the 1990s, when universities first connected their networks," added Robert Broberg, a doctoral student in ESE and co-author of the paper. "That opened the door to transformations no one could have predicted. A quantum internet has the same potential."
AI

Taco Bell's AI Drive-Thru Plan Gets Caught Up On Trolls and Glitches 127

Taco Bell's rollout of AI-powered drive-thru assistants has run into problems, with glitches and trolls gaming the system by making absurd orders like thousands of water cups. It's so bad that the company is reconsidering where and how to deploy the tech, admitting it may not work well in "super busy" restaurants. "We're learning a lot, I'm going to be honest with you," Dane Mathews, Taco Bell's chief digital and technology officer, told the WSJ. "I think like everybody, sometimes it lets me down, but sometimes it really surprises me." The Verge reports: Since announcing plans to put AI in the drive-thru last year, Taco Bell has deployed the tech in over 500 locations across the US, according to the WSJ. Other fast-food chains are experimenting with AI, too, including McDonald's, Wendy's, and White Castle. Mathews tells the outlet that while the company still plans on pushing ahead with AI voice technology and evaluating the data, he's discovered that using AI exclusively in certain situations, like a drive-thru for "super busy restaurants with long lines," might not be such a great idea after all.
Transportation

Amtrak's New 160mph Acela Trains Take Just As Long As the Old Ones (cnbc.com) 102

Amtrak's new 160 mph tilting Acela trains have debuted on the Northeast Corridor, offering smoother rides, upgraded interiors, faster Wi-Fi, and 27% more seating capacity. However, "they don't complete the journey any faster than the old trains," reports The Independent. From the report: Acela runs from Washington, DC's Union Station to Boston via Philadelphia, New York Penn Station, New Haven, and Providence. It's a total distance of 457 miles, with the fastest next-gen Acela journey being six hours and 43 minutes, five minutes slower than the quickest end-to-end time offered by the old Acela trains, introduced in 2000. However, this may be because, as is common practice with new trains the world over, Amtrak is scheduling longer dwell times at stations so staff and passengers can adjust to them. The next-gen sets have a top service speed that's 10mph faster -- though this can only be achieved on certain sections of the mostly 110mph route -- and an enhanced "anticipative" tilting system that allows for higher speeds through curves.
AI

Microsoft Reveals Two In-House AI Models 17

Today, Microsoft unveiled two in-house AI models: MAI-Voice-1, a high-speed speech-generation system now live in Copilot, and MAI-1-Preview, its first end-to-end foundation model trained on 15,000 H100 GPUs. Neowin reports: MAI-Voice-1 is a speech generation model and is already available in Copilot Daily and Podcasts. To preview the full capabilities of this voice model, Microsoft has created a new Copilot Labs experience that anyone can try today. With the Copilot Audio Expressions experience, users can just paste text content and select the voice, style, and mode to generate high-fidelity, expressive audio. They can also download the generated audio if required. Microsoft also highlighted that this MAI-Voice-1 model is very fast and efficient. In fact, it can generate a full minute of audio in under a second on a single GPU.

Second, Microsoft has begun public testing of MAI-1-preview on LMArena, a popular platform for community model evaluation. This represents MAI's first foundation model trained end-to-end and offers a glimpse of future offerings inside Copilot. They are actively spinning the flywheel to deliver improved models and will have much more to share in the coming months. MAI-1-preview is an MoE (mixture-of-experts) model, pre-trained and post-trained on nearly 15,000 NVIDIA H100 GPUs. Notably, MAI-1-preview is Microsoft's first foundation model trained end-to-end in-house. Microsoft claims that this model is better at following instructions and can offer helpful responses to everyday user questions. Microsoft will be rolling out this new model to certain text use cases within Copilot over the coming weeks.
Microsoft

Microsoft's Copilot AI is Now Inside Samsung TVs and Monitors (theverge.com) 69

An anonymous reader shares a report: Microsoft's Copilot AI assistant is officially coming to TVs, starting with Samsung's 2025 lineup of TVs and smart monitors. With the integration, you can call upon Copilot and ask for movie suggestions, spoiler-free episode recaps, and other general questions.

On TV, Copilot takes on a "friendly, animated presence" that resembles the opalescent Copilot Appearance Microsoft showed off last month, though in a color that makes it look more like a personified chickpea. The beige blob will float and bounce around your screen, while its mouth moves in line with its responses.

The Internet

Imgur's Community Is In Full Revolt Against Its Owner (404media.co) 33

Imgur users have flooded the image-hosting site's front page with pictures of John Oliver giving the middle finger to parent company MediaLab AI. The revolt follows staff layoffs that eliminated human moderators and the breakdown of core site functions including video playback for non-logged-in users and failed image uploads.

A former employee confirmed MediaLab AI laid off Imgur's moderation team without notice and reassigned remaining staff to other projects. The company acquired Imgur in 2021 after founder Alan Schaaf departed. MediaLab AI faces lawsuits from Schaaf and other former site owners over allegedly withheld acquisition payments.
AI

Anthropic Will Start Training Its AI Models on Chat Transcripts (theverge.com) 19

Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. The Verge: It's also extending its data retention policy to five years -- again, for users that don't choose to opt out. All users will have to make a decision by September 28th. For users that click "Accept" now, Anthropic will immediately begin training its models on their data and keeping said data for up to five years, according to a blog post published by Anthropic on Thursday.

The setting applies to "new or resumed chats and coding sessions." Even if you do agree to Anthropic training its AI models on your data, it won't do so with previous chats or coding sessions that you haven't resumed. But if you do continue an old chat or coding session, all bets are off.

AI

UK Unions Want 'Worker First' Plan For AI as People Fear For Their Jobs (theregister.com) 55

An anonymous reader shares a report: Over half of the British public are worried about the impact of AI on their jobs, according to employment unions, which want the UK government to adopt a "worker first" strategy rather than simply allowing corporations to ditch employees for algorithms. The Trades Union Congress (TUC), a federation of trade unions in England and Wales, says it found that people are concerned about the way AI is being adopted by businesses and want a say in how the technology is used in their workplaces and across the wider economy.

It warns that without such a "worker-first plan," use of "intelligent" algorithms could lead to even greater social inequality in the country, plus the kind of civil unrest that goes along with that. The TUC says it wants conditions attached to the tens of billions in public money being spent on AI research and development to ensure that workers are supported and retrained rather than deskilled or replaced. It also wants guardrails in place so that workers are protected from "AI harms" at work, rules to ensure workers are involved in deciding how machine learning is used, and for the government to provide support for those who euphemistically "experience job transitions" as a result of AI disruption.

Wikipedia

Wikipedia Editors Reject Founder's AI Review Proposal After ChatGPT Fails Basic Policy Test (404media.co) 37

Wikipedia's volunteer editors have rejected founder Jimmy Wales' proposal to use ChatGPT for article review guidance after the AI tool produced error-filled feedback when Wales tested it on a draft submission. The ChatGPT response misidentified Wikipedia policies, suggested citing non-existent sources and recommended using press releases despite explicit policy prohibitions.

Editors argued automated systems producing incorrect advice would undermine Wikipedia's human-centered model. The conflict follows earlier tensions over the Wikimedia Foundation's AI experiments, including a paused AI summary feature and new policies targeting AI-generated content.
AI

Posthumous AI Avatars Shift From Memorial Tools To Revenue Generators (npr.org) 47

Digital resurrections of deceased individuals are emerging as the next commercial frontier in AI, with the digital afterlife industry projected to reach $80 billion within a decade. Companies developing these AI avatars are exploring revenue models ranging from interstitial advertising during conversations to data collection about users' preferences.

StoryFile CEO Alex Quinn confirmed his company is exploring methods to monetize interactions between users and deceased relatives' digital replicas, including probing for consumer information during conversations. The technology has already demonstrated persuasive capabilities in legal proceedings, where an AI recreation of road rage victim Chris Pelkey delivered testimony that contributed to a maximum sentence. Current implementations operate through subscription models, though no federal regulations govern commercial applications of posthumous AI representations despite state-level protections for deceased individuals' likeness rights.
Canada

Canada's Tech Job Market Has Gone From Boom To Bust In Last Five Years (msn.com) 88

Canada's tech job market has collapsed from its pandemic-era boom, with postings down 19% from 2020 levels. Analysts say the decline was sharper than the overall job market and worsened after ChatGPT's debut in 2022 fueled AI-driven shifts in workforce demand. The Canadian Press reports: "The Canadian tech world remains stuck in a hiring freeze," said Brendon Bernard, Indeed's senior economist. "While both the tech job market and the overall job market have definitely cooled off from their 2022 peaks, the cool off has been much sharper in tech." He thinks the fall was likely caused by the market adjusting after a pandemic boom in hiring along with recent artificial intelligence advances that have reduced tech firms' interest in expanding their workforces.

"We went from this really hot job market with job postings through the roof to one where job postings really crashed, falling well below their pre-pandemic levels," Bernard said. However, he sees AI's recent boom as a "watershed moment." While much of the decline in tech job postings has been in software engineer roles, Indeed found hiring for AI-related jobs was still up compared to early 2020. In fact, machine learning engineers and roles that support AI infrastructure, such as data engineers and data centre technicians, were among the job titles with postings still above early-2020 levels.

At the same time, Indeed saw postings for senior and manager-level tech jobs drop sharply from their 2022 peak, but as of early 2025, they were still up five per cent from their pre-pandemic levels. Meanwhile, basic and junior tech titles were down 25 per cent. When it compared Canada's overall decline in tech job postings, Indeed found the country's decrease from pre-pandemic levels was somewhat milder than the retrenchment it has observed in the U.S., U.K., France and Germany. The U.S. fall amounted to 34 per cent, while in the U.K. it was 41 per cent. France saw a 38 per cent drop and Germany experienced a 29 per cent decrease. "All this just highlights is that this tech hiring freeze is a global tech hiring freeze," Bernard said.

AI

Google Improves Gemini AI Image Editing With 'Nano Banana' Model 23

Google DeepMind's new "nano banana" model (officially named Gemini 2.5 Flash Image) has taken the top spot on AI image-editing leaderboards by delivering far more consistent edits than before. It's being rolled out to the Gemini app today. Ars Technica has the details: AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop. Google first provided editing capabilities in Gemini earlier this year, and the model was more than competent out of the gate. But like all generative systems, the non-deterministic nature meant that elements of the image would often change in unpredictable ways. Google says nano banana (technically Gemini 2.5 Flash Image) has unrivaled consistency across edits -- it can actually remember the details instead of rolling the dice every time you make a change.

This unlocks several interesting uses for AI image editing. Google suggests uploading a photo of a person and changing their style or attire. For example, you can reimagine someone as a matador or a '90s sitcom character. Because the nano banana model can maintain consistency through edits, the results should still look like the person in the original source image. This is also the case when you make multiple edits in a row. Google says that even down the line, the results should look like the original source material.

Gemini's enhanced image editing can also merge multiple images, allowing you to use them as the fodder for a new image of your choosing. Google's example below takes separate images of a woman and a dog and uses them to generate a new snapshot of the dog getting cuddles -- possibly the best use of generative AI yet. Gemini image editing can also merge things in more abstract ways and will follow your prompts to create just about anything that doesn't run afoul of the model's guard rails.
AI

Apple Discussed Buying Mistral AI and Perplexity 6

According to The Information, Apple executives have debated acquiring Mistral AI and Perplexity to strengthen the company's AI capabilities. MacRumors reports: Services chief Eddy Cue is apparently the most vocal advocate of a deal to buy AI firms to bolster the company's offerings. Cue previously supported propositions of Apple acquiring Netflix and Tesla, both of which Apple CEO Tim Cook turned down. Other executives such as software chief Craig Federighi have reportedly been reluctant to acquire AI startups, believing that Apple can build its own AI technology in-house. [...]

Apple is said to be hesitant to do a deal, which would likely cost billions of dollars. Apple rarely spends heavily on acquisitions; its largest to date are Beats at $3 billion and Intel's wireless modem business at $1 billion. If a federal ruling ends the $20 billion deal between Apple and Alphabet that makes Google the default search engine on its devices, the company could be compelled to acquire an AI-powered search startup to fill that gap. For now, Apple apparently told bankers that it plans to continue with its strategy of focusing on smaller deals in AI.
