Microsoft

Microsoft Is Calling Too Many Things 'Copilot,' Watchdog Says (businessinsider.com) 49

An anonymous reader shares a report: Microsoft has a long history of being criticized for coming up with clunky product names, and for changing them so often it's hard for customers to keep up. The company's own employees once joked in a viral video that the iPod would have been called the "Microsoft I-pod Pro 2005 XP Human Ear Professional Edition with Subscription" had it been created by Microsoft. The latest gripe among some employees and customers: The company's tendency to slap "Copilot" on everything AI.

"There is a delusion on our marketing side where literally everything has been renamed to have Copilot it in," one employee told Business Insider late last year. "Everything is Copilot. Nothing else matters. They want a Copilot tie-in for everything." Now, an advertising watchdog is weighing in. The Better Business Bureau's National Advertising Division reviewed Microsoft's advertising for its Copilot AI tools. NAD called out Microsoft's "universal use of the product description as 'Copilot'" and said "consumers would not necessarily understand the difference," according to a recent report from the watchdog.

"Microsoft is using 'Copilot' across all Microsoft Office applications and Business Chat, despite differences in functionality and the manual steps that are required for Business Chat to produce the same results as Copilot in a specific Microsoft Office app," NAD further explained in an email to BI. NAD did not mention any specific recommendations on product names. But it did say Microsoft should modify claims that Copilot works "seamlessly across all your data" because all of the company's tools with the Copilot moniker don't work together continuously in a way consumers might expect.

Government

California AI Policy Report Warns of 'Irreversible Harms' 52

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
China

Why China is Giving Away Its Tech For Free 39

An anonymous reader shares a report: [...] the rise in China of open technology, which relies on transparency and decentralisation, is awkward for an authoritarian state. If the party's patience with open-source fades, and it decides to exert control, that could hinder both the course of innovation at home, and developers' ability to export their technology abroad.

China's open-source movement first gained traction in the mid-2010s. Richard Lin, co-founder of Kaiyuanshe, a local open-source advocacy group, recalls that most of the early adopters were developers who simply wanted free software. That changed when they realised that contributing to open-source projects could improve their job prospects. Big firms soon followed, with companies like Huawei backing open-source work to attract talent and cut costs by sharing technology.

Momentum gathered in 2019 when Huawei was, in effect, barred by America from using Android. That gave new urgency to efforts to cut reliance on Western technology. Open-source offered a faster way for Chinese tech firms to take existing code and build their own programs with help from the country's vast community of developers. In 2020 Huawei launched OpenHarmony, a family of open-source operating systems for smartphones and other devices. It also joined others, including Alibaba, Baidu and Tencent, to establish the OpenAtom Foundation, a body dedicated to open-source development. China quickly became not just a big contributor to open-source programs, but also an early adopter of open-source software. JD.com, an e-commerce firm, was among the first to deploy Kubernetes.

AI has lately given China's open-source movement a further boost. Chinese companies, and the government, see open models as the quickest way to narrow the gap with America. DeepSeek's models have generated the most interest, but Qwen, developed by Alibaba, is also highly rated, and Baidu has said it will soon open up the model behind its Ernie chatbot.
Businesses

AI Will Shrink Amazon's Workforce In the Coming Years, CEO Jassy Says 36

In a memo to employees on Tuesday, Amazon CEO Andy Jassy said that the company's corporate workforce will shrink in the coming years as it adopts more generative AI tools and agents. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," Jassy said. "It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce." CNBC reports: Jassy wrote that employees should learn how to use AI tools and experiment and figure out "how to get more done with scrappier teams." The directive comes as Amazon has laid off more than 27,000 employees since 2022 and made several cuts this year. Amazon cut about 200 employees in its North America stores unit in January and a further 100 in its devices and services unit in May. Amazon had 1.56 million full-time and part-time employees in its global workforce as of the end of March, according to financial filings. The company also employs temporary workers in its warehouse operations, along with some contractors.

Amazon is using generative AI broadly across its internal operations, including in its fulfillment network where the technology is being deployed to assist with inventory placement, demand forecasting and the efficiency of warehouse robots, Jassy said. [...] In his most recent letter to shareholders, Jassy called generative AI a "once-in-a-lifetime reinvention of everything we know." He added that the technology is "saving companies lots of money," and stands to shift the norms in coding, search, financial services, shopping and other areas. "It's moving faster than almost anything technology has ever seen," Jassy said.
Businesses

OpenAI Weighs 'Nuclear Option' of Antitrust Complaint Against Microsoft (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: OpenAI executives have discussed filing an antitrust complaint with US regulators against Microsoft, the company's largest investor, The Wall Street Journal reported Monday, marking a dramatic escalation in tensions between the two long-term AI partners. OpenAI, which develops ChatGPT, has reportedly considered seeking a federal regulatory review of the terms of its contract with Microsoft for potential antitrust law violations, according to people familiar with the matter. The potential antitrust complaint would likely argue that Microsoft is using its dominant position in cloud services and contractual leverage to suppress competition, according to insiders who described it as a "nuclear option," the WSJ reports.

The move could unravel one of the most important business partnerships in the AI industry -- a relationship that started with a $1 billion investment by Microsoft in 2019 and has grown to include billions more in funding, along with Microsoft's exclusive rights to host OpenAI models on its Azure cloud platform. The friction centers on OpenAI's efforts to transition from its current nonprofit structure into a public benefit corporation, a conversion that needs Microsoft's approval to complete. The two companies have not been able to agree on details after months of negotiations, sources told Reuters. OpenAI's existing for-profit arm would become a Delaware-based public benefit corporation under the proposed restructuring.

The companies are discussing revising the terms of Microsoft's investment, including the future equity stake it will hold in OpenAI. According to The Information, OpenAI wants Microsoft to hold a 33 percent stake in a restructured unit in exchange for foregoing rights to future profits. The AI company also wants to modify existing clauses that give Microsoft exclusive rights to host OpenAI models in its cloud. The restructuring debate attracted criticism from multiple quarters. Elon Musk alleges that OpenAI violated contract provisions by prioritizing profit over the public good in its push to advance AI and has sued to block the conversion. In December, Meta Platforms also asked California's attorney general to block OpenAI's conversion to a for-profit company.

AI

Salesforce Announces 6% Price Increase as It Pushes AI Features (salesforce.com) 22

Salesforce will raise prices by an average of 6% across its Enterprise and Unlimited Editions starting August 1, 2025, while simultaneously launching new AI-focused product tiers that significantly expand the cost structure for its platform. The price increases will affect Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, though the company's Foundations, Starter, and Pro Editions will remain unchanged, the company said Tuesday.

Salesforce is justifying the move by citing "significant ongoing innovation and customer value delivered through our products." The company is also rolling out new Agentforce add-ons starting at $125 per user monthly, which provide unlimited AI agent usage for employees, while premium Agentforce 1 Editions begin at $550 per user monthly and include comprehensive AI capabilities plus cloud-specific features. Slack pricing has also been restructured, with the Business+ plan now costing $15 per user monthly and a new Enterprise+ tier added, though basic Slack access will be free for all Salesforce customers.
Firefox

'Firefox Is Dead To Me' (theregister.com) 240

Veteran columnist Steven J. Vaughan-Nichols declared that Firefox was "dead" to him in a scathing opinion piece Tuesday that cites Mozilla's strategic missteps and the browser's declining technical performance as evidence of terminal decline. Vaughan-Nichols argues that Mozilla has fundamentally betrayed user trust by removing a longstanding promise never to sell personal data from its privacy policy in February, replacing it with a weaker pledge to "protect your personal information."

The veteran technology writer also criticized Mozilla's decision to discontinue Pocket, a popular article-saving service, and Fakespot, which identified fake online reviews, while pursuing what he called a misguided AI strategy. He cited user reports of Firefox running up to 30% slower than Chrome, consuming excessive memory, and failing to properly load major websites. Mozilla has also become financially more vulnerable, he argued, noting CFO Eric Muhlheim's admission that the company depends on Google for 90% of its revenue. According to federal data he cited, Firefox holds just 1.9% of the browser market, leading him to conclude the browser is "done."
AI

AI Use at Work Nearly Doubles in Two Years (gallup.com) 34

AI use among U.S. workers has nearly doubled over two years, with 40% of employees now using artificial intelligence tools at least a few times annually, up from 21% in 2023, according to new Gallup research.

Daily AI usage has doubled in the past year alone, jumping from 4% to 8% of workers. The growth concentrates heavily among white-collar employees, where 27% report frequent AI use compared to just 9% of production and front-line workers.
AI

How Do Olympiad Medalists Judge LLMs in Competitive Programming? 23

A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC and IOI contests, shows the best frontier model clears just 53% of medium-difficulty tasks on its first attempt and none of the hard ones, while grandmaster-level humans routinely solve at least some of those highest-tier problems.

The researchers measured models and humans on the same Elo scale used by Codeforces and found that OpenAI's o4-mini-high, when stripped of terminal tools and limited to one try per task, lands at an Elo rating of 2,116 -- hundreds of points below the grandmaster cutoff and roughly the top 1.5 percent among human contestants. A granular tag-by-tag autopsy identified implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone; observation-driven puzzles such as game-theory endgames and trick-greedy constructs remain stubborn roadblocks.
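For context on what that rating means, Codeforces-style ratings follow the standard Elo formula, which converts a rating gap into an expected head-to-head score. The sketch below is a generic illustration of that formula, not code from the LiveCodeBench Pro paper; the 2,400 figure is the usual Codeforces Grandmaster threshold, assumed here for illustration.

```python
# Illustrative only: the standard Elo expected-score formula used by
# Codeforces-style rating systems, not code from the LiveCodeBench Pro study.
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that a contestant rated rating_a outscores one rated rating_b."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# o4-mini-high's reported 2,116 against the usual 2,400 Grandmaster cutoff
# (the cutoff is an assumption for illustration, not a figure from the paper):
print(f"{elo_expected_score(2116, 2400):.1%}")  # ~16.3%
```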

Because the dataset is harvested in real time as contests conclude, the authors argue it minimizes training-data leakage and offers a moving target for future systems. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
Social Networks

Social Media Now Main Source of News In US, Research Suggests (bbc.com) 169

An anonymous reader quotes a report from the BBC: Social media and video networks have become the main source of news in the US, overtaking traditional TV channels and news websites, research suggests. More than half (54%) of people get news from networks like Facebook, X and YouTube -- overtaking TV (50%) and news sites and apps (48%), according to the Reuters Institute. "The rise of social media and personality-based news is not unique to the United States, but changes seem to be happening faster -- and with more impact -- than in other countries," a report found. Podcaster Joe Rogan was the most widely-seen personality, with almost a quarter (22%) of the population saying they had come across news or commentary from him in the previous week. The report's author Nic Newman said the rise of social video and personality-driven news "represents another significant challenge for traditional publishers." Other key findings from the report include:
- TikTok is the fastest-growing social and video platform, now used for news by 17% globally (up 4% from last year).
- AI chatbot use for news is increasing, especially among under-25s, where it's twice as popular as in the general population.
- Most people believe AI will reduce transparency, accuracy, and trust in news.
- Across all age groups, trusted news brands with proven accuracy remain valued, even if used less frequently.
AI

Salesforce Study Finds LLM Agents Flunk CRM and Confidentiality Tests 21

A new Salesforce-led study found that LLM-based AI agents struggle with real-world CRM tasks, achieving only 58% success on simple tasks and dropping to 35% on multi-step ones. They also demonstrated poor confidentiality awareness. "Agents demonstrate low confidentiality awareness, which, while improvable through targeted prompting, often negatively impacts task performance," a paper published at the end of last month said. The Register reports: The Salesforce AI Research team argued that existing benchmarks failed to rigorously measure the capabilities or limitations of AI agents, and largely ignored an assessment of their ability to recognize sensitive information and adhere to appropriate data handling protocols.

The research unit's CRMArena-Pro tool is fed a data pipeline of realistic synthetic data to populate a Salesforce organization, which serves as the sandbox environment. The agent takes user queries and, at each turn, decides between making an API call and responding to the user, either to ask for clarification or to provide an answer.
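The excerpt doesn't include the benchmark's actual scaffolding, but the turn-by-turn loop it describes -- choose between a sandboxed API call and a reply to the user -- looks roughly like the minimal sketch below. Every name here (call_llm, execute_crm_api, the JSON action schema) is a hypothetical placeholder, not CRMArena-Pro's real interface.

```python
# Minimal sketch of the agent loop described above, assuming a hypothetical
# LLM backend and CRM sandbox; none of these names are CRMArena-Pro's actual API.
import json

def call_llm(messages: list[dict]) -> str:
    """Placeholder: returns the model's next action as a JSON string."""
    raise NotImplementedError

def execute_crm_api(name: str, args: dict) -> dict:
    """Placeholder: runs one API call against the synthetic Salesforce sandbox."""
    raise NotImplementedError

def run_agent(user_query: str, max_turns: int = 10) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_turns):
        action = json.loads(call_llm(messages))
        if action["type"] == "api_call":
            # The agent asked for data: execute the call and feed the result back.
            result = execute_crm_api(action["name"], action.get("args", {}))
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:
            # The agent chose to answer or to ask the user for clarification.
            return action["content"]
    return "No answer within the turn limit."
```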

"These findings suggest a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios," the paper said. [...] AI agents might well be useful, however, organizations should be wary of banking on any benefits before they are proven.
AI

OpenAI, Growing Frustrated With Microsoft, Has Discussed Making Antitrust Complaints To Regulators (wsj.com) 19

Tensions between OpenAI and Microsoft over the future of their famed AI partnership are flaring up. WSJ, minutes ago: OpenAI wants to loosen Microsoft's grip on its AI products and computing resources, and secure the tech giant's blessing for its conversion into a for-profit company. Microsoft's approval of the conversion is key to OpenAI's ability to raise more money and go public.

But the negotiations have been so difficult that in recent weeks, OpenAI's executives have discussed what they view as a nuclear option: accusing Microsoft of anticompetitive behavior during their partnership, people familiar with the matter said. That effort could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign, the people said.

Windows

LibreOffice Explains 'Real Costs' of Upgrading to Microsoft's Windows 11, Urges Taking Control with Linux (documentfoundation.org) 221

KDE isn't the only organization reaching out to Windows 10 users as Microsoft prepares to end support for the operating system.

"Now, The Document Foundation, maker of LibreOffice, has also joined in to support the Endof10 initiative," reports the tech blog Neowin: The foundation writes: "You don't have to follow Microsoft's upgrade path. There is a better option that puts control back in the hands of users, institutions, and public bodies: Linux and LibreOffice. Together, these two programmes offer a powerful, privacy-friendly and future-proof alternative to the Windows + Microsoft 365 ecosystem."

It further spells out the "real costs" of upgrading to Windows 11, writing:

"The move to Windows 11 isn't just about security updates. It increases dependence on Microsoft through aggressive cloud integration, forcing users to adopt Microsoft accounts and services. It also leads to higher costs due to subscription and licensing models, and reduces control over how your computer works and how your data is managed. Furthermore, new hardware requirements will render millions of perfectly good PCs obsolete.... The end of Windows 10 does not mark the end of choice, but the beginning of a new era. If you are tired of mandatory updates, invasive changes, and being bound by the commercial choices of a single supplier, it is time for a change. Linux and LibreOffice are ready — 2025 is the right year to choose digital freedom!"

The first words on LibreOffice's announcement? "The countdown has begun...."
Youtube

Fake Bands and Artificial Songs are Taking Over YouTube and Spotify (elpais.com) 137

Spain's newspaper El Pais found an entire fake album on YouTube titled Rumba Congo (1973). And they cite a study from France's International Confederation of Societies of Authors and Composers that estimated revenue from AI-generated music will rise to $4 billion in 2028, generating 20% of all streaming platforms' revenue: One of the major problems with this trend is the lack of transparency. María Teresa Llano, an associate professor at the University of Sussex who studies the intersection of creativity, art and AI, emphasizes this aspect: "There's no way for people to know if something is AI or not...." On Spotify Community — a forum for the service's users — a petition is circulating that calls for clear labeling of AI-generated music, as well as an option for users to block these types of songs from appearing on their feeds. In some of these forums, the rejection of AI-generated music is palpable.

Llano mentions the feelings of deception or betrayal that listeners may experience, but asserts that this is a personal matter. There will be those who feel this way, as well as those who admire what the technology is capable of... One of the keys to tackling the problem is to include a warning on AI-generated songs. YouTube states that content creators must "disclose to viewers when realistic content [...] is made with altered or synthetic media, including generative AI." Users will see this if they glance at the description. But this is only when using the app, because on a computer, they will have to scroll down to the very end of the description to get the warning....

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

YouTube says they may label AI-generated content if they become aware of it, and may also remove it altogether, according to the article. But Spotify "hasn't shared any policy for labeling AI-powered content..." In an interview, Gustav Söderström, Spotify's co-president and chief product & technology officer, emphasized that AI "increases people's creativity" because more people can be creative, thanks to the fact that "you don't need to have fine motor skills on the piano." He also made a distinction between music generated entirely with AI and music in which the technology is only partially used. But the only limit he mentioned for moderating artificial music was copyright infringement... something that has been a red line for any streaming service for many years now. And such a violation is very difficult to legally prove when artificial intelligence is involved.
IT

Amazon's Return-to-Office Mandate Sparks Complaints from Disabled Employees (yahoo.com) 85

An anonymous reader shared this report from Bloomberg: Amazon's hard-line stance on getting disabled employees to return to the office has sparked a backlash, with workers alleging the company is violating the Americans with Disabilities Act as well as their rights to collectively bargain. At least two Amazon employees have filed complaints with the Equal Employment Opportunity Commission (EEOC) and the National Labor Relations Board, federal agencies that regulate working conditions. One of the workers said they provided the EEOC with a list of 18 "similarly situated" employees to emphasize that their experience isn't isolated and to help federal regulators with a possible investigation.

Disabled workers frustrated with how Amazon is handling their requests for accommodations — including exemptions to a mandate that they report to the office five days a week — are also venting their displeasure on internal chat rooms and have encouraged colleagues to answer surveys about the policies. Amazon has been deleting such posts and warning that they violate rules governing internal communications. One employee said they were terminated and another said they were told to find a different position after advocating for disabled workers on employee message boards. Both filed complaints with the EEOC and NLRB.

Amazon has told employees with disabilities they must now submit to a "multilevel leader review," Bloomberg reported in October, "and could be required to return to the office for monthlong trials to determine if accommodations meet their needs." (They received calls from "accommodation consultants" who also reviewed medical documentation, after which "another Amazon manager must sign off. If they don't, the request goes to a third manager...")

Bloomberg's new article recalls how several employees told them in November "that they believed the system was designed to deny work-from-home accommodations and prompt employees with disabilities to quit, which some have done. Amazon denied the system was designed to encourage people to resign." Since then, workers have mobilized against the policy. One employee repeatedly posted an online survey seeking colleagues' reactions, defying the company's demands to stop. The survey ultimately generated feedback from more than 200 workers even though Amazon kept deleting it, and the results reflected strong opposition to Amazon's treatment of disabled workers. More than 71% of disabled Amazon employees surveyed said the company had denied or failed to meet most of their accommodation requests, while half indicated they faced "hostile" work environments after disclosing their disabilities and requesting accommodations.

One respondent said they sought permission to work from home after suffering multiple strokes that prevented them from driving. Amazon suggested moving closer to the office and taking mass transit, the person said in the survey. Another respondent said they couldn't drive for longer than 15-minute intervals due to chronic pain. Amazon's recommendation was to pull over and stretch during their commute, which the employee said was unsafe since they drive on a busy freeway... Amazon didn't dispute the accounts and said it considered a range of solutions to disability accommodations, including changes to an employee's commute.

Amazon is also "using AI to parse accommodation requests, read doctors' notes and make recommendations based on keywords," according to the article — another policy that's also generated internal opposition (and formed a "key element" of the complaint to the Equal Employment Opportunity Commission).

"The dispute could affect thousands of Amazon workers. An internal Slack channel for employees with disabilities has 13,000 members, one of the people said..."
AI

Meta's Llama 3.1 Can Recall 42% of the First Harry Potter Book (understandingai.org) 85

Timothy B. Lee has written for the Washington Post, Vox.com, and Ars Technica — and now writes a Substack blog called "Understanding AI."

This week he visits recent research by computer scientists and legal scholars from Stanford, Cornell, and West Virginia University that found that Llama 3.1 70B (released in July 2024) has memorized 42% of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time... The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models — three from Meta and one each from Microsoft and EleutherAI — were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright... Llama 3.1 70B — a mid-sized model Meta released in July 2024 — is far more likely to reproduce Harry Potter text than any of the other four models....
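As a rough illustration of what "reproduce 50-token excerpts" means in practice, the sketch below feeds a causal language model a 50-token prefix from a passage and checks whether greedy decoding emits the next 50 tokens verbatim. It is a simplified stand-in, assuming a Hugging Face model: the study measures how often the excerpt is reproduced rather than taking a single greedy sample, and the model name and helper function here are illustrative, not the researchers' code.

```python
# Simplified memorization probe (an illustration, not the paper's exact protocol):
# give the model the first 50 tokens of a passage and test whether greedy decoding
# reproduces the next 50 tokens exactly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-70B"  # any causal LM works; a 70B model needs multiple GPUs
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16, device_map="auto")

def reproduces_excerpt(passage: str, prefix_len: int = 50, target_len: int = 50) -> bool:
    ids = tokenizer(passage, return_tensors="pt").input_ids.to(model.device)
    prefix = ids[:, :prefix_len]
    target = ids[:, prefix_len:prefix_len + target_len]
    out = model.generate(prefix, max_new_tokens=target_len, do_sample=False)
    return torch.equal(out[:, prefix_len:prefix_len + target_len], target)
```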

Interestingly, Llama 1 65B, a similar-sized model released in February 2023, had memorized only 4.4 percent of Harry Potter and the Sorcerer's Stone. This suggests that despite the potential legal liability, Meta did not do much to prevent memorization as it trained Llama 3. At least for this book, the problem got much worse between Llama 1 and Llama 3. Harry Potter and the Sorcerer's Stone was one of dozens of books tested by the researchers. They found that Llama 3.1 70B was far more likely to reproduce popular books — such as The Hobbit and George Orwell's 1984 — than obscure ones. And for most books, Llama 3.1 70B memorized more than any of the other models...

For AI industry critics, the big takeaway is that — at least for some models and some books — memorization is not a fringe phenomenon. On the other hand, the study only found significant memorization of a few popular books. For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That's a tiny fraction of the 42 percent figure for Harry Potter... To certify a class of plaintiffs, a court must find that the plaintiffs are in largely similar legal and factual situations. Divergent results like these could cast doubt on whether it makes sense to lump J.K. Rowling, Richard Kadrey, and thousands of other authors together in a single mass lawsuit. And that could work in Meta's favor, since most authors lack the resources to file individual lawsuits.

Why is it happening? "Maybe Meta had trouble finding 15 trillion distinct tokens, so it trained on the Books3 dataset multiple times. Or maybe Meta added third-party sources — such as online Harry Potter fan forums, consumer book reviews, or student book reports — that included quotes from Harry Potter and other popular books..."

"Or there could be another explanation entirely. Maybe Meta made subtle changes in its training recipe that accidentally worsened the memorization problem."
AI

Facial Recognition Error Sees Woman Wrongly Accused of Theft (bbc.com) 60

A chain of stores called Home Bargains installed facial recognition software to spot returning shoplifters. Unfortunately, "Facewatch" made a mistake.

"We acknowledge and understand how distressing this experience must have been," an anonymous Facewatch spokesperson tells the BBC, adding that the store using their technology "has since undertaken additional staff training."

A woman was accused by a store manager of stealing about £10 (about $13) worth of items ("Everyone was looking at me"). And then it happened again at another store when she was shopping with her 81-year-old mother on June 4th: "As soon as I stepped my foot over the threshold of the door, they were radioing each other and they all surrounded me and were like 'you need to leave the store'," she said. "My heart sunk and I was anxious and bothered for my mum as well because she was stressed...."

It was only after repeated emails to both Facewatch and Home Bargains that she eventually found there had been an allegation of theft of about £10 worth of toilet rolls on 8 May. Her picture had somehow been circulated to local stores alerting them that they should not allow her entry. Ms. Horan said she checked her bank account to confirm she had indeed paid for the items before Facewatch eventually responded to say a review of the incident showed she had not stolen anything. "Because I was persistent I finally got somewhere but it wasn't easy, it was really stressful," she said. "My anxiety was really bad — it really played with my mind, questioning what I've done for days. I felt anxious and sick. My stomach was turning for a week."

In one email from Facewatch seen by the BBC, the firm told Ms Horan it "relies on information submitted by stores" and the Home Bargains branches involved had since been "suspended from using the Facewatch system". Madeleine Stone, senior advocacy officer at the civil liberties campaign group Big Brother Watch, said they had been contacted by more than 35 people who have complained of being wrongly placed on facial recognition watchlists.

"They're being wrongly flagged as criminals," Ms Stone said.

"They've given no due process, kicked out of stores," adds the senior advocacy officer. "This is having a really serious impact." The group is now calling for the technology to be banned. "Historically in Britain, we have a history that you are innocent until proven guilty but when an algorithm, a camera and a facial recognition system gets involved, you are guilty. The Department for Science, Innovation and Technology said: "While commercial facial recognition technology is legal in the UK, its use must comply with strict data protection laws. Organisations must process biometric data fairly, lawfully and transparently, ensuring usage is necessary and proportionate.

"No one should find themselves in this situation."

Thanks to alanw (Slashdot reader #1,822) for sharing the article.
United States

New York State Begins Asking Employers to Officially Identify Layoffs Caused by AI (entrepreneur.com) 32

The state of New York is "asking companies to disclose whether AI is the reason for their layoffs," reports Entrepreneur: The move applies to New York State's existing Worker Adjustment and Retraining Notification (WARN) system and took effect in March, Bloomberg reported. New York is the first state in the U.S. to add the disclosure, which could help regulators understand AI's effects on the labor market.

The change takes the form of a checkbox added to a form employers fill out at least 90 days before a mass layoff or plant closure through the WARN system. Companies have to select whether "technological innovation or automation" is a reason for job cuts. If they choose that option, they are directed to a second menu where they are asked to name the specific technology responsible for layoffs, like AI or robots.

AI

Site for 'Accelerating' AI Use Across the US Government Accidentally Leaked on GitHub (404media.co) 18

America's federal government is building a website and API called ai.gov to "accelerate government innovation with AI", according to an early version spotted by 404 Media that was posted on GitHub by the U.S. government's General Services Administration.

That site "is supposed to launch on July 4," according to 404 Media's report, "and will include an analytics feature that shows how much a specific government team is using AI..." AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows....

The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services' Bedrock and Meta's LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn't explain what it will do... Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text...

In February, 404 Media obtained leaked audio from a meeting in which [the director of the GSA's Technology Transformation Services] told his team they would be creating "AI coding agents" that would write software across the entire government, and said he wanted to use AI to analyze government contracts.

AI

Do People Actually Want Smart Glasses Now? (cnn.com) 141

It's the technology "Google tried (and failed at) more than a decade ago," writes CNN. (And Meta and Amazon have also previously tried releasing glasses with cameras, speakers and voice assistants.)

Yet this week Snap announced that "it's building AI-equipped eyewear to be released in 2026."

Why the "renewed buzz"? CNN sees two factors:

- Smartphones "are no longer exciting enough to entice users to upgrade often."
- "A desire to capitalize on AI by building new hardware around it." Advancements in AI could make them far more useful than the first time around. Emerging AI models can process images, video and speech simultaneously, answer complicated requests and respond conversationally... And market research indicates the interest will be there this time. The smart glasses market is estimated to grow from 3.3 million units shipped in 2024 to nearly 13 million by 2026, according to ABI Research. The International Data Corporation projects the market for smart glasses like those made by Meta will grow from 8.8 in 2025 to nearly 14 million in 2026....

Apple is also said to be working on smart glasses to be released next year that would compete directly with Meta's, according to Bloomberg. Amazon's head of devices and services Panos Panay also didn't rule out the possibility of camera-equipped Alexa glasses similar to those offered by Meta in a February CNN interview. "But I think you can imagine, there's going to be a whole slew of AI devices that are coming," he said.

More than two million Ray-Ban Meta AI glasses have been sold since their launch in 2023, the article points out. But besides privacy concerns, "Perhaps the biggest challenge will be convincing consumers that they need yet another tech device in their life, particularly those who don't need prescription glasses. The products need to be worth wearing on people's faces all day."

But still, "Many in the industry believe that the smartphone will eventually be replaced by glasses or something similar to it," says Jitesh Ubrani, a research manager covering wearable devices for market research firm IDC.

"It's not going to happen today. It's going to happen many years from now, and all these companies want to make sure that they're not going to miss out on that change."
