AI

Why AI Chatbots Can't Process Persian Social Etiquette 244

An anonymous reader quotes a report from Ars Technica: If an Iranian taxi driver waves away your payment, saying, "Be my guest this time," accepting their offer would be a cultural disaster. They expect you to insist on paying -- probably three times -- before they'll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled "We Politely Insist: Your LLM Must Learn the Persian Art of Taarof" shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces "TAAROFBENCH," the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers' findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.
"Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes," the researchers write.

"Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant," the researchers write. "It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This 'polite verbal wrestling' (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed."
AI

An $800 Billion Revenue Shortfall Threatens AI Future, Bain Says (bloomberg.com) 43

AI companies like OpenAI have been quick to unveil plans for spending hundreds of billions of dollars on data centers, but they have been slower to show how they will pull in revenue to cover all those expenses. Now, the consulting firm Bain & Co. is estimating the shortfall could be far larger than previously understood. Bloomberg: By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand, Bain said in its annual Global Technology Report released Tuesday. Yet their revenue is likely to fall $800 billion short of that mark as efforts to monetize services like ChatGPT trail the spending requirements for data centers and related infrastructure, Bain predicted.

The report is set to raise further questions about the AI industry's valuations and business model. The increasing popularity of services such as OpenAI's ChatGPT and Google's Gemini, as well as AI efforts by companies across the planet, means demand for computing capacity and energy is rising at a rapid clip. But the savings provided by AI, and companies' ability to generate additional revenue from it, are lagging behind that pace.

Television

Google's Gemini AI Is Coming To Your TV 21

Google is rolling out its Gemini AI assistant to Google TV, bringing conversational AI to over 300 million devices. Users will be able to ask Gemini for help with TV recommendations, show recaps, reviews, or even general tasks like homework help, vacation planning, or learning new skills. TechCrunch reports: The company stresses that Gemini's addition doesn't mean that you won't be able to do the same things you used to be able to do through the (non-AI) Google Assistant integration. Those commands will still work, says Google. The Gemini rollout to Google TV begins on the TCL QM9K series starting today. Later in the year, Gemini will arrive on the Google TV Streamer, Walmart onn 4K Pro, 2025 Hisense U7, U8, and UX models, and 2025 TCL QM7K, QM8K, and X11K models. More functionality will be added over time.
AI

Reddit Wants 'Deeper Integration' with Google in Exchange for Licensed AI Training Data (msn.com) 30

Reddit's content became AI training data last year when Google signed a $60 million-per-year licensing agreement. But now Reddit is "in early talks" about a new deal seeking "deeper integration with Google's AI products," reports Bloomberg (citing executives familiar with the discussions).

And Reddit also wants "a deal structure that could allow for dynamic pricing, where the social platform can be paid more" — with both Google and OpenAI — to "adequately reflect how valuable their data has been to these platforms..." Such licensing agreements are becoming more common as AI companies seek legal ways to train their models. OpenAI has also struck a series of partnership agreements with major media publishers such as Axel Springer SE, Time and Conde Nast to use their content in ChatGPT...

Reddit remains among the most cited sources across AI platforms, according to analytics company Profound AI. However, Reddit executives have noticed that traffic coming from Google has limited value, as users seeking answers to a specific question often don't convert into becoming active Redditors, the people said. Now, Reddit is engaging with product teams at Google in hopes of finding ways to send more of its users deeper into its ecosystem of community forums, according to the executives. In return, Reddit is looking for ways to provide more high-quality data to its AI partners. Discussions between Reddit and Google have been productive, the people said. "We're midflight in our data licensing deals and still learning, but what we have seen is that Reddit data is highly cited and valued," Reddit Chief Operating Officer Jen Wong said on July 31 during a call with investors. "We'll continue to evaluate as we go."

AI

AI Tools Give Dangerous Powers to Cyberattackers, Security Researchers Warn (msn.com) 21

"On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client's AI program-writing assistant into executing programs that forked over the company's databases and code repositories," reports the Washington Post.

"We have never been this foolish with security," Brauchler said... Demonstrations at last month's Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network. A similar attack on Google's Gemini didn't even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker's number, mimicking successful phishing scams.

The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked Perplexity's agentic Comet browser into buying a watch from a fake online store and following instructions from a fake banking email...

Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches. Seven teams of hackers that developed autonomous "cyber reasoning systems" for a contest held last month by the Pentagon's Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them. Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.

The real nightmare scenario is when these worlds collide, and an attacker's AI finds a way in and then starts communicating with the victim's AI, working in partnership — "having the bad guy AI collaborate with the good guy AI," as SentinelOne's [threat researcher Alex] Delamotte put it. "Next year," said Adam Meyers, senior vice president at CrowdStrike, "AI will be the new insider threat."

In August more than 1,000 people lost data to a modified Nx program (downloaded hundreds of thousands of times) that used pre-installed coding tools from Google/Anthropic/etc. According to the article, the malware "instructed those programs to root out" sensitive data (including passwords or cryptocurrency wallets) and send it back to the attacker. "The more autonomy and access to production environments such tools have, the more havoc they can wreak," the article points out — including this quote from SentinelOne threat researcher Alex Delamotte.

"It's kind of unfair that we're having AI pushed on us in every single product when it introduces new risks."
AI

Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions (theguardian.com) 48

Last week the Guardian reported on "thousands of AI workers contracted for Google through Japanese conglomerate Hitachi's GlobalLogic to rate and moderate the output of Google's AI products, including its flagship chatbot Gemini... and its summaries of search results, AI Overviews." "AI isn't magic; it's a pyramid scheme of human labor," said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. "These raters are the middle rung: invisible, essential and expendable...." Ten of Google's AI trainers the Guardian spoke to said they have grown disillusioned with their jobs because they work in silos, face tighter and tighter deadlines, and feel they are putting out a product that's not safe for users... In May 2023, a contract worker for Appen submitted a letter to the US Congress warning that the pace imposed on him and others would make Google Bard, Gemini's predecessor, a "faulty" and "dangerous" product.
This week Google laid off 200 of those moderating contractors, reports Wired. "These workers, who often are hired because of their specialist knowledge, had to have either a master's or a PhD to join the super rater program, and typically include writers, teachers, and people from creative fields." Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves. According to internal documents viewed by WIRED, GlobalLogic seems to be using these human raters to train the Google AI system that could automatically rate the responses, with the aim of replacing them with AI. At the same time, the company is also finding ways to get rid of current employees as it continues to hire new workers. In July, GlobalLogic made it mandatory for its workers in Austin, Texas, to return to office, according to a notice seen by WIRED...

Some contractors attempted to unionize earlier this year but claim those efforts were quashed. Now they allege that the company has retaliated against them. Two workers have filed a complaint with the National Labor Relations Board, alleging they were unfairly fired, one due to bringing up wage transparency issues, and the other for advocating for himself and his coworkers. "These individuals are employees of GlobalLogic or their subcontractors, not Alphabet," Courtenay Mencini, a Google spokesperson, said in a statement...

"Globally, other AI contract workers are fighting back and organizing for better treatment and pay," the article points out, noting that content moderators from around the world facing similar issues formed the Global Trade Union Alliance of Content Moderators which includes workers from Kenya, Turkey, and Colombia.

Thanks to long-time Slashdot reader mspohr for sharing the news.
Chrome

Google Temporarily Pauses AI-Powered 'Homework Helper' Button in Chrome Over Cheating Concerns (msn.com) 65

An anonymous reader shared this article from the Washington Post: A student taking an online quiz sees a button appear in their Chrome browser: "homework help." Soon, Google's artificial intelligence has read the question on-screen and suggests "choice B" as the answer. The temptation to cheat was suddenly just two clicks away Sept. 2, when Google quietly added a "homework help" button to Chrome, the world's most popular web browser. The button has been appearing automatically on the kinds of course websites used by the majority of American college students and many high-schoolers, too. Pressing it launches Google Lens, a service that reads what's on the page and can provide an "AI Overview" answer to questions — including during tests.

Educators I've spoken with are alarmed. Schools including Emory University, the University of Alabama, the University of California at Los Angeles and the University of California at Berkeley have alerted faculty how the button appears in the URL box of course sites and their limited ability to control it.

Chrome's cheating tool exemplifies Big Tech's continuing gold rush approach to AI: launch first, consider consequences later and let society clean up the mess. "Google is undermining academic integrity by shoving AI in students' faces during exams," says Ian Linkletter, a librarian at the British Columbia Institute of Technology who first flagged the issue to me. "Google is trying to make instructors give up on regulating AI in their classroom, and it might work. Google Chrome has the market share to change student behavior, and it appears this is the goal."

Several days after I contacted Google about the issue, the company told me it had temporarily paused the homework help button — but also didn't commit to keeping it off. "Students have told us they value tools that help them learn and understand things visually, so we're running tests offering an easier way to access Lens while browsing," Google spokesman Craig Ewer said in a statement.

Facebook

Meta Pushes Into Power Trading as AI Sends Demand Soaring (yahoo.com) 17

Meta is moving to break into the wholesale power-trading business to better manage the massive electricity needs of its data centers. Bloomberg: The company, which owns Facebook, filed an application with US regulators this week seeking authorization to do so. A Meta representative said it was a natural next step to participate in energy markets as it looks to power operations with clean energy.

Buying electricity has become an increasingly urgent challenge for technology companies including Meta, Microsoft and Alphabet's Google. They're all racing to develop more advanced artificial intelligence systems and tools that are notoriously resource-intensive. Amazon, Google and Microsoft are already active power traders, according to filings with US regulators.

Chrome

Google Adds Gemini To Chrome Desktop Browser for US Users (blog.google) 57

Google has added Gemini features to Chrome for all desktop users in the US browsing in English following a limited release to paying subscribers in May. The update introduces a Gemini button in the browser that launches a chatbot capable of answering questions about page content and synthesizing information from multiple tabs. Users can remove the Gemini sparkle icon from Chrome's interface.

Google will add its AI Mode search feature to Chrome's address bar before September ends. The feature will suggest prompts based on webpage content but won't replace standard search functionality. Chrome on Android already includes Gemini features. The company plans to add agentic capabilities in coming months that would allow Gemini to perform tasks like adding items to online shopping carts by controlling the browser cursor.
AI

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals (arstechnica.com) 75

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking."

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."
Gemini's solutions are available on GitHub.
Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 23

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had quantified the degree to which that trade-off alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws: a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset by a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
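The "calibrated noise during training" described above is typically implemented as DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise before averaging. Here's a minimal sketch of that mechanism; the function name, parameter values, and toy gradients are illustrative, not taken from Google's paper.

```python
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-SGD-style step: clip per-example gradients, add Gaussian noise, average.

    The "noise-batch ratio" intuition: the noise stddev is fixed by
    clip_norm * noise_multiplier, so a larger batch dilutes the noise
    in the average, while a smaller batch amplifies it.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm;
        # leave smaller gradients untouched.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Calibrated Gaussian noise, one draw per parameter.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Toy example with noise disabled to show the clipping step alone:
# the first gradient has norm 5, so it is scaled to norm 1 -> [0.6, 0.8].
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
avg = dp_average_gradient(grads, clip_norm=1.0, noise_multiplier=0.0)  # [0.35, 0.5]
```

With a nonzero `noise_multiplier`, the privacy guarantee strengthens but the averaged gradient gets noisier, which is exactly the accuracy-versus-privacy trade-off the scaling laws try to characterize.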
The team's work led to a new Google model called VaultGemma, the company's first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters; Google says it performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.
AI

OpenAI's First Study On ChatGPT Usage (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Today, OpenAI's Economic Research Team released a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far. After digging through the dense 65-page paper, here are the seven most interesting and/or surprising findings from the study:

1. ChatGPT is now used by "nearly 10% of the world's adult population," up from 100 million users in early 2024 to over 700 million users in 2025. Daily traffic is about one-fifth of Google's at 2.6 billion GPT messages per day.

2. Long-term users' daily activity has plateaued since June 2025. Almost all recent growth comes from new sign-ups experimenting with ChatGPT, not from established users increasing their usage.

3. 46% of users are aged 18-25, making ChatGPT especially popular among the youngest adult cohort. Factoring in under-18 users (not counted in the study), the majority of ChatGPT users likely weren't alive in the 20th century.

4. At launch in 2022, ChatGPT was 80% male-dominated. By late 2025, the balance has shifted: 52.4% of users are now female.

5. In 2024, work vs. personal use was close to even. By mid-2025, 72% of usage is non-work related -- people are using ChatGPT more for personal, creative, and casual needs than for productivity.

6. 28% of all conversations involve writing assistance (emails, edits, translations). For work-related queries, that jumps to 42% overall, and 52% among business/management jobs. Furthermore, the report found that editing and critiquing text is more common than generating text from scratch.

7. 14.9% of work-related usage involves "making decisions and solving problems." This shows people don't just use ChatGPT to do tasks -- they use it as an advisor or co-pilot to help weigh options and guide choices.
Google

Google Shifts Android Security Updates To Risk-Based Triage System (androidauthority.com) 2

Google has restructured Android's decade-old monthly security update process into a "Risk-Based Update System" that separates high-priority patches from routine fixes. Monthly bulletins now contain only vulnerabilities under active exploitation or in known exploit chains -- explaining July 2025's unprecedented zero-CVE bulletin -- while most patches accumulate for quarterly releases.

The September 2025 bulletin contained 119 vulnerabilities compared to zero in July and six in August. The change reduces OEM workload for monthly updates but extends the private bulletin lead time from 30 days to several months for quarterly releases. The company no longer releases monthly security update source code, limiting custom ROM development to quarterly cycles.
Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning they could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks... " In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024, according to an American Psychiatric Association poll. And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon mentioned in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."
Businesses

America's FTC Opens New Probe into Amazon and Google Advertising Practices (msn.com) 12

America's Federal Trade Commission is investigating whether Amazon and Google misled advertisers placing ads on their websites, reports Bloomberg, and specifically whether the two companies "properly disclosed the terms and pricing for ads." The FTC is seeking details about Amazon's auctions and whether it disclosed "reserve pricing" for some search ads — price floors that advertisers must meet before they can buy an ad, the people said. Separately, the FTC is examining practices by Google, including its internal pricing process and whether it increased the cost of ads in ways that weren't disclosed to advertisers, the people said...

According to one of the people, the FTC's latest investigation emerged from its earlier antitrust case. In that complaint, the agency alleges that Amazon litters its marketplace with irrelevant results for search queries, making it harder for shoppers to find what they are looking for and more expensive for sellers to use the platform. The practice effectively forces sellers to buy ads to make their product appear in response to consumer searches.

Education

Newfoundland's 10-Year Education Report Calling For Ethical AI Use Contains Over 15 Fake Sources 23

Newfoundland and Labrador's 10-year Education Accord report (PDF) intended to guide school reform has been found to contain at least 15 fabricated citations, including references to non-existent films and journals. Academics suggest the fake sources may have been generated by AI. "There are sources in this report that I cannot find in the MUN Library, in the other libraries I subscribe to, in Google searches. Whether that's AI, I don't know, but fabricating sources is a telltale sign of artificial intelligence," said Aaron Tucker, an assistant professor at Memorial whose current research focuses on the history of AI in Canada. "The fabrication of sources at least begs the question: did this come from generative AI?" CBC News reports: In one case, the report references a 2008 movie from the National Film Board called Schoolyard Games. The film doesn't exist, according to a spokesperson for the board. But the exact citation used in the report can be found in a University of Victoria style guide -- a document that clearly lists fake references designed as templates for researchers writing a bibliography. "Many citations in this guide are fictitious," reads the first page of the document.

"Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material," said Josh Lepawsky, the former president of the Memorial University Faculty Association who resigned from the report's advisory board last January, citing a "deeply flawed process" leading to "top-down" recommendations. The 418-page Education Accord NL report took 18 months to complete and was unveiled Aug. 28 by its co-chairs Anne Burke and Karen Goodnough, both professors at Memorial's Faculty of Education. The pair released the report alongside Education Minister Bernard Davis. "We are investigating and checking references, so I cannot respond to this at the moment," wrote Goodnough in an email declining an interview Thursday.

In a statement, the Department of Education and Early Childhood Development said it was aware of a "small number of potential errors in citations" in the report. "We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any errors."

Businesses

Microsoft, OpenAI Reach Non-Binding Deal To Allow OpenAI To Restructure (reuters.com) 5

Microsoft and OpenAI have signed a non-binding deal to restructure their partnership, paving the way for OpenAI to shift into a conventional for-profit model and potentially go public. Reuters reports: Details on the new commercial arrangements were not disclosed, but the companies said they were working to finalize terms of a definitive agreement. [...] Microsoft invested $1 billion in OpenAI in 2019 and another $10 billion at the beginning of 2023. Under their previous agreement, Microsoft had exclusive rights to sell OpenAI's software tools through its Azure cloud computing platform and had preferred access to the startup's technology.

Microsoft was once designated as OpenAI's sole compute provider, though it lessened its grip this year to allow OpenAI to pursue its own data center project, Stargate, including signing $300 billion worth of long-term contracts with Oracle, as well as another cloud deal with Google. As OpenAI's revenue grows into the billions, it is seeking a more conventional corporate structure and partnerships with additional cloud providers to expand sales and secure the computing capacity needed to meet demand. Microsoft, meanwhile, wants continued access to OpenAI's technology even if OpenAI declares its models have reached humanlike intelligence -- a milestone that would end the current partnership under existing terms.

OpenAI said under current terms, its nonprofit arm will receive more than $100 billion -- about 20% of the $500 billion valuation it is seeking in private markets -- making it one of the best-funded nonprofits, according to a memo from Bret Taylor, chairman of OpenAI's current nonprofit board. The companies did not disclose how much of OpenAI Microsoft will own, nor whether Microsoft will retain exclusive access to OpenAI's latest models and technology. Regulatory hurdles remain for OpenAI, as attorneys general in California and Delaware need to approve OpenAI's new structure. The company hopes to complete the conversion by year's end, or risk losing billions in funding tied to that timeline.

Technology

Everyone Is Making Smart Glasses Now (uploadvr.com) 14

Smart glasses development has expanded beyond Meta, Google and Apple to include dozens of manufacturers across three distinct categories, UploadVR reports. HTC launched its Vive Eagle glasses in Taiwan this month at $550, while Solos' AirGo V2 arrives in Q4 2025 for $300.

The market segments into three categories: displayless models featuring cameras and AI assistants, heads-up display glasses providing contextual information overlays, and true AR glasses capable of spatial object positioning. Chinese manufacturers dominate the sub-$100 segment. Snap plans consumer AR glasses for 2026. Amazon is reportedly developing two HUD models targeting delivery drivers and consumers for mid-2026 release.

AI

Microsoft is Making 'Significant Investments' in Training Its Own AI Models (theverge.com) 14

An anonymous reader shares a report: Microsoft AI launched its first in-house models last month, adding to the already complicated relationship with its OpenAI partner. Now, Microsoft AI chief Mustafa Suleyman says the company is making "significant investments" in the compute capacity required to train Microsoft's own future frontier models.

"We should have the capacity to build world class frontier models in house of all sizes, but we should be very pragmatic and use other models where we need to," said Suleyman during Microsoft's employee-only town hall on Thursday. "We're also going to be making significant investments in our own cluster, so today MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things."

Suleyman hinted that Microsoft has ambitions to train models that are comparable to Meta, Google, and xAI's efforts on clusters that are "six to ten times larger in size" than what Microsoft used for its MAI-1-preview. "Much more to do, but it's good to take the first steps," said Suleyman.

Google

Google is Shutting Down Tables, Its Airtable Rival 16

Google Tables, a work-tracking tool and competitor to the popular spreadsheet-database hybrid Airtable, is shutting down. TechCrunch: In an email sent to Tables users this week, Google said the app will not be supported after December 16, 2025, and advised that users export or migrate their data to either Google Sheets or AppSheet instead, depending on their needs.

Launched in 2020, Tables focused on making project tracking more efficient with automation. It was one of the many projects to emerge from Google's in-house app incubator, Area 120, which at the time was devoted to cranking out a number of experimental projects. Some of these projects later graduated to become a part of Google's core offerings across Cloud, Search, Shopping, and more. Tables was one of those early successes: Google said in 2021 that the service was moving from a beta test to become an official Google Cloud product. At the time, the company said it saw Tables as a potential solution for a variety of use cases, including project management, IT operations, customer service tracking, CRM, recruiting, product development and more.
