177711177
submission
An anonymous reader writes:
One of Microsoft’s latest AI models can accurately predict air quality, hurricanes, typhoons, and other weather-related phenomena, the company claims. In a paper published in the journal Nature and an accompanying blog post this week, Microsoft detailed Aurora, which the tech giant says can forecast atmospheric events with greater precision and speed than traditional meteorological approaches. Aurora, which has been trained on more than a million hours of data from satellites, radar and weather stations, simulations, and forecasts, can be fine-tuned with additional data to make predictions for particular weather events.
AI weather models are nothing new. Google DeepMind has released a handful over the past several years, including WeatherNext, which the lab claims beats some of the world’s best forecasting systems. Microsoft is positioning Aurora as one of the field’s top performers — and a potential boon for labs studying weather science. In experiments, Aurora predicted Typhoon Doksuri’s landfall in the Philippines four days in advance of the actual event, beating some expert predictions, Microsoft says. The model also bested the National Hurricane Center in forecasting five-day tropical cyclone tracks for the 2022-2023 season, and successfully predicted the 2022 Iraq sandstorm.
While Aurora required substantial computing infrastructure to train, Microsoft says the model is highly efficient to run. It generates forecasts in seconds compared to the hours traditional systems take using supercomputer hardware. Microsoft, which has made the source code and model weights publicly available, says that it’s incorporating Aurora’s AI modeling into its MSN Weather app via a specialized version of the model that produces hourly forecasts, including for clouds.
177676277
submission
theodp writes:
In what reads a bit like a Sopranos plot, The Information suggests some of those in the recent batch of terminated Microsoft engineers may have in effect been forced to dig their own AI graves.
The (paywalled) story begins: "Jeff Hulse, a Microsoft vice president who oversees roughly 400 software engineers, told the team in recent months to use the company's artificial intelligence chatbot, powered by OpenAI, to generate half the computer code they write, according to a person who heard the remarks. That would represent an increase from the 20% to 30% of code AI currently produces at the company, and shows how rapidly Microsoft is moving to incorporate such technology. Then on Tuesday, Microsoft laid off more than a dozen engineers on Hulse's team as part of a broader layoff of 6,000 people across the company that appeared to hit engineers harder than other types of roles, this person said."
The report comes as tech company CEOs have taken to boasting in earnings calls, tech conferences, and public statements that their AI is responsible for an ever-increasing share of the code written at their organizations, even as Microsoft's recent job cuts hit coders the hardest. So how much credence should one place in CEOs' claims of AI programming productivity gains — gains researchers have struggled to measure for 50+ years — if engineers are forced to increase their use of AI, inflating the very numbers their far-removed-from-programming CEOs present to Wall Street?
177675427
submission
mspohr writes:
US chip export controls have been a “failure”, the head of Nvidia, Jensen Huang, told a tech forum on Wednesday, as the Chinese government separately slammed US warnings to other countries against using Chinese tech.
“The local companies are very, very talented and very determined, and the export control gave them the spirit, the energy and the government support to accelerate their development,” Huang told media at the Computex tech show in Taipei.
“China has a vibrant technology ecosystem, and it’s very important to realise that China has 50% of the world’s AI researchers, and China is incredibly good at software,” Huang said.
177643749
submission
BrianFagioli writes:
In a world full of AI agents seemingly trying to take control away, Microsoft has done something surprisingly refreshing — it is handing the wheel back to the user. With the launch of Magentic-UI, a new open source research prototype, Microsoft is inviting developers and researchers to explore a different kind of AI assistant. One that doesn't just act on its own, but actually collaborates with people in a transparent, controllable way.
And yes, it's completely open source. That matters a lot. Too often, the tech industry hides this kind of innovation behind corporate walls. But Microsoft has released Magentic-UI under the MIT license on GitHub, allowing anyone to inspect, study, and build on top of it. That alone makes it worth paying attention to.
Unlike typical autonomous agents that operate in the shadows, Magentic-UI is built for the web and designed to work side-by-side with users. It operates in real time inside a browser, handling tasks like form-filling, site navigation, and even executing code. But here's the twist, folks — users can co-plan, edit steps, pause the agent, override decisions, and even block risky actions before they happen.
That means if Magentic-UI is about to click a purchase button or close a tab, it won't do it blindly. It asks first. And users can customize how often those approvals are needed.
177633567
submission
optical_phiber writes:
In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called "voice and face enrollment" by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its CoPilot AI tool. The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft’s policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections. A concerned parent highlighted the lack of transparency, fearing that others may remain unaware of the data collection. Microsoft has declined to comment on the issue.
177629069
submission
jjslash writes:
Hardware Unboxed has raised serious concerns about Nvidia's handling of the upcoming GeForce RTX 5060 launch. In a recent video, the independent tech reviewers allege that Nvidia is using tightly controlled preview programs to manipulate public perception, while actively sidelining critical voices. The company is favoring a handful of more "friendly" outlets with early access, under strict conditions. These outlets were given preview drivers – but only under guidelines that make the products shine beyond what real-world testing would show. To cite two examples:
- One of the restrictions is not comparing the new RTX 5060 to the RTX 4060. No need to explain that one.
- Another restriction, or heavy-handed suggestion: run the RTX 5060 with 4x multi-frame generation turned on, inflating FPS results, while older GPUs that don't support MFG look considerably worse in charts.
The result: glowing previews published just days before the official launch, creating a first impression based almost entirely on Nvidia's marketing narrative.
177623583
submission
theodp writes:
In earnings calls, tech conferences, and public statements, tech CEOs have lately taken to singing the praises of eating their companies' AI dogfood, suggesting their AI is so good, and responsible for so much of the productivity at their organizations, that it has let them shed software engineers the technology has rendered unnecessary. It's a powerful pitch, no doubt, but researchers have struggled to measure programming productivity for 50+ years, so one would be well advised to take such claims with a grain of salt, especially coming from the mouths of tech CEOs who are far removed from programming.
Now, The Information reports that some of those in the recent batch of terminated Microsoft engineers may have in effect been ordered to dig their own AI graves. The (paywalled) story begins: "Jeff Hulse, a Microsoft vice president who oversees roughly 400 software engineers, told the team in recent months to use the company’s artificial intelligence chatbot, powered by OpenAI, to generate half the computer code they write, according to a person who heard the remarks. That would represent an increase from the 20% to 30% of code AI currently produces at the company, and shows how rapidly Microsoft is moving to incorporate such technology. Then on Tuesday, Microsoft laid off more than a dozen engineers on Hulse’s team as part of a broader layoff of 6,000 people across the company that appeared to hit engineers harder than other types of roles, this person said."