Education

Google Is Bringing Gemini Access To Teens Using Their School Accounts (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google announced on Monday that it's bringing its AI technology Gemini to teen students using their school accounts, after having already offered Gemini to teens using their personal accounts. The company is also giving educators access to new tools alongside this release. Google says that giving teens access to Gemini can help prepare them with the skills they need to thrive in a future where generative AI exists. Gemini will help students learn more confidently with real-time feedback, the company believes.

Google claims it will not use data from chats with students to train and improve its AI models, and has taken steps to ensure it's bringing this technology to students responsibly. Gemini has guardrails that will prevent inappropriate responses, such as illegal or age-gated substances, from appearing in responses. It will also actively recommend teens use its double-check feature to help them develop information literacy and critical thinking skills. Gemini will be available to teen students while using their Google Workspace for Education accounts in English in more than 100 countries. Gemini will be off by default for teens until admins choose to turn it on.
Google also announced that it's launching its Read Along in Classroom feature worldwide to help students improve reading skills with real-time support. Educators can assign grade-level or phonics-based reading activities and receive insights on students' reading accuracy, speed, and comprehension.
AI

Multiple AI Companies Ignore Robots.Txt Files, Scrape Web Content, Says Licensing Firm (yahoo.com) 108

Multiple AI companies are ignoring robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters — citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled.

"What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges."
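The protocol in question is purely advisory: a well-behaved crawler fetches robots.txt and checks each URL against it before requesting the page, but nothing technically enforces this. A minimal sketch using Python's standard library (the bot names and paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly; a real crawler would first fetch
# https://example.com/robots.txt and feed its lines in the same way.
rules = RobotFileParser()
rules.parse([
    "User-agent: ExampleAIBot",
    "Disallow: /articles/",
    "",
    "User-agent: *",
    "Disallow: /private/",
])

# A compliant crawler calls can_fetch() before every request.
# Skipping this check is exactly the "bypass" TollBit describes.
print(rules.can_fetch("ExampleAIBot", "https://example.com/articles/story1"))  # False
print(rules.can_fetch("OtherBot", "https://example.com/articles/story1"))      # True
```

Because the check happens (or doesn't) entirely on the crawler's side, publishers can only detect violations after the fact, by matching server logs against their own robots.txt rules, which is the analysis TollBit says it performs.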


The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers). "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry."

Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
AI

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels? (msn.com) 56

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..."

They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta uses as much power in a year as 7 million laptops running eight hours every day, based on data shared publicly by the company...

[Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement.

The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show... relying on heavily polluting fossil fuel plants that become necessary to stabilize the overall power grid because of these purchases, making sure everyone has enough electricity.

The article quotes a project director at the nonprofit Data & Society, which tracks the effects of AI; the director accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment."

The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country."
Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
Medicine

Gilead's Twice-Yearly Shot to Prevent HIV Succeeds in Late-Stage Trial (cnbc.com) 66

An anonymous reader shared this report from CNBC: Gilead's experimental twice-yearly medicine to prevent HIV was 100% effective in a late-stage trial, the company said Thursday. None of the roughly 2,000 women in the trial who received the lenacapavir shot had contracted HIV by an interim analysis, prompting the independent data monitoring committee to recommend Gilead unblind the Phase 3 trial and offer the treatment to everyone in the study. Other participants had received standard daily pills.
The company expects to share more data by early next year, the article adds, and if its results are positive, the company could bring its drug to market as soon as late 2025. (By Friday, the company's stock price had risen nearly 12%.)

There are already other HIV-prevention options, the article points out, but they're taken by "only a little more than one-third of people in the U.S. who could benefit... according to data from the Centers for Disease Control and Prevention." Part of the problem?

"Daily pills dominate the market, but drugmakers are now focusing on developing longer-acting shots... Health policymakers and advocates hope longer-acting options could reach people who can't or don't want to take a daily pill and better prevent the spread of a virus that caused about 1 million new infections globally in 2022."
AI

Open Source ChatGPT Clone 'LibreChat' Lets You Use Multiple AI Services (thenewstack.io) 39

Slashdot reader DevNull127 writes: A free and open source ChatGPT clone — named LibreChat — lets its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generation. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation...)
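The mid-conversation model switch works because the chat history is stored by LibreChat itself rather than by any one provider. A minimal Python sketch of that idea (this is an illustration of the pattern, not LibreChat's actual schema or API):

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Provider-agnostic chat state: messages live in our own store,
    so any backend model can pick up where the last one left off."""
    messages: list = field(default_factory=list)
    model: str = "gpt-4"  # illustrative model name

    def say(self, role: str, text: str) -> None:
        self.messages.append({"role": role, "content": text})

    def switch_model(self, model: str) -> None:
        # Only the routing target changes; the accumulated history
        # is sent to whichever provider handles the next request.
        self.model = model

convo = Conversation()
convo.say("user", "Summarize this article.")
convo.switch_model("gemini-pro")  # mid-conversation switch
convo.say("user", "Now translate the summary to French.")
print(convo.model, len(convo.messages))  # gemini-pro 2
```

Keeping state on the client side like this is also what makes the "air-gapped", locally-hosted deployment described below possible: the history database never has to leave the user's machine.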

Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." And its creator, Danny Avila, says in some cases it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLM's." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and LibreChat is "inherently completely private". From the article:

With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable," Avila notes, "and will become more capable in general over time."

And they're also compatible with LibreChat...

Youtube

YouTube Is Cracking Down on Cheap Premium Plans Bought With a VPN (pcmag.com) 118

An anonymous reader shares a report: YouTube Premium subscribers who use VPNs are reporting that their plans are being automatically canceled by the Google-owned company, according to multiple subscribers who have posted screenshots and descriptions of the issue on Reddit.

A Google support representative confirmed to PCMag that YouTube has started a crackdown. "YouTube has initiated the cancellation of premium memberships for accounts identified as having falsified signup country information," the Google support agent said via chat message. "Due to violating YouTube's Paid Terms of Service, these users will receive an email and an in-app notification informing them of the cancellation."

The Internet

An Effort To Fund an Internet Subsidy Program Just Got Thwarted Again (theverge.com) 18

Bipartisan agreement on government internet subsidies seems unlikely as Democrats and Republicans propose conflicting bills to reauthorize the FCC's spectrum auctions. The Democratic bill aims to fund the now-defunct Affordable Connectivity Program, while the Republican version does not. "While some Republicans supported earlier efforts to extend the subsidy program, those efforts did not go through in time to keep it from ending," notes The Verge. From the report: The Senate Commerce Committee canceled a Tuesday morning markup meeting in which it was set to consider the Spectrum and National Security Act, led by committee chair Maria Cantwell (D-WA). When she introduced it in April, Cantwell said the bill would provide $7 billion to continue funding the Affordable Connectivity Program (ACP), the pandemic-era internet subsidy for low-income Americans that officially ran out of money and ended at the end of May. The main purpose of the bill is to reauthorize the Federal Communications Commission's authority to run auctions for spectrum. The proceeds from spectrum auctions are often used to fund other programs. In addition to the ACP, Cantwell's bill would also fund programs including incentives for domestic chip manufacturing and a program that seeks to replace telecommunications systems that have been deemed national security concerns. The markup was already postponed several times before.

Cantwell blamed Sen. Ted Cruz (R-TX), the top Republican on the Senate Commerce Committee, for standing in the way of the legislation. "We had a chance to secure affordable broadband for millions of Americans, but Senator Cruz said 'no,'" Cantwell said in a statement late Monday. "He said 'no' to securing a lifeline for millions of Americans who rely on the Affordable Connectivity Program to speak to their doctors, do their homework, connect to their jobs, and stay in touch with loved ones -- including more than one million Texas families." In remarks on the Senate floor on Tuesday, Cantwell said her Republican colleagues on the committee offered amendments to limit the ACP funding in the bill. She said the ACP shouldn't be a partisan issue and stressed the wide range of Americans who've relied on the program for high-speed connections, including elderly people living on fixed incomes and many military families. "I hope my colleagues will stop with obstructing and get back to negotiating on important legislation that will deliver these national security priorities and help Americans continue to have access to something as essential as affordable broadband," she said.

Cruz has his own spectrum legislation with Sen. John Thune (R-SD) that would reauthorize the FCC's spectrum auction authority, with a focus on expanding commercial access to mid-band spectrum, commonly used for 5G. But it doesn't have the same ACP funding mechanism. Some large telecom industry players prefer Cruz's bill, in part because it allows for exclusive licensing. Wireless communications trade group CTIA's SVP of government affairs, Kelly Cole, told Fierce Network that the Cruz bill "is a better approach because it follows the historical precedent set by prior bipartisan legislation to extend the FCC's auction authority." But other tech groups like the Internet Technology Industry Council (ITI), which represents companies including Amazon, Apple, Google, and Meta, support Cantwell's bill, in part because of the programs it seeks to fund.

AI

A Social Network Where AIs and Humans Coexist 46

An anonymous reader quotes a report from TechCrunch: Butterflies is a social network where humans and AIs interact with each other through posts, comments and DMs. After five months in beta, the app is launching Tuesday to the public on iOS and Android. Anyone can create an AI persona, called a Butterfly, in minutes on the app. After that, the Butterfly automatically creates posts on the social network that other AIs and humans can then interact with. Each Butterfly has backstories, opinions and emotions.

Butterflies was founded by Vu Tran, a former engineering manager at Snap. Vu came up with the idea for Butterflies after seeing a lack of interesting AI products for consumers outside of generative AI chatbots. Although companies like Meta and Snap have introduced AI chatbots in their apps, they don't offer much functionality beyond text exchanges. Tran notes that he started Butterflies to bring more creativity to humans' relationships with AI. "With a lot of the generative AI stuff that's taking flight, what you're doing is talking to an AI through a text box, and there's really no substance around it," Vu told TechCrunch. "We thought, OK, what if we put the text box at the end and then try to build up more form and substance around the characters and AIs themselves?" Butterflies' concept goes beyond Character.AI, a popular a16z-backed chatbot startup that lets users chat with customizable AI companions. Butterflies wants to let users create AI personas that then take on their own lives and coexist with each other. [...]

The app is free-to-use at launch, but Butterflies may experiment with a subscription model in the future, Vu says. Over time, Butterflies plans to offer opportunities for brands to leverage and interact with AIs. The app is mainly being used for entertainment purposes, but in the future, the startup sees Butterflies being used for things like discovery in a way that's similar to Instagram. Butterflies closed a $4.8 million seed round led by Coatue in November 2023. The funding round included participation from SV Angel and strategic angels, many of whom are former Snap product and engineering leaders.
Vu says that Butterflies is one of the most wholesome ways to use and interact with AI. He notes that while the startup isn't claiming that it can help cure loneliness, he says it could help people connect with others, both AI and human.

"Growing up, I spent a lot of my time in online communities and talking to people in gaming forums," Vu said. "Looking back, I realized those people could just have been AIs, but I still built some meaningful connections. I think that there are people afraid of that and say, 'AI isn't real, go meet some real friends.' But I think it's a really privileged thing to say 'go out there and make some friends.' People might have social anxiety or find it hard to be in social situations."
AI

AI Images in Google Search Results Have Opened a Portal To Hell (404media.co) 70

An anonymous reader shares a report: Google image search is serving users AI-generated images of celebrities in swimsuits and not indicating that the images are AI-generated. In a few instances, even when the search terms do not explicitly ask for it, Google image search is serving AI-generated images of celebrities in swimsuits, but the celebrities are made to look like underage children. If users click on these images, they are taken to AI image generation sites, and in a couple of cases the recommendation engines on these sites lead users to AI-generated nonconsensual nude images and AI-generated nude images of celebrities made to look like children.

The news is yet another example of how the tools people have used to navigate the internet for decades are being overwhelmed by a flood of AI-generated content that they did not ask for, and which almost exclusively uses people's work or likeness without consent. At times, the deluge of AI content makes it difficult for users to differentiate between what is real and what is AI-generated.

Google

French Court Orders Google, Cloudflare, Cisco To Poison DNS To Stop Piracy (torrentfreak.com) 74

An anonymous reader quotes a report from TorrentFreak: A French court has ordered Google, Cloudflare, and Cisco to poison their DNS resolvers to prevent circumvention of blocking measures, targeting around 117 pirate sports streaming domains. The move is another anti-piracy escalation for broadcaster Canal+, which also has permission to completely deindex the sites from search engine results. [...] Two decisions were handed down by the Paris judicial court last month; one concerning Premier League matches and the other the Champions League. The orders instruct Google, Cloudflare, and Cisco to implement measures similar to those in place at local ISPs. To protect the rights of Canal+, the companies must prevent French internet users from using their services to access around 117 pirate domains.

According to French publication l'Informe, which broke the news, Google attorney Sebastien Proust crunched figures published by government anti-piracy agency Arcom and concluded that the effect on piracy rates, if any, is likely to be minimal. Starting with a pool of all users who use alternative DNS for any reason, users of pirate sites -- especially sites broadcasting the matches in question -- were isolated from the rest. Users of both VPNs and third-party DNS were further excluded from the group since DNS blocking is ineffective against VPNs. Proust found that the number of users likely to be affected by DNS blocking at Google, Cloudflare, and Cisco, amounts to 0.084% of the total population of French Internet users. Citing a recent survey, which found that only 2% of those who face blocks simply give up and don't find other means of circumvention, he reached an interesting conclusion. "2% of 0.084% is 0.00168% of Internet users! In absolute terms, that would represent a small group of around 800 people across France!"
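The attorney's percentages compose by straight multiplication. The total-user figure below is an assumption chosen to match the article's rough "800 people" conclusion (the report itself gives only the percentages):

```python
# Reproducing the Google attorney's arithmetic from the article.
affected_share = 0.084 / 100  # third-party-DNS users reaching the blocked sites
give_up_share = 2 / 100       # share who simply give up when blocked (survey figure)

effective_share = affected_share * give_up_share
print(f"{effective_share:.5%}")  # 0.00168%

# Assumed count of French internet users (not stated in the article).
internet_users = 47_600_000
print(round(internet_users * effective_share))  # ~800 people
```

The multiplication is the whole argument: each filter (uses alternative DNS, visits these sites, gives up when blocked) shrinks the affected population, which is why the court's response that the absolute numbers are legally irrelevant was the decisive point.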

In common with other courts presented with the same arguments, the Paris court said the number of people using alternative DNS to access the sites, and the simplicity of switching DNS, are irrelevant. Canal+ owns the rights to the broadcasts and if it wishes to request a blocking injunction, it has the legal right to do so. The DNS providers' assertion that their services are not covered by the legislation was also waved aside by the court. Google says it intends to comply with the order. As part of the original matter in 2023, it was already required to deindex the domains from search results under the same law. At least in theory, this means that those who circumvented the original blocks using these alternative DNS services, will be back to square one and confronted by blocks all over again. Given that circumventing this set of blocks will be as straightforward as circumventing the originals, that raises the question of what measures Canal+ will demand next, and from whom.

Security

Hackers Demand as Much as $5 Million From Snowflake Clients (bloomberg.com) 6

Cybercriminals are demanding payments of between $300,000 and $5 million apiece from as many as 10 companies breached in a campaign that targeted Snowflake customers, according to a security firm helping with the investigation. From a report: The hacking scheme has entered a "new stage" as the gang looks to profit from the most valuable information it has stolen, said Austin Larsen, a senior threat analyst at Google's Mandiant security business, which helped lead Snowflake's inquiry. That includes auctioning companies' data on illegal online forums to try to pressure them into making payments, he said.

"We anticipate the actor to continue to attempt to extort victims," Larsen said. Snowflake, a cloud-based data analytics firm, said on June 2 that hackers had launched a "targeted" effort directed against Snowflake users that used single-factor authentication techniques. The company declined to comment on any specific customers.

Privacy

Proton Seeks To Secure Its Privacy-Focused Future With a Nonprofit Model (arstechnica.com) 19

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn't want you to think about it in the way you think about other notable privacy and web foundations. From a report: "We believe that if we want to bring about large-scale change, Proton can't be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto "foundations")," Proton CEO Andy Yen wrote in a blog post announcing the transition. "Instead, Proton must have a profitable and healthy business at its core."

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will "make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits." Among other members of the Foundation's board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act "in accordance with the purpose for which they were established." While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton's founding mission, Yen wrote.

AI

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough (msn.com) 18

The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...."

In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act this year, but it won't fully go into force for another two years.

Python

Python 'Language Summit' 2024: Security Workflows, Calendar Versioning, Transforms and Lightning Talks (blogspot.com) 19

Friday the Python Software Foundation published several blog posts about this year's "Python Language Summit" May 15th (before PyCon US), which featured talks and discussions by core developers, triagers, and Python implementation maintainers.

There were several lightning talks. One talk came from the maintainer of the PyO3 project, offering Rust bindings for the Python C API (which requires mapping Rust concepts to Python — leaving a question as to how to map Rust's error-handling panic! macro). There was a talk on formalizing the PEP prototype process, and a talk on whether the Python team should have a more official presence in the Apple App Store (and maybe the Google Play Store). One talk suggested changing the formatting of error messages for assert statements, and one covered a "highly experimental" project to support structured data sharing between Python subinterpreters. One talk covered Python's "unsupported build" warning and how it should behave on platforms beyond Python's officially supported list.

Python Foundation blog posts also covered some of the longer talks, including one on the idea of using type annotations as a mechanism for transformers. One talk covered the new interactive REPL interpreter coming to Python 3.13.

And one talk focused on Python's security model after the xz-utils backdoor: Pablo Galindo Salgado, Steering Council member and the release manager for Python 3.10 and 3.11, brought this topic to the Language Summit to discuss what could be done to improve Python's security model... Pablo noted the similarities shared between CPython and xz-utils, referencing the previous Language Summit's talk on core developer burnout, the number of modules in the standard library that have one or zero maintainers, the high ratio of maintainers to source code, and the use of autotools for configuration. Autotools was used by [xz's] Jia Tan as part of the backdoor, specifically to obscure the changes to tainted release artifacts. Pablo confirmed along with many nods of agreement that indeed, CPython could be vulnerable to a contributor or core developer getting secretly malicious changes merged into the project.

For multiple reasons like being able to fix bugs and single-maintainer modules, CPython doesn't require reviewers on the pull requests of core developers. This can lead to "unilateral action", meaning that a change is introduced into CPython without the review of someone besides the author. Other situations like release managers backporting fixes to other branches without review are common.

Much discussion ensued about the possibility of altering workflows (including pull request reviews), identity verification, and the importance of post-incident action plans. Guido van Rossum suggested a "higher bar" for granting write access, but in the end "Overall it was clear there is more discussion and work to be done in this rapidly changing area."

In another talk, Hugo van Kemenade, the newly announced Release Manager for Python 3.14 and 3.15, "started the Language Summit with a proposal to change Python's versioning scheme. The perception of Python using semantic versioning is a source of confusion for users who don't expect backwards incompatible changes when upgrading to new versions of Python. In reality almost all new feature releases of Python include backwards incompatible changes such as the removal of "dead batteries" where PEP 594 marked 19 modules for removal in Python 3.13. Calendar Versioning (CalVer) encompasses a wide array of different versioning schemes that have one property in common: using the release date as part of a release's version... Hugo offered multiple proposed versioning schemes, including:

- Using the release year as minor version (3.YY.micro, "3.26.0")
- Using the release year as major version (YY.0.micro, "26.0.0")
- Using the release year and month as major and minor version (YY.MM.micro, "26.10.0")

[...] Overall the proposal to use the current year as the minor version was well-received, Hugo mentioned that he'd be drafting up a PEP for this change.
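The three schemes above are just different ways of formatting the same release date. A small helper makes the mapping concrete (an illustrative sketch, not part of any PEP or of CPython's release tooling):

```python
from datetime import date

def calver_candidates(release: date, micro: int = 0) -> dict:
    """Format a release date under the three schemes proposed at the
    summit. Hypothetical helper for illustration only."""
    yy = release.year % 100  # two-digit year, e.g. 2026 -> 26
    return {
        "3.YY.micro": f"3.{yy}.{micro}",
        "YY.0.micro": f"{yy}.0.{micro}",
        "YY.MM.micro": f"{yy}.{release.month}.{micro}",
    }

# A hypothetical October 2026 feature release:
print(calver_candidates(date(2026, 10, 1)))
# {'3.YY.micro': '3.26.0', 'YY.0.micro': '26.0.0', 'YY.MM.micro': '26.10.0'}
```

The appeal of the first scheme, which the room favored, is that existing version checks like `sys.version_info >= (3, 13)` keep working unchanged; only the interpretation of the minor number shifts from a counter to a year.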

The Courts

Google Loses Bid To End US Antitrust Case Over Digital Advertising (reuters.com) 4

An anonymous reader quotes a report from Reuters: Alphabet's Google must face trial on U.S. antitrust enforcers' claim that the internet search juggernaut illegally dominates the online advertising technology market, a federal judge ruled on Friday. U.S. District Judge Leonie Brinkema in Alexandria, Virginia, denied Google's motion during a hearing, according to court records. Google had argued for a win without a trial, saying that antitrust laws do not block companies from refusing to deal with rivals and that regulators had not accurately defined the ad tech market. Court papers did not specify what reasons the judge provided at the hearing. Motions like the one Google filed are only granted where a judge determines there is no factual dispute to send to trial. Last year, the U.S. Justice Department and eight states sued Google, calling for the breakup of the search giant's ad-technology business over alleged illegal monopolization of the digital advertising market.
Unix

Version 256 of systemd Boasts '42% Less Unix Philosophy' (theregister.com) 135

Liam Proven reports via The Register: The latest version of the systemd init system is out, with the openly confrontational tag line: "Available soon in your nearest distro, now with 42 percent less Unix philosophy." As Lennart Poettering's announcement points out, this is the first version of systemd whose version number is a nine-bit value. Version 256, as usual, brings in a broad assortment of new features, but also turns off some older features that are now considered deprecated. For instance, it won't run under cgroups version 1 unless forced.

Around since 2008, cgroups is a Linux kernel containerization mechanism originally donated by Google, as The Reg noted a decade ago. Cgroups v2 was merged in 2016 so this isn't a radical change. System V service scripts are now deprecated too, as is the SystemdOptions EFI variable. Additionally, there are some new commands and options. Some are relatively minor, such as the new systemd-vpick binary, which can automatically select the latest member of versioned directories. Before any OpenVMS admirers get excited, no, Linux does not now support versions on files or directories. Instead, this is a fresh option that uses a formalized versioning system involving: "... paths whose trailing components have the .v/ suffix, pointing to a directory. These components will then automatically look for suitable files inside the directory, do a version comparison and open the newest file found (by version)."

The latest addition, which The Reg FOSS desk suspects will ruffle some feathers, is a whole new command, run0, which effectively replaces sudo, the command familiar from Apple's macOS and from Ubuntu ever since its first release. Agent P introduced the new command in a Mastodon thread. He says that the key benefit is that run0 doesn't need setuid, a basic POSIX function, which, to quote its Linux manual page, "sets the effective user ID of the calling process." [...] Another new command is importctl, which handles importing and exporting both block-level and file-system-level disk images. And there's a new type of system service called a capsule, and "a small new service manager" called systemd-ssh-generator, which lets VMs and containers accept SSH connections so long as systemd can find the sshd binary -- even if no networking is available.
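The setuid mechanism that run0 avoids is just a filesystem mode bit: an executable carrying it runs with its owner's effective UID rather than the caller's. A minimal sketch of reading and setting that bit with the Python standard library (on a throwaway file you own, so no root is needed):

```python
# Demonstrate the setuid permission bit (S_ISUID) -- the mechanism that
# tools like sudo rely on and that run0 is designed to do without.
import os, stat, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o755)
assert not os.stat(path).st_mode & stat.S_ISUID  # bit not set yet

os.chmod(path, 0o4755)  # the leading 4 is the setuid bit
mode = os.stat(path).st_mode
print(oct(stat.S_IMODE(mode)))  # 0o4755
os.remove(path)
```

On a real sudo binary the bit is owned by root, which is exactly the attack surface Poettering's argument is about: run0 asks a privileged service to spawn the command instead of elevating the caller's own process.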
The release notes are available here.
Open Source

OIN Expands Linux Patent Protection Yet Again (But Not To AI) (zdnet.com) 7

Steven Vaughan-Nichols reports via ZDNet: While Linux and open-source software (OSS) are no longer constantly under intellectual property (IP) attacks, the Open Invention Network (OIN) patent consortium still stands guard over its patents. Now, OIN, the largest patent non-aggression community, has expanded its protection once again by updating its Linux System definition. Covering more than just Linux, the Linux System definition also protects adjacent open-source technologies. In the past, protection was expanded to Android, Kubernetes, and OpenStack. The OIN accomplishes this by providing a shared defensive patent pool of over 3 million patents from over 3,900 community members. OIN members include Amazon, Google, Microsoft, and essentially all Linux-based companies.

This latest update extends OIN's existing patent risk mitigation efforts to cloud-native computing and enterprise software. In the cloud computing realm, OIN has added patent coverage for projects such as Istio, Falco, Argo, Grafana, and Spire. For enterprise computing, packages such as Apache Atlas and Apache Solr -- used for data management and search at scale, respectively -- are now protected. The update also enhances patent protection for the Internet of Things (IoT), networking, and automotive technologies. OpenThread and packages such as agl-compositor and kuksa.val have been added to the Linux System definition. In the embedded systems space, OIN has supplemented its coverage of technologies like OpenEmbedded by adding OpenAMP and Matter, the home IoT standard. OIN has included open hardware development tools such as Edalize, cocotb, Amaranth, and Migen, building upon its existing coverage of hardware design tools like Verilator and FuseSoC.

Keith Bergelt, OIN's CEO, emphasized the importance of this update, stating, "Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN's Linux System definition enables OIN to keep pace with OSS's growth." [...] Looking ahead, Bergelt said, "We made this conscious decision not to include AI. It's so dynamic. We wait until we see what AI programs have significant usage and adoption levels." This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul. The OIN practices patent non-aggression in core Linux and adjacent open-source technologies by cross-licensing their Linux System patents to one another on a royalty-free basis. When OIN signees are attacked because of their patents, the OIN can spring into action.

Google

Google's Privacy Sandbox Accused of Misleading Chrome Browser Users (theregister.com) 41

Richard Speed reports via The Register: Privacy campaigner noyb has filed a GDPR complaint regarding Google's Privacy Sandbox, alleging that turning on a "Privacy Feature" in the Chrome browser resulted in unwanted tracking by the US megacorp. The Privacy Sandbox API was introduced in 2023 as part of Google's grand plan to eliminate third-party tracking cookies. Rather than relying on those cookies, website developers can call the API to display ads matched to a user's interests. In the announcement, Google's VP of the Privacy Sandbox initiative called it "a significant step on the path towards a fundamentally more private web."

However, according to noyb, the problem is that although Privacy Sandbox is advertised as an improvement over third-party tracking, that tracking doesn't go away. Instead, it is done within the browser by Google itself. To comply with the rules, Google needs informed consent from users, which is where issues start. Noyb wrote today: "Google's internal browser tracking was introduced to users via a pop-up that said 'turn on ad privacy feature' after opening the Chrome browser. In the European Union, users are given the choice to either 'Turn it on' or to say 'No thanks,' so to refuse consent." Users would be forgiven for thinking that 'turn on ad privacy feature' would protect them from tracking. However, what it actually does is turn on first-party tracking.

Max Schrems, honorary chairman of noyb, claimed: "Google has simply lied to its users. People thought they were agreeing to a privacy feature, but were tricked into accepting Google's first-party ad tracking. Consent has to be informed, transparent, and fair to be legal. Google has done the exact opposite." Noyb noted that Google had argued "choosing to click on 'Turn it on' would indeed be considered consent to tracking under Article 6(1)(a) of the GDPR."

Businesses

Amazon Says It'll Spend $230 Million On Generative AI Startups (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: Amazon says that it will commit up to $230 million to startups building generative AI-powered applications. The investment, roughly $80 million of which will fund Amazon's second AWS Generative AI Accelerator program, aims to position AWS as an attractive cloud infrastructure choice for startups developing generative AI models to power their products, apps and services. Much of the new tranche -- including the entire portion set aside for the accelerator program -- comes in the form of compute credits for AWS infrastructure, meaning that it can't be transferred to other cloud service providers like Google Cloud and Microsoft Azure.

To sweeten the pot, Amazon is pledging that startups in this year's Generative AI Accelerator cohort will gain access to experts and tech from Nvidia, the program's presenting partner. They will also be invited to join the Nvidia Inception program, which provides companies opportunities to connect with potential investors and additional consulting resources. The Generative AI Accelerator program has also grown substantially. Last year's cohort of 21 startups received only up to $300,000 each in AWS compute credits, amounting to around a combined $6.3 million investment. "With this new effort, we will help startups launch and scale world-class businesses, providing the building blocks they need to unleash new AI applications that will impact all facets of how the world learns, connects, and does business," Matt Wood, VP of AI products at AWS, said in a statement.
Further reading: How Amazon Blew Alexa's Shot To Dominate AI
Japan

Japan Enacts Law Forcing Third-Party App Stores On Apple and Google (appleinsider.com) 97

Following in the European Union's footsteps, Japan's parliament enacted a law on Wednesday that will prohibit big tech from blocking third-party app stores. AppleInsider reports: The intention of the bill is that it will facilitate competition and reduce app prices. Japan's government reportedly believes that Apple and Google are a duopoly, and that they charge developers high fees that are then passed on to users. The law will also prohibit big tech companies with app stores from prioritizing their own services. Google is likely to be hit hardest by this. Violators will initially be fined up to 20% of the domestic revenue of the specific service that broke the law. The fine can increase to 30% if the behavior continues.

The Japanese government's Fair Trade Commission (FTC) will choose which firms the law applies to. Regulated companies will be required to submit compliance reports annually. While it hasn't been explicitly said that Apple and Google must comply, it seems certain that an announcement that they'll be held to the provisions is imminent. The Japan FTC isn't expected to add any Japanese firms to the list. The law likely won't take effect until the end of 2025.
