Social Networks

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds (attie.ai) 10

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us."

Called "Attie" — because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) — the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.")

Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design."

"It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described."
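Under the hood, a custom feed on the AT Protocol is served by a "feed generator": a small service that answers the `app.bsky.feed.getFeedSkeleton` XRPC call with an ordered list of post URIs, which the client then hydrates into full posts. The sketch below is a minimal, hand-written illustration of the kind of service an agent like Attie presumably emits; the post URIs, DIDs, and keyword filter are all invented for the example, and a real generator would index posts from the network firehose rather than a hard-coded list.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory "index" of posts. The URIs and DIDs are made up;
# a real feed generator would ingest posts from the AT Protocol firehose.
POSTS = [
    {"uri": "at://did:plc:alice/app.bsky.feed.post/3k1", "text": "new modular synth patch"},
    {"uri": "at://did:plc:bob/app.bsky.feed.post/3k2", "text": "lunch photos"},
    {"uri": "at://did:plc:carol/app.bsky.feed.post/3k3", "text": "experimental sound collage"},
]

def feed_skeleton(keywords):
    """Build the getFeedSkeleton payload: URIs of posts matching any keyword."""
    matches = [p["uri"] for p in POSTS
               if any(k in p["text"] for k in keywords)]
    return {"feed": [{"post": uri} for uri in matches]}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients ask the generator for a skeleton; they hydrate the posts themselves.
        if self.path.startswith("/xrpc/app.bsky.feed.getFeedSkeleton"):
            body = json.dumps(feed_skeleton(["synth", "experimental"])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To actually serve the feed:
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

The point of the "agent builds the feed" pitch is that the only creative part of this service is the filtering logic inside `feed_skeleton`, which is exactly the piece a coding agent can generate from a natural-language description.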

Graber added that Attie is a separate app from Bluesky, and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky are built on the same framework, there could be some cross-app interoperability between the two, or with any other app built on the AT Protocol.

"Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be."

The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms...

An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone...

The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social.

AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.

United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com) 55

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For users who are underage or have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps and websites but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are off-limits for under-18 users, but the restrictions will likely track UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead, in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub operator Aylo had advocated for device-level restrictions in the UK as well, and even sent letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

AI

Disney Ends $1B OpenAI Investment After Sora's Surprise Closure. What's Next? (deadline.com) 27

Just six days ago — and 30 minutes after a Disney-OpenAI meeting about a project with Sora — Disney's team was "blindsided" with the news Sora was being discontinued, a person familiar with the matter told Reuters, describing OpenAI's move as "a big rug-pull."

Even some Sora employees were surprised by the cancellation. It was just 14 weeks ago Disney announced a $1 billion investment in OpenAI's AI-powered video generation tool — plus a three-year licensing deal. But that deal "never closed," Reuters adds, citing two other people familiar with the matter, "and no money changed hands." (Although the two sides are still "discussing if there is another way they can partner or invest with one another, one of the people familiar with the matter said.")

But Variety wonders if the end of the Sora deal is "a blessing in disguise" for Disney: Before Disney's officially sanctioned AI-generated versions of Mickey Mouse, Darth Vader, Baby Yoda, Deadpool and more debuted in OpenAI's Sora, the AI company abruptly pulled the plug on the video app...

[M]any aficionados of Disney's franchises were not, in fact, excited about what Sora's video generator might do to the likes of the Avengers superheroes or the characters from Frozen or Moana. And despite [departed Disney CEO Bob] Iger's bullishness on the Sora deal, other Disney execs were said to be concerned that going into business with OpenAI would expose the Magic Kingdom's crown jewels to the risk of being turned into so much AI slop, according to industry sources. Hollywood unions — for which AI adoption has been a hot-button issue — weren't thrilled about the Disney-Sora deal either. "Disney's announcement with OpenAI appears to sanction its theft of our work and cedes the value of what we create to a tech company that has built its business off our backs," the Writers Guild of America said in December... [S]ources say, Disney was encountering roadblocks in getting the OK from voice actors for the Sora pact...

At least publicly, Disney says it is still looking at ways it can tap into the AI ecosystem. The company, in a statement Tuesday, said, "we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." But at this point, Disney may decide that "meeting fans where they are" means keeping its beloved and world-famous characters away from the AI machinery.

Or, as Gizmodo puts it, "Disney Says It Will Find Ways to Peddle Slop Elsewhere After Pulling Out of OpenAI Deal."

But Deadline sees the deal's collapse as a lost opportunity: The OpenAI partnership was a template on which to build, potentially allowing for other deals that end the exploitation of human creativity by unscrupulous AI models. It was also the kind of partnership that was palatable for the Human Artistry Campaign and Creators Coalition on AI, lobby groups that have been critical of tech business models and command support from A-listers including Scarlett Johansson, Cate Blanchett and Joseph Gordon-Levitt.

Dr. Moiya McTier, an advisor to the Human Artistry Campaign, puts it this way: Part of the problem is getting "artsy people and the techie people to talk." OpenAI sinking Sora will not make these discussions easier. It's a move that starkly exposes Hollywood's vulnerability to the capriciousness of big tech.

IBM

IBM Quantum Computer Simulates Real Magnetic Materials and Matches Lab Data (nerds.xyz) 15

"IBM says its quantum computer can now simulate real magnetic materials and match actual lab experiment results," writes Slashdot reader BrianFagioli, "which is something people have been waiting years to see." Instead of just theoretical output, the system reproduced neutron scattering data from a known material, meaning it lines up with real world physics. It still relies on a mix of quantum and classical computing and this is a narrow use case for now, but it is one of the first times quantum hardware has produced results that scientists can directly validate against experiments, which makes it a lot more interesting than the usual hype.

Classical computers "are not great at modeling quantum systems," according to this article at Nerds.xyz. "The math gets messy fast, and scientists end up relying on approximations... Quantum computers are supposed to solve that problem..." If this direction continues, it could start to matter in areas like superconductors, battery tech, and even drug development. Those are the kinds of problems where better simulations can actually lead to better outcomes, not just nicer charts in a research paper.

"I am extremely excited about what this means for science," said study co-author Allen Scheie from the Los Alamos National Laboratory. In an announcement from IBM, Scheie calls this "the most impressive match I've seen between experimental data and qubit simulation, and it definitely raises the bar for what can be expected from quantum computers."
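To see why classical computers struggle here, note that an n-spin quantum magnet lives in a 2^n-dimensional state space, so even a dense description of a modest Heisenberg spin chain (a textbook model behind many neutron scattering experiments) outgrows memory quickly. The snippet below is not IBM's method; it is just a small NumPy sketch of the brute-force classical approach, building the chain's Hamiltonian from Pauli matrices via Kronecker products:

```python
import numpy as np

# Pauli matrices for a single spin-1/2 site
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at position i of an n-site chain."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)  # tensor product across all sites
    return out

def heisenberg(n, J=1.0):
    """Dense Hamiltonian of an open spin-1/2 Heisenberg chain (Pauli form)."""
    dim = 2 ** n              # state space doubles with every added spin
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):    # nearest-neighbor couplings
        for s in (sx, sy, sz):
            H += J * site_op(s, i, n) @ site_op(s, i + 1, n)
    return H

H = heisenberg(8)                  # 8 spins already give a 256 x 256 matrix
evals = np.linalg.eigvalsh(H)      # exact diagonalization
print(H.shape, float(evals[0]))    # dimension grows as 2^n
```

Exact diagonalization like this tops out around 20-something spins on ordinary hardware, which is why simulating a real material's spin dynamics well enough to match neutron scattering data is treated as a meaningful benchmark for quantum processors.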

PlayStation (Games)

Sony is Raising PlayStation 5 Prices Again, Between $100 and $150 (arstechnica.com) 39

Memory and storage shortages and price hikes have "steadily rippled outward across all kinds of consumer tech," reports Ars Technica.

"Today's bad news comes from Sony, which is raising prices for PlayStation 5 consoles in the US just eight months after their last price hike." The drive-less Digital Edition will increase from $500 to $600; the base PS5 with an optical drive will increase from $550 to $650; and the PS5 Pro is going up from $750 to a whopping $900. At the beginning of 2025, these consoles cost $450, $500, and $700, respectively...

RAM and flash memory chips are in short supply primarily because of demand from AI data centers — memory manufacturers have shifted more production toward making the kind of memory found in AI accelerators like Nvidia's H200, leaving less for the consumer market. And the situation is unlikely to improve any time soon, barring a major shift in demand from the AI industry.

Social Networks

Austria Plans Social Media Ban For Under-14s (bbc.com) 11

Austria plans to restrict under-14s from using social media platforms over concerns about addictive algorithms and harmful content. The government says draft legislation should be ready by the end of June, though details around enforcement and age verification have yet to be finalized. The BBC reports: Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill." He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently than alcohol or tobacco: "There must be clear rules in the digital world too." In future, said Babler, children under 14 would be protected from algorithms that were addictive. "Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space. Yesterday, juries in two separate cases found social media giants liable for harming young people's mental health. The verdicts are being hailed as social media's Big Tobacco moment.

Further reading: California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media

AI

OpenAI Abandons ChatGPT's Erotic Mode (techcrunch.com) 79

OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports: The proposed "adult mode," which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI's own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a "sexy suicide coach," The Wall Street Journal previously reported.

Amidst all of the criticism, the release of the feature was delayed multiple times. FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had "nothing further to add."

Government

Senators Demand to Know How Much Energy Data Centers Use (wired.com) 49

Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators urge the agency to publicly collect "comprehensive, annual energy-use disclosures" on data centers, saying it's "essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families." Wired reports: In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA "is going to be an essential player in providing objective data and analysis to policymakers" with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover "energy sources, electricity consumption, site characteristics, server metrics, and cooling systems."

While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with broader data collection, such as whether the energy surveys will be mandatory and whether the EIA will collect information on behind-the-meter power. This information will be especially crucial, the senators say, to make sure that the big tech companies that signed an agreement at the White House earlier this month, pledging that consumers won't bear the costs of data center electricity use, stick to their promises. "Without this data, policymakers, utility companies, and local communities are operating in the dark," the senators write.

The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. The provision is broad enough, says energy law expert Ari Peskoe, that it could absolutely be interpreted to encompass data centers.

Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would "enact a reasonable pause to the development of AI to ensure the safety of humanity." It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.

Mozilla

Mozilla and Mila Team Up On Open Source AI Push 31

BrianFagioli writes: Mozilla just teamed up with Mila, the Quebec Artificial Intelligence Institute, to push open source AI -- and it feels like a direct response to Big Tech tightening its grip on the space. Instead of relying on closed models, the goal here is to build "sovereign AI" that's more transparent, privacy-focused, and actually under the control of developers and even governments. They're starting with things like private memory for AI agents, which sounds niche but matters if you care about where your data goes. Big question is whether open source can realistically keep up with the billions being poured into proprietary AI, but at least someone's trying to give folks an alternative. "Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn't just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn't a compromise on ambition. It's the higher bar," said Valerie Pisano, president and CEO of Mila.

Robotics

Melania Trump Welcomes Humanoid Robot At White House Summit 88

Longtime Slashdot reader theodp writes: In Melania and the Robot, the New York Times reports on First Lady Melania Trump's inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:

"On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump's involvement of a humanoid robot in education policy was a first."

"Figure 3 delivered brief remarks and delivered salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady's branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name "MELANIA" was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. 'The future of A.I. is personified,' she told her audience. 'It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.' She invited her guests to envision a future in which a robot philosopher educated children."

Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 112

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what further punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.

The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.

Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse material (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."

Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."

Android

Google's Android Automotive Is Moving From the Dashboard To the 'Brain' of the Car (theverge.com) 123

Google is expanding Android Automotive from the infotainment screen into the broader non-safety "brain" of software-defined vehicles. With its new Android Automotive OS for Software-Defined Vehicles, the in-car experience will feel "much more cohesive and the latest features will reach your driveway faster," Matt Crowley, Android Automotive's group product manager, writes in a blog post. "From a truly integrated voice experience to proactive maintenance reminders, your car will become a true extension of your digital life," Crowley adds. The Verge reports: With its new software, Google is promising faster over-the-air software updates, better voice assistants, and more proactive vehicle maintenance alerts. Non-driving functions like climate control, lighting, and seating adjustment would fall under Android's control. And the system would move beyond basic infotainment to create a unified ecosystem for features like remote cabin conditioning, digital key management, and personalized driver profiles.

For automakers, the new system promises lower software development costs and an opportunity to focus on what matters most to them: branding. By providing the "foundational code and a common language for their software," Google says automakers will be free to design cool experiences for their customers. Google says it's already working with companies like Renault Group and Qualcomm to bring the new software-defined vehicle version of Android Automotive to more cars. A variety of automakers already use regular Android Automotive, including Volvo, Polestar, General Motors, Nissan, and Honda.

AI

OpenAI Discontinues Sora Video Platform App 45

OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. "The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year," reports the Wall Street Journal.

CEO Sam Altman announced the changes to staff on Tuesday. "We're saying goodbye to Sora," the Sora Team said in a post on X. "To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work."

Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a single desktop "superapp." "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts," said Fidji Simo, OpenAI's CEO of Applications. "That fragmentation has been slowing us down and making it harder to hit the quality bar we want." This could be behind the decision to kill Sora, as the company redirects its resources and top talent toward productivity tools that benefit both enterprises and individual users.

AI

Anthropic's Claude Can Now Use Your Computer To Finish Tasks 42

Anthropic is testing a new Claude feature that lets users send a request from their phone and have the AI carry it out directly on their computer, such as opening apps, using a browser, or editing files. The move follows the viral spread earlier this year of OpenClaw, which has gained cult popularity among devs for its ability to run local, 24/7 personal workflows. CNBC reports: Users can now message Claude a task from a phone, and the AI agent will then complete that task, Anthropic announced Monday. After being prompted, Claude can open apps on your computer, navigate a web browser and fill in spreadsheets, Anthropic said. One prompt Anthropic demonstrated in a video posted Monday is a user running late for a meeting. The user asks Claude to export a pitch deck as a PDF file and attach it to a meeting invite. The video shows Claude carrying out the task. [...]

Anthropic cautioned that computer use "is still early compared to Claude's ability to code or interact with text." "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving," Anthropic warned. The company added that it has built the computer use capability "with safeguards that minimize risk," and that Claude will always request permission before accessing new apps. Users can also use Dispatch, a feature Anthropic released last week in Claude Cowork, which lets them hold a continuous conversation with Claude from a phone or desktop and assign the agent tasks.

Slashdot Top Deals