Education

Code.org Taps No-Code Tableau To Make the Case For K-12 Programming Courses 62

theodp writes: "Computer science education is a necessity for all students," argues tech-backed nonprofit Code.org in its newly-published 2024 State of Computer Science Education (Understanding Our National Imperative) report. "Students of all identities and chosen career paths need quality computer science education to become informed citizens and confident creators of content and digital tools."

In the 200-page report, Code.org pays special attention to participation in "foundational computer science courses" in high school. "Across the country, 60% of public high schools offer at least one foundational computer science course," laments Code.org (curiously promoting a metric that ignores school size, but which was nonetheless embraced by Education Week and others).

"A course that teaches foundational computer science includes a minimum amount of time applying learned concepts through programming (at least 20 hours of programming/coding for grades 9-12 high schools)," Code.org explains in a separate 13-page Defining Foundational Computer Science document. Interestingly, Code.org argues that Data and Informatics courses -- in which "students may use Oracle WebDB, SQL, PL/SQL, SPSS, and SAS" to learn "the K-12 CS Framework concepts about data and analytics" -- do not count, because "the course content focuses on querying using a scripting language rather than creating programs [the IEEE's Top Programming Languages 2024 begs to differ]." Code.org similarly dissed the use of the Wolfram Language for broad educational use back in 2016.

With its insistence on the importance of kids taking Code.org-defined 'programming' courses in K-12 to promote computational thinking, it's probably no surprise to see that the data behind the 2024 State of Computer Science Education report was prepared using Python (the IEEE's top programming language) and presented to the public in a Jupyter notebook. Just kidding. Ironically, the data behind the 2024 State of Computer Science Education analysis is prepared and presented by Code.org in a no-code Tableau workbook.
Android

Huawei Makes Divorce From Android Official With HarmonyOS NEXT Launch (theregister.com) 67

The Register's Laura Dobberstein reports: Huawei formally launched its home-brewed operating system, HarmonyOS NEXT, on Wednesday, marking its official separation from the Android ecosystem. Huawei declared it released and "officially started public beta testing" of the OS for some of its smartphones and tablets that run its own Kirin and Kunpeng chips.

Unlike previous iterations of HarmonyOS, HarmonyOS NEXT no longer supports Android apps. Huawei maintains top Chinese outfits aren't deterred by that. It cited Meituan, Douyin, Taobao, Xiaohongshu, Alipay, and JD.com as among those who have developed native apps for the OS. In case you're not familiar, they're China's top shopping, payment, and social media apps.

Huawei also claimed that, at the time of its announcement, over 15,000 HarmonyOS native applications and meta-services had been launched. That's a nice number, but well short of the millions of apps found on the Google Play Store and Apple's App Store. The Chinese tech player also revealed that the operating system has 110 million lines of code and claimed it improves the overall performance of mobile devices running it by 30 percent. It also purportedly increases battery life by 56 minutes and leaves an average of 1.5GB of memory for purposes other than running the OS.

AI

Anthropic's AI Model Gains Computer Control in New Upgrade (anthropic.com) 8

Anthropic has released an upgraded version of its AI model Claude 3.5 Sonnet and announced a new model, Claude 3.5 Haiku, alongside a public beta feature enabling AI to operate computers like humans. The enhanced Sonnet model improves on its predecessor's coding capabilities, scoring 49% on the SWE-bench Verified benchmark and surpassing OpenAI and other competitors. The Haiku model matches the performance of Anthropic's previous flagship Claude 3 Opus while maintaining lower costs and faster speeds.

The computer use feature, available through Anthropic's API and cloud partners, allows Claude to perform tasks like navigating web browsers, filling forms, and manipulating data. Early adopters include Asana, DoorDash, and Replit, though Anthropic -- backed by investors including Google and Amazon -- acknowledges the feature remains experimental and error-prone. Claude 3.5 Haiku will launch later this month, initially supporting text-only inputs with image capabilities to follow.
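For developers curious what the new feature looks like from code, here is a minimal sketch of requesting the computer-use beta through Anthropic's Python SDK. The model name, tool type, and beta flag follow Anthropic's October 2024 documentation, but treat them as assumptions and check the current docs before relying on them.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        betas=["computer-use-2024-10-22"],
        tools=[{
            "type": "computer_20241022",   # virtual display/keyboard/mouse tool
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        messages=[{"role": "user",
                   "content": "Open the expenses spreadsheet and total the October column."}],
    )

    # Claude responds with tool_use blocks (take a screenshot, click here, type this);
    # the calling application executes them and loops the results back to the model.
    for block in response.content:
        print(block.type)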
United States

Democrats Press For Criminal Charges Against Tax Prep Firms Over Data Sharing (theverge.com) 62

Democratic senators Elizabeth Warren, Ron Wyden, Richard Blumenthal and Representative Katie Porter are demanding the Justice Department prosecute tax preparation companies for allegedly sharing sensitive taxpayer data with Meta and Google through tracking pixels. The lawmakers' call follows a Treasury Inspector General audit confirming their earlier investigation into TaxSlayer, H&R Block, and Tax Act. The audit found multiple companies failed to properly obtain consent before sharing tax return information via advertising tools. Violations could result in one-year prison terms and $1,000 fines per incident, potentially reaching billions in penalties given the scale of affected users.

In a letter shared with The Verge, the lawmakers wrote: "Accountability for these tax preparation companies -- who disclosed millions of taxpayers' tax return data, meaning they could potentially face billions of dollars in criminal liability -- is essential for protecting the rule of law and the privacy of taxpayers. We urge you to follow the facts and the conclusions of TIGTA and the IRS and to take appropriate action against any companies or individuals that have violated the law."
AI

Can We Turn Off AI Tools From Google, Microsoft, Apple, and Meta? Sometimes... (seattletimes.com) 80

"Who asked for any of this in the first place?" wonders a New York Times consumer-tech writer. (Alternate URL here.) "Judging from the feedback I get from readers, lots of people outside the tech industry remain uninterested in AI — and are increasingly frustrated with how difficult it has become to ignore." The companies rely on user activity to train and improve their AI systems, so they are testing this tech inside products we use every day. Typing a question such as "Is Jay-Z left-handed?" in Google will produce an AI-generated summary of the answer on top of the search results. And whenever you use the search tool inside Instagram, you may now be interacting with Meta's chatbot, Meta AI. In addition, when Apple's suite of AI tools, Apple Intelligence, arrives on iPhones and other Apple products through software updates this month, the tech will appear inside the buttons we use to edit text and photos.

The proliferation of AI in consumer technology has significant implications for our data privacy, because companies are interested in stitching together and analyzing our digital activities, including details inside our photos, messages and web searches, to improve AI systems. For users, the tools can simply be an annoyance when they don't work well. "There's a genuine distrust in this stuff, but other than that, it's a design problem," said Thorin Klosowski, a privacy and security analyst at the Electronic Frontier Foundation, a digital rights nonprofit, and a former editor at Wirecutter, the reviews site owned by The New York Times. "It's just ugly and in the way."

It helps to know how to opt out. After I contacted Microsoft, Meta, Apple and Google, they offered steps to turn off their AI tools or data collection, where possible. I'll walk you through the steps.

The article suggests logged-in Google users can toggle settings at myactivity.google.com. (Some browsers also have extensions that force Google's search results to stop inserting an AI summary at the top.) And you can also tell Edge to remove Copilot from its sidebar at edge://settings.

But "There is no way for users to turn off Meta AI, Meta said. Only in regions with stronger data protection laws, including the EU and Britain, can people deny Meta access to their personal information to build and train Meta's AI." On Instagram, for instance, people living in those places can click on "settings," then "about" and "privacy policy," which will lead to opt-out instructions. Everyone else, including users in the United States, can visit the Help Center on Facebook to ask Meta only to delete data used by third parties to develop its AI.
By comparison, when Apple releases new AI services this month, users will have to opt in, according to the article. "If you change your mind and no longer want to use Apple Intelligence, you can go back into the settings and toggle the Apple Intelligence switch off, which makes the tools go away."
United States

Could Geothermal Power Revolutionize US Energy Consumption? (msn.com) 95

That massive geothermal energy project in Utah gets a closer look from the Washington Post, which calls it "a significant advance for a climate-friendly technology that is gaining momentum in the United States." Once fully operational, the project could generate up to 2 gigawatts of electricity — enough to power more than 2 million homes. In addition, the Bureau of Land Management proposed Thursday to speed up the permitting process for geothermal projects on public lands across the country. Earlier this month, the agency also hosted the biggest lease sale for geothermal developers in more than 15 years...

White House national climate adviser Ali Zaidi said in an interview Thursday, "Enhanced geothermal technology has the opportunity to deliver something in the range of 65 million homes' worth of clean power — power that can be generated without putting any pollution in the sky. So we see it as a really meaningful contributor to our technology tool kit...."

The developments Thursday come as tech companies race to find new sources of zero-emission power for data centers that can use as much energy as entire cities. With major backing from Google parent Alphabet, Fervo recently got its first project up and running in the northern Nevada desert... The advanced geothermal technology that Fervo is trying to scale up is an attractive option for tech firms. Enhanced geothermal plants do not pose all the safety concerns that come with nuclear power, but they have the potential to provide the round-the-clock energy that data centers need. The challenge Fervo faces is whether it can bring this technology online quickly enough.

Fervo (a seven-year-old start-up) was co-founded by Tim Latimer, who previously worked as a drilling engineer, according to the article. But "Early in my career I got passionate about climate change. I started looking at where could a drilling engineer from the oil and gas industry make a difference," Latimer said during a Washington Post Live event in September. "And I realized that geothermal had been so overlooked ... even though the primary technical challenge to making geothermal work is dropping drilling costs."
AMD

Spectre Flaws Still Haunt Intel, AMD as Researchers Find Fresh Attack Method (theregister.com) 33

"Six years after the Spectre transient execution processor design flaws were disclosed, efforts to patch the problem continue to fall short," writes the Register: Johannes Wikner and Kaveh Razavi of Swiss University ETH Zurich on Friday published details about a cross-process Spectre attack that derandomizes Address Space Layout Randomization and leaks the hash of the root password from the Set User ID (suid) process on recent Intel processors. The researchers claim they successfully conducted such an attack.... [Read their upcomong paper here.] The indirect branch predictor barrier (IBPB) was intended as a defense against Spectre v2 (CVE-2017-5715) attacks on x86 Intel and AMD chips. IBPB is designed to prevent forwarding of previously learned indirect branch target predictions for speculative execution. Evidently, the barrier wasn't implemented properly.

"We found a microcode bug in the recent Intel microarchitectures — like Golden Cove and Raptor Cove, found in the 12th, 13th and 14th generations of Intel Core processors, and the 5th and 6th generations of Xeon processors — which retains branch predictions such that they may still be used after IBPB should have invalidated them," explained Wikner. "Such post-barrier speculation allows an attacker to bypass security boundaries imposed by process contexts and virtual machines." Wikner and Razavi also managed to leak arbitrary kernel memory from an unprivileged process on AMD silicon built with its Zen 2 architecture.

Videos of the Intel and AMD attacks have been posted, with all the cinematic dynamism one might expect from command line interaction.

Intel chips — including Intel Core 12th, 13th, and 14th generation and Xeon 5th and 6th — may be vulnerable. On AMD Zen 1(+) and Zen 2 hardware, the issue potentially affects Linux users. The relevant details were disclosed in June 2024, but Intel and AMD found the problem independently. Intel fixed the issue in a microcode patch (INTEL-SA-00982) released in March 2024. Nonetheless, some Intel hardware may not have received that microcode update. In their technical summary, Wikner and Razavi observe: "This microcode update was, however, not available in Ubuntu repositories at the time of writing this paper." It appears Ubuntu has subsequently dealt with the issue.
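If you're wondering where your own machines stand, the Linux kernel reports its view of the Spectre mitigations through sysfs; here is a small sketch that prints them (the paths are standard on recent kernels, but interpreting the strings is left to you and your vendor's advisories).

    from pathlib import Path

    # What the running kernel reports about transient-execution mitigations.
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")
    for name in ("spectre_v1", "spectre_v2", "retbleed"):
        f = VULN_DIR / name
        status = f.read_text().strip() if f.exists() else "not reported by this kernel"
        print(f"{name}: {status}")

    # Whether the installed microcode advertises IBPB at all shows up in the CPU flags.
    flags = next((line for line in Path("/proc/cpuinfo").read_text().splitlines()
                  if line.startswith("flags")), "")
    print("ibpb advertised:", "ibpb" in flags.split())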

AMD issued its own advisory in November 2022, in security bulletin AMD-SB-1040. The firm notes that hypervisor and/or operating system vendors have work to do on their own mitigations. "Because AMD's issue was previously known and tracked under AMD-SB-1040, AMD considers the issue a software bug," the researchers explain. "We are currently working with the Linux kernel maintainers to merge our proposed software patch."

BleepingComputer adds that the ETH Zurich team "is working with Linux kernel maintainers to develop a patch for AMD processors, which will be available here when ready."
AI

Cheap AI 'Video Scraping' Can Now Extract Data From Any Screen Recording (arstechnica.com) 25

An anonymous reader quotes a report from Ars Technica: Recently, AI researcher Simon Willison wanted to add up his charges from using a cloud service, but the payment values and dates he needed were scattered among a dozen separate emails. Inputting them manually would have been tedious, so he turned to a technique he calls "video scraping," which involves feeding a screen recording video into an AI model, similar to ChatGPT, for data extraction purposes. What he discovered seems simple on its surface, but the quality of the result has deeper implications for the future of AI assistants, which may soon be able to see and interact with what we're doing on our computer screens.

"The other day I found myself needing to add up some numeric values that were scattered across twelve different emails," Willison wrote in a detailed post on his blog. He recorded a 35-second video scrolling through the relevant emails, then fed that video into Google's AI Studio tool, which allows people to experiment with several versions of Google's Gemini 1.5 Pro and Gemini 1.5 Flash AI models. Willison then asked Gemini to pull the price data from the video and arrange it into a special data format called JSON (JavaScript Object Notation) that included dates and dollar amounts. The AI model successfully extracted the data, which Willison then formatted as CSV (comma-separated values) table for spreadsheet use. After double-checking for errors as part of his experiment, the accuracy of the results -- and what the video analysis cost to run -- surprised him.

"The cost [of running the video model] is so low that I had to re-run my calculations three times to make sure I hadn't made a mistake," he wrote. Willison says the entire video analysis process ostensibly cost less than one-tenth of a cent, using just 11,018 tokens on the Gemini 1.5 Flash 002 model. In the end, he actually paid nothing because Google AI Studio is currently free for some types of use.

AI

OpenAI's Lead Over Other AI Companies Has Largely Vanished, 'State of AI' Report Finds (yahoo.com) 61

An anonymous reader shares a report: Every year for the past seven, Nathan Benaich, the founder and solo general partner at the early-stage AI investment firm Air Street Capital, has produced a magisterial "State of AI" report. Benaich and his collaborators marshal an impressive array of data to provide a great snapshot of the technology's evolving capabilities, the landscape of companies developing it, a survey of how AI is being deployed, and a critical examination of the challenges still facing the field.

One of the big takeaways from this year's report, which was published late last week, is that OpenAI's lead over other AI labs has largely eroded. Anthropic's Claude 3.5 Sonnet, Google's Gemini 1.5, X's Grok 2, and even Meta's open-source Llama 3.1 405B model have equaled, or narrowly surpassed on some benchmarks, OpenAI's GPT-4o. But, on the other hand, OpenAI still retains an edge for the moment on reasoning tasks with the release of its o1 "Strawberry" model -- which Air Street's report rightly characterized as a weird mix of incredibly strong logical abilities for some tasks, and surprisingly weak ones for others.

Another big takeaway, Benaich told me, is the extent to which the cost of using a trained AI model -- an activity known as "inference" -- is falling rapidly. There are several reasons for this. One is linked to that first big takeaway: With models less differentiated from one another on capabilities and performance, companies are forced to compete on price. Another reason is that engineers for companies such as OpenAI and Anthropic -- and their hyperscaler partners Microsoft and AWS, respectively -- are discovering ways to optimize how the largest models run on big GPU clusters. The cost of outputs from OpenAI's GPT-4o today is 100 times less per token (which is about equivalent to 1.5 words) than it was for GPT-4 when that model debuted in March 2023. Google's Gemini 1.5 Pro now costs 76% less per output token than it did when that model was launched in February 2024.
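The arithmetic behind those claims is straightforward; here is a back-of-the-envelope sketch with placeholder prices (not quoted rates) just to show what a 100x drop means for a typical workload.

    # Placeholder prices, not quoted rates: the point is the ratio, not the dollar figures.
    old_price_per_mtok = 60.00                     # hypothetical output price at GPT-4's debut
    new_price_per_mtok = old_price_per_mtok / 100  # the report's roughly 100x reduction

    output_tokens = 2_000_000                      # e.g. a month of generated text for a small app
    old_cost = output_tokens / 1e6 * old_price_per_mtok
    new_cost = output_tokens / 1e6 * new_price_per_mtok
    print(f"then: ${old_cost:.2f}   now: ${new_cost:.2f}")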

IT

FIDO Alliance Working on Making Passkeys Portable Across Platforms (macrumors.com) 31

The FIDO Alliance is developing new specifications to enable secure transfer of passkeys between different password managers and platforms. Announced this week, the initiative is the result of collaboration among members of the FIDO Alliance's Credential Provider Special Interest Group, including Apple, Google, Microsoft, 1Password, Bitwarden, Dashlane, and others. From a report: Passkeys are an industry standard developed by the FIDO Alliance and the World Wide Web Consortium, and were integrated into Apple's ecosystem with iOS 16, iPadOS 16.1, and macOS Ventura. They offer a more secure and convenient alternative to traditional passwords, allowing users to sign in to apps and websites in the same way they unlock their devices: With a fingerprint, a face scan, or a passcode.

Passkeys are also resistant to online attacks like phishing, making them more secure than things like SMS one-time codes. The draft specifications, called Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF), will standardize the secure transfer of credentials across different providers. This addresses a current limitation where passkeys are often tied to specific ecosystems or password managers.
Further reading: Passwords Have Problems, But Passkeys Have More.
Republicans

Trump Says Tim Cook Called Him To Complain About the EU (theverge.com) 278

An anonymous reader quotes a report from The Verge: Donald Trump said Apple CEO Tim Cook called him to discuss the billions of dollars that Apple has been fined in the European Union. Trump made the statement during his appearance on the PBD Podcast -- and said that he won't let the EU "take advantage" of US companies like Apple if reelected. "Two hours ago, three hours ago, he [Cook] called me," Trump said. "He said the European Union has just fined us $15 billion... Then on top of that, they got fined by the European Union another $2 billion." In March, the EU fined Apple around $2 billion after finding that Apple used its dominance to restrict music streaming apps from telling customers about cheaper subscription deals outside the App Store. The EU later won its fight to make Apple pay $14.4 billion in unpaid taxes.

"He [Cook] said something that was interesting," Trump said. "He said they're using that to run their enterprise, meaning Europe is their enterprise. "I said, 'That's a lot... But Tim, I got to get elected first, but I'm not going to let them take advantage of our companies -- that won't, you know, be happening.'"
Trump has talked to several Big Tech executives over the past several months. "During an interview this week, Trump said he spoke with Google CEO Sundar Pichai to complain about all the 'bad stories' the search engine shows about him," notes The Verge. "Elon Musk recently spoke at a Trump rally in Pennsylvania, while Meta CEO Mark Zuckerberg called Trump over the summer 'a few times,' according to the former president."
AI

Google's NotebookLM Now Lets You Customize Its AI Podcasts (wired.com) 9

Google's NotebookLM app has been updated to let you generate custom podcasts from almost any source material. The AI software is also dropping the "experimental" tag. Wired reports: To make an AI podcast using NotebookLM, open up the Google Labs website and start a New Notebook. Then, add any source documents you would like to be used for the audio output. These can be anything from files on your computer to YouTube links. Next, when you click on the Notebook guide, you'll now see the option to generate a deep dive as well as the option to customize it first. Choose Customize and add your prompt for how you'd like the AI podcast to come out. The software suggests that you consider what sections of the sources you'd like highlighted, larger topics you want further explored, or different intended audiences you want the message to reach.

One tip [Raiza Martin, who leads the NotebookLM team inside of Google Labs] shares for trying out the new feature is to generate the Audio Overview without changes, and while you're listening to this first iteration, write down any burning questions you have or topics you wish it expanded on. Afterwards, use these notes as a launching pad to create your prompts for NotebookLM and regenerate that AI podcast with your interests in mind. [...] Yes, Google's NotebookLM might flatten the specifics of a big document or get some details mixed up, but being able to generate more personalized podcasts from disparate sources truly does feel like a transformation -- and luckily nothing like turning into a giant bug.
You can view some examples of AI-generated podcasts here.
Security

Fake Google Meet Conference Errors Push Infostealing Malware (bleepingcomputer.com) 6

An anonymous reader quotes a report from BleepingComputer: A new ClickFix campaign is luring users to fraudulent Google Meet conference pages showing fake connectivity errors that deliver info-stealing malware for Windows and macOS operating systems. ClickFix is a social-engineering tactic that emerged in May, first reported by cybersecurity company Proofpoint, from a threat actor (TA571) that used messages impersonating errors for Google Chrome, Microsoft Word, and OneDrive. The errors prompted the victim to copy a piece of PowerShell code to the clipboard and run it in Windows Command Prompt, supposedly to fix the issues. Victims would thus infect systems with various malware such as DarkGate, Matanbuchus, NetSupport, Amadey Loader, XMRig, a clipboard hijacker, and Lumma Stealer.
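As a purely illustrative defensive sketch (not something from Proofpoint's or Sekoia's reports), a script can at least flag clipboard contents that look like the PowerShell one-liners these pages ask victims to paste; the pattern list below is an assumption, not a complete indicator set.

    import re
    import tkinter

    SUSPICIOUS = [r"powershell", r"\biex\b", r"invoke-expression", r"mshta",
                  r"-enc(odedcommand)?", r"downloadstring", r"bitsadmin"]

    root = tkinter.Tk()
    root.withdraw()
    try:
        text = root.clipboard_get()   # whatever the user last copied
    except tkinter.TclError:
        text = ""

    hits = [p for p in SUSPICIOUS if re.search(p, text, re.IGNORECASE)]
    if hits:
        print("Clipboard looks like a command -- do not paste it into a terminal:", hits)
    else:
        print("Nothing obviously suspicious on the clipboard.")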

In July, McAfee reported that the ClickFix campaigns were becoming more frequent, especially in the United States and Japan. A new report from Sekoia, a SaaS cybersecurity provider, notes that ClickFix campaigns have evolved significantly and now use a Google Meet lure, phishing emails targeting transport and logistics firms, fake Facebook pages, and deceptive GitHub issues. According to the French cybersecurity company, some of the more recent campaigns are conducted by two threat groups, the Slavic Nation Empire (SNE) and Scamquerteo, considered to be sub-teams of the cryptocurrency scam gangs Marko Polo and CryptoLove.

Google

Google Shifts Gemini App Team To DeepMind (reuters.com) 5

In a memo from CEO Sundar Pichai, Google said it is moving the team behind the Gemini app to its AI research lab DeepMind. The shift "will improve feedback loops, enable fast deployment of our new models in the Gemini app," said Pichai. Reuters reports: Gemini is Google's most advanced AI technology, developed by DeepMind. The Gemini app is the direct consumer interface to the latest Gemini models. The Gemini app team, led by Sissie Hsiao, will join Google DeepMind under the leadership of its CEO Demis Hassabis.

Google also announced that Prabhakar Raghavan, who has led the company's products including search, ads, and commerce, will become chief technologist and work closely with Pichai. Raghavan's role as lead of the Knowledge and Information team will be taken up by Nick Fox, who has worked closely on Google's AI product roadmap.

Iphone

Apple's New Feature Lets Brands Put Their Stamp On Emails, Calls To Your iPhone 27

Apple is enhancing its Business Connect tool, allowing companies to customize how they appear in emails, phone calls, and payment interfaces on iPhones. The Verge reports: Each registered business can confirm its info is accurate and add additional details like photos or special offers. Collecting verified, up-to-date business information could be useful for Apple if it ever launches its own search engine or surfaces results inside Apple Intelligence features, instead of sending users to outside sources like Google, Yelp, or Meta. Branded Mail is a feature businesses can sign up for today before it starts rolling out to users later this year, potentially making emails easier to identify in a sea of unread messages.

Additionally, if companies opt into Business Caller ID, Apple will display their name, logo, and department on an iPhone's inbound call screen. This feature should come in handy when you're trying to figure out whether the random number that's calling you is spam, or if it's a legitimate business. It will start rolling out next year. A smaller update coming to Apple's Tap to Pay service will let companies show their logo when accepting payments instead of just displaying a category icon.
You can read more about it in Apple's press release.
Power

Petroleum Drilling Technology Is Now Making Carbon-Free Power (npr.org) 69

An anonymous reader quotes a report from NPR: There's a valley in rural southwest Utah that's become a hub for renewable energy. Dozens of tall white wind turbines whoosh up in the sky. A sea of solar panels glistens in the distance. But the new kid on the block is mostly hidden underground. From the surface, Fervo Energy's Cape Station looks more or less like an oil derrick, with a thin metal tower rising above the sagebrush steppe. But this $2 billion geothermal project, which broke ground last year, is not drilling for gas. It's drilling for underground heat that CEO Tim Latimer believes holds the key to generating carbon-free power -- lots of it.

"Just these three well pads alone will produce 100 megawatts of electricity. Around-the-clock, 24/7 electricity," he said. Latimer stood overlooking the project, which is currently under construction, on one of the drill rig's metal platforms 40 feet off the ground. This well is one of the 24 Fervo is in the process of completing at Cape Station to harness the Earth's natural heat and generate electricity. This isn't the type of geothermal that's already active in volcanic hot spots like Iceland or The Geysers project in California. It's called an enhanced geothermal system. Cold water goes down into a well that curves like a hockey stick as it reaches more than 13,000 feet underground. Then the water squeezes through cracks in 400-degree rock. The water heats up and returns to the surface through a second well that runs parallel to the first. That creates steam that turns turbines to produce electricity, and the water gets sent back underground in a closed loop.

This horizontal well technique has been pioneered at a $300 million federal research project called Utah FORGE located in this same valley, which has paved the way for private companies to take the tech and run with it. Recent innovations like better drill bits -- made with synthetic diamonds to eat through hard subterranean granite -- have helped Fervo drill its latest well in a quarter of the time that it took just a couple of years ago. That efficiency has meant an 80% drop in drilling costs, Latimer said. Last year, Fervo's pilot project in Nevada used similar techniques to begin sending electricity to a Google data center. And the company's early tests at Cape Station in Utah show the new project can produce power at triple the rate of its Nevada pilot. "This is now a proven tech. That's not a statement you could have made two or three years ago," Latimer said. "Now, it just comes down to how do we get more of these megawatts on the grid so we have a bigger impact?"
The report notes that Fervo signed a landmark deal with Southern California Edison, one of the country's largest electric utilities with 15 million customers. "It will send the first 70 megawatts of geothermal juice to the grid in 2026," reports NPR. "By the time the project is fully completed in 2028, this Utah plant will deliver 320 megawatts total -- enough to power 350,000 homes. The project's full output will be 400 megawatts."
Security

Sysadmins Rage Over Apple's 'Nightmarish' SSL/TLS Cert Lifespan Cuts (theregister.com) 293

The Register's Jessica Lyons reports: Apple wants to shorten SSL/TLS security certificates' lifespans, down from 398 days now to just 45 days by 2027, and sysadmins have some very strong feelings about this "nightmarish" plan. As one of the hundreds that took to Reddit to lament the proposal said: "This will suck. My least favorite vendor manages something like 10 websites for us, and we have to provide the certs manually every time. Between live and test this is gonna suck."

The Apple proposal, a draft ballot measure that will likely go up for a vote among Certification Authority Browser Forum (CA/B Forum) members in the coming months, was unveiled by the iThings maker during the Forum's fall meeting. If approved, it will affect all certificates trusted by Safari. It follows a similar push by Google, which plans to reduce the maximum validity period for these digital trust files to 90 days on Chrome.

... [W]hile it's generally agreed that shorter lifespans improve internet security overall -- longer certificate terms mean criminals have more time to exploit vulnerabilities and old website certificates -- the burden of managing these expired certs will fall squarely on the shoulders of systems administrators. [...] Even certificate provider Sectigo, which sponsored the Apple proposal, admitted that the shortened lifespans "will no doubt prove a headache for busy IT security teams, juggling with lots of certificates expiring at different times."
While automation is often touted as the solution to this problem, sysadmins were quick to point out that some SSL certs can't be automated. "This is somewhat nightmarish," said one sysadmin. "I have about 20 appliance like services that have no support for automation. Almost everything in my environment is automated to the extent that is practical. SSL renewal is the lone achilles heel that I have to deal with once every 365 days."
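For the appliances that can't be automated, renewal may stay manual, but expiry can at least be watched; here is a minimal standard-library sketch (the host list is a placeholder).

    import socket
    import ssl
    import time

    HOSTS = ["appliance1.example.internal", "appliance2.example.internal"]  # placeholders

    def days_until_expiry(host: str, port: int = 443) -> int:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    for host in HOSTS:
        try:
            print(f"{host}: {days_until_expiry(host)} days left")
        except OSError as exc:
            print(f"{host}: check failed ({exc})")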
Intel

Intel and AMD Form an x86 Ecosystem Advisory Group (phoronix.com) 55

Phoronix's Michael Larabel reports: Intel and AMD have jointly announced the creation of an x86 ecosystem advisory group to bring together the two companies as well as other industry leaders -- both companies and individuals such as Linux creator Linus Torvalds. Intel and AMD are forming this x86 ecosystem advisory group to help foster collaboration and innovations around the x86 (x86_64) ISA. [...] Besides Intel and AMD, other founding members include Broadcom, Dell, Google, HPE, HP Inc, Lenovo, Microsoft, Oracle, and Red Hat. Here are the group's "intended outcomes," as stated in the press release:
- Enhancing customer choice and compatibility across hardware and software, while accelerating their ability to benefit from new, cutting-edge features.
- Simplifying architectural guidelines to enhance software consistency and standardize interfaces across x86 product offerings from Intel and AMD.
- Enabling greater and more efficient integration of new capabilities into operating systems, frameworks and applications.

Chrome

Google's Chrome Browser Starts Disabling uBlock Origin (pcmag.com) 205

An anonymous reader shares a report: If you're a fan of uBlock Origin, don't be surprised if it stops functioning on Chrome. The Google-owned browser has started disabling the free ad blocker as part of the company's plan to phase out older "Manifest V2" extensions. On Tuesday, the developer of uBlock Origin, Raymond Hill, retweeted a screenshot from one user, showing the Chrome browser disabling the ad blocker. "These extensions are no longer supported. Chrome recommends that you remove them," the pop-up from the Chrome browser told the user. In response, Hill wrote: "The depreciation of uBO in the Chrome Web Store has started."
AI

National Archives Pushes Google Gemini AI on Employees 19

An anonymous reader shares a report: In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called "AI-mazing Tech-venture" in which Google's Gemini AI was presented as a tool archives employees could use to "enhance productivity." During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.

In December, NARA plans to launch a public-facing AI-powered chatbot called "Archie AI," 404 Media has learned. "The National Archives has big plans for AI," a NARA spokesperson told 404 Media. "It's going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future."

Employee chat logs given during the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice that is inherently concerned with accurately recording history. One worker who attended the presentation told 404 Media "I suspect they're going to introduce it to the workplace. I'm just a person who works there and hates AI bullshit." The presentation was given about a month after the National Archives banned employees from using ChatGPT because it said it posed an "unacceptable risk to NARA data security," and cautioned employees that they should "not rely on LLMs for factual information."
