Security

Maximum-Severity GitLab Flaw Allowing Account Hijacking Under Active Exploitation (arstechnica.com) 1

Dan Goodin reports via Ars Technica: A maximum severity vulnerability that allows hackers to hijack GitLab accounts with no user interaction required is now under active exploitation, federal government officials warned as data showed that thousands of users had yet to install a patch released in January. A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. The move was designed to permit resets when users didn't have access to the email address used to establish the account. In January, GitLab disclosed that the feature allowed attackers to send reset emails to accounts they controlled and from there click on the embedded link and take over the account.

While exploits required no user interaction, hijackings worked only against accounts that weren't configured to use multi-factor authentication. Even with MFA, accounts remained vulnerable to having their passwords reset, although attackers couldn't complete a takeover without the second factor. The vulnerability, tracked as CVE-2023-7028 and classified as an improper access control flaw, carries a severity rating of 10 out of a possible 10 and could pose a grave threat. GitLab software typically has access to multiple development environments belonging to users. With the ability to access them and surreptitiously introduce changes, attackers could sabotage projects or plant backdoors that could infect anyone using software built in the compromised environment. An example of a similar supply chain attack is the one that hit SolarWinds in 2020, infecting more than 18,000 of its customers. These sorts of attacks are powerful: by hacking a single, carefully selected target, attackers gain the means to infect thousands of downstream users, often without requiring them to take any action at all. According to Internet scans performed by the security organization Shadowserver, more than 2,100 IP addresses showed they were hosting one or more vulnerable GitLab instances.
To protect your systems, enable MFA and install the latest patch. "GitLab users should also remember that patching does nothing to secure systems that have already been breached through exploits," notes Goodin.
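For admins of self-managed instances, the admin users API offers a quick way to see which accounts still lack MFA. Below is a minimal sketch, assuming an admin personal access token (the URL and token are placeholders), that prints the running version and flags users with the documented two_factor_enabled field unset:

```python
#!/usr/bin/env python3
"""Minimal audit sketch for a self-managed GitLab instance: print the running
version, then list accounts that still have two-factor authentication disabled.
Assumes an admin personal access token; URL and token below are placeholders."""
import requests

GITLAB_URL = "https://gitlab.example.com"  # placeholder
HEADERS = {"PRIVATE-TOKEN": "glpat-..."}   # placeholder admin token

# The running version tells you whether the January fix for CVE-2023-7028
# has been applied (compare it against GitLab's advisory).
version = requests.get(f"{GITLAB_URL}/api/v4/version", headers=HEADERS, timeout=10).json()
print("GitLab version:", version.get("version"))

# Page through all users and flag anyone without 2FA enabled.
page = 1
while True:
    resp = requests.get(f"{GITLAB_URL}/api/v4/users", headers=HEADERS,
                        params={"per_page": 100, "page": page}, timeout=10)
    resp.raise_for_status()
    users = resp.json()
    if not users:
        break
    for user in users:
        if not user.get("two_factor_enabled"):
            print("no 2FA:", user["username"])
    page += 1
```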
Security

Dropbox Says Hackers Breached Digital-Signature Product (yahoo.com) 12

An anonymous reader quotes a report from Bloomberg: Dropbox said its digital-signature product, Dropbox Sign, was breached by hackers, who accessed user information including emails, user names and phone numbers. The software company said it became aware of the cyberattack on April 24, sought to limit the incident and reported it to law enforcement and regulatory authorities. "We discovered that the threat actor had accessed data related to all users of Dropbox Sign, such as emails and user names, in addition to general account settings," Dropbox said Wednesday in a regulatory filing. "For subsets of users, the threat actor also accessed phone numbers, hashed passwords, and certain authentication information such as API keys, OAuth tokens, and multi-factor authentication."

Dropbox said there is no evidence hackers accessed the contents of users' accounts or their payment information. The company said it appears the attack was limited to Dropbox Sign and no other products were breached. The company didn't disclose how many customers were affected by the hack. The hack is unlikely to have a material impact on the company's finances, Dropbox said in the filing. The shares declined about 2.5% in extended trading after the cyberattack was disclosed and have fallen 20% this year through the close.

Microsoft

Microsoft Concern Over Google's Lead Drove OpenAI Investment (yahoo.com) 10

Microsoft's motivation for investing heavily and partnering with OpenAI came from a sense of falling badly behind Google, according to an internal email released Tuesday as part of the Justice Department's antitrust case against the search giant. Bloomberg: The Windows software maker's chief technology officer, Kevin Scott, was "very, very worried" when he looked at the AI model-training capability gap between Alphabet's efforts and Microsoft's, he wrote in a 2019 message to Chief Executive Officer Satya Nadella and co-founder Bill Gates. The exchange shows how the company's top executives privately acknowledged they lacked the infrastructure and development speed to catch up to the likes of OpenAI and Google's DeepMind.

[...] Scott, who also serves as executive vice president of artificial intelligence at Microsoft, observed that Google's search product had improved on competitive metrics because of the Alphabet company's advancements in AI. The Microsoft executive wrote that he made a mistake by dismissing some of the earlier AI efforts of Microsoft's competitors. "We are multiple years behind the competition in terms of machine learning scale," Scott said in the email. Significant portions of the message, titled 'Thoughts on OpenAI,' remain redacted. Nadella endorsed Scott's email, forwarding it to Chief Financial Officer Amy Hood and saying it explains "why I want us to do this."

Open Source

Google Removes RISC-V Support From Android Common Kernel, Denies Abandoning Its Efforts (androidauthority.com) 31

Mishaal Rahman reports via Android Authority: Earlier today, a Senior Staff Software Engineer at Google who, according to their LinkedIn, leads the Android Systems Team and works on Android's Linux kernel fork, submitted a series of patches to AOSP that "remove ACK's support for riscv64." The description of these patches states that "support for risc64 GKI kernels is discontinued."

ACK stands for Android Common Kernel and refers to the downstream branches of the official kernel.org Linux kernels that Google maintains. The ACK is basically Linux plus some "patches of interest to the Android community that haven't been merged into mainline or Long Term Supported (LTS) kernels." There are multiple ACK branches, including android-mainline, which is the primary development branch that is forked into "GKI" kernel branches that correspond to a particular combination of supported Linux kernel and Android OS version. GKI stands for Generic Kernel Image and refers to a kernel that's built from one of these branches. Every certified Android device ships with a kernel based on one of these GKI branches, as Google currently does not certify Android devices that ship with a mainline Linux kernel build.

Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches. Given that Google currently only certifies Android builds that ship with a GKI kernel built from an ACK branch, that means we likely won't see certified builds of Android on RISC-V hardware anytime soon. Our initial interpretation of these patches was that Google was preparing to kill off RISC-V support in Android since that was the most obvious conclusion. However, a spokesperson for Google told us this: "Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI)."
Based on Google's statement, Rahman suggests that "there's still a ton of work that needs to be done before Android is ready for RISC-V."

"Even once it's ready, Google will need to redo the work to add RISC-V support in the kernel anyway. At the very least, Google's decision likely means that we might need to wait even longer than expected to see commercial Android devices running on a RISC-V chip."
Social Networks

Dave & Buster's To Allow Customers To Bet On Arcade Games (cnbc.com) 22

Arcade giant Dave & Buster's said it will begin allowing customers to bet on arcade games. "Customers can soon make a friendly $5 wager on a Hot Shots basketball game, a bet on a Skee-Ball competition or on another arcade game," reports CNBC. "The betting function, expected to launch in the next few months, will work through the company's app." From the report: Dave & Buster's, started in 1982, now has more than 222 venues in North America, offering everything from bowling to laser tag, plus virtual reality. The company says it has five million loyalty members and 30 million unique visitors to its locations each year. The company's stock is up more than 50% over the past year. As a boom in betting increases engagement among sports fans, digital gamification could have a similar effect within Dave & Buster's customer base by allowing loyalty members to compete with one another and earn rewards. Ultimately, it could mean people spend more time and money at the venues.

Dave & Buster's is using technology from gamification software company Lucra. [...] Lucra and Dave & Buster's said there will be a limit on the size of bets allowed, but they're not publicly disclosing that threshold just yet. Lucra said across its history the average bet size has been $10. "We're creating a new form of kind of a digital experience for folks inside of these ecosystems," said Madding, Lucra's chief operating officer. "We're getting them to engage in a new way and spend more time and money," he added. Lucra says its skills-based games are not subject to the same licenses and regulations gambling operators face with games of chance. Lucra is careful not to use the term "bet" or "wager" to describe its games. "We use real-money contests or challenges," Madding said. Lucra's contests are only available to players age 18 and older. The contests are available in 44 states.

Operating Systems

Systemd Announces 'run0' Sudo Alternative (fosspost.org) 301

An anonymous reader quotes a report from Foss Outpost: Systemd lead developer Lennart Poettering has posted on Mastodon about their upcoming v256 release of Systemd, which is expected to include a sudo replacement called "run0". The developer talks about the weaknesses of sudo, and how it has a large possible attack surface. For example, sudo supports network access, LDAP configurations, other types of plugins, and much more. But most importantly, its SUID binary provides a large attack surface according to Lennart: "I personally think that the biggest problem with sudo is the fact it's a SUID binary though -- the big attack surface, the plugins, network access and so on that come after it just make the key problem worse, but are not in themselves the main issue with sudo. SUID processes are weird concepts: they are invoked by unprivileged code and inherit the execution context intended for and controlled by unprivileged code. By execution context I mean the myriad of properties that a process has on Linux these days, from environment variables, process scheduling properties, cgroup assignments, security contexts, file descriptors passed, and so on and so on."

He's saying that sudo is a Unix concept from many decades ago, and a better privilege escalation system should be in place for 2024 security standards: "So, in my ideal world, we'd have an OS entirely without SUID. Let's throw out the concept of SUID on the dump of UNIX' bad ideas. An execution context for privileged code that is half under the control of unprivileged code and that needs careful manual clean-up is just not how security engineering should be done in 2024 anymore." [...]

He also mentioned that there will be more features in run0 that are not just related to the security backend such as: "The tool is also a lot more fun to use than sudo. For example, by default, it will tint your terminal background in a reddish tone while you are operating with elevated privileges. That is supposed to act as a friendly reminder that you haven't given up the privileges yet, and marks the output of all commands that ran with privileges appropriately. It also inserts a red dot (unicode ftw) in the window title while you operate with privileges, and drops it afterwards."
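As a rough way to see the difference Poettering describes, here is a small sketch that asks where an elevated process actually gets spawned from. It assumes a systemd v256 system with run0 on the PATH and a sudo-like command-line interface, which may change before release:

```python
#!/usr/bin/env python3
"""Sketch: compare where an elevated command is spawned from.
Assumes sudo is installed and that run0 (systemd v256) accepts a sudo-like
command line; both are assumptions about a pre-release tool and may not hold."""
import subprocess

SNIPPET = "import os; print(os.getppid())"

def parent_of_elevated(prefix: list[str]) -> str:
    """Run a tiny Python snippet with elevated privileges; return its parent PID."""
    out = subprocess.run(prefix + ["python3", "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Under sudo, the privileged process hangs off your shell's process tree
# (its parent is the SUID sudo process you invoked). run0 instead asks the
# service manager to spawn the command, so its parent is expected to be PID 1.
print("parent under sudo:", parent_of_elevated(["sudo"]))
print("parent under run0:", parent_of_elevated(["run0"]))
```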

Open Source

Bruce Perens Emits Draft Post-Open Zero Cost License (theregister.com) 72

After convincing the world to buy open source and give up the Morse Code test for ham radio licenses, Bruce Perens has a new gambit: develop a license that ensures software developers receive compensation from large corporations using their work. The new Post-Open Zero Cost License seeks to address the financial disparities in open source software use and includes provisions against using content to train AI models, aligning its enforcement with non-profit performing rights organizations like ASCAP. Here's an excerpt from an interview The Register conducted with Perens: The license is one component among several -- the paid license needs to be hammered out -- that he hopes will support his proposed Post-Open paradigm to help software developers get paid when their work gets used by large corporations. "There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all."

"There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all." Perens wants his new license -- intended to complement open source licensing rather than replace it -- to be administered by a 501(c)(6) non-profit. This entity would handle payments to developers. He points to the music performing rights organizations as a template, although among ASCAP, BMI, SECAC, and GMR, only ASCAP remains non-profit. [...]

The basic idea is companies making more than $5 million annually by using Post-Open software in a paid-for product would be required to pay 1 percent of their revenue back to this administrative organization, which would distribute the funds to the maintainers of the participating open source project(s). That would cover all Post-Open software used by the organization. "The license that I have written is long -- about as long as the Affero GPL 3, which is now 17 years old, and had to deal with a lot more problems than the early licenses," Perens explains. "So, at least my license isn't excessively long. It handles all of the abuses of developers that I'm conscious of, including things I was involved in directly like Open Source Security v. Perens, and Jacobsen v. Katzer."

"It also makes compliance easier for companies than it is today, and probably cheaper even if they do have to pay. It creates an entity that can sue infringers on behalf of any developer and gets the funding to do it, but I'm planning the infringement process to forgive companies that admit the problem and cure the infringement, so most won't ever go to court. It requires more infrastructure than open source developers are used to. There's a central organization for Post-Open (or it could be three organizations if we divided all of the purposes: apportioning money to developers, running licensing, and enforcing compliance), and an outside CPA firm, and all of that has to be structured so that developers can trust it."
You can read the full interview at The Register.
Security

Change Healthcare Hackers Broke In Using Stolen Credentials, No MFA (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: The ransomware gang that hacked into U.S. health tech giant Change Healthcare used a set of stolen credentials to remotely access the company's systems that weren't protected by multifactor authentication (MFA), according to the chief executive of its parent company, UnitedHealth Group (UHG). UnitedHealth CEO Andrew Witty provided the written testimony ahead of a House subcommittee hearing on Wednesday into the February ransomware attack that caused months of disruption across the U.S. healthcare system. This is the first time the health insurance giant has given an assessment of how hackers broke into Change Healthcare's systems, an intrusion during which massive amounts of health data were exfiltrated. UnitedHealth said last week that the hackers stole health data on a "substantial proportion of people in America."

According to Witty's testimony, the criminal hackers "used compromised credentials to remotely access a Change Healthcare Citrix portal." Organizations like Change use Citrix software to let employees access their work computers remotely on their internal networks. Witty did not elaborate on how the credentials were stolen. However, Witty did say the portal "did not have multifactor authentication," which is a basic security feature that prevents the misuse of stolen passwords by requiring a second code sent to an employee's trusted device, such as their phone. It's not known why Change did not set up multifactor authentication on this system, but this will likely become a focus for investigators trying to understand potential deficiencies in the insurer's systems. "Once the threat actor gained access, they moved laterally within the systems in more sophisticated ways and exfiltrated data," said Witty. Witty said the hackers deployed ransomware nine days later on February 21, prompting the health giant to shut down its network to contain the breach.
Last week, the medical firm admitted that it paid the ransomware hackers roughly $22 million via bitcoin.

Meanwhile, UnitedHealth said the total costs associated with the ransomware attack amounted to $872 million. "The remediation efforts spent on the attack are ongoing, so the total costs related to business disruption and repairs are likely to exceed $1 billion over time, potentially including the reported $22 million payment made [to the hackers]," notes The Register.
Cloud

How an Empty S3 Bucket Can Make Your AWS Bill Explode (medium.com) 70

Maciej Pocwierz, a senior software engineer at Semantive, writing on Medium: A few weeks ago, I began working on the PoC of a document indexing system for my client. I created a single S3 bucket in the eu-west-1 region and uploaded some files there for testing. Two days later, I checked my AWS billing page, primarily to make sure that what I was doing was well within the free-tier limits. Apparently, it wasn't. My bill was over $1,300, with the billing console showing nearly 100,000,000 S3 PUT requests executed within just one day! By default, AWS doesn't log requests executed against your S3 buckets. However, such logs can be enabled using AWS CloudTrail or S3 Server Access Logging. After enabling CloudTrail logs, I immediately observed thousands of write requests originating from multiple accounts or entirely outside of AWS.

Was it some kind of DDoS-like attack against my account? Against AWS? As it turns out, one of the popular open-source tools had a default configuration to store its backups in S3. And, as a placeholder for a bucket name, they used... the same name that I used for my bucket. This meant that every deployment of this tool with default configuration values attempted to store its backups in my S3 bucket! So, a horde of misconfigured systems is attempting to store their data in my private S3 bucket. But why should I be the one paying for this mistake? Here's why: S3 charges you for unauthorized incoming requests. This was confirmed in my exchange with AWS support. As they wrote: "Yes, S3 charges for unauthorized requests (4xx) as well[1]. That's expected behavior." So, if I were to open my terminal now and type: aws s3 cp ./file.txt s3://your-bucket-name/random_key, I would receive an AccessDenied error, but you would be the one to pay for that request. And I don't even need an AWS account to do so.

Another question was bugging me: why was over half of my bill coming from the us-east-1 region? I didn't have a single bucket there! The answer to that is that the S3 requests without a specified region default to us-east-1 and are redirected as needed. And the bucket's owner pays extra for that redirected request. The security aspect: We now understand why my S3 bucket was bombarded with millions of requests and why I ended up with a huge S3 bill. At that point, I had one more idea I wanted to explore. If all those misconfigured systems were attempting to back up their data into my S3 bucket, why not just let them do so? I opened my bucket for public writes and collected over 10GB of data within less than 30 seconds. Of course, I can't disclose whose data it was. But it left me amazed at how an innocent configuration oversight could lead to a dangerous data leak! Lesson 1: Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like. Other than deleting the bucket, there's nothing you can do to prevent it. You can't protect your bucket with services like CloudFront or WAF when it's being accessed directly through the S3 API. Standard S3 PUT requests are priced at just $0.005 per 1,000 requests, but a single machine can easily execute thousands of such requests per second.
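The practical takeaways translate directly into code. Here is a minimal boto3 sketch (bucket names are illustrative, and AWS credentials are assumed to be configured): give your own buckets an unguessable random suffix, and note that a PUT against a bucket you don't own fails with AccessDenied while still, per the article's exchange with AWS support, being billed to that bucket's owner.

```python
#!/usr/bin/env python3
"""Sketch of the article's takeaways using boto3. Assumes configured AWS
credentials; all bucket names are illustrative."""
import uuid
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="eu-west-1")

# Lesson: never use a guessable bucket name. A random suffix makes it unlikely
# that someone else's misconfigured tool will point its default config at you.
bucket_name = f"doc-indexing-poc-{uuid.uuid4().hex[:12]}"
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
print("created", bucket_name)

# The flip side: a PUT against a bucket you don't own fails with AccessDenied,
# but (per the article) the rejected request was still billed to that bucket's
# owner at standard S3 request pricing.
try:
    s3.put_object(Bucket="some-bucket-you-do-not-own", Key="file.txt", Body=b"hello")
except ClientError as err:
    print("expected failure:", err.response["Error"]["Code"])  # e.g. AccessDenied
```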

AI

Copilot Workspace Is GitHub's Take On AI-Powered Software Engineering 12

An anonymous reader quotes a report from TechCrunch: Ahead of its annual GitHub Universe conference in San Francisco early this fall, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as "Copilot-powered agents" to help developers brainstorm, plan, build, test and run code in natural language. Jonathan Carter, head of GitHub Next, GitHub's software R&D team, pitches Workspace as somewhat of an evolution of GitHub's AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language. "Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs," Carter said. "So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire codebase."

Given a GitHub repo or a specific bug within a repo, Workspace -- underpinned by OpenAI's GPT-4 Turbo model -- can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo's comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it. The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.

Perhaps the most obvious way to launch Workspace is from the new "Open in Workspace" button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, "Add documentation for the changes in this pull request," which, once submitted, gets added to a list of "sessions" within the new dedicated Workspace view. Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.
"Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a 'thought partnership' with AI," Carter said. "You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks ... We believe there's a lot of value that can be delivered in an AI-native developer environment that isn't constrained by existing workflows."
EU

The EU Will Force Apple To Open Up iPadOS (engadget.com) 132

As reported by Bloomberg (paywalled), Apple's iPadOS will need to abide by the EU's DMA rules, as it is now designated as a gatekeeper alongside the Safari web browser, iOS operating system and the App Store. "Apple now has six months to ensure full compliance of iPadOS with the DMA obligations," reads the EU's blog post about the change. Engadget reports: What does Apple have to do to ensure iPadOS compliance? According to the DMA, gatekeepers are prohibited from favoring their own services over rivals and from locking users into the ecosystem. The software must also allow third parties to interoperate with internal services, which is why third-party app stores are becoming a thing on iPhones in Europe. The iPad, presumably, will soon follow suit. In other words, the DMA is lobbing some serious stink bombs into Apple's walled garden. In a statement published by Forbes, Apple said it "will continue to constructively engage with the European Commission" to ensure its designated services comply with the DMA, including iPadOS. "iPadOS constitutes an important gateway on which many companies rely to reach their customers," wrote Margrethe Vestager, Executive Vice-President in charge of competition policy at the European Commission. "Today's decision will ensure that fairness and contestability are preserved also on this platform."
AI

In Race To Build AI, Tech Plans a Big Plumbing Upgrade (nytimes.com) 25

If 2023 was the tech industry's year of the A.I. chatbot, 2024 is turning out to be the year of A.I. plumbing. From a report: It may not sound as exciting, but tens of billions of dollars are quickly being spent on behind-the-scenes technology for the industry's A.I. boom. Companies from Amazon to Meta are revamping their data centers to support artificial intelligence. They are investing in huge new facilities, while even places like Saudi Arabia are racing to build supercomputers to handle A.I. Nearly everyone with a foot in tech or giant piles of money, it seems, is jumping into a spending frenzy that some believe could last for years.

Microsoft, Meta, and Google's parent company, Alphabet, disclosed this week that they had spent more than $32 billion combined on data centers and other capital expenses in just the first three months of the year. The companies all said in calls with investors that they had no plans to slow down their A.I. spending. In the clearest sign of how A.I. has become a story about building a massive technology infrastructure, Meta said on Wednesday that it needed to spend billions more on the chips and data centers for A.I. than it had previously signaled. "I think it makes sense to go for it, and we're going to," Mark Zuckerberg, Meta's chief executive, said in a call with investors.

The eye-popping spending reflects an old parable in Silicon Valley: The people who made the biggest fortunes in California's gold rush weren't the miners -- they were the people selling the shovels. No doubt Nvidia, whose chip sales have more than tripled over the last year, is the most obvious A.I. winner. The money being thrown at technology to support artificial intelligence is also a reminder of spending patterns of the dot-com boom of the 1990s. For all of the excitement around web browsers and newfangled e-commerce websites, the companies making the real money were software giants like Microsoft and Oracle, the chipmaker Intel, and Cisco Systems, which made the gear that connected those new computer networks together. But cloud computing has added a new wrinkle: Since most start-ups and even big companies from other industries contract with cloud computing providers to host their networks, the tech industry's biggest companies are spending big now in hopes of luring customers.

Businesses

Canceling Your Credit Card May Not Stop Netflix's Recurring Charges (gizmodo.com) 88

Millions of Americans pay for Netflix, doling out anywhere from $6.99 to $22.99 a month. It's a common belief that you can get out of recurring charges like this by canceling your credit card. Netflix won't be able to find you, and your account will just go away, right? You wouldn't be crazy for believing it, but it's a myth that canceling a credit card will definitely stop your recurring charges. From a report: Nearly 46% of Americans opened a new credit card last year, according to Forbes, which means millions of Americans also canceled old ones. When you switch cards, Netflix doesn't just stop your service -- they just start charging your new card. Granted, it might be easier to just cancel your Netflix subscription directly. There's a largely hidden service that enables Netflix and most other subscription services to keep throwing charges at you indefinitely.

"Banks may automatically update credit or debit card numbers when a new card is issued. This update allows your card to continue to be charged, even if it's expired," Netflix says in its help center. Most major card providers offer a feature that enables this, including Visa. In 2003, Visa U.S.A. started offering a new software product to merchants called Visa Account Updater (VAU), according to a 2003 American Banker article. The service works with a network of banks to create a virtual tracking service of Americans' financial profiles. Whenever someone renews, or switches a credit card within their bank, the institution automatically update the VAU. This system lets Netflix and countless other corporations charge whatever card you have on file.

Government

Pegasus Spyware Used on Hundreds of People, Says Poland's Prosecutor General (apnews.com) 22

An anonymous reader shared this report from the Associated Press: Poland's prosecutor general told the parliament on Wednesday that powerful Pegasus spyware was used against hundreds of people in Poland under the previous government, among them elected officials. Adam Bodnar told lawmakers that he found the scale of the surveillance "shocking and depressing...." The data showed that Pegasus was used in the cases of 578 people from 2017 to 2022, and that it was used by three separate government agencies: the Central Anticorruption Bureau, the Military Counterintelligence Service and the Internal Security Agency. The data show that it was used against six people in 2017; 100 in 2018; 140 in 2019; 161 in 2020; 162 in 2021; and then nine in 2022, when it stopped.... Bodnar said that the software generated "enormous knowledge" about the "private and professional lives" of those put under surveillance. He also stressed that the Polish state doesn't have full control over the data that is gathered because the system operates on the basis of a license that was granted by an Israeli company.
"Pegasus gives its operators complete access to a mobile device, allowing them to extract passwords, photos, messages, contacts and browsing history and activate the microphone and camera for real-time eavesdropping."
Security

Why is South Korea's Military Set To Ban iPhones Over 'Security' Concerns? (appleinsider.com) 47

"South Korea is considering prohibiting the use of iPhones and smart wearable devices inside military buildings," reports the Defense Post, "due to increasing security concerns."

But the blog Apple Insider argues the move "has less to do with security and more to do with a poorly crafted mobile device management suite coupled with nationalism..." A report on Tuesday morning claims that the ban covers all devices that are capable of voice recording and do not allow third-party apps to lock this down -- with the iPhone specifically named... According to sources familiar with the matter cited by Tuesday's report, the iPhone is explicitly banned. Android-based devices, like Samsung's, are exempt from the ban...

The issue appears to be that the South Korean National Defense Mobile Security mobile device management app doesn't seem to be able to block the use of the microphone. This particular MDM was rolled out in 2013, with use enforced across all military members in 2021.

The report talks about user complaints about the software, and inconsistent limitations depending on make, model, and operating system. A military official speaking to the publication says that deficiencies on Android would be addressed in a software update. Discussions are apparently underway to extend the total ban downwards to the entire military. The Army is said to have tried the ban as well...

Seven in 10 South Korean military members are Samsung users. So, the ban appears to be mostly symbolic.

Thanks to Slashdot reader Kitkoan for sharing the news.
Data Storage

The 'Ceph' Community Now Stores 1,000 Petabytes in Its Open Source Storage Solution (linuxfoundation.org) 25

1,000 petabytes.
A million terabytes.
One quintillion bytes (or 1,000,000,000,000,000,000).

That's the amount of storage reported by users of the Ceph storage solution (across more than 3,000 Ceph clusters).

The Ceph Foundation is a "directed fund" of the Linux Foundation, providing a neutral home for Ceph, "the most popular open source storage solution for modern data storage challenges" (offering an architecture that's "highly scalable, resilient, and flexible"). It's a software-defined storage platform, providing object storage, block storage, and file storage built on a common distributed cluster foundation.
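For a sense of what that "common distributed cluster foundation" looks like from an application's point of view, here is a minimal sketch using the python-rados bindings. It assumes a reachable cluster, a standard /etc/ceph/ceph.conf, a client keyring, and an existing pool named "demo-pool"; all names are illustrative:

```python
#!/usr/bin/env python3
"""Minimal python-rados sketch: connect to a Ceph cluster, report usage,
and round-trip one object. Assumes /etc/ceph/ceph.conf, a client keyring,
and an existing pool called 'demo-pool'; names are illustrative."""
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Cluster-wide usage, in kilobytes, plus the total object count.
stats = cluster.get_cluster_stats()
print(f"used {stats['kb_used']} KB of {stats['kb']} KB, {stats['num_objects']} objects")

# Object storage in a nutshell: write an object into a pool, read it back.
ioctx = cluster.open_ioctx("demo-pool")
ioctx.write_full("greeting", b"hello ceph")
print(ioctx.read("greeting"))
ioctx.close()
cluster.shutdown()
```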

And Friday they announced the release of Ceph Squid, "which comes with several performance and space efficiency features along with enhanced protocol support." Ceph has solidified its position as the cornerstone of open source data storage. The release of Ceph Squid represents a significant milestone toward providing scalable, reliable, and flexible storage solutions that meet the ever-evolving demands of digital data storage.

Features of Ceph Squid include improvements to BlueStore [a storage back end specifically designed for managing data on disk for Ceph Object Storage Daemon workloads] to reduce latency and CPU requirements for snapshot intensive workloads. BlueStore now uses RocksDB compression by default for increased average performance and reduced space usage. [And the next-generation Crimson OSD also has improvements in stability and read performance, and "now supports scrub, partial recovery and osdmap trimming."]

Ceph Squid also brings enhancements for the CRUSH algorithm [which computes storage locations] to support more flexible and cost-effective erasure coding configurations.

Ceph continues to drive the future of storage, and welcomes developers, partners, and technology enthusiasts to get involved.
Microsoft

A Windows Vulnerability Reported by the NSA Was Exploited To Install Russian Malware (arstechnica.com) 17

"Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years," Ars Technica reported this week, "in attacks that targeted a vast array of organizations with a previously undocumented tool, the software maker disclosed Monday.

"When Microsoft patched the vulnerability in October 2022 — at least two years after it came under attack by the Russian hackers — the company made no mention that it was under active exploitation." As of publication, the company's advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days. Microsoft said at the time that it learned of the vulnerability from the US National Security Agency... Since as early as April 2019, Forest Blizzard [Microsoft's name for the Kremlin-backed hacking group] has been exploiting CVE-2022-38028 in attacks that, once system privileges are acquired, use a previously undocumented tool that Microsoft calls GooseEgg. The post-exploitation malware elevates privileges within a compromised system and goes on to provide a simple interface for installing additional pieces of malware that also run with system privileges. This additional malware, which includes credential stealers and tools for moving laterally through a compromised network, can be customized for each target.

"While a simple launcher application, GooseEgg is capable of spawning other applications specified at the command line with elevated permissions, allowing threat actors to support any follow-on objectives such as remote code execution, installing a backdoor, and moving laterally through compromised networks," Microsoft officials wrote.

Thanks to Slashdot reader echo123 for sharing the news.
AI

EyeEm Will License Users' Photos To Train AI If They Don't Delete Them 27

Sarah Perez reports via TechCrunch: EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users' photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users' content to "train, develop, and improve software, algorithms, and machine-learning models." Users were given 30 days to opt out by removing all their content from EyeEm's platform. Otherwise, they were consenting to this use case for their work.

At the time of its 2023 acquisition, EyeEm's photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik's over time. Despite its decline, almost 30,000 people are still downloading it each month, according to data from Appfigures. Once thought of as a possible challenger to Instagram -- or at least "Europe's Instagram" -- EyeEm had dwindled to a staff of three before selling to Freepik, TechCrunch's Ingrid Lunden previously reported. Joaquin Cuenca Abela, CEO of Freepik, hinted at the company's possible plans for EyeEm, saying it would explore how to bring more AI into the equation for creators on the platform. As it turns out, that meant selling their work to train AI models. [...]

Of note, the notice says that these deletions from EyeEm's market and partner platforms could take up to 180 days. Yes, that's right: Requested deletions take up to 180 days but users only have 30 days to opt out. That means users who want to opt out have little choice but to delete their photos manually, one by one. Worse still, the company adds that: "You hereby acknowledge and agree that your authorization for EyeEm to market and license your Content according to sections 8 and 10 will remain valid until the Content is deleted from EyeEm and all partner platforms within the time frame indicated above. All license agreements entered into before complete deletion and the rights of use granted thereby remain unaffected by the request for deletion or the deletion." Section 8 is where licensing rights to train AI are detailed. In Section 10, EyeEm informs users they will forgo their right to any payouts for their work if they delete their account -- something users may think to do to avoid having their data fed to AI models. Gotcha!
Python

Fake Job Interviews Target Developers With New Python Backdoor (bleepingcomputer.com) 15

An anonymous reader quotes a report from BleepingComputer: A new campaign tracked as "Dev Popper" is targeting software developers with fake job interviews in an attempt to trick them into installing a Python remote access trojan (RAT). The developers are asked to perform tasks supposedly related to the interview, like downloading and running code from GitHub, in an effort to make the entire process appear legitimate. However, the threat actor's goal is to make their targets download malicious software that gathers system information and enables remote access to the host. According to Securonix analysts, the campaign is likely orchestrated by North Korean threat actors based on the observed tactics. The connections are not strong enough for attribution, though. [...]

Although the perpetrators of the Dev Popper attack aren't known, the tactic of using job lures as bait to infect people with malware is still prevalent, so people should remain vigilant of the risks. The researchers note that the method "exploits the developer's professional engagement and trust in the job application process, where refusal to perform the interviewer's actions could compromise the job opportunity," which makes it very effective.
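One low-effort precaution before running an "interview exercise" repository is a quick static pre-screen. The sketch below is a rough heuristic, not a substitute for actually reading the code: it flags long base64-looking string literals and decode-and-execute patterns of the kind simple Python droppers tend to use.

```python
#!/usr/bin/env python3
"""Rough pre-screen for code you're asked to run during an 'interview':
flag long base64-looking literals and dynamic execution of decoded data.
Heuristic only; it will miss plenty and may flag benign code."""
import re
import sys
from pathlib import Path

B64_BLOB = re.compile(r"['\"][A-Za-z0-9+/=]{200,}['\"]")             # long encoded literals
DYN_EXEC = re.compile(r"\b(exec|eval)\s*\(|base64\.b64decode\s*\(")  # decode-and-run patterns

def scan(repo: Path) -> int:
    """Print and count suspicious lines in every .py file under repo."""
    findings = 0
    for path in repo.rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if B64_BLOB.search(line) or DYN_EXEC.search(line):
                findings += 1
                print(f"{path}:{lineno}: {line.strip()[:120]}")
    return findings

if __name__ == "__main__":
    repo_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    hits = scan(repo_dir)
    print(f"{hits} suspicious line(s) found; read anything flagged before running the project.")
```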
