Power

Do Electric Vehicles Fail at a Lower Rate Than Gas Cars In Extreme Cold? (electrek.co) 216

In a country experiencing extreme cold — and where almost 1 in 4 cars are electric — a roadside assistance company says it's still gas-powered cars that account for the vast majority of starting problems.

Electrek argues that while extreme cold may affect chargers, "it mainly gets attention because it's a new technology and it fails for different reasons than gasoline vehicles in the cold." Viking, a Norwegian roadside assistance company (think AAA), says it responded to 34,000 assistance requests in the first 9 days of the year, and that only 13% of those cases involved electric vehicles (via TV2 — translated from Norwegian): "13 percent of the cases with starting difficulties are electric cars, while the remaining 87 percent are fossil cars..."

To be fair, this data doesn't adjust for the age of the vehicles. Older gas-powered cars fail at a higher rate than new ones, and electric vehicles are obviously much newer on average.

Thanks to long-time Slashdot reader Geoffrey.landis for sharing the article.
Power

'For Truckers Driving EVs, There's No Going Back' (yahoo.com) 153

The Washington Post looks at "a small but growing group of commercial medium-to-heavy-duty truck drivers who use electric trucks."

"These drivers — many of whom operate local or regional routes that don't require hundreds of miles on the road in a day — generally welcome the transition to electric, praising their new trucks' handling, acceleration, smoothness and quiet operation. "Everyone who has had an EV has no aspirations to go back to diesel at this point," said Khari Burton, who drives an electric Volvo VNR in the Los Angeles area for transport company IMC. "We talk about it and it's all positivity. I really enjoy the smoothness ... and just the quietness as well." Mike Roeth, the executive director of the North American Council for Freight Efficiency, said many drivers have reported that the new vehicles are easier on their bodies — thanks to both less rocking off the cab, assisted steering and the quiet motor. "Part of my hypothesis is that it will help truck driver retention," he said. "We're seeing people who would retire driving a diesel truck now working more years with an electric truck."

Most of the electric trucks on the road today are doing local or regional routes, which are easier to manage with a truck that gets only up to 250 miles of range... Trucking advocates say electric has a long way to go before it can take on longer routes. "If you're running very local, very short mileage, there may be a vehicle that can do that type of route," said Mike Tunnell, the executive director of environmental affairs for the American Trucking Association. "But for the average haul of 400 miles, there's just nothing that's really practical today."

There are other concerns, according to the article. "[S]ome companies and trucking associations worry this shift, spurred in part by a California law mandating a switch to electric or emissions-free trucks by 2042, is happening too fast. While electric trucks might work well in some cases, they argue, the upfront costs of the vehicles and their charging infrastructure are often too heavy a lift."

But this is probably the key sentence in the article: For the United States to meet its climate goals, virtually all trucks must be zero-emissions by 2050. While trucks are only 4 percent of the vehicles on the road, they make up almost a quarter of the country's transportation emissions.
The article cites estimates that right now there are 12.2 million trucks on America's highways — and only about 13,000 of them (roughly 0.1%) are electric. "Around 10,000 of those trucks were just put on the road in 2023, up from 2,000 the year before." (And they add that Amazon alone has thousands of Rivian electric delivery vans, operating in 1,800 cities.)

But the article's overall message seems to be that when it comes to the trucks, "the drivers operating them say they love driving electric." And it includes comments from actual truckers:
  • 49-year-old Frito-Lay trucker Gary LaBush: "I was like, 'What's going on?' There was no noise — and no fumes... it's just night and day."
  • 66-year-old Marty Boots: "Diesel was like a college wrestler. And the electric is like a ballet dancer... You get back into diesel and it's like, 'What's wrong with this thing?' Why is it making so much noise? Why is it so hard to steer?"

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) He's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" (using object-based storage devices [or OSDs]): I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which was quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still a comfortable 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery...

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
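For context, the "FIO tests directly against the drives" mentioned above are raw block-device benchmarks run outside of Ceph. As a rough illustration only, and not the post's actual test harness, here is a minimal Python sketch that wraps such a test: it runs fio's libaio engine, which submits I/O through the same io_submit call discussed above, and pulls the aggregate read bandwidth out of fio's JSON output. The device path, block size, queue depth, and runtime here are placeholder values.

```python
import json
import subprocess

def fio_read_bandwidth(device: str, runtime_s: int = 30) -> float:
    """Run a large sequential-read fio job against a raw device and
    return the measured bandwidth in MiB/s (assumes fio is installed)."""
    cmd = [
        "fio",
        "--name=seqread",
        f"--filename={device}",
        "--ioengine=libaio",      # async I/O submitted via io_submit()
        "--rw=read",              # large sequential reads, as in the big-read OSD tests
        "--bs=4m",
        "--iodepth=32",
        "--direct=1",             # bypass the page cache
        "--time_based",
        f"--runtime={runtime_s}",
        "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    job = json.loads(out)["jobs"][0]
    return job["read"]["bw"] / 1024.0  # fio reports bandwidth in KiB/s

if __name__ == "__main__":
    # Point this at a scratch NVMe device; reads are non-destructive, but
    # don't reuse the pattern with write workloads on a disk you care about.
    print(f"{fio_read_bandwidth('/dev/nvme0n1'):.1f} MiB/s")
```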

For over a week, we looked at everything from BIOS settings, NVMe multipath, and low-level NVMe debugging to changing kernel/Ubuntu versions and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There was one minor fix and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950GiB/s — then tried some more performance optimizations...
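For readers who want to check their own Linux nodes for the first two culprits, here's a small Python sketch (an illustration written for this summary, not code from the original post). It only reads standard kernel interfaces: the cpuidle entries that expose C-states, and the boot parameters and IOMMU groups that show whether an IOMMU is active. The exact BIOS option names, and the right way to change any of this, vary by vendor.

```python
from pathlib import Path

def cstate_report() -> None:
    """List the C-states the kernel may enter on CPU 0. Deep idle states
    left enabled add wake-up latency, which the post's Fix One addresses
    by running the BIOS in maximum performance mode."""
    for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
        name = (state / "name").read_text().strip()
        disabled = (state / "disable").read_text().strip() == "1"
        print(f"{state.name}: {name} ({'disabled' if disabled else 'enabled'})")

def iommu_report() -> None:
    """Show IOMMU-related boot parameters and whether any IOMMU groups
    exist; the post's Fix Two stemmed from spin-lock contention while
    updating IOMMU mappings."""
    flags = [t for t in Path("/proc/cmdline").read_text().split() if "iommu" in t]
    groups_dir = Path("/sys/kernel/iommu_groups")
    groups = list(groups_dir.iterdir()) if groups_dir.exists() else []
    print("iommu boot params:", flags or "none")
    print("iommu groups present:", len(groups))

if __name__ == "__main__":
    cstate_report()
    iommu_report()
```

On AMD platforms the IOMMU can typically be switched off with the amd_iommu=off kernel boot parameter or in firmware, though as Nelson's comment above suggests, the better outcome would be a fix that doesn't require disabling it entirely.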


Hardware

Researchers Claim First Functioning Graphene-Based Chip (ieee.org) 4

An anonymous reader quotes a report from IEEE Spectrum: Researchers at Georgia Tech, in Atlanta, have developed what they are calling the world's first functioning graphene-based semiconductor. This breakthrough holds the promise to revolutionize the landscape of electronics, enabling faster traditional computers and offering a new material for future quantum computers. The research, published on January 3 in Nature and led by Walt de Heer, a professor of physics at Georgia Tech, focuses on leveraging epitaxial graphene, a crystal structure of carbon chemically bonded to silicon carbide (SiC). This novel semiconducting material, dubbed semiconducting epitaxial graphene (SEC) -- or alternatively, epigraphene -- boasts enhanced electron mobility compared with that of traditional silicon, allowing electrons to traverse with significantly less resistance. The outcome is transistors capable of operating at terahertz frequencies, offering speeds 10 times as fast as those of the silicon-based transistors used in current chips.

De Heer describes the method used as a modified version of an extremely simple technique that has been known for over 50 years. "When silicon carbide is heated to well over 1,000 °C, silicon evaporates from the surface, leaving a carbon-rich surface which then forms into graphene," says de Heer. This heating step is done with an argon quartz tube in which a stack of two SiC chips is placed in a graphite crucible, according to de Heer. Then a high-frequency current is run through a copper coil around the quartz tube, which heats the graphite crucible through induction. The process takes about an hour. De Heer added that the SEC produced this way is essentially charge neutral, and when exposed to air, it will spontaneously be doped by oxygen. This oxygen doping is easily removed by heating it at about 200 °C in vacuum. "The chips we use cost about [US] $10, the crucible about $1, and the quartz tube about $10," said de Heer. [...]

De Heer and his research team concede, however, that further exploration is needed to determine whether graphene-based semiconductors can surpass the current superconducting technology used in advanced quantum computers. The Georgia Tech team does not envision incorporating graphene-based semiconductors with standard silicon or compound semiconductor lines. Instead, they are aiming for a paradigm shift beyond silicon, utilizing silicon carbide. They are developing methods, such as coating SEC with boron nitride, to protect and enhance its compatibility with conventional semiconductor lines. Comparing their work with commercially available graphene field-effect transistors (GFETs), de Heer explains that there is a crucial difference: "Conventional GFETs do not use semiconducting graphene, making them unsuitable for digital electronics requiring a complete transistor shutdown." He says that the SEC developed by his team allows for a complete shutdown, meeting the stringent requirements of digital electronics. De Heer says that it will take time to develop this technology. "I compare this work to the Wright brothers' first 100-meter flight. It will mainly depend on how much work is done to develop it."

Operating Systems

Huawei Makes a Break From Android With Next Version of Harmony OS 27

China's Huawei will not support Android apps on the latest iteration of its in-house Harmony operating system, domestic financial media Caixin reported, as the company looks to bolster its own software ecosystem. From a report: The company plans to roll out a developer version of its HarmonyOS Next platform in the second quarter of this year followed by a full commercial version in the fourth quarter, it said in a company statement highlighting the launch event for the platform in its home city of Shenzhen on Thursday.

Huawei first unveiled its proprietary Harmony system in 2019 and prepared to launch it on some smartphones a year later after U.S. restrictions cut its access to Google's technical support for its Android mobile OS. However, earlier versions of Harmony allowed apps built for Android to be used on the system, which will no longer be possible, according to Caixin.
Communications

Viasat Tries To Stop Citizen Effort To Revive FCC Funding for Starlink (pcmag.com) 78

A resident in Virginia has urged the Federal Communications Commission to reconsider canceling $886 million in federal funding for SpaceX's Starlink system. But rival satellite company Viasat has gone out of its way to oppose the citizen-led petition. PCMag: On Jan. 1, the FCC received a petition from Virginia resident Greg Weisiger asking the commission to reconsider denying the $886 million to SpaceX. "Petitioner is at an absolute loss to understand the Commission's logic with these denials," wrote Weisiger, who lives in Midlothian, Virginia. "It is abundantly clear that Starlink has a robust, reliable, affordable service for rural and insular locations in all states and territories."

The petition arrived a few weeks after the FCC denied SpaceX's appeal to receive $886 million from the commission's Rural Digital Opportunity Fund, which is designed to subsidize 100Mbps to gigabit broadband across the US. SpaceX wanted to use the funds to expand Starlink access in rural areas. But the FCC ruled that "Starlink is not reasonably capable of offering the required high-speed, low latency service throughout the areas where it won auction support." Weisiger disagrees. In his petition, he writes that the FCC's decision will deprive him of federal support to bring high-speed internet to his home. "Thousands of other Virginia locations were similarly denied support," he added.

Windows

Microsoft Bringing Teams Meeting Reminders To Windows 11 Start Menu (theverge.com) 47

Microsoft is getting ready to place Teams meeting reminders on the Start menu in Windows 11. From a report: The software giant has started testing a new build of Windows 11 with Dev Channel testers that includes a Teams meeting reminder in the recommended section of the Start menu. Microsoft is also testing an improved way to instantly access new photos and screenshots from Android devices. [...] The Teams meeting reminders will be displayed alongside the regular recently used and recommended file list on the Start menu, and they won't be displayed for non-business users of Windows 11.
Math

How Much of the World Is It Possible to Model? 45

Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation.

The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, this comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks.
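To make the "cartoon neuron" concrete: a McCulloch-Pitts unit simply sums weighted zero-or-one inputs and fires when the sum reaches a threshold. The Python sketch below is a generic illustration of that 1943 idea, with weights and thresholds picked by hand rather than learned from data, exactly the step McCulloch and Pitts never imagined.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: binary inputs, fixed weights, and a hard
    threshold. It 'fires' (returns 1) when the weighted sum is large enough."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked weights implement simple logic gates; no training involved.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```

Deep networks keep this basic sum-and-activate unit but replace the hand-set weights with trillions of parameters fitted to data, and the hard threshold with smooth activation functions; that fitting step is what turns the cartoon into the curve-fitting machinery described above.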

This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place.

Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
The Internet

'Where Have All the Websites Gone?' (fromjason.xyz) 171

An anonymous reader shares an essay: No one clicks a webpage hoping to learn which cat can haz cheeseburger. Weirdos, maybe. Sickos. No, we get our content from a For You Page now -- algorithmically selected videos and images made by our favorite creators, produced explicitly for our preferred platform. Which platform doesn't matter much. So long as it's one of the big five. Creators churn out content for all of them. It's a technical marvel, that internet. Something so mindblowingly impressive that if you showed it to someone even thirty years ago, their face would melt the fuck off. So why does it feel like something's missing? Why are we all so collectively unhappy with the state of the web?

A tweet went viral this Thanksgiving when a Twitter user posed a question to their followers. (The tweet said: "It feels like there are no websites anymore. There used to be so many websites you could go on. Where did all the websites go?") A peek at the comments, and I could only assume the tweet struck a nerve. Everyone had their own answer. Some comments blamed the app-ification of the web. "Everything is an app now!" one user replied. Others pointed to the death of Adobe Flash and how so many sites and games died along with it. Everyone agrees that websites have indeed vanished, and we all miss the days we were free to visit them.

The Internet

Bing Gained Less Than 1% Market Share Since Adding Bing Chat, Report Finds (seroundtable.com) 31

According to StatCounter, Bing's market share grew by less than 1 percentage point since launching Bing Chat (now known as Copilot) roughly a year ago. From a report: Bloomberg reported (paywalled) on the StatCounter data, saying, "But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement." Google still dominates the global search market with a 91.6% market share, followed by Bing's 3.4%, Yandex's 1.6% and Yahoo's 1.1%. "Other" search engines accounted for a total of just 2.2% of the global search market.

You can view the raw chart and data from StatCounter here.
Google

Google To Invest $1 Billion In UK Data Center (reuters.com) 6

Google announced today that it will invest $1 billion building a data center near London. Reuters reports: The data centre, on a 33-acre (13-hectare) site bought by Google in 2020, will be located in the town of Waltham Cross, about 15 miles north of central London, the Alphabet-owned company said in a statement. The British government, which is pushing for investment by businesses to help fund new infrastructure, particularly in growth industries like technology and artificial intelligence, described Google's investment as a "huge vote of confidence" in the UK.

"Google's $1 billion investment is testament to the fact that the UK is a centre of excellence in technology and has huge potential for growth," Prime Minister Rishi Sunak said in the Google statement. The investment follows Google's $1 billion purchase of a central London office building in 2022, close to Covent Garden, and another site in nearby King's Cross, where it is building a new office and where its AI company DeepMind is also based.
In November, Microsoft announced plans to pump $3.2 billion into Britain over the next three years.
Android

Google Is Rolling Out WebGPU For Next-Gen Gaming On Android 14

In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan.

Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo.

Next up: WebGPU for Chrome on Linux.
AI

Coursera Saw Signups For AI Courses Every Minute in 2023 (reuters.com) 13

U.S. edutech platform Coursera added a new user every minute on average for its AI courses in 2023, CEO Jeff Maggioncalda said on Thursday, in a clear sign of people upskilling to tap a potential boom in generative AI. Reuters: The technology behind OpenAI's ChatGPT has taken the world by storm and sparked a race among companies to roll out their own versions of the viral chatbot. "I'd say the real hotspot is generative AI because it affects so many people," he told Reuters in an interview at the World Economic Forum in Davos.

Coursera is looking to offer AI courses along with companies that are the frontrunners in the AI race, including OpenAI and Google's DeepMind, Maggioncalda said. Investors had earlier feared that apps based on generative AI might replace ed-tech firms, but on the contrary the technology has encouraged more people to upskill, benefiting companies such as Coursera. The company has more than 800 AI courses and saw more than 7.4 million enrollments last year. Every student on the platform gets access to a ChatGPT-like AI assistant called "Coach" that provides personalized tutoring.

AI

Mark Zuckerberg's New Goal is Creating AGI (theverge.com) 94

OpenAI's stated mission is to create artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems."

[...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use a LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway.
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.
Bitcoin

Coinbase Compares Buying Crypto To Collecting Beanie Babies (bloomberg.com) 42

Coinbase said buying cryptocurrency on an exchange was more like collecting Beanie Babies than investing in a stock or bond. From a report: The biggest US crypto exchange made the comparison Wednesday in a New York federal court hearing. Coinbase was arguing for the dismissal of a Securities and Exchange Commission lawsuit accusing it of selling unregistered securities. William Savitt, a lawyer for Coinbase, told US District Judge Katherine Polk Failla that tokens trading on the exchange aren't securities subject to SEC jurisdiction because buyers don't gain any rights as a part of their purchases, as they do with stocks or bonds. "It's the difference between buying Beanie Babies Inc and buying Beanie Babies," Savitt said. The question of whether digital tokens are securities has divided courts.
Google

AI-Generated Content Can Sometimes Slip Into Your Google News Feed (engadget.com) 37

Google News is sometimes boosting sites that rip off other outlets by using AI to rapidly churn out content, 404 Media claims. From the report: Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News. The presence of AI-generated content on Google News signals two things: first, the black box nature of Google News, with entry into Google News' rankings in the first place an opaque, but apparently gameable, system. Second is how Google may not be ready for moderating its News service in the age of consumer-access AI, where essentially anyone is able to churn out a mass of content with little to no regard for its quality or originality.
UPDATE: Engadget argues that "to find such stories required heavily manipulating the search results in Google News," noting that in the cited case, 404 Media's search parameters "are essentially set so that the original stories don't appear."

Engadget got this rebuke from Google. "Claiming that these sites were featured prominently in Google News is not accurate - the sites in question only appeared for artificially narrow queries, including queries that explicitly filtered out the date of an original article.

"We take the quality of our results extremely seriously and have clear policies against content created for the primary purpose of ranking well on News and we remove sites that violate it."

Engadget then wrote, "We apologize for overstating the issue and are including a slightly modified version of the original story that has been corrected for accuracy, and we've updated the headline to make it more accurate."
Google

Google Says Russian Espionage Crew Behind New Malware Campaign (techcrunch.com) 10

Google researchers say they have evidence that a notorious Russian-linked hacking group -- tracked as "Cold River" -- is evolving its tactics beyond phishing to target victims with data-stealing malware. From a report: Cold River, also known as "Callisto Group" and "Star Blizzard," is known for conducting long-running espionage campaigns against NATO countries, particularly the United States and the United Kingdom. Researchers believe the group's activities, which typically target high-profile individuals and organizations involved in international affairs and defense, suggest close ties to the Russian state. U.S. prosecutors in December indicted two Russian nationals linked to the group.

Google's Threat Analysis Group (TAG) said in new research this week that it has observed Cold River ramping up its activity in recent months and using new tactics capable of causing more disruption to its victims, predominantly targets in Ukraine and its NATO allies, academic institutions and non-government organizations. These latest findings come soon after Microsoft researchers reported that the Russia-aligned hacking group had improved its ability to evade detection. In research shared with TechCrunch ahead of its publication on Thursday, TAG researchers say that Cold River has continued to shift beyond its usual tactic of phishing for credentials to delivering malware via campaigns using PDF documents as lures.

Google

Google CEO Tells Employees To Expect More Job Cuts This Year (theverge.com) 53

Google has laid off over a thousand employees across various departments since January 10th. CEO Sundar Pichai's message is to brace for more cuts. The Verge: "We have ambitious goals and will be investing in our big priorities this year," Pichai told all Google employees on Wednesday in an internal memo that was shared with me. "The reality is that to create the capacity for this investment, we have to make tough choices." So far, those "tough choices" have included layoffs and reorganizations in Google's hardware, ad sales, search, shopping, maps, policy, core engineering, and YouTube teams.

"These role eliminations are not at the scale of last year's reductions, and will not touch every team," Pichai wrote in his memo -- a reference to when Google cut 12,000 jobs this time last year. "But I know it's very difficult to see colleagues and teams impacted." Pichai said the layoffs this year were about "removing layers to simplify execution and drive velocity in some areas." He confirmed what many inside Google have been fearing: that more "role eliminations" are to come. "Many of these changes are already announced, though to be upfront, some teams will continue to make specific resource allocation decisions throughout the year where needed, and some roles may be impacted," he wrote.

Microsoft

Microsoft's Bing Market Share Barely Budged With ChatGPT Add-On (bloomberg.com) 48

When Microsoft announced it was baking ChatGPT into its Bing search engine last February, bullish analysts declared the move an "iPhone moment" that could upend the search market and chip away at Google's dominance. "The entire search category is now going through a sea change," Chief Executive Officer Satya Nadella said at the time. "That opportunity comes very few times." Almost a year later, the sea has yet to change. Bloomberg: The new Bing -- powered by OpenAI's generative AI technology -- dazzled internet users with conversational replies to queries asked in a natural way. But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement.

Bing has long struggled for relevance and attracted more mockery than recognition over the years as a serious alternative to Google. Multiple rebrandings and redesigns since its 2009 debut did little to boost Bing's popularity. A month before Microsoft infused the search engine with generative AI, people were spending 33% less time using it than they had 12 months earlier, according to SensorTower. The ChatGPT reboot at least helped reverse those declines. In the second quarter of 2023, US monthly active users more than doubled year over year to 3.1 million, according to a Bloomberg Intelligence analysis of SensorTower mobile app data. Overall, users were spending 84% more time on the search engine, the data show. By year-end, Bing's monthly active users had increased steadily to 4.4 million, according to SensorTower.

Security

A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com) 22

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each others' data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, the researchers demonstrate an attack where a target asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
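To make the mechanism concrete, here is a rough, hypothetical sketch of the "listener" idea in Python with PyOpenCL; it is not the researchers' actual sub-10-line proof of concept. The kernel declares workgroup-local memory and reads it without ever writing to it first. On a GPU with the LeftoverLocals flaw, those uninitialized reads can return whatever data a previous kernel, possibly belonging to another process, left behind; on a patched GPU the memory should come back zeroed or meaningless.

```python
import numpy as np
import pyopencl as cl

KERNEL = r"""
__kernel void listener(__local uint *scratch, __global uint *out) {
    // Read local memory WITHOUT initializing it. On a vulnerable GPU,
    // 'scratch' still holds values left over from earlier kernels.
    uint lid = get_local_id(0);
    out[get_group_id(0) * get_local_size(0) + lid] = scratch[lid];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL).build()

local_size, groups = 256, 64
total = local_size * groups
dumped = np.zeros(total, dtype=np.uint32)
out_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, dumped.nbytes)

program.listener(queue, (total,), (local_size,),
                 cl.LocalMemory(local_size * 4), out_buf)
cl.enqueue_copy(queue, dumped, out_buf).wait()

# Non-zero words here are residue from whatever ran on the GPU before.
print(np.count_nonzero(dumped), "non-zero words out of", total)
```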
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the third-generation iPad Air with an A12 chip appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."
