Intel Arc B580 Battlemage Tested: $250 Graphics Cards Are Worthy Once Again (hothardware.com)
MojoKid writes: Today's launch and review of the new Intel Arc B580 marks a second-generation effort from the company, built on the fully refreshed Intel Xe2 graphics architecture, aka Battlemage, which promises big gains in performance and efficiency. Compared to its Arc Alchemist ancestors, the Arc B580 is a somewhat smaller GPU. It has fewer of nearly everything, and yet its performance specifications don't look far off. Much of this comes down to major architectural improvements aimed at efficiency and at making better use of the resources that were already there.
With 12GB of GDDR6 memory at 19Gbps, Arc B580 delivers performance that typically beats a GeForce RTX 4060 and even an RTX 4060 Ti in spots, especially when its extra 4GB of frame buffer memory comes into play. All in, Intel's latest Arc graphics offering is a strong contender in the $250 graphics card segment, and it should sell well in the months ahead, based on its value proposition, improved performance in ray tracing and advanced upscaling technologies.
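The summary quotes 12GB of GDDR6 at 19Gbps; combined with the commonly reported 192-bit memory bus for the B580 (an assumption not stated in the summary itself), that works out to roughly 456 GB/s of memory bandwidth, as a quick sketch shows:

```python
# Rough GDDR6 bandwidth estimate for the Arc B580.
# The 19 Gbps per-pin rate comes from the summary; the 192-bit bus
# width is the commonly reported spec, assumed here for illustration.
bus_width_bits = 192
data_rate_gbps = 19  # per pin

# Aggregate bandwidth: bits per second across the whole bus, divided by 8.
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"~{bandwidth_gb_s:.0f} GB/s")  # ~456 GB/s
```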
If this gets good Linux FOSS drivers (Score:5, Insightful)
Re: (Score:2)
Intel is moving to a new driver package for Linux so the jury's still out on that one.
Highlights the failure of vert integration (Score:2)
What worries me is those layoffs (Score:2)
So there's a high probability the guys who write the drivers for this card are going to be on the chopping block and the card is basically going to be useless for modern games.
Which sucks because the hardware is absol
Re: (Score:2)
Companies generally don't lay off people who are needed to continue bringing in existing cash flow. To say that they're going to lay off the driver developers is practically the same as claiming that they're abandoning the video card market.
Cherrypicked baseline? (Score:2)
I have an RTX 3060, and both it and the AMD near-equivalent were under $250 six months ago when I bought mine. That's the new price; I paid less for a used one, specifically because it was EVGA, and for some reason I thought it would be nice to have one of EVGA's last-hurrah cards from before they decided it wasn't worth living by nVidia's rules any longer. In any case, the 3060 has 12 GB, and I thought the 4060 did as well, because it's using half the memory bus of the 4090, which has 24 GB. So I don't see where this "extra 4 GB" i
Re: (Score:2)
Right now, tech reviewers are using the NVidia 4060 and AMD Radeon RX 7600 as their comparison cards, because they are (or likely will be) in the same price or performance range. For some reason, NVidia went down-market for the 4060, making it an 8 GB card when the 3060 was a 12 GB card. The 4060 probably should have been called a "4050" instead, and perhaps the 4060 Ti 16 GB should have been the "real" 4060.
Re: (Score:2)
NVIDIA 4060 has less memory than the 3060 https://videocardz.net/nvidia-geforce-rtx-4060 [videocardz.net] . The 4070 is in a different price bracket; I think they have chosen their comparison point reasonably.
Re: (Score:2)
The base 4060 and even one model of 4060 Ti use 8GB. The better 4060 Ti uses 16GB. The base 4060 is adequate for 1080p, the 4060 Ti is good at it, the 4060 Ti 16GB can do 4k60 OK if you decrease quality slightly (still better than half) but is worthless for higher refresh rates at 4k. All of them have the 8 lane PCIE 4.0 bus, so they are also worthless as an upgrade card for a system with a PCIE 3 bus. But the 4060 Ti 16GB is a reasonable intro LLM engine and video editing accelerator, so it kind of has a r
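The x8-lane complaint above comes down to simple bandwidth math: on a PCIe 3.0 motherboard, an x8 card runs at gen3 x8 speeds, half the gen4 x8 throughput. A rough sketch (per-lane rates after 128b/130b encoding; the helper function is just for illustration):

```python
# Approximate usable PCIe throughput per lane after 128b/130b encoding:
# gen3 ~0.985 GB/s/lane, gen4 ~1.969 GB/s/lane.
LANE_GB_S = {3: 0.985, 4: 1.969}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Aggregate one-direction link bandwidth in GB/s (illustrative)."""
    return LANE_GB_S[gen] * lanes

print(f"x8 on gen4: {link_bandwidth(4, 8):.1f} GB/s")  # ~15.8 GB/s
print(f"x8 on gen3: {link_bandwidth(3, 8):.1f} GB/s")  # ~7.9 GB/s
```

An x16 card on the same gen3 board would keep the full ~15.8 GB/s, which is why the 8-lane design stings specifically for older systems.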
Re: (Score:2)
When pushing generative AI models out to the VRAM, I'm reading off a 4x NVMe drive to start with. I don't see why only having 8 lanes to receive four lanes worth of data would be a huge problem. I can see where it might become a bottleneck for certain workloads, but the GPU itself would be an even bigger bottleneck. Typically, I'll see load times of 30 to 40 seconds pushing the model out to VRAM (then not again unless I have to change models, so this is a one-time cost), but 300 to 800 seconds actually gene
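The reasoning above (load once, generate many times) can be sketched with illustrative numbers; the drive and link speeds below are assumed ceilings, not measurements, and real loads add deserialization overhead on top of the raw transfer:

```python
# Sketch: for a one-time model load, the slowest hop sets the floor.
# Assumed, illustrative numbers -- not measured:
model_gb = 16         # hypothetical model size in GB
nvme_gb_s = 7.0       # typical gen4 x4 NVMe sequential-read ceiling
pcie_x8_gb_s = 15.8   # approx. gen4 x8 GPU link bandwidth

# The transfer floor is set by the slower of the two links (the NVMe here),
# so the x8 GPU link isn't the bottleneck for this path.
floor_s = model_gb / min(nvme_gb_s, pcie_x8_gb_s)
print(f"raw-transfer floor: ~{floor_s:.1f} s")
```

The gap between that raw-transfer floor and real-world load times of tens of seconds is mostly file parsing and allocation, not the bus, which supports the point that generation time dwarfs the one-time load cost either way.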
Re: (Score:2)
Everyone cherry picks. That's what a comparison is.
But why would Intel's card be compared to a 12gb 4060 model which costs almost 40% more than their card? That doesn't seem like a good comparison to me.
Picking a card solely based on having the same RAM seems like a much worse cherry-picked comparison. If they're gonna do that, then why not compare against the 4-year-old 3060 of yours? A new 12gb 3060 goes for $280-300, which is still more than this card. To find one under $250 as you've mentioned is defini