
Intel Arc B580 Battlemage Tested: $250 Graphics Cards Are Worthy Once Again (hothardware.com)

MojoKid writes: Today's launch of the new Intel Arc B580, along with its first reviews, marks a second-gen effort from the company, with a fully refreshed Intel Xe2 graphics architecture, aka Battlemage, that promises big gains in performance and efficiency. Comparing Arc B580 to its Arc Alchemist ancestors, you can see that it's somewhat of a smaller GPU. It has fewer of nearly everything, and yet its performance specifications don't look too far off. A lot of this comes down to massive architectural improvements with an eye toward efficiency and making better use of the resources that were already there.

With 12GB of GDDR6 memory at 19Gbps, Arc B580 delivers performance that typically beats a GeForce RTX 4060 and even an RTX 4060 Ti in spots, especially when its extra 4GB of frame buffer memory comes into play. All in, Intel's latest Arc graphics offering is a strong contender in the $250 graphics card segment, and it should sell well in the months ahead based on its value proposition, improved ray tracing performance, and advanced upscaling technologies.
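For context, a quick back-of-the-envelope check on those memory specs. This is a hedged sketch: the summary states only the 12GB capacity and the 19Gbps data rate, so the 192-bit bus width used below is an assumption, not a figure from the article.

    # Rough peak memory bandwidth estimate for the Arc B580.
    # The 192-bit bus width is an assumption; the summary only gives
    # 12GB of GDDR6 at a 19Gbps per-pin data rate.
    data_rate_gbps = 19      # GDDR6 per-pin data rate, Gbit/s
    bus_width_bits = 192     # assumed memory interface width

    bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
    print(f"Peak memory bandwidth: ~{bandwidth_gb_s:.0f} GB/s")  # ~456 GB/s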


Comments:
  • by Valgrus Thunderaxe ( 8769977 ) on Friday December 13, 2024 @08:29PM (#65012187)
    I'll strongly consider picking one up. How have the older cards been?
  • Well it looks like intel chip designers can in fact design good stuff when they don't have the intel-fab noose hanging around their necks. These GPU chips are made on a TSMC process, which I think is very telling, and it further cements my thoughts that intel really must separate into two parts: (1) a chip design firm, much like AMD and basically every other chip design company... Arm, Marvell, nVidia, MediaTek, Qualcomm, and (2) their ailing fab business. Vertical integration is killing intel.
  • Intel is planning to fire thousands of employees; the CEO said pretty much anything and everything that isn't core business will get cut. Got to get that stock price up, and got to get some cash for stock buybacks to keep it going up. Especially since we're heading into a recession next year.

    So there's a high probability the guys who write the drivers for this card are going to be on the chopping block and the card is basically going to be useless for modern games.

    Which sucks because the hardware is absol...
    • by quall ( 1441799 )

      Companies generally don't lay off people who are needed to continue bringing in existing cash flow. To say that they're going to lay off the driver developers is practically the same as claiming that they're abandoning the video card market.

  • I have an RTX 3060, and both it and the AMD near-equivalent were under $250 six months ago when I bought it. That's the new price; I paid less for a used one, specifically because it was EVGA and for some reason I thought it would be nice to have one of EVGA's last-hurrah cards, from before they decided it wasn't worth living by nVidia's rules any longer. In any case, the 3060 has 12 GB, and I thought the 4060 did as well, because it's using half the memory bus of the 4090, which has 24 GB. So I don't see where this "extra 4 GB" is...

    • Right now, tech reviewers are using the NVidia 4060 and AMD Radeon RX 7600 as their comparison cards, because they are (or likely will be) in the same price or performance range. For some reason, NVidia went down-market for the 4060, making it an 8 GB card when the 3060 was a 12 GB card. The 4060 probably should have been called a "4050" instead, and perhaps the 4060 Ti 16 GB should have been the "real" 4060.

    • The NVIDIA 4060 has less memory than the 3060: https://videocardz.net/nvidia-geforce-rtx-4060 [videocardz.net]. The 4070 is in a different price bracket; I think they have chosen their comparison point reasonably.

    • The base 4060 and even one model of 4060 Ti use 8GB. The better 4060 Ti uses 16GB. The base 4060 is adequate for 1080p, the 4060 Ti is good at it, and the 4060 Ti 16GB can do 4k60 OK if you decrease quality slightly (still better than half), but is worthless for higher refresh rates at 4k. All of them have an 8-lane PCIe 4.0 bus, so they are also worthless as an upgrade card for a system with a PCIe 3 bus. But the 4060 Ti 16GB is a reasonable intro LLM engine and video editing accelerator, so it kind of has a r...

      • by Mal-2 ( 675116 )

        When pushing generative AI models out to the VRAM, I'm reading off a 4x NVMe drive to start with. I don't see why only having 8 lanes to receive four lanes' worth of data would be a huge problem. I can see where it might become a bottleneck for certain workloads, but the GPU itself would be an even bigger bottleneck. Typically, I'll see load times of 30 to 40 seconds pushing the model out to VRAM (then not again unless I have to change models, so this is a one-time cost), but 300 to 800 seconds actually gene... (see the rough transfer-time sketch after the comments)

    • by quall ( 1441799 )

      Everyone cherry picks. That's what a comparison is.

      But why would Intel's card be compared to a 12GB 4060 model which costs almost 40% more than their card? That doesn't seem like a good comparison to me.

      Picking a card solely based on having the same RAM seems like a much worse cherry-picked comparison. If they're gonna do that, then why not compare against that 4-year-old 3060 of yours? A new 12GB 3060 goes for $280-300, which is still more than this card. To find one under $250 as you've mentioned is defini...

    • There's nothing cherry-picked about using the current-gen bottom-tier GPU as a baseline. The RTX 3060 is no longer a current product, and people buying new GPUs won't be comparing it to the B580. They are comparing entry-level cards to entry-level cards, nothing more. The underlying specs are irrelevant. Or are you going to start saying we can only compare cards with 3072 CUDA cores, 96 texture mapping units, 48 ROPs, and 24 ray tracing acceleration cores to other cards with 3072 CUDA cores, 96 texture mapping un...
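Following up on the PCIe discussion in the thread above, here is a rough transfer-time sketch. It's a hedged back-of-the-envelope calculation rather than a benchmark: the ~2 GB/s-per-lane PCIe 4.0 figure is a theoretical peak and the 8 GB model size is a hypothetical round number; only the x4 NVMe source and x8 GPU link come from the comments.

    # Back-of-the-envelope transfer times for loading a model from NVMe to VRAM.
    # Assumptions: ~2 GB/s per PCIe 4.0 lane (theoretical peak, before protocol
    # overhead), a hypothetical 8 GB model, an x4 NVMe source, an x8 GPU link.
    PCIE4_LANE_GB_S = 2.0

    model_size_gb = 8.0
    nvme_read_gb_s = 4 * PCIE4_LANE_GB_S   # x4 NVMe drive: ~8 GB/s peak
    gpu_link_gb_s = 8 * PCIE4_LANE_GB_S    # x8 GPU slot:  ~16 GB/s peak

    # The slower hop bounds the transfer, so an x8 GPU link fed by a single
    # x4 drive is not the limiting factor.
    load_time_s = model_size_gb / min(nvme_read_gb_s, gpu_link_gb_s)
    print(f"Theoretical lower bound on load time: {load_time_s:.1f} s")  # ~1 s

On these numbers the link itself accounts for only about a second, which suggests the 30-40 second real-world load times mentioned above are dominated by overhead other than the raw PCIe transfer.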

