Graphics / Intel / Hardware

As Intel Gets Into Discrete GPUs, It Scales Back Support For Many Integrated GPUs (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Intel is slowly moving into the dedicated graphics market, and its graphics driver releases are looking a lot more like Nvidia's and AMD's than they used to. For its dedicated Arc GPUs and the architecturally similar integrated GPUs that ship with 11th- and 12th-generation Intel CPUs, the company promises monthly driver releases, along with "Day 0" drivers with specific fixes and performance enhancements for just-released games. At the same time, Intel's GPU driver updates are beginning to de-emphasize what used to be the company's bread and butter: low-end integrated GPUs. The company announced yesterday that it would be moving most of its integrated GPUs to a "legacy support model," which will provide quarterly updates to fix security issues and "critical" bugs but won't include the game-specific fixes that newer GPUs are getting.

The change affects a wide swath of GPUs, which are not all ancient history. Among others, the change affects all integrated GPUs in the following processor generations, from low-end unnumbered "HD/UHD graphics" to the faster Intel Iris-branded versions: 6th-generation Core (introduced 2015, codenamed Skylake), 7th-generation Core (introduced 2016, codenamed Kaby Lake), 8th-generation Core (introduced 2017-2018, codenamed Kaby Lake-R, Whiskey Lake, and Coffee Lake), 9th-generation Core (introduced 2018, codenamed Coffee Lake), 10th-generation Core (introduced 2019-2020, codenamed Comet Lake and Ice Lake), and various N4000, N5000, and N6000-series Celeron and Pentium CPUs (introduced 2017-2021, codenamed Gemini Lake, Elkhart Lake, and Jasper Lake).

Intel is still offering a single 1.1GB driver package that supports everything from its newest Iris Xe GPUs to Skylake-era integrated graphics. However, the install package now contains one driver for newer GPUs that are still getting new features and a second driver for older GPUs on the legacy support model. The company uses a similar approach for driver updates for its Wi-Fi adapters, including multiple driver versions in the same download package to support multiple generations of hardware.
"The upshot is that these GPUs' drivers are about as fast and well-optimized as they're going to get, and the hardware isn't powerful enough to play many of the newer games that Intel provides fixes for in new GPU drivers anyway," writes Ars Technica's Andrew Cunningham. "Practically speaking, losing out on a consistent stream of new gaming-centric driver updates is unlikely to impact the users of these GPUs much, especially since Intel will continue to fix problems as they occur."

Comments Filter:
  • Talk about taking your time... How long has Intel been nursing this project anyhow?

    They've missed the boat in any case. GPU prices are back to near commodity levels so now you'll have a third choice.

    Some day.

    • Intel is starting to ramp this up: OEMs are producing actual cards now, and they are laying out a roadmap for releases. They admit they have a lot of driver work ahead of them, but they have a plan for adding more and more game optimizations.

      This Gamers Nexus interview with someone on the GPU team is pretty interesting. He says Intel is in this for the long haul, and they're hoping in the next couple of generations to get performance closer to mid-range and high-end cards from NVidia and AMD.

      I wish them good luck.

    • by Luthair ( 847766 )
      The idea that they'd have availability when AMD/Nvidia didn't was always fiction - they're using the same foundry (TSMC) and selling the chips to third parties to make boards. Had they been available earlier, they'd have sold at the same premium.
    • How long has Intel been nursing this project anyhow?

      Since 1983 [vgamuseum.info]. Or, if you want what's recognised as a modern GPU (since they didn't really exist when Intel first got into this market), since 1998 [techpowerup.com]. So it's been just under twenty-five years of trying.

      If I were Intel, I wouldn't give up my day job (making shitty integrated graphics) just yet. They announce an entry into the GPU market every few years, and it sinks without a trace just in time for corporate memory to fade and someone to say, "you know what, we should get into the GPU market!"

      • Oops, forgot about the 82786 from 1986, which is probably close enough to a GPU to count, while the 82720 from 1983 was more a display controller. So they've been working on getting to a GPU that doesn't suck since 1986, thirty-six years. Quite probably many of the people working on the current whatever-number-it-will-have weren't even born yet when this project started.

        In 600 million years the sun will burn out, and Intel will have to work on future versions of their GPU without it having raytracing support.

      • If I was Intel, I wouldn't give up my day job (making shitty integrated graphics) just yet

        If I am looking for a cheap PC for Jellyfin/Emby/Plex, Intel is my first stop. Their GPU hardware acceleration is great for media.
        Gaming, on the other hand...

  • This sounds like Intel came out with a newer product so it's backing away from expectations it set with its older products.

    This makes me think they will do the same thing with the newer products just as soon as even newer products come out.

    Once a company shows that it's not committed to support, many high-end customers will look elsewhere.

    nVidia is open sourcing its stuff too at the same time. Apple is dropping Intel chips. AMD is smoking it on the CPU front. Quite a shame but ever since NetBurst everybo

    • To be fair, what do any of us expect from an on-chip Intel iGPU besides it working and being stable? (And I can't say I've ever had a stability problem with their GPUs or drivers; they just perform like a bag of wet towels.)

      AMD and NVidia stop driver support for older models just the same; if I punch my AMD 7970 or NVidia 770 into the driver tool, I am going to get a version several years older than the current one.

      If you want something with backwards compatibility for decades Microsoft h

      • > AMD and NVidia stop driver support for older models just the same,

        Except Intel just dropped support for the 10th-gen iGPU, which is only a two-year-old product that can still be found new on store shelves.
      • That 770 you're lying about is supported by Driver Version: 473.47 - Release Date: Mon May 16, 2022.

    • The integrated i915 in a 32-bit Pentium 4 is still fully supported and works fine, all from the same driver as the newest discrete stuff. nVidia on the other hand often drops support within 3 years, while the cards themselves are still being sold.

      And what's the news with nVidia "open sourcing" stuff? Do you mean moving the driver to an encrypted blob on a riscv CPU inside the card, so you can't even disassemble+analyze it anymore, then adding a kernel shim? That's all they've done on this front.

      With nVidia being this ba

      • by Trongy ( 64652 )

        Nvidia is only open sourcing their kernel modules, not the userspace driver. That's a tiny change in the big picture.

        From a practical perspective it should mean less breakage due to kernel upgrades if the Nvidia kernel modules are incorporated into the mainline kernel.

  • Their integrated GPUs sucked ass, so why wouldn't their discrete ones? Hired a new team?

    • Pretty much. Independent testing should be coming soon as these things trickle out, but they are claiming their upcoming A750 card is comparable to an NVidia 3060, which is pretty respectable coming out of the gate.

      Intel Arc A750 to compete with GeForce RTX 3060 [videocardz.com]

    • Re:hm (Score:5, Insightful)

      by Joce640k ( 829181 ) on Thursday July 28, 2022 @08:54PM (#62743278) Homepage

      Their integrated GPUs sucked ass, so why wouldn't their discrete ones?

      Because they don't have to cram it into a single chip alongside a massive CPU?

      • by _merlin ( 160982 )

        Yeah, but their previous attempts at discrete GPUs didn't go so well, either.

        They acquired Real3D and made the Intel 740 chipset, which was a classic case of solving last year's problem. It was designed to get geometry from main memory to the GPU fast, which alleviated a bottleneck for GPUs without hardware transform and lighting (T&L). The trouble is, by the time the 740 arrived, hardware T&L was becoming common, and games were using far more texture data, so the lack of dedicated texture RAM killed it.

        • Add to that a ton of driver problems in games, since gaming was never a market for their iGPUs. That still shows today with the handheld PC-based consoles: the AMD-based ones simply work better!

      • The 3060-level performance they're claiming on their discrete GPUs is about what Apple are providing with their integrated ones.
        • Apple's M1 and M2 GPU only scales as high as a 1050TI (trash tier)
          • by dgatwood ( 11270 )

            Apple's M1 and M2 GPU only scales as high as a 1050TI (trash tier)

            Which one? When you say "M1 GPU", that's four different GPU models, each of which comes in two different configurations. The M1 has either a 7-core or 8-core GPU. The M1 Pro has either a 14-core or 16-core GPU. The M1 Max has either a 24-core or 32-core GPU. The M1 Ultra has either a 48-core or 64-core GPU, and the M2 has either an 8-core or 10-core GPU.

            So you've lumped ten different GPUs with an 8x difference in core count into the same performance category, which is complete nonsense.
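
            A quick back-of-the-envelope on that spread, as a minimal Python sketch (the core counts are the ones listed above; this says nothing about actual performance):

```python
# Apple-silicon GPU configurations listed above: chip -> core-count options.
GPU_CORES = {
    "M1": (7, 8),
    "M1 Pro": (14, 16),
    "M1 Max": (24, 32),
    "M1 Ultra": (48, 64),
    "M2": (8, 10),
}

configs = [(chip, n) for chip, counts in GPU_CORES.items() for n in counts]
print(len(configs), "distinct GPU configurations")                       # 10
print(64 // 8, "x more cores in a 64-core M1 Ultra than an 8-core M1")   # 8
```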

            Comparing the M1 against NVIDIA is challenging because in many benchmarks, the M1 series will smoke an NVIDIA chip in a bunch of benchmarks and then get smoked by the NVIDIA chip in a bunch of other benchmarks.

            • Comparing the M1 against NVIDIA is challenging because in many benchmarks, the M1 series will smoke an NVIDIA chip in a bunch of benchmarks and then get smoked by the NVIDIA chip in a bunch of other benchmarks.

              In which ones does it smoke an Nvidia chip? The only ones I've seen where the M1 beats it are when you're talking about encoding very specific formats, for which the PC will use the GPU but the M1 has special built-in hardware for encoding that format and is not actually running it on the GPU. Not that it matters from an end-user perspective what does the processing, but obviously fixed-function hardware designed for a specific task is generally going to be better than general-purpose hardware.

              Like for example the resolution scaling: if you are running a monitor with a PPI different to what Apple provides on its monitors, then it can't use the built-in hardware scaling and has to defer to the GPU to do it, which is very weak, and the performance is woeful. I borrowed an M1 Ultra from work and connected it to my 4K monitor and struggled; it wasn't until I started searching that I found a bunch of YouTube videos explaining this.

              • by dgatwood ( 11270 )

                Comparing the M1 against NVIDIA is challenging because in many benchmarks, the M1 series will smoke an NVIDIA chip in a bunch of benchmarks and then get smoked by the NVIDIA chip in a bunch of other benchmarks.

                In which ones does it smoke an Nvidia chip? The only ones I've seen where the M1 beats it are when you're talking about encoding very specific formats, for which the PC will use the GPU but the M1 has special built-in hardware for encoding that format and is not actually running it on the GPU

                Not just encoding. Take a look at the OpenGL ES [tomshardware.com] benchmarks from GFXBench. The Manhattan, Manhattan 3.1, and Driver Overhead 2 benchmarks ran almost twice as fast on the M1 Max as on the RTX 3080 Mobile. But that same NVIDIA chip was half again faster on the Car Chase and T-Rex benchmarks and almost 3x as fast on the ALU 2 benchmark. The other benchmarks were fairly similar.

                Like for example the resolution scaling: if you are running a monitor with a PPI different to what Apple provides on its monitors, then it can't use the built-in hardware scaling and has to defer to the GPU to do it, which is very weak, and the performance is woeful. I borrowed an M1 Ultra from work and connected it to my 4K monitor and struggled; it wasn't until I started searching that I found a bunch of YouTube videos explaining this.

                Try BetterDummy [macworld.com]. Not perfect, but it supposedly helps a lot with third-party monitors. But yeah, external high-resolution monitor support on the M1 series is still a work in progress, not from a hardware perspective so much as from a software perspective. I'm not sure why this only happens on the M1-based hardware, but it is probably a driver-level bug in how EDID is handled.

                • Not just encoding. Take a look at the OpenGL ES [tomshardware.com] benchmarks from GFXBench.

                  Well, yes, that's OpenGL for Embedded Systems, which is not really relevant on the desktop (which is why desktop GPUs have never targeted it), but with the M1 being the evolution of a mobile chip, it makes sense.

                  But yeah, external high-resolution monitor support on the M1 series is still a work in progress, not from a hardware perspective so much as from a software perspective. I'm not sure why this only happens on the M1-based hardware, but it is probably a driver-level bug in how EDID is handled.

                  The problem is hardware-related: Apple only has scaling hardware for its target PPIs, so with a monitor that falls outside that, it ends up scaling to 5K and then scaling it back down on the GPU instead, which Apple's GPUs are very poor at, so things end up chugging, particularly in video editing tasks. I couldn't figure out why the performance was so bad, and then I found this video (and there's another one on the subject as well): https://youtu.be/KvkOc2U7iAk?t... [youtu.be]
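
                  A rough sense of why that path hurts, as a minimal Python sketch (the resolutions are the standard 5K and 4K UHD figures; treating cost as proportional to pixel count is a simplification):

```python
# Pixel budget for the render-at-5K-then-downscale-to-4K path described above.
FIVE_K = 5120 * 2880   # the virtual display macOS renders at
UHD_4K = 3840 * 2160   # the physical 4K monitor

print(f"~{FIVE_K / UHD_4K:.2f}x the pixels rendered vs. native 4K")        # ~1.78x
print(f"plus a non-integer {5120 / 3840:.2f}:1 GPU downscale afterwards")  # 1.33:1
```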

                  • by dgatwood ( 11270 )

                    Not just encoding. Take a look at the OpenGL ES [tomshardware.com] benchmarks from GFXBench.

                    Well, yes, that's OpenGL for Embedded Systems, which is not really relevant on the desktop (which is why desktop GPUs have never targeted it), but with the M1 being the evolution of a mobile chip, it makes sense.

                    AFAIK, OpenGL ES is a proper subset of OpenGL, so anything that's part of OpenGL ES is also part of OpenGL. So I'm not seeing your point here. There's a subset of OpenGL operations that the M1-series GPUs handle better than the RTX 3080. That subset also happens to be a subset of OpenGL ES.

                    But yeah, external high-resolution monitor support on the M1 series is still a work in progress, not from a hardware perspective so much as from a software perspective. I'm not sure why this only happens on the M1-based hardware, but it is probably a driver-level bug in how EDID is handled.

                    The problem is hardware-related: Apple only has scaling hardware for its target PPIs, so with a monitor that falls outside that, it ends up scaling to 5K and then scaling it back down on the GPU instead, which Apple's GPUs are very poor at, so things end up chugging, particularly in video editing tasks. I couldn't figure out why the performance was so bad, and then I found this video (and there's another one on the subject as well): https://youtu.be/KvkOc2U7iAk?t... [youtu.be]

                    I'm 99.9999% sure that's wrong, at least based on my understanding of EDID.

                    The problem is that Apple doesn't make every DPI scaling option listed in the EDID available to the user through their UI. I guess they'

                    • AFAIK, OpenGL ES is a proper subset of OpenGL, so anything that's part of OpenGL ES is also part of OpenGL. So I'm not seeing your point here.

                      Perhaps what you aren't understanding is that, because it is a subset, the things you do in OpenGL ES out of necessity (because it doesn't have the capabilities of full-featured OpenGL) are things you would never do if you had access to full OpenGL, which, on the desktop, you do. Being effective at doing things that are redundant and aren't applicable to the real world is not much of an achievement, which is precisely why you only see this advantage in synthetic benchmarks.

                      involves creating the content at 5K resolution instead of 4K, then scaling it down on the GPU at a non-whole-number ratio, then blitting the contents of the virtual screen onto the physical screen, with probably at least two extra user-kernel boundary crossings in the process, all of which is grossly undesirable from a performance perspective.

                      You can do t

                    • by dgatwood ( 11270 )

                      AFAIK, OpenGL ES is a proper subset of OpenGL, so anything that's part of OpenGL ES is also part of OpenGL. So I'm not seeing your point here.

                      Perhaps what you aren't understanding is that, because it is a subset, the things you do in OpenGL ES out of necessity (because it doesn't have the capabilities of full-featured OpenGL) are things you would never do if you had access to full OpenGL, which, on the desktop, you do. Being effective at doing things that are redundant and aren't applicable to the real world is not much of an achievement, which is precisely why you only see this advantage in synthetic benchmarks.

                      I wouldn't call most of those synthetic benchmarks. They're rendering an actual scene.

                      And the differences between OpenGL and ES aren't that big, according to khronos.org [khronos.org]:

                      • No support for deprecated begin/end groupings (I think this is also removed in current OpenGL versions).
                      • No support for 3D or 1D textures.
                      • No support for non-triangle polygons.
                      • Fixed-point coordinate support.

                      I'm struggling to imagine how performance issues related to any of those wouldn't translate into real-world impact in some OpenGL-based

                    • I wouldn't call most of those synthetic benchmarks.

                      Whether you would call them that or not isn't really relevant; they aren't real-world application benchmarks, they are synthetic benchmarks.

                      They're rendering an actual scene.

                      Yes but they are doing so in a way that nobody actually does in the real world, hence the reason this difference is only found in synthetic benchmarks and not in real application/game benchmarks.

                      And the differences between OpenGL and ES aren't that big, according to khronos.org [khronos.org]:

                      Remember on the desktop it is emulated, typically running atop either desktop OpenGL or DirectX 11 anyway, much like comparing Vulkan performance on the Mac where Vulkan is implemented atop Metal.

                      Celebrating the performance of an API nobody uses is pointless anyway, just like nobody cares about OpenCL benchmarks because in the real world on Apple you use Metal Performance Shaders and on Nvidia you use CUDA because they are both way better than OpenCL.

                    • by dgatwood ( 11270 )

                      I wouldn't call most of those synthetic benchmarks.

                      Whether you would call them that or not isn't really relevant; they aren't real-world application benchmarks, they are synthetic benchmarks.

                      Typically, a synthetic benchmark is something like "Render 1 billion polygons, then apply 1 million shaders, then..." where the number of operations is intended to mimic the number of times a real application performs a particular task, but the data is arbitrary garbage. These benchmarks are rendering an actual scene. That's not really synthetic. Artificial, perhaps, but not synthetic.

                      They're rendering an actual scene.

                      Yes but they are doing so in a way that nobody actually does in the real world, hence the reason this difference is only found in synthetic benchmarks and not in real application/game benchmarks.

                      What, specifically, are they doing in those benchmarks that you believe "nobody actually does in the real world"?

                      And the differences between OpenGL and ES aren't that big, according to khronos.org [khronos.org]:

                      Remember on the desktop it is emulated, typically running atop either desktop OpenGL or DirectX 11 anyway, much like comparing Vulkan performance on the Mac where Vulkan is implemented atop Metal.

                      Celebrating the performance of an API nobody uses is pointless anyway, just like nobody cares about OpenCL benchmarks because in the real world on Apple you use Metal Performance Shaders and on Nvidia you use CUDA because they are both way better than OpenCL.

                      Nobody i

                    • No, a synthetic benchmark is distinct from a real world application benchmark. It's not that complicated and is also why you don't see these performance differences in actual games and applications.

                      As you said, OpenGL ES is basically a thin shim on top of OpenGL, so what you're actually saying is that those scenes, if run on OpenGL or Metal or MoltenVK would have run faster on Apple's GPU than the NVIDIA.

                      No. There is a native OpenGL ES implementation in Apple's driver, predominantly because it evolved from the mobile SoC. That doesn't exist in the Nvidia drivers - again because nobody uses OpenGLES on the desktop - so what you get is a MoltenVK-like layer atop desktop GL/Vulkan/DX. It would be like comparing Apple to AMD/Nvidia and using Vulkan on both instead of Metal on the Mac and Vulkan on AMD/Nvidia.

                    • by dgatwood ( 11270 )

                      No, a synthetic benchmark is distinct from a real world application benchmark. It's not that complicated and is also why you don't see these performance differences in actual games and applications.

                      As you said, OpenGL ES is basically a thin shim on top of OpenGL, so what you're actually saying is that those scenes, if run on OpenGL or Metal or MoltenVK would have run faster on Apple's GPU than the NVIDIA.

                      No. There is a native OpenGL ES implementation in Apple's driver, predominantly because it evolved from the mobile SoC. That doesn't exist in the Nvidia drivers - again because nobody uses OpenGLES on the desktop - so what you get is a MoltenVK-like layer atop desktop GL/Vulkan/DX. It would be like comparing Apple to AMD/Nvidia and using Vulkan on both instead of Metal on the Mac and Vulkan on AMD/Nvidia.

                      I think you're misunderstanding what "strict subset" means. AFAIK, running an OpenGL ES app on OpenGL requires translating the shaders (once), but after that, OpenGL and OpenGL ES have the same functions with the same names that take the same parameters. There should be very close to zero performance hit involved other than the shader translation (which should be a one-time performance hit). If you're seeing a 2x performance hit from that (or even a 1% hit), something is very, very wrong.
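
                      To make the "translate the shaders once" step concrete, here is a toy Python sketch (the shader source is just an example; real translation layers such as ANGLE do far more than naive string rewriting):

```python
# A GLSL ES 1.00 fragment shader, the kind an OpenGL ES 2.0 app ships.
ES_FRAGMENT_SHADER = """\
precision mediump float;
varying vec2 v_uv;
uniform sampler2D u_tex;
void main() {
    gl_FragColor = texture2D(u_tex, v_uv);
}
"""

def to_desktop_glsl(src: str) -> str:
    """Naive one-time rewrite to desktop GLSL 330 core, for illustration only."""
    out = src.replace("precision mediump float;\n", "")  # precision qualifiers are ES-only
    out = out.replace("varying", "in")                   # fragment-stage inputs use 'in'
    out = out.replace("texture2D(", "texture(")          # texture2D() is not in core GL
    out = out.replace("gl_FragColor", "fragColor")       # core GL has no built-in gl_FragColor
    return "#version 330 core\nout vec4 fragColor;\n" + out

print(to_desktop_glsl(ES_FRAGMENT_SHADER))
```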

                      Also, when you r

                    • So I'm very confused by what you're saying, because it doesn't match up with my understanding of the technology.

                      As I explained to you already, it is an emulation layer atop the native graphics API; that is the reason you need things like ANGLE, because you can't just simply translate the shaders and then run it on the native API.

                      For example, anything involving tessellation or geometry shaders takes a big hit when translating Vulkan to Metal, because Metal has no native support for some of those features, requiring them to do extra compute work between rendering steps while drawing.

                      OK, so firstly, nobody uses geometry shaders anymore; they are long defunct in favor of compute shaders, because the flexibility of geometry shaders to allow a variable number of output primitives within the primitive processing pipeline kills performance. But again any of the shader translatio

                    • by dgatwood ( 11270 )

                      So I'm very confused by what you're saying, because it doesn't match up with my understanding of the technology.

                      As I explained to you already, it is an emulation layer atop the native graphics API; that is the reason you need things like ANGLE, because you can't just simply translate the shaders and then run it on the native API.

                      So you're saying they no longer support OpenGL natively? That's a choice. And that means every OpenGL game should also be impacted.

                    • by dgatwood ( 11270 )

                      So I'm very confused by what you're saying, because it doesn't match up with my understanding of the technology.

                      As I explained to you already, it is an emulation layer atop the native graphics API; that is the reason you need things like ANGLE, because you can't just simply translate the shaders and then run it on the native API.

                      So you're saying they no longer support OpenGL natively? That's a choice. And that means every OpenGL game should also be impacted.

                      And you're saying they no longer support DirectX? Because those benchmarks are consistent with the results when using the DirectX version of that benchmark. That benchmark (Aztec Ruins 4K offscreen) has versions that run in OpenGL, Vulkan, DirectX, and Metal natively [technical-news.net], and the M1 using the Metal API [gfxbench.com] beats the 3080 laptop chip using the DirectX API [gfxbench.com] on that test.

          • Apple's M1 and M2 GPU only scales as high as a 1050TI (trash tier)

            I just found out my graphics card is trash.

            I actually program 3D graphics for a living, too. LOL!

    • by Z00L00K ( 682162 )

      Built-in GPUs are good enough for low-end desktops and for servers, but for high-end computers you usually add an external GPU.

      • >but for high-end computers you usually add an external GPU.

        Speak for yourself. My high-end computers don't have a screen, and they don't meddle in machine learning nonsense. No GPU needed. But lots and lots of fast cores - that's what makes my jobs run faster.

  • "Day 0" drivers? Is that a Freudian slip in which they're admitting these drivers will contain 0 days?
    • Contain 0 days? The concept of "Day 0" support in drivers has been pretty common for quite some time with the major vendors.
  • How powerful are these dedicated GPUs? Will the drivers be fully open source?
  • Intel ended support for Ivy Bridge and older processors in 2018 when they failed to create a new IGP driver for Windows "10" 1803, although Ivy Bridge was only practically replaced in late 2014 when Haswell processors finally became generally available, meaning approximately 3 years of support.

    Today's announcement terminating support for Comet Lake, which isn't even 3 years old and is still being sold in new systems, seems a bit extreme. It's really up to Microsoft whether they decide to kill the old drivers.
