
Next-Gen GPU Progress Slowing As It Aims for 20 nm and Beyond

JoshMST writes "Why are we in the middle of GPU-renaming hell? AMD may be releasing a new 28-nm Hawaii chip in the next few days, but it is still based on the same 28-nm process that the original HD 7970 debuted on nearly two years ago. Quick and easy (relatively speaking) process-node transitions are probably a thing of the past; 20-nm lines applicable to large ASICs are not opening until mid-2014. 'AMD and NVIDIA will have to do a lot of work to implement next generation features without breaking transistor budgets. They will have to do more with less, essentially. Either that or we will just have to deal with a much slower introduction of next generation parts.' It's amazing how far the graphics industry has come in the past 18 years, but the challenges ahead are greater than ever."
  • by All_One_Mind ( 945389 ) on Wednesday October 23, 2013 @05:16PM (#45218109) Homepage Journal
    As a Radeon 7970 early adopter, I am completely fine with this. It still more than kicks butt at any game I throw at it, and hopefully this slow pace will mean that I'll get another couple of good years out of my expensive purchase.
    • by Rockoon ( 1252108 ) on Wednesday October 23, 2013 @06:33PM (#45218781)
      Hell, I recently picked up an A10-6800K APU and the integrated graphics are more than acceptable for the gaming that I do at 1920x1080 (Team Fortress 2, Kerbal Space Program, Planet Explorers, Skyrim, ...), and that's not even with the fastest DDR3 the mobo supports.
      • Hell, I recently picked up an A10-6800K APU and the integrated graphics are more than acceptable for the gaming that I do at 1920x1080 (Team Fortress 2, Kerbal Space Program, Planet Explorers, Skyrim, ...), and that's not even with the fastest DDR3 the mobo supports.

        Can you turn on 16x Anisotropic Filtering and 8x Multi-Sample Anti-Aliasing in Skyrim? In my opinion it just looks a little ugly without it.

        • I would argue that on a typical laptop screen at 1920x1080, the small physical dimensions make things like anti-aliasing less important than on, say, a proper desktop monitor, where the pixels are bigger and the effect of missing AA is therefore more pronounced. As for anisotropic filtering, that's basically free on any graphics chipset these days, so I'd be surprised if there's any performance hit from it.

          Having said that, Skyrim's one of those games which everyone seems to love and I dislike (then aga

        • Using FRAPS' 60-second benchmark, running through the same area each time (so what's on screen isn't 100% consistent between runs):

          Ultra then switching AA/AF:
          0AA/0AF - 23 / 63 / 32.497 (min / max / ave)
          8AA/16AF - 16 / 32 / 19.283

          High then switching AA/AF:
          0AA/0AF - 27 / 57 / 40.867
          8AA/16AF - 19 / 36 / 23.833

          Medium then switching AA/AF:
          0AA/0AF - 32 / 59 / 42.167
          8AA/16AF - 20 / 34 / 26.933

          Low then switching AA/AF:
          0AA/16AF - 35 / 62 / 49.733
          8AA/0AF - 22 / 41 / 30.417
          8AA/16AF - 22 / 40 / 31.350

          The other poster seems to be right that AF
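
          As a rough sanity check on those numbers, a small Python sketch (the figures are just the averages quoted above; for the Low preset the baseline is the 0AA/16AF row) that works out how much frame rate the 8AA/16AF setting costs at each preset:

          # Relative FPS cost of 8x AA / 16x AF, from the FRAPS averages above.
          averages = {
              "Ultra":  (32.497, 19.283),   # (baseline, 8AA/16AF)
              "High":   (40.867, 23.833),
              "Medium": (42.167, 26.933),
              "Low":    (49.733, 31.350),   # baseline here is the 0AA/16AF run
          }

          for preset, (baseline, filtered) in averages.items():
              drop = 100.0 * (1.0 - filtered / baseline)
              print(f"{preset:7s}: {baseline:6.2f} -> {filtered:6.2f} fps  (~{drop:.0f}% slower)")

          Roughly a 36-42% hit depending on preset, which matches the feel of the numbers above.
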
          • Ultra then switching AA/AF:
            0AA/0AF - 23 / 63 / 32.497 (min / max / ave)
            8AA/16AF - 16 / 32 / 19.283

            Wow, that's better than I expected, but ~19 fps is going to be noticeably ugly, especially if you had a huge fight going on with quite a few enemies, such as the Stormcloak/Empire battles.

            It would be good if someone started putting these things into laptops, though. I just checked quickly and can't see any yet; hopefully that will change soon.

            • It's a 100W APU. It's the king of the current batch of CPUs with on-die GPUs as far as GPU performance goes (Intel's best is around 50% of its performance), so you're unlikely to see it in a laptop anytime soon.

              As for the 19 FPS on Ultra with 8x MSAA: AMD Catalyst can do "Morphological AA", which works on top of other AA methods (for example, 2x MSAA) and is supposed to be very efficient, frequently comparable to 8x SSAA all by itself. But meh, it's hard to quantify subjective stuff like "AA quality".

              After al
              • It's a 100W APU. It's the king of the current batch of CPUs with on-die GPUs as far as GPU performance goes (Intel's best is around 50% of its performance), so you're unlikely to see it in a laptop anytime soon.

                I believe you, but it's a pity. In the past I have bought laptops with discrete graphics, even though they got crappy battery life, just because I wanted to game on them on trains while travelling around here in the UK, where you get a power socket at most of the seats. I only skimped on that with my current laptop because I do far less travelling now.

    • by dstyle5 ( 702493 )
      It will be interesting to see what becomes of Mantle; it has the potential to greatly increase the performance of games that support it and extend the life of ATI cards. As the owner of a 7950, I'm looking forward to seeing how much of a performance boost Mantle gives vs. DirectX. I believe Battlefield 4, which I plan to purchase, will be one of the first Mantle-supported games when they patch in Mantle support in December. Can't wait to try it out.

      http://www.anandtech.com/show/7371/unders [anandtech.com]
  • Intel (Score:3, Informative)

    by Anonymous Coward on Wednesday October 23, 2013 @05:25PM (#45218187)

    Meanwhile, Intel is about to give us 15-core Ivy Bridge Xeons [wikipedia.org]. A year from now we'll have Haswell Xeons with at least that many cores, given that they use the same 22nm feature size.

    How many cores will 14nm [slashdot.org] Broadwell parts give us (once they sort out the yield problems)? You can expect to see 4-5 billion-transistor CPUs in the next few years.

    Yay for Moore's law.

    • Re:Intel (Score:4, Insightful)

      by laffer1 ( 701823 ) <<moc.semaghsiloof> <ta> <ekul>> on Wednesday October 23, 2013 @05:40PM (#45218313) Homepage Journal

      I actually wonder if Intel could catch up a bit on GPU performance with the integrated graphics on the newer chips. They're not process blocked. Same thing with AMD on the A series chips.

      • by Luckyo ( 1726890 )

        Unlikely simply due to memory bus limitations alone. Then there's the whole drivers elephant in the room.

        Frankly, it's unlikely that Intel even wants in on that market. The cost of entering the GPU market to the point where it could threaten mid-range and higher discrete GPUs is astronomical, and it might require Intel to cross-license with Nvidia to the point where Nvidia would want x86 cross-licensing in return - something Intel will never do.

        What is likely is that Intel and AMD will completely demolish the current low-end GPU ma

        • They've already solved the memory bandwidth issue with the eDRAM in the Iris Pro Haswell parts.

      • They're not process blocked.

        I imagine they're blocked by patents.

    • I'd happily throw money at Intel graphics, once (a) they actually catch up to the Big Two (they are currently not even anywhere near anything close to something that so much as resembles half their performance or graphical feature set), (b) I can afford them, and (c) they stop considering soldering their new chips to the board, and stop making chips with "Windows 8-only" features.

      Buying someone's chip is taken by that someone as support for their policies. Those in (c) are two that I hope I never have to s

    • by Aereus ( 1042228 )

      I realize this is /., so many here may actually be running workloads that need that sort of multi-threading. But the whole "more cores!" thing is completely lost on gaming and general computing; games still primarily do their workload on 1-2 cores.

  • Played across 24 monitors. Who really needs this crap?
    • There have been some graphics advances since the days of Quake 3.

      • Q3A (which still gives an excellent deathmatch experience) had some ahead-of-its-time features, such as multi-core support and ATI TruForm tessellation. :)
    • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday October 23, 2013 @05:37PM (#45218291) Homepage Journal
      More GPU power translates into more detailed geometry and shaders as well as more GPGPU power to calculate more detailed physics.
      • Re: (Score:2, Insightful)

        by Dunbal ( 464142 ) *
        At some point the limiting factor becomes the ability of the software designers to create such a complex graphics engine rather than the video card itself.
        • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday October 23, 2013 @08:50PM (#45219733) Homepage Journal
          True. And once graphical realism in a human-created game universe reaches its practical limit, game developers will have to once again experiment with stylized graphics. This parallels painting, which progressed to impressionism, cubism, and abstract expressionism.
        • But once the engine has been created, the artists still have to take advantage of it.

        • At some point the limiting factor becomes the ability of the software designers to create such a complex graphics engine rather than the video card itself.

          I think managing the complexity still goes a long way. You just break the engine into subproblems and assign them to different teams and people. The real caveat is that you will need more and more programmers to put it all together. Making something like the Source 2 engine already involves planning out huge frameworks and foundations, and it looks a bit like building a ship in a shipyard, at least when you look at the magnitude of the project.

          • by tepples ( 727027 )

            You just break the engine into subproblems and assign them to different teams and people.

            At some point, hiring enough people to solve all the subproblems costs more than your investors are willing to pay. Not all games can sell a billion dollars like Grand Theft Auto V. Indies have had to deal with this for decades.

    • The most powerful chips out there are still far below the capacity of a human brain.

      I don't want just to play games, I want to retire and leave my computer to do my work for me.

      At this point, we already have better software models [deeplearning.net] for the brain than hardware to run them on.

  • Having struggled through years of gaming on rigs with various GPUs, I have to wonder where it will hit the point that nobody needs any faster cards.
    I started out gaming on computers with no GPU, and when I got one with a Rage Pro 4MB it was awesome. Then I got a Voodoo card from 3DFX with a whopping 8MB and it was more awesomer. Now you can get whatever that will do whatever for however many dollars.

    I really don't see the game programming keeping up with the GPU power. I'm at least 2

    • by Anonymous Coward

      Diminishing returns, I think, are still a ways off. Even if they can't crank out faster frame rates, they can still continue the quest for smaller packages, if only for efficiency and power savings. Heck, when some video cards require their own separate power supply, there is definitely room for improvement.

    • by kesuki ( 321456 ) on Wednesday October 23, 2013 @05:43PM (#45218337) Journal

      The real killer app I've heard of for gaming rigs is making realtime special effects for movies and TV. Other than that, there are news departments where thin clients can take advantage of a GPU-assisted server to run as many displays as the hardware can handle.

      Then there is Wall Street, where a CUDA-assisted computer can model market dynamics in real time; there are a lot of superfast computers at the stock exchanges. So there you go: three reasons for GPUs to go as far as technology will allow.

      • by g00ey ( 1494205 )
        Not to mention all of the different research projects undertaken by students. I myself have indulged in fairly complex computer simulations using software such as Matlab - simulations that took a few days of computing to complete on each run. If I had better hardware I would definitely use even more advanced models and run more simulations. So, there you have a fourth reason :)
    • I'm at least 2 GeForces behind the latest series (560 Ti) and I can play any game at 1200p resolution with a very decent framerate.

      I'm further behind than that (far enough to have to bump the resolution down to get anything playable on a game from the last couple years). In the past, the big benefit would've been higher API support for more effects, along with a general performance boost. GF 5xx and 7xx cards seem to support the same APIs, so I'd guess that with the current high-end cards, you've got gamers trying to match their monitor refresh rate while using higher-res monitors in a multi-monitor configuration. If you really want t

    • > How much more is enough?

      Uhm, never.

      I have a GTX Titan and it is still TOO SLOW: I want to run my games at 100 fps @ 2560 x 1440. I prefer 120 Hz on a single monitor using LightBoost. Tomb Raider 2013 dips below 60 fps, which makes me mad.

      And before you say "What?", I started out with the Apple ][ with 280x192; even ran glQuake at 512x384 res on a CRT to guarantee 60 fps, so I am very thankful for the progress of GPUs. ;-)

      But yeah, my other 560 Ti w/ 448 cores running 1080p @ 60 Hz is certainly "goo

      • by g00ey ( 1494205 )
        But is using such a high resolution really necessary? I've looked at those 4K BF4 video clips and, to be honest, it looks pretty terrible. I could barely see the city and the buildings in the game level; it looked more like a bunch of square-ish boxes with textures painted on top of them. At a lower resolution I could more easily suspend my disbelief; the coarseness of the pixels makes the primitive polygons look less, ... boxy. Perhaps GPU hardware a few orders of magnitude faster is required so that
        • At 4K you might be running into the "Uncanny Valley" effect.
          http://en.wikipedia.org/wiki/Uncanny_valley [wikipedia.org]

          I concur 100% that 4K doesn't make ANY sense due to SMALL screen sizes. In order to have ~300 dpi @ 4K (3840 x 2160), your screen diagonal would have to be about 14.7 inches.
          http://isthisretina.com/ [isthisretina.com]

          I want 300 dpi at 60 inches.

          4K only starts to make sense when you want to scale the picture up to wall size. Let's take an average 60" (diagonal) plasma at 1080p, with the viewer sitting at the recommended THX viewing a
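
          For what it's worth, the dpi arithmetic above is easy to verify; a small Python sketch using nothing more than Pythagoras on the pixel grid (the 300 ppi target and the 60" size are the figures from the comment above):

          import math

          def diagonal_for_ppi(width_px, height_px, target_ppi):
              """Diagonal (inches) needed to hit a target pixel density."""
              return math.hypot(width_px, height_px) / target_ppi

          def ppi(width_px, height_px, diagonal_in):
              """Pixel density of a given resolution at a given diagonal size."""
              return math.hypot(width_px, height_px) / diagonal_in

          print(diagonal_for_ppi(3840, 2160, 300))  # ~14.7" -- 4K at ~300 ppi is a laptop-sized panel
          print(ppi(3840, 2160, 60))                # ~73 ppi -- 4K stretched to a 60" screen
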

          • by g00ey ( 1494205 )
            I'm not arguing against 4K resolution per se. Personally, I would really like to have 4K, 8K or even higher. For tasks such as word processing (or anything that involves working with text) and for desktop real estate, the more the merrier applies, at least for now at the screen resolutions available for current desktop or laptop PCs. I totally agree with what Linus Torvalds said about this [slashdot.org] a while ago.

            For FPS gaming, on the other hand, I agree that 4K is overkill, at
    • by wmac1 ( 2478314 )

      The high-DPI craze has just started. We are going to get more DPI on monitors, and graphics cards will compete to deliver the same speed at those resolutions as at lower ones. The unfortunate thing is that Moore's law is at its practical limits (for now), so more capable CPUs might become more expensive and consume more power.

      I personally hate the noise of those fans and the heat coming from under my table. I don't do games but I use the GTS-450 (joke? ha?) for scientific computing.

    • by pepty ( 1976012 )

      How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns.

      Right after they hand me a 2k^3 resolution holographic (360 degree viewing angle) display and a GPU that can power it at 60 frames, er, cubes per second.

      Then they can have the weekend off.

    • by Luckyo ( 1726890 )

      Not any time soon. We're still massively constrained on the GPU front even with current graphics. And people making games want things like ray tracing, which means that GPUs will have to make an order-of-magnitude jump before anyone even begins thinking about saturation.

      The reason the 560 Ti (the card I'm also using at the moment) is still functional is that most games are made for consoles, or at least with consoles in mind. And consoles are ancient.
      Requirements on PC-only/PC-optimized/next-gen console games

    • We are a long way off from real-time photorealistic graphics. Think holodeck level, or The Matrix. Add in the extra work to create the fantasy worlds and you've got the maximum level.
      But I think most people want the gameplay and story (pants-shittingly scary System Shock 2), which have nothing to do with the level of graphics we can achieve.
    • Having struggled through years of gaming on rigs with various GPUs, I have to wonder where it will hit the point that nobody needs any faster cards.

      I started out gaming on computers with no GPU, and when I got one with a Rage Pro 4MB it was awesome. Then I got a Voodoo card from 3DFX with a whopping 8MB and it was more awesomer. Now you can get whatever that will do whatever for however many dollars.

      I really don't see the game programming keeping up with the GPU power. I'm at least 2 GeForces behind the latest series (560 Ti) and I can play any game at 1200p resolution with a very decent framerate. Yes I beta-tested Battlefield 4.
      How much more is enough? I don't want them to stop trying, but somebody needs to ask where it reaches the point of diminishing returns. They could focus on streamlining and cheapening the "good enough" lines...

      I just upgraded from a GTX 480 to a GTX 780, and the big difference it made for me is the ability to turn on proper 8x Anti-Aliasing and 16x Anisotropic Filtering at 1920x1200. It did not make a huge difference, but enough to be noticeable. You might be able to make do with a cheaper card and still play most modern games, but getting something decent does give you nicer graphics for your money at the resolution you quoted.

    • by GauteL ( 29207 )

      When they can make graphics that are indistinguishable from "real life", at a resolution where you can no longer see the pixels, and that behave with physics resembling real life, then we can start talking.

      Currently we are way off with respect to shapes, lighting, textures, resolution, physics, animation... basically all of it.

      Shapes never look fully "curved" due to insufficient polygon counts. When polygons start becoming smaller than the pixels, maybe this will be OK. We use rasteris

  • Of course it's come far in the last 18 years. The last 2 years? Not so much. In fact, GPU advancement has been _pathetically_ slow.

    The Xbox One and PS4, for example, will be good at 1080p but ultimately only a few times faster than the previous generation consoles. Same thing with PC graphics cards. Good luck gaming on a high resolution monitor spending less than $500. Even Titan and SLI are barely sufficient for good 4K gaming.

    • The Xbox One and PS4, for example, will be good at 1080p but ultimately only a few times faster than the previous generation consoles.

      I believe the same sort of slowdown happened at the end of the fourth generation. The big improvements of the Genesis over the Master System were the second background layer, somewhat larger sprites, and the 68000 CPU.

      Good luck gaming on a high resolution monitor spending less than $500.

      Good luck buying a good 4K monitor for substantially less than $4K.

      • Nah. You could game "OK" on the 39" Seiki, which is pretty cheap. Once they start making HDMI 2.0 models, you'll see 30" 4K monitors for $700 by early next year.
    • by Anaerin ( 905998 )
      The thing with graphics improvements is that GPUs are getting better on a linear scale, but quality improvements need to happen on a logarithmic scale: going from 100 polys to 200 polys looks like a huge leap, but going from 10,000 polys to 10,100 polys doesn't. I personally think the next big thing will be on-card raytracing (NVidia has already demonstrated some). Massively parallel raytracing tasks are like candy for GPGPUs, but there is a lot of investment in rasterising at the moment, so that is their cu
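
      To make the linear-versus-perceived point concrete, a toy Python sketch (the step sizes are purely illustrative, not real GPU generations): what registers visually is roughly the ratio of detail, so a constant absolute gain fades while a constant relative gain keeps looking like the same leap.

      def relative_gain(before, after):
          return after / before

      print(relative_gain(100, 200))        # 2.0  -> looks like a huge jump
      print(relative_gain(10_000, 10_100))  # 1.01 -> barely noticeable

      # Keeping a "2x per generation" feel from 10,000 polys needs geometric growth:
      polys = 10_000
      for gen in range(1, 4):
          polys *= 2
          print(f"generation {gen}: ~{polys:,} polys")
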
  • I'm ready for the next (or next-next) gen display: a holographic display hovering in midair, preferably with sensors that can detect my interactions with it. Wonder how far out THAT is now?

  • by Overzeetop ( 214511 ) on Wednesday October 23, 2013 @05:39PM (#45218311) Journal

    No, seriously. I have yet to find a graphics card that will accelerate 2D line or bitmapped drawings, such as are found in PDF containers. It isn't memory-bound, as you can easily throw in enough RAM to hold the base file, and it shouldn't be buffer-bound. And yet it still takes seconds per page to render an architectural print on screen. That may seem trivial, but rendering real-time thumbnails of a 200-page 30x42" set of drawings becomes non-trivial.

    If you can render an entire screen in 30ms, why does it take 6000ms to render a simple bitmap at the same resolution?

    (the answer is, of course, because almost nobody buys a card for 2D rendering speed - but that makes it no less frustrating)

    • That stuff is built-in on OSX, check it out.
      • Does OSX 10.9 still choke on PDFs with embedded JPEG2000 graphics?

      • I hope it's not the same engine that is used in iOS, because the decoding on iDevices makes Windows software decoding look like greased lightning. The only thing I fear more than opening a complex 200-sheet PDF on my desktop is having to open just one of those sheets on my iPad. At least on the desktop the machine can hold a sheet in memory; the iDevices have to redraw the entire page every time I pan. I even have two versions of all my bitmap/scanned sheet music - one for quality, and one that looks li

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Wednesday October 23, 2013 @05:47PM (#45218381) Homepage
      All of the 3D rendering APIs are capable of proper, full-featured 2D rendering. The same hardware accelerates both just as well. The problem is that most apps are just not using it and/or that they are CPU bound for other reasons. PDFs, for instance, are rather complex to decode.
      • by slew ( 2918 )

        All of the 3D rendering APIs are capable of proper, full-featured 2D rendering. The same hardware accelerates both just as well. The problem is that most apps are just not using it and/or that they are CPU bound for other reasons. PDFs, for instance, are rather complex to decode.

        Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3D hardware). Right now the stroke/path/fill rasterization is done on the CPU and merely 2D-blitted to the frame buffer by the GPU. The CPU could of course attempt to convert the stroke/path into triangles and then use the GPU to rasterize those triangles (with some level of efficiency), but that's a far cry from "proper, full-featured 2D".

        Fonts are special cased in that glyphs are

        • Re:Not true (Score:5, Informative)

          by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Wednesday October 23, 2013 @08:00PM (#45219481) Homepage

          Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3D hardware). Right now the stroke/path/fill rasterization is done on the CPU and merely 2D-blitted to the frame buffer by the GPU. The CPU could of course attempt to convert the stroke/path into triangles and then use the GPU to rasterize those triangles (with some level of efficiency), but that's a far cry from "proper, full-featured 2D".

          Fonts are special cased in that glyphs are cached, but small font rasterization isn't generally possible to do with triangle rasterization (because of the glyph hints).

          Since SW doesn't even attempt to use HW for modern 2D operations, it will likely be a long time before HW will support this kind of stuff...

          A - anything that you can't do by tessellating to triangles could be done with OpenCL or CUDA. You could, for example, assign OpenCL kernels where each instance rasterizes one stroke, composite the results or something similar, and exploit the parallelism of the GPU. But it would be inconvenient to write, especially since most PDF viewers don't even bother with effective parallelism in their software rasterizers.

          B - you can do anything by tessellating to triangles.
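
          To illustrate option B, here is a minimal CPU-side Python sketch (hypothetical helper names, not any real API) that flattens one quadratic Bezier segment into a polyline and fans it into triangles, the kind of geometry that could then be handed to the GPU to rasterize. A real path renderer would also need winding rules, holes, stroking and anti-aliasing, which this deliberately ignores:

          from typing import List, Tuple

          Point = Tuple[float, float]

          def flatten_quadratic(p0: Point, p1: Point, p2: Point, steps: int = 16) -> List[Point]:
              """Sample a quadratic Bezier at evenly spaced parameter values."""
              pts = []
              for i in range(steps + 1):
                  t = i / steps
                  x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
                  y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
                  pts.append((x, y))
              return pts

          def fan_triangulate(outline: List[Point]) -> List[Tuple[Point, Point, Point]]:
              """Triangle fan from the first vertex; only valid for convex outlines."""
              return [(outline[0], outline[i], outline[i + 1])
                      for i in range(1, len(outline) - 1)]

          outline = flatten_quadratic((0, 0), (50, 100), (100, 0))  # closed by the chord back to (0, 0)
          triangles = fan_triangulate(outline)
          print(f"{len(outline)} vertices -> {len(triangles)} triangles")
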

          • by slew ( 2918 )

            B. You can't run the font-hinting virtual machine by tessellating triangles.

            A. Running the font-hinting engine in CUDA or OpenCL would likely be an exercise in deceleration, not acceleration.

        • All the tessellation needed is possible in shaders. The GPU's job is to provide primitives; I don't see any reason to expect it to dedicate hardware to every little step -- we got rid of fixed-function ages ago (mostly). Windows' Direct2D and DirectWrite are examples of high-quality 2D and font rendering done on the GPU -- they're just wrappers for Direct3D. There is also OpenVG, but as far as I know there are no desktop drivers available for it.

        • >Not totally true. Stroke/path/fill rasterization work is not supported by current 3D rendering APIs (and thus not accelerated by 3d hardware).

          Incorrect. It's there, developers just aren't using it for some reason.
          https://developer.nvidia.com/nv-path-rendering [nvidia.com]

        • by tlhIngan ( 30335 )

          Since SW doesn't even attempt to use HW for modern 2D operations, it will likely be a long time before HW will support this kind of stuff...

          Most graphics cards have supported 2D acceleration since the '90s - it's something that's been built into Windows for ages. Though given how fast it is to draw primitives like lines and such, it's typically far faster to do it in software than to go through the graphics card to render it to a framebuffer.

          Though for PDFs, what happens is that Adobe Reader generally uses its own

    • by Guppy ( 12314 )

      It's not being completely ignored. For example, Tom's Hardware made a stink about Radeon 2D performance a few years back, and managed to get AMD moving to fix some performance bugs:

      http://www.tomshardware.com/reviews/ati-2d-performance-radeon-hd-5000,2635.html [tomshardware.com]

    • Opening a massive set of drawings while SolidWorks has an exploded 22,000-part die open... even my work machine has issues.

      What you just mentioned legitimately damages my productivity severely.

    • Nvidia has been doing that for a while. They hired several vector-graphics programmers a few years ago and had them add that functionality to their cards. The problem is, no developers use this stuff.
      https://developer.nvidia.com/nv-path-rendering [nvidia.com]

  • by Anonymous Coward on Wednesday October 23, 2013 @05:46PM (#45218369)

    The 'point' of this very crappy article is that each process node shrink is taking longer and longer. Why bother connecting this to GPUs, when self-evidently ANY type of chip relying on regular process shrinks will be affected?

    The real story is EITHER the future of GPUs in a time of decreasing PC sales, rapidly improving FIXED-function consoles, and the need to keep high-end GPUs within a sane power budget, ***OR*** the likely future of chip fabrication in general.

    Take the latter. Each new process node costs vastly larger amounts of money to implement than the last. Nvidia put out a paper last year (about GPUs, but their point was general) arguing that the cost of shrinking a chip may become so high that it will ALWAYS be more profitable to keep making chips on the older process instead. This is the nightmare scenario, NOT bumping into the limits of physics.

    Today, we have a good view of the problem. TSMC took a long time to get to 28nm, and is taking much longer to get off it. 20nm and smaller aren't even real process node shrinks. What Intel dishonestly calls 22nm and 14nm is actually 28nm with some elements only on a finer geometry. Because of this, AMD is due to catch up with Intel in the first half of 2014, with its FinFET transistors also at 20nm and 14nm.

    Some nerdy sheeple won't believe what I've just said about Intel's lies. Well, Intel gets 10 million transistors per mm2 on its 22nm process, and AMD, via TSMC, gets 14+ million on the larger 28nm process. It defies all maths when Intel CLAIMS a smaller process but gets far fewer transistors per area than a larger process does.

    It gets more complicated. The rush to SHRINK has caused the industry to overlook the possibilities of optimising a given process with new materials, geometry, and transistor designs. FD-SOI is proving to be insanely better than FinFET on any current process, but is being IGNORED as much as possible by most of the bigger players, because they've already invested tens of billions of dollars in prepping for FinFET. Intel has had two disastrous rounds of new CPUs (Ivy Bridge and Haswell), because FinFET failed to deliver any of its theoretical gains on the process they 'call' 22nm.

    Intel has one very significant TRUE lead, though - that of power consumption in its mains-powered CPU family. Although no-one gives a damn about mains-powered CPU power usage, Intel is more than twice as efficient as AMD here. Sadly, that power advantage largely vanishes with mobile, battery-powered parts.

    Anyway, to flip back to GPUs, AMD is about to announce the 290X, the fastest GPU, but with VERY high power usage. Both AMD and Nvidia need to work seriously on getting power consumption down as low as possible, and this means 'sweet spot' GPU parts which will NOT be the fastest possible, but will have sane compromise characteristics. Because 20nm from TSMC is almost here (12 months max), AMD and Nvidia are focused first on the new shrink and on FinFETs. BUT moving below 20nm (in a TRUE shrink, not simply measuring the 2D profile of FinFET transistors) is going to take a very, very, very long time, so all companies have an incentive to explore ways of improving chip design on a GIVEN process, not simply to wait lazily for a shrink.

    Who knows? FD-SOI offers (for some chips) more improvement than a single shrink using conventional methods. It is more than possible that by exploring materials science and the physics of semiconductor design, we could get the equivalent of multiple generations of shrink without changing process.

    • by JoshMST ( 1221846 ) on Wednesday October 23, 2013 @06:07PM (#45218545)
      Because GPUs are the high-visibility product that most people get excited about? How much excitement was there for Haswell? Not nearly as much as we are seeing for Hawaii. Did you actually read the entire article? It actually addresses things like the efficacy of Intel's 3D Tri-Gate, as well as alternatives such as planar FD-SOI-based products. The conclusion there is that gate-last planar FD-SOI is as good as, if not better than, Intel's 22 nm Tri-Gate. I believe I also covered more in the comments section about how certain geometries on Intel's 22 nm are actually at 26 nm, so AMD at 28 nm is not as far behind in terms of density as some thought.
    • In the discussions I have had with other people I work with in the semiconductor industry, the primary case against FD-SOI was business, not technical. FD-SOI is very expensive as a starting material, and sourcing of the stuff was iffy at best when Intel decided to go FinFET. It was also questionable whether it would scale well to 450mm wafers, something that TSMC and Intel really want.
    • by Kjella ( 173770 ) on Wednesday October 23, 2013 @11:33PM (#45220547) Homepage

      Some nerdy sheeple won't believe what I've just said about Intel's lies. Well, Intel gets 10 million transistors per mm2 on its 22nm process, and AMD, via TSMC, gets 14+ million on the larger 28nm process. It defies all maths when Intel CLAIMS a smaller process but gets far fewer transistors per area than a larger process does.

      Comparing apples and oranges, are we? Yes, AMD gets 12.2 million/mm^2 on a GPU (the 7970 is 4.313 billion transistors on 352 mm^2), but CPU transistor density is a lot lower for everybody. The latest Haswell (22nm) has 1.4B transistors in 177 mm^2, or about 7.9 million/mm^2, while AMD's Richland (32nm) has only 1.3B transistors in 246 mm^2, or 5.3 million/mm^2. AMD's 28nm CPUs aren't out yet, but they'll still have lower transistor density than Intel's 22nm, and at this rate they'll be competing against the even smaller Broadwell, though I agree it's probably not true 14nm. A very well formulated post, though, that appears plausible and is posted as AC - paid AMD shill or desperate investor?

      • Continuing from your informative post:

        The current best 4-thread chip that Intel makes is the 22nm i5-4670K and the current best 4-thread chip that AMD makes is the 32nm A10-6800K.

        Ignoring the integrated graphics, their Passmark scores are 7538 and 5104 respectively; Intel's performance is therefore about 1.5x the AMD chip's.

        Intel's process advantage between these two parts is also about 1.5x transistors per mm^2.

        So performance is still apparently scaling quite well with process size.
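
        The arithmetic in this subthread is easy to check; a quick Python sketch using only the transistor counts, die areas, and Passmark scores quoted above:

        chips = {
            "Tahiti / HD 7970 (28nm GPU)": (4.313e9, 352),
            "Haswell (22nm CPU)":          (1.4e9, 177),
            "Richland (32nm CPU)":         (1.3e9, 246),
        }
        for name, (transistors, area_mm2) in chips.items():
            print(f"{name}: {transistors / area_mm2 / 1e6:.1f} M transistors/mm^2")

        # Passmark: i5-4670K vs A10-6800K, integrated graphics ignored.
        perf_ratio = 7538 / 5104
        density_ratio = (1.4e9 / 177) / (1.3e9 / 246)
        print(f"performance ~{perf_ratio:.2f}x, density ~{density_ratio:.2f}x")

        Both ratios land at roughly 1.5x, which is the parallel the parent posts are drawing.
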
  • How strange that it just happens that WiFi encryption standards fall with the power of last generation cards... Just a coincidence, I'm sure.

    • Sure is strange though how both NVidia and AMD announced their cards shortly after the adoption of TLS 1.2 by major American e-commerce websites and web browsers.

  • During TSMC's earnings call, the CEO mentioned that there are tape-outs for GPUs on 16nm FinFET, but not on 20nm - hinting that Nvidia and AMD will skip that node altogether.

    http://seekingalpha.com/article/1750792-taiwan-semiconductor-manufacturing-limited-management-discusses-q3-2013-results-earnings-call-transcript?page=3 [seekingalpha.com]

    "Specifically on 20-nanometers, we have received 5 product tape-outs, and scheduled more than 30 tape-outs in this year and next year from mobile computing CPU and PLD segments"

    "On 16-FinFET.

  • ...can't come soon enough. We desperately need a change in paradigm. Hopefully we might have something around 2020. *crossing fingers*
  • by dutchwhizzman ( 817898 ) on Thursday October 24, 2013 @12:17AM (#45220669)
    If you're out of silicon to work with, you can't just keep throwing transistors at a performance problem; you have to get smarter with what you do with the transistors you have. If the GFX card makers add innovative features to the on-board chips, they could solve many of the bottlenecks we still face in utilizing the massive parallel performance of these cards. Both for science and for graphics, I'm sure there is a list of "most wanted features" or "biggest hotspots" they could work on. For example, the speed at which you can calculate hashes with oclHashcat differs enormously between Nvidia and AMD graphics; Nvidia is clearly missing something that it doesn't need a smaller silicon process to fix. There must be plenty of improvements of this sort that both AMD and Nvidia can make.
    • The feature that comes to mind, which some companies have been hammering at for years, is raytracing. I remember a project Intel was doing some years ago with the Return to Castle Wolfenstein source (way back when) to make the engine completely ray-traced. I also remember it took a good $10k+ computer to render at less than real time (the numbers escape me, and the site appears to be down now). While it did create some pretty unbelievable graphics for the time, with true reflections on solid surfaces such as g
