
Wolfenstein Ray Traced and Anti-Aliased, At 1080p

Posted by timothy
from the sounds-like-an-in-n'-out-burger-order dept.
An anonymous reader writes "After Intel displayed their research demo Wolfenstein: Ray Traced on Tablets, the latest progress at IDF focuses on high(est)-end gaming, now running at 1080p. Besides image-based post-processing (HDR, depth of field), there is now also a smart way of calculating anti-aliasing: using mesh IDs and normals and applying adaptive 16x supersampling. All that is powered by the 'cloud,' consisting of a server that holds eight Knights Ferry cards (256 cores / 1024 threads in total). A lot of hardware, but the next iteration of the 'Many Integrated Core' (MIC) architecture, named Knights Corner (and featuring 50+ cores), might be just around the corner."
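The mesh-ID/normal anti-aliasing trick mentioned in the summary can be sketched roughly as follows. This is a speculative illustration, not Intel's actual code: the function name, the threshold, and the neighbour test are all invented; the idea is simply that expensive supersampling is only triggered where a pixel's neighbour belongs to a different mesh (silhouette edge) or has a sharply diverging normal (crease).

```python
import numpy as np

def needs_supersampling(mesh_ids, normals, x, y, normal_thresh=0.9):
    """Flag pixel (x, y) for adaptive supersampling.

    mesh_ids: (H, W) int array, one mesh ID per pixel
    normals:  (H, W, 3) float array of unit surface normals
    """
    h, w = mesh_ids.shape
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < w and 0 <= ny < h):
            continue
        if mesh_ids[y, x] != mesh_ids[ny, nx]:
            return True   # different object: silhouette edge
        if float(np.dot(normals[y, x], normals[ny, nx])) < normal_thresh:
            return True   # same object, sharp crease
    return False           # flat interior: one sample is enough
```

Only flagged pixels would then receive the 16x supersampling; interior pixels get a single ray, which is what makes the scheme "adaptive".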
  • by russ1337 (938915) on Thursday September 15, 2011 @02:08PM (#37412194)

    first ray trace...

    now where is a decent link.

  • Slashdotted already? I think not!
  • Those giant pixels never looked better!

  • I think Intel would find it easier to get people excited about this technology if they actually used it to render something that looked interesting, or at the very least looked good at all.

    • by Lorkki (863577)
      Agreed. These announcements would be a lot more interesting if the demo material didn't resemble special effects nightmares from the 90's.
    • by Twinbee (767046)

      If they used proper global illumination, then there'd be a real change. It looks like no more than one or two bounces of light to me.

  • Did we?? Neither this nor the previous version seems accessible...

  • But it's not ray traced on tablets, is it? It's ray traced on a 256-core system and then somehow displayed on a tablet. Or am I reading this summary completely backwards?
    • by Aladrin (926209)

      The summary clearly says it's rendered in the cloud on 256 cores. That link should read: "Wolfenstein: Ray Traced" on Tablets.

    • by Guspaz (556486)

      So, in other words "OnLive but with a software raytracer on the server-side instead of a GPU."

  • 'Many Integrated Core'? Sounds like something from a parody. 'Many Integrated Core', with 'A Lot Of Thread'. They also come in a high-end version, 'Several Interesting Rate'. Abbreviated SIR MICALOT on Knights Corner...
  • Firefox can't establish a connection to the server at blogs.intel.com.
  • I can ray trace that at about 1FPS per core. Why do they need 256 cores? And who can play anything rendered in the cloud?
    • Depends if the rendering server is halfway across the country or halfway across the house. I remember people talking a while back (7 years or so) about using a home server to do the number crunching and moving back towards thin clients to access it. Wireless N bandwidth and latencies are pretty good, with modern technology you could probably make the idea work. Offer a suite of products that play well together: a powerful and easily upgraded server, lightweight laptops, and tablets. If you could make th

    • by iYk6 (1425255)

      > And who can play anything rendered in the cloud?

      One person at a time.

    • Subscribers of Onlive?
    • Your statement is meaningless. What core are you talking about? A PIII core? An Atom core? A core on your shiny new i5 2600? The cores in Knights Whatever, aka Larrabee, are nowhere near as powerful as the cores in your current desktop CPU.
  • UUUUUUUUU
    The umber hulk hits! - more
    The umber hulk hits! - more
    The umber hulk hits! - more
    You die - more

  • by sl4shd0rk (755837) on Thursday September 15, 2011 @02:36PM (#37412496)

    Intel is apparently running the ray tracing process on the same server their blog is on.

  • Ok, so cluster = cloud now? Even though they both serve very different purposes?
    • by loufoque (1400831)

      *remote* cluster = cloud

      • *remote* cluster = cloud = unacceptable latencies for gaming.

        The only way this concept works is if the rendering farm is running in a closet somewhere in your house.

        • by Guspaz (556486)

          OnLive made it work with acceptable latencies, but then they did it with a cheap GPU and not a 256-processor cluster.

            • Acceptable or not depends on where you live and how good your ISP is. Personally, my mediocre cable internet regularly has latency in the 200s, which is annoying enough when trying to play online games; I can't imagine having that kind of latency for the basic I/O layer of the game. And that's not even at midnight, when they decide to push out the schedule updates to every single cable box on their network simultaneously.

            • by Guspaz (556486)

              While this may be true in your particular case, many people are within the 1000 mile radius of an OnLive data center on a decent connection.

              People talk a lot about how the network latency would make the input lag to OnLive unbearable, but consider this: 50ms of latency gets you from Montreal to Dallas (~2800km), and GTA IV on the Xbox 360 has 133-200ms of input lag [eurogamer.net] despite being local. In fact, every console game that Eurogamer measured had at least 67ms of latency, and they claim that the average seemed to

              • > In fact, every console game that Eurogamer measured had at least 67ms of latency, and they claim that the average seemed to be about 133ms. Gamers are clearly willing to accept this latency...

                GTA players may be willing to put up with high latency, but that doesn't fly so well with button-combo fighting games (Soul Calibur and Street Fighter are both at 67ms) or competitive FPS games (CoD:MW, 67-84ms). Those games just will not work with the additional latency of remote rendering over the Internet. 50ms light speed
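The light-speed figure being argued over here can be checked with a tiny propagation-delay calculation. This is a back-of-envelope sketch: the function name is invented, and the 2/3-of-c fiber speed is a common rule of thumb, not a measured value for any particular route.

```python
def rtt_floor_ms(distance_km, fiber_fraction_of_c=0.67):
    """Lower bound on round-trip time from signal propagation alone.

    Light in fiber travels at roughly 2/3 of c (~200,000 km/s);
    real routes add switching and queuing delay on top of this floor.
    """
    c_km_per_ms = 299_792.458 / 1000            # ~300 km per ms in vacuum
    speed = c_km_per_ms * fiber_fraction_of_c   # ~200 km per ms in fiber
    return 2 * distance_km / speed              # out and back
```

For the Montreal-to-Dallas example (~2800 km), propagation alone gives roughly 28 ms round trip, so the ~50 ms quoted above leaves about 22 ms for routing and queuing overhead.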

            • by wagnerrp (1305589)
              Why would schedule updates be sent individually over the internet, rather than simply broadcast to everyone on a spare channel? That just sounds like a hideously inefficient use of spectrum.
          • by afidel (530433)
            Uh, most modern GPUs have a lot more than 256 "cores"; my fairly low-end HD 5750 has 720, a GTX 560 has 336 (yes, a CUDA core and an SP are different, I know). These chips are the continuation of Larrabee, which was meant as a GPU chip.
            • by Guspaz (556486)

              Sure, but a "core" in a GPU is far simpler than a "core" in a CPU, and Larrabee wasn't stripped down anywhere near that far. Larrabee was supposed to feature 32 cores in one package initially on a 45nm process, bumping it up to 48 on a later 32nm process. Intel is still on a 32nm process, so when they talk about a "256-core cluster", they're almost certainly talking about multiple systems; an 8-chip 32-core-per-chip system (or 4-chip 64-core) would not be a "cluster" in and of itself. And such a system does

              • by afidel (530433)
                Fab42 is going to be 14nm and is being built right now; I assume they have lab equipment capable of the same, so for a demo chip I can easily see them using that process node. Going from 48 cores on 32nm to 256 on 14nm doesn't seem all that incredible.
                • by Guspaz (556486)

                  It would help if I had read the summary. They've got a single server with eight Knights Ferry cards, each having 32 cores. That's where they get their 256 cores from. And they're calling the single server a "cloud".

                  What makes this most unimpressive is that nVidia has been making a GPU-accelerated real-time raytracing engine for years now (you can even download working demos [nvidia.com]), and before that they were selling a GPU-accelerated final-frame renderer (non-real-time raytracing). Intel is showing off in-house dem

              • by wagnerrp (1305589)

                Actually, they are talking about one system. It's a custom server designed for GPU computing, and has 8 PCIe 2.0 x16 slots filling nearly the whole back side of the chassis. They're using 8 32-core cards to render the images and video.

                http://www.colfax-intl.com/ms_tesla.asp?M=102 [colfax-intl.com]

                • by Guspaz (556486)

                  So what's the advantage here? They committed an eight-card, 256-core server just to render a Quake 3 era game with raytracing. nVidia has been giving away (for free, as far as I can tell) a CUDA-based real-time raytracing engine for their CUDA cards (including Tesla) for a few years now, and before that, they had a final-frame renderer (a non-real-time raytracer) available that predates CUDA.

                  If I can do with a $300-400 GPU in a $1000 computer what it takes Intel a massive custom-built server, what's the advanta

                  • Ten years ago, I bought a P4 at 1.7 GHz with 512 MB of Rambus memory for several thousand dollars. Last weekend, I bought a laptop with a quad-core i5 for four hundred dollars. My work laptop has an eight-core i7. So yeah, today it takes a massive server. In five years, it takes a high-end desktop. In ten years, it's standard beans.
                    • by Guspaz (556486)

                      Right, but my point is that you don't need to wait five or ten years, you can buy a $400 graphics card that will do the same thing today.

                    • The raytracing application is merely a demo. Real applications in the near term will take advantage of the fact that all of the cores on Intel's accelerator card are x86 compatible. The cores on a Nvidia graphics card are not x86 cores and probably never will be.

                      With lots of x86 cores you can do interesting things like implement drivers that make your multi-core accelerator card visible to your OS as if they were real CPU cores. Imagine that you have Chrome open with 100 tabs. Chrome runs each tab in a s

                    • by Guspaz (556486)

                      > The raytracing application is merely a demo. Real applications in the near term will take advantage of the fact that all of the cores on Intel's accelerator card are x86 compatible. The cores on an Nvidia graphics card are not x86 cores and probably never will be.

                      > With lots of x86 cores you can do interesting things like implement drivers that make your multi-core accelerator card visible to your OS as if they were real CPU cores. Imagine that you have Chrome open with 100 tabs. Chrome runs each tab in a separate process. Your Intel accelerator card with 50-256 x86 cores could be used to run Chrome processes, one process per core. All of a sudden your main CPU is no longer bogged down running flash and background javascript crap for each of your 100 open tabs.

                      If there were any real benefit to this, we'd see dual-processor consumer motherboards; those died off in the Pentium II era. These days, with a modern quad-core processor, your "main CPU" is no longer bogged down with background javascript or running flash; that's already handled by different cores.

                      > Over the long term, Moore's law suggests that these Intel x86 accelerator cards will have enough cores, and fast enough cores, to do graphics acceleration for games that is good enough and fast enough. Eventually, all of these multitudes of cores will come standard inside every Intel CPU, no "accelerator card" needed. Intel has done exactly this with current GPU technology on their current line of processors. Their graphics performance is sufficient for everyone except serious gamers.

                      Or, we'll continue to see the current progression of a steadily increasing number of full-sized cores, and Intel's lots-of-tiny-cores approach will be of little interest to anybody but HPC seekers.

                  • Have you used it? The IDF demo runs at interactive frame rates (I haven't checked, but the last Intel demo I saw was about 20fps). That ray-tracer on the card takes several seconds per frame. They are not really comparable in performance.

                    • by Guspaz (556486)

                      I've used it... It's real time on my old GTX 285. The fanciest one, "Design Garage", gets 2-3 FPS. A modern nVidia card should be significantly faster, especially in SLI. But even in SLI, it'd still be enormously cheaper than Intel's 8-card solution.

                    • I meant the SDK for ray-tracing, rather than the ray-tracing demo in the SDK. I've tried that on a GTX 580, and it seemed to have two rendering modes: low quality at 2-3 fps while you move the model, then a refinement step that took a couple of seconds to reach the highest quality.

                    • by Guspaz (556486)

                      Well, how much of that refinement is actually useful for Intel's target use case here? They're going to stream this as compressed video to a tablet; antialiasing (which seems to be a large part of the refinement done in many of the nVidia demos) isn't that useful, since it's all going to get crammed into a video stream anyhow. Looking at Intel's claims in terms of performance hits for various operations versus what I saw in the nVidia demos, it's clear that Intel has a better raytracing rendering engine, but

    • by spongman (182339)

      > Ok, so cluster = cloud now? Even though they both serve very different purposes?

      no, a cluster is a bunch of machines working together. the 'cloud' is purely a means to acquire funding from ADHD investors.

      'fluffy' is the new 'shiny'

  • by msobkow (48369) on Thursday September 15, 2011 @03:12PM (#37412852) Homepage Journal

    What's the big deal?

    Ray tracing isn't new.

    Parallel processing isn't new.

    It's an old game.

    What makes this news?

    • They're getting close to commodity hardware. A large 256-core server today is a run-of-the-mill desktop in 5 years. Intel wants you to believe that GPUs have a limited lifespan: that they'll last only until real-time ray tracing on the CPU can produce equivalent or better results. They could be right... but the only way to find out is going to be to wait until the hardware catches up to the point that it's economically competitive, and see what the GPU makers have done in the meantime. All in all, these

    • About once or twice a year, they go on a big press buzz about raytracing. The reason is that they'd rather you not spend money on graphics cards and instead spend that money on a bigger processor. So they are looking into something that GPUs don't do so well, which is raytracing. They keep trying to get people excited about the idea of raytraced games, which would be done by systems with heavy-hitting Intel CPUs, rather than rasterized games done mainly with a GPU.

      As long as they keep doing pre

      • by Bengie (1121981)

        Based on what I've read about raytracing vs. rasterization, raytracing *will* win out in the long run. RT scales better than rasterization, but the overhead is expensive. Once we get to the point where RT is about the same speed as rasterization, it will only take 1-2 generations before RT is several times faster.

        Whichever company is ready to push out raytracing will stomp the market. If you release too early, your product will just be a gimmick; if you release too late, the competition will be several times faster

        • by Bram Stolk (24781)

          Yes... ray tracing will indeed win in the long run.
          This is because its performance is almost independent of primitive (triangle) count.

          As a matter of fact, ray tracing really complex models is already faster than rasterizing them.
          Scenes with a few hundred million triangles or more can be rendered faster with RT.

          This is why their choice of content baffles me:
          they should NOT be using Doom datasets for this, they should be using hundreds of millions of polygons in their dataset.
          That is where RT shines.

          Ra

          • by grumbel (592662)

            > Rasterizing is O(N) in the number of triangles.
            > Ray tracing is better than O(log N), approaching O(1) even.

            That's only really true for the theoretical best case for raytracing and the worst case for rasterization. In practice things look very different, as any real-world realtime rasterization engine will use LODs, tessellation, octrees, occlusion queries and whatever else to drastically cut down the triangle count it has to render, making it no longer O(N) but something much smaller. Equally, O(log N) is only true for static scenes; when you have dynamic ones, things look quite different, as you and y
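The O(N)-versus-O(log N) argument being debated here can be made concrete with a toy counting model. This is purely illustrative: both functions are invented, and they ignore real-engine costs such as shading, multiple rays per pixel, and BVH rebuilds for dynamic scenes.

```python
import math

def bvh_nodes_visited(n_triangles):
    """Idealized ray-tracing best case: a single root-to-leaf walk
    down a balanced BVH over n_triangles, i.e. O(log N) node visits."""
    return max(1, math.ceil(math.log2(n_triangles)) + 1)

def raster_triangles_processed(n_triangles, culled_fraction=0.0):
    """Rasterization touches every triangle that survives culling/LOD,
    i.e. O(N) work even after the engine trims the scene."""
    return round(n_triangles * (1.0 - culled_fraction))

# For a 100-million-triangle scene the gap is dramatic even if the
# rasterizer culls 90% of the geometry:
#   bvh_nodes_visited(100_000_000)               -> ~28 node visits
#   raster_triangles_processed(100_000_000, 0.9) -> 10,000,000 triangles
```

This is exactly why the best-case/worst-case caveat above matters: aggressive culling pulls the rasterizer's effective N way down, while dynamic scenes force the ray tracer to pay for acceleration-structure updates that this model leaves out.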

        • by Amouth (879122)

          Yeah - and this is Intel, a company that knows how to play for the future (to an extent). An example is Hyper-Threading: most people write it off, but if you plan for it and optimize some things for it, you can see an ~80% increase in performance. The group that came up with it and started designing it began their research in, I believe, 1992.

          Some companies know how to do R&D and some don't; Intel is one that does.

        • The future of rendering in video games has always been evident if you look at high-end rendering for films. If you look at a product like Renderman, ray-tracing is used for specific materials in the scene, but not commonly used for rendering the whole frame. Getting realistic materials out of the renderer is the real problem, not rendering mirror balls.

    • In fact, I hear they have hype out on Blu-ray in 3D super HD.

  • Their idea is to render the graphics on the server farm and stream them over to a thin client? If the server farm is local, then it's an expensive solution that only a small subset of users can afford. If it's cloud-based then there will be massive control lag. Neither idea is practical.
  • Is the lag for this type of solution tolerable? I can see it going both ways and (shockingly) don't have the necessary hardware to test this combo out myself. I have no experience with OnLive either.
    • > Is the lag for this type of solution tolerable?

      Maybe for single-player games, but not for competitive twitch games online. Most modern LCDs come with gobs of post-processing that pushes display latency up into the 100-200ms range (of course, you turn this off if you care). I find it extreme enough to absolutely destroy my shotgun accuracy in a local split-screen game. I expect remote rendering would have a similar effect, only worse, because the Internet is less deterministic than a TV's post-processing pip

      • by Bengie (1121981)

        I get a 19ms ping to Chicago, which is somewhere upwards of 500 miles away via traceroute, in another state. Put in some more localized rendering farms, say one per state, and you could easily keep latency low enough for the average user.

        Jitter could be an issue if the network isn't well designed. It would probably show up as micro-stuttering.

  • Call it what it is: either Larrabee 2 or Son of Larrabee. Trying to hide it behind a new name doesn't change the underlying idea behind it, or its failures so far. And telling us that Larrabee 3.0 (Grandson of Larrabee) will be the one that really works smacks of Microsoft software.
  • by Syberz (1170343)
    Am I the only one who hates depth-of-field effects in games? When the computer/game can determine what my eyes are focusing on, then DOF will be practical. Just because the crosshair is on something doesn't mean that that's what I'm looking at; my eyes are very good at creating their own depth-of-field effects, thank you.
    • by Wizarth (785742)

      Not only games but also movies. There's some argument for making sure the viewer is looking at the thing you want them to, but if you need to make everything else blurry to do so, it makes you wonder.

      It's one of the reasons 3D films don't work in general - they include depth of field. The only 3D film I've seen that really worked was Avatar - which has no depth of field. At all. I enjoy the movie more for this technical reason alone - I can look where I want!
