Tegra 4 Likely To Include Kepler DNA

MrSeb writes "Late last week, Jen-Hsun Huang sent a letter to Nvidia employees congratulating them on successfully launching the highly acclaimed GeForce GTX 680. After discussing how Nvidia changed its entire approach to GPU design to create the new GK104, Jen-Hsun writes: 'Today is just the beginning of Kepler. Because of its super energy-efficient architecture, we will extend GPUs into datacenters, to super thin notebooks, to superphones.' (Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc., presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.) This has touched off quite a bit of speculation concerning Nvidia's Tegra 4, codenamed Wayne, including assertions that Nvidia's next-gen SoC will use a Kepler-derived graphics core. That's probably true, but the implications are considerably wider than a simple boost to the chip's graphics performance." Nvidia's CEO is also predicting this summer will see the rise of $200 Android tablets.

  • Paper tiger (Score:4, Insightful)

    by Anonymous Coward on Friday March 30, 2012 @03:30PM (#39527843)

    So will this version be something more than a paper tiger? So far, the Tegras have sounded better on paper than their real-world performance ends up being.

    • by Anonymous Coward

      I don't know, the Tegra2 was kind of mediocre, but the Tegra3 in my Transformer Prime is everything I'd hoped for and everything it claimed to be.

    • by TwoBit ( 515585 )

      According to this week's Anandtech article, the Transformer Prime's Tegra 3 CPU outperforms the iPad 3. It's the GPU that falls short of the iPad 3's, mostly due to having fewer GPU transistors and not due to architectural weakness.

  • Wow (Score:4, Funny)

    by Hatta ( 162192 ) on Friday March 30, 2012 @03:31PM (#39527859) Journal

    Where did they get Johannes Kepler's DNA?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      From the outside of Tycho Brahe's fake nose.

      Don't ask.

    • They cloned it from Kepler's blood taken from a mosquito fossilized in amber, obviously. DNA on a chip. Makes perfect sense. Kepler's laws of planetary motion probably add a significant boost to pipeline performance. And what better way to integrate that functionality than by cloning Kepler himself, and regrowing his brain on a chip! I was wondering how long it would take to grow a brain on a chip, after they successfully created a gut on a chip [slashdot.org].
      • by Anonymous Coward

        They cloned it from Kepler's blood taken from a mosquito fossilized in amber, obviously. DNA on a chip. Makes perfect sense. Kepler's laws of planetary motion probably add a significant boost to pipeline performance. And what better way to integrate that functionality than by cloning Kepler himself, and regrowing his brain on a chip!

        You know a lot about him. You must have read his orbituary.

    • by Anonymous Coward

      His mom was a witch, so I suspect some devilry was involved...

    • Nvidia's next-gen SoC will use a Kepler-derived graphics core.

      When did Johannes Kepler solve a graphics core?

    • If it's a graphics processor they were looking to make, they probably should've gone with Michelangelo or Leonardo's DNA.

  • by T.E.D. ( 34228 ) on Friday March 30, 2012 @03:51PM (#39528153)

    Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc, presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.

    Wayne-powered products will of course be called "bat" instead.

  • ...on quantum GPU and CPU to go commercial.
  • by bistromath007 ( 1253428 ) on Friday March 30, 2012 @04:01PM (#39528283)
    The droidpad I'm posting this from cost $200.
  • by 140Mandak262Jamuna ( 970587 ) on Friday March 30, 2012 @04:35PM (#39528793) Journal
    It is possible to use the GPU effectively to speed up some scientific simulations, usually fluid mechanics problems that can be solved by time marching (or any physics governed by hyperbolic differential equations). But working with the GPU is a real PITA. There is no standardization and no real support for high-level languages. Of course they have bullet points saying "C++ is supported," but dig in and you find you have to link with their library, manage the memory yourself, and manage the data pipeline, fetch, and cache, and the actual amount of code you can fit in their "processing" unit is trivially small. All it could hold turned out to be about 10 double-precision solution variables and the flux vector splitting for Navier-Stokes for just one triangle. About 40 lines of C code.

    On top of everything, the binary is a mishmash of compiled executable chunks sitting inside interpreted code. Essentially, if a competitor or hacker gets the "executable," they can reverse engineer every bit of the innovation you did to cram your code into these tiny processors, and recover your scientific algorithm at a very fine grain.

    Then their sales critters create "buzz," making misleading, almost lying presentations about GPU programming and how it is going to achieve world domination.
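
    For illustration, a minimal CUDA sketch of the workflow the comment above describes: the host explicitly allocates device memory, copies data across the bus, launches a small kernel in which each thread holds only a few doubles, and copies results back. All names and sizes here are hypothetical, not taken from the poster's code.

    // Minimal CUDA sketch (hypothetical names): the host does all memory
    // movement explicitly; the kernel is a tiny per-cell update.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void scale_cells(const double* in, double* out, double factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = factor * in[i];   // stand-in for a small per-cell computation
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(double);

        double* h_in  = (double*)malloc(bytes);
        double* h_out = (double*)malloc(bytes);
        for (int i = 0; i < n; ++i) h_in[i] = 1.0;

        double *d_in, *d_out;
        cudaMalloc(&d_in, bytes);                               // explicit device allocation
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);  // host -> device across the bus

        scale_cells<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0, n);
        cudaDeviceSynchronize();

        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost); // device -> host
        printf("out[0] = %f\n", h_out[0]);

        cudaFree(d_in); cudaFree(d_out);
        free(h_in); free(h_out);
        return 0;
    }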

    • It is possible to use the GPU effectively to speed up some scientific simulations. [...] But working with the GPU is a real PITA. There is no standardization and no real support for high-level languages.

      According to Wikipedia, there are frameworks (like OpenCL: http://en.wikipedia.org/wiki/OpenCL [wikipedia.org]) for programming in high-level languages with portability across platforms.

    • by vadim_t ( 324782 )

      Just like any cutting-edge tech. Not so long ago you'd be writing graphics code in assembler, and dealing with the memory restrictions DOS had to offer.

      On top of everything, the binary is a mishmash of compiled executable chunks sitting inside interpreted code. Essentially, if a competitor or hacker gets the "executable," they can reverse engineer every bit of the innovation you did to cram your code into these tiny processors, and recover your scientific algorithm at a very fine grain.

      Big deal. I

    • by Anonymous Coward

      It's like assembler programming - some people get it, some don't. I've never seen any ultra-high performance computing task where you don't have to manage all the variables you mention. A 10x improvement makes it all worthwhile - some projects get much greater improvements.

      Stop complaining that the tools don't let you program a GPU in Java. If you can't take the heat, get out of the kitchen.

    • by PaladinAlpha ( 645879 ) on Friday March 30, 2012 @07:16PM (#39530653)

      Half of our department's research sits directly on CUDA, now, and I haven't really had this experience at all. CUDA is as standard as you can get for NVIDIA architecture -- ditto OpenCL for AMD. The problem with trying to abstract that is the same problem with trying to use something higher-level than C -- you're targeting an accelerator meant to take computational load, not a general-purpose computer. It's very much systems programming.

      I'm honestly not really sure how much more abstract you could make it -- memory management is required because it's a fact of the hardware -- the GPU is across a bus and your compiler (or language) doesn't know more about your data semantics than you do. Pipelining and cache management are a fact of life in HPC already, and I haven't seen anything nutso you have to do to support proper instruction flow for nVidia cards (although I've mostly just targeted Fermi).
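
      As a rough sketch of the pipelining mentioned above (function and variable names are invented for illustration), the work can be split into chunks and issued on CUDA streams so the transfer of one chunk can overlap with the kernel running on another. True copy/compute overlap also needs pinned host memory (e.g. from cudaMallocHost), which is assumed here but not shown.

      // Hypothetical pipelining sketch: two streams and two device buffers,
      // ping-ponged so copies and kernels for different chunks can overlap.
      #include <cuda_runtime.h>

      __global__ void process(float* data, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] = data[i] * 0.5f + 1.0f;   // placeholder work
      }

      void pipelined_run(float* h_data, int n, int chunks)
      {
          int chunk = n / chunks;                // assume n divides evenly, for brevity
          size_t bytes = chunk * sizeof(float);

          cudaStream_t s[2];
          cudaStreamCreate(&s[0]);
          cudaStreamCreate(&s[1]);

          float* d_buf[2];
          cudaMalloc(&d_buf[0], bytes);
          cudaMalloc(&d_buf[1], bytes);

          for (int c = 0; c < chunks; ++c) {
              int b = c % 2;                     // ping-pong between buffers/streams
              float* h_chunk = h_data + (size_t)c * chunk;
              cudaMemcpyAsync(d_buf[b], h_chunk, bytes, cudaMemcpyHostToDevice, s[b]);
              process<<<(chunk + 255) / 256, 256, 0, s[b]>>>(d_buf[b], chunk);
              cudaMemcpyAsync(h_chunk, d_buf[b], bytes, cudaMemcpyDeviceToHost, s[b]);
          }
          cudaDeviceSynchronize();

          cudaFree(d_buf[0]); cudaFree(d_buf[1]);
          cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
      }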

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This is so far detached from reality it almost makes me wonder if it is someone shilling for Intel or another company that wants to defame GPU computing.

      First, the claim that it is effective "usually" in PDE solving is absurd; examine the presentations at any HPC conference or the publications in any HPC journal and you will quickly find numerous successful uses in other fields. Off the top of my head, I have seen or worked on successful projects in image processing and computer vision, monte carlo simulati

    • by maccodemonkey ( 1438585 ) on Friday March 30, 2012 @09:37PM (#39531469)

      In my experience, GPU programming works exactly like you'd expect it to work. Your nightmare doesn't sound like it's with GPU programming, it sounds like it's with NVidia's marketing.

      GPU processors are really small, so everything you've listed here is expected: the code size, the variable limits, etc. The advantage is that you have thousands of them at your disposal. That makes GPUs extremely good when you need to run a kernel over x, where x goes from 0 to a trillion. Upload the problem set to VRAM and send the cores to work.

      Stuff like C++ and other high-level languages is also not a good fit for this sort of work. I'm not even sure why people are bothering with C++ on GPGPU, to be perfectly frank. Again, you're writing kernels here, not entire programs. C++ is honestly bulky for GPGPU work and I can't imagine what I'd use it for. Both CUDA and OpenCL are already pretty high level; go much further than that and you risk sacrificing performance.

      Interpreted code is also good: it's usually JIT-compiled for the architecture you're working on. In the case of OpenCL and CUDA, it can be recompiled to run on an ATI card, an NVidia card, or the local CPU, all of which have different machine languages that you won't know about until runtime.

      It sounds like you're angry because GPU programming isn't much like programming for CPUs, and you'd be right. That's the nature of the hardware: it's built very differently and is optimized for different tasks. Whether that's because you were sold a false bill of goods by NVidia, I don't know. But it doesn't mean GPU programming is broken; it just may not be for you. Mostly, though, it sounds like you're trying to cram too much into your individual kernels.
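
      For illustration of the "kernel over x from 0 to a trillion" pattern described above, a grid-stride loop lets a fixed-size launch cover an arbitrarily large index range; the kernel and names here are a made-up sketch, not code from the thread.

      // Grid-stride loop (hypothetical example): each thread walks the index
      // range in strides of gridDim.x * blockDim.x, so one launch configuration
      // handles any problem size that fits in device memory.
      #include <cuda_runtime.h>

      __global__ void saxpy_all(size_t n, float a, const float* x, float* y)
      {
          size_t stride = (size_t)gridDim.x * blockDim.x;
          for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
              y[i] = a * x[i] + y[i];
      }

      // Usage sketch: after uploading x and y to device memory,
      //   saxpy_all<<<1024, 256>>>(n, 2.0f, d_x, d_y);
      // works whether n is a million or far larger.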

    • It doesn't sound like you've done any GPU programming for a few years. These days, OpenCL is pretty well supported. You need some C code on the host for moving data between host and GPU memory and for launching the GPU kernels, but the kernels themselves are written in a dialect of C that is designed for parallel scientific computing.

      If you want something even easier, both Pathscale and CAPS International provide C/C++ compilers that support HMPP, so you can easily annotate loops with some pragmas to m

  • This summer? Odd, I already bought one for $180 that has glasses-free 3D.

    *yawn*

  • How will they bring GPU compute to servers when they killed their FP64 performance? Nvidia and AMD just flip-flopped their GPU-compute prowess with regard to FP64.

  • Nvidia's CEO is also predicting this summer will see the rise of $200 Android tablets.

    So Android tablet prices are going up? You can already buy sub-$200 tablets all day long from Amazon, or from Big Lots if you prefer a brick-and-mortar option. Both have some pretty useful ones for $99 right now. They are not iPads, but they are pretty useful for only a c-note. I just saw one with a capacitive screen, Android 3.1, 1GB RAM, 8GB internal (I think), and an SD card slot, for $99. Not blazingly fast, surely, but fairly capable and dirt cheap. If you really want a tablet, and you are

    • by Lussarn ( 105276 )

      I think what he's saying is that there will be more $200 tablets on the market... not that they will rise in price.

"Hello again, Peabody here..." -- Mister Peabody

Working...