Tegra 4 Likely To Include Kepler DNA

MrSeb writes "Late last week, Jen-Hsun Huang sent a letter to Nvidia employees congratulating them on successfully launching the highly acclaimed GeForce GTX 680. After discussing how Nvidia changed its entire approach to GPU design to create the new GK104, Jen-Hsun writes: 'Today is just the beginning of Kepler. Because of its super energy-efficient architecture, we will extend GPUs into datacenters, to super thin notebooks, to superphones.' (Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc., presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.) This has touched off quite a bit of speculation concerning Nvidia's Tegra 4, codenamed Wayne, including assertions that Nvidia's next-gen SoC will use a Kepler-derived graphics core. That's probably true, but the implications are considerably wider than a simple boost to the chip's graphics performance." Nvidia's CEO is also predicting that this summer will see the rise of $200 Android tablets.
  • Paper tiger (Score:4, Insightful)

    by Anonymous Coward on Friday March 30, 2012 @04:30PM (#39527843)

    So will this version be something more than a paper tiger? So far, the Tegras have sounded better on paper than their real-world performance has turned out to be.

  • by PaladinAlpha ( 645879 ) on Friday March 30, 2012 @08:16PM (#39530653)

    Half of our department's research sits directly on CUDA now, and I haven't really had this experience at all. CUDA is as standard as you can get for Nvidia architectures; ditto OpenCL for AMD. The problem with trying to abstract that is the same problem with trying to use something higher-level than C: you're targeting an accelerator meant to take computational load, not a general-purpose computer. It's very much systems programming.

    I'm honestly not really sure how much more abstract you could make it. Memory management is required because it's a fact of the hardware: the GPU sits across a bus, and your compiler (or language) doesn't know more about your data semantics than you do. Pipelining and cache management are already facts of life in HPC, and I haven't seen anything nutso you have to do to support proper instruction flow on Nvidia cards (although I've mostly just targeted Fermi).
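
    To make that concrete for anyone who hasn't touched CUDA, here is a minimal sketch of the explicit, across-the-bus memory management described above. The vec_add kernel, buffer names, and sizes are made up for illustration, not taken from anyone's actual code: you allocate on the device, copy data over, launch, and copy the result back yourself, because the toolchain infers none of those transfers for you.

    // Minimal sketch, not production code: names and sizes are illustrative.
    // The point is that host and device memories are separate, so data has to
    // be allocated and moved across the bus explicitly.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host-side buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device-side buffers: the GPU cannot see host memory, so it gets its own.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);

        // Explicit copies across the bus before the kernel can touch the data.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Launch: one thread per element, 256 threads per block.
        vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

        // And an explicit copy back; the compiler does not do this for you.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }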
