Tegra 4 Likely To Include Kepler DNA
MrSeb writes "Late last week, Jen-Hsun Huang sent a letter to Nvidia employees congratulating them on successfully launching the highly acclaimed GeForce GTX 680. After discussing how Nvidia changed its entire approach to GPU design to create the new GK104, Jen-Hsun writes: 'Today is just the beginning of Kepler. Because of its super energy-efficient architecture, we will extend GPUs into datacenters, to super thin notebooks, to superphones.' (Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc., presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.) This has touched off quite a bit of speculation about Nvidia's Tegra 4, codenamed Wayne, including assertions that Nvidia's next-gen SoC will use a Kepler-derived graphics core. That's probably true, but the implications are considerably wider than a simple boost to the chip's graphics performance."
Nvidia's CEO is also predicting that this summer will see the rise of $200 Android tablets.
Re:And now for the obvious question... (Score:3, Informative)
A new gaming-oriented GPU from NVidia that can't compete with even the previous generation of GPUs (Fermi) on many compute workloads, namely integer operations. It's fine if you do single-precision floating-point work all the time, but terrible if you want to work with integers or double-precision floats.
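If you want to see the difference on your own card, a rough way is to time a dependent chain of float multiply-adds against the same chain in 32-bit integers. A minimal CUDA sketch (kernel names, iteration count, and launch dimensions are all illustrative, and this measures a latency-bound chain rather than peak throughput, so treat it only as a relative comparison):

```
#include <cstdio>
#include <cuda_runtime.h>

// Dependent single-precision multiply-adds (typically fused into FMAs).
__global__ void fmad_kernel(float *out, int iters) {
    float x = threadIdx.x * 0.001f + 1.0f;
    for (int i = 0; i < iters; ++i)
        x = x * 1.0000001f + 0.0000001f;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

// Dependent 32-bit integer multiply-adds (wraparound is intentional).
__global__ void imad_kernel(int *out, int iters) {
    int x = threadIdx.x + 1;
    for (int i = 0; i < iters; ++i)
        x = x * 3 + 1;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

int main() {
    const int blocks = 256, threads = 256, iters = 1 << 16;
    float *df; int *di;
    cudaMalloc(&df, blocks * threads * sizeof(float));
    cudaMalloc(&di, blocks * threads * sizeof(int));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);

    cudaEventRecord(t0);
    fmad_kernel<<<blocks, threads>>>(df, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float fms; cudaEventElapsedTime(&fms, t0, t1);

    cudaEventRecord(t0);
    imad_kernel<<<blocks, threads>>>(di, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ims; cudaEventElapsedTime(&ims, t0, t1);

    printf("float MAD: %.2f ms, int MAD: %.2f ms\n", fms, ims);
    cudaFree(df); cudaFree(di);
    return 0;
}
```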
Re:GPU programming is a nightmare. (Score:2, Informative)
This is so far detached from reality that it almost makes me wonder whether it is someone shilling for Intel or another company that wants to defame GPU computing.
First, the claim that it is 'usually' only effective for PDE solving is absurd; examine the presentations at any HPC conference or the publications in any HPC journal and you will quickly find numerous successful uses in other fields. Off the top of my head, I have seen or worked on successful projects in image processing and computer vision, Monte Carlo simulations of varying complexity in any number of fields, optimization problems and risk analysis in computational finance, statistical analysis of data from experiments or observations, and (less commonly) in non-numerical-computing applications (some graph algorithms map fairly well to a GPU).
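To give a concrete flavor of the Monte Carlo case: a toy CUDA sketch that estimates pi, assuming the cuRAND device API is available (the kernel name, seed, and sample counts here are illustrative, not from any particular project):

```
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Each thread draws `samples` random points in the unit square and
// counts how many fall inside the quarter circle.
__global__ void pi_kernel(unsigned long long *hits, int samples,
                          unsigned long long seed) {
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, id, 0, &state);
    unsigned long long local = 0;
    for (int i = 0; i < samples; ++i) {
        float x = curand_uniform(&state);
        float y = curand_uniform(&state);
        if (x * x + y * y <= 1.0f) ++local;
    }
    atomicAdd(hits, local);  // accumulate into a single global counter
}

int main() {
    const int blocks = 64, threads = 256, samples = 4096;
    unsigned long long *d_hits, h_hits = 0;
    cudaMalloc(&d_hits, sizeof(unsigned long long));
    cudaMemcpy(d_hits, &h_hits, sizeof(h_hits), cudaMemcpyHostToDevice);

    pi_kernel<<<blocks, threads>>>(d_hits, samples, 1234ULL);
    cudaMemcpy(&h_hits, d_hits, sizeof(h_hits), cudaMemcpyDeviceToHost);

    double total = (double)blocks * threads * samples;
    printf("pi ~= %f\n", 4.0 * h_hits / total);
    cudaFree(d_hits);
    return 0;
}
```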
Second, the claim that GPU programming is a pain in the ass -- in general, there is no doubt. It requires a larger time investment, due to the optimization needed to get performance that justifies the hardware cost, and debugging is far more difficult than for a CPU program (particularly a serial CPU program). The rest of your claims here, however, are once again nonsense.

You say there is no standardization, but it is not even clear from context where you found standardization lacking. The hardware? There are new architectures every few years, but they are almost always backwards compatible (i.e., they will run old code). All an architecture update does is expose new features, much as a new CPU architecture may add to the instruction set. Or is it software standardization you're looking for? OpenCL and CUDA are both open standards -- one controlled by a very slow-moving board of industry representatives, the other by a fairly fast-moving single company. This is very, very similar to the state of graphics APIs, where OpenGL and DirectX fill similar roles.

The idea that they don't support high-level languages is questionable -- what defines a high-level language? If C or C++ qualify, then CUDA definitely does and OpenCL possibly does as well. If they don't, then no -- but expecting to write for hardware targeted solely at high-performance and scientific computing in Ruby or whatever is idiotic. I would also love evidence (meaning official claims from a manufacturer, not some third party who is just as misinformed as you) of claims to support C++. The closest I can think of is nVidia expanding the support for C++ features (templates, classes, ...) in CUDA -- but I've certainly never seen nVidia or AMD claim to support full C++ on the GPU. It sounds like you were expecting to take a C++ program, hit "Compile for GPU", and get massive parallelization for free, which reinforces the idea that you did not do a shred of research into GPU computing.

That you need to link with their library is obvious -- did you expect to communicate with the GPU without going through the driver? You're either going to use a library and compiler directly in the build process (CUDA) or indirectly at runtime (OpenCL). That you need to manage the memory is once again idiotic whining -- the people targeted by this are already managing their memory. This isn't to accelerate your shitty Web 2.0 application; it's for serious numerical computation of the sort that is almost always written in C++, C, or (ugh) Fortran.

That you manage the data pipeline and fetch and cache is at best a half-truth; the shared/local memory is similar to a cache, but both AMD and nVidia GPUs also have an actual cache that is not controlled by the programmer at all (unless, possibly, you are modifying the .ptx files for a CUDA kernel? I have not gotten into that sort of thing). The claim about only being able to fit a limited amount of code is utter horseshit. And, finally, the claim about how the code is stored -- this is typically true for OpenCL (there may be obfuscation methods or something along those lines, as used by other languages which allow easy retrieval of code -- I'm not sure), but CUDA can be compiled to binary files that are no more readable than any other compiled binary.
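For anyone who hasn't seen what "managing the memory" actually looks like, it boils down to a handful of runtime calls around a kernel launch. A minimal CUDA sketch (the saxpy kernel and array size are illustrative only):

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: y = a*x + y (saxpy).
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host allocations, managed by the programmer as usual.
    float *hx = (float*)malloc(bytes), *hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device allocations and explicit host<->device copies: this is
    // the "manage the memory yourself" part being complained about.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

If you already write numerical C or Fortran, nothing here is exotic; it is the same malloc/copy/free discipline with the device as a second address space.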
Re:nVidia's CEO is a little behind the times. (Score:2, Informative)
I have a Nook Tablet, and I'm very happy with the hardware, happy with it as a book reader, but I wish it was a real Android tablet (with Bluetooth and Android Market^H^H^H^H^H^H^H^H^H^H^H^H^H^HGoogle Play).
So? Pick from CM7 [xda-developers.com] or CM9 [xda-developers.com].
Enjoy.