Nvidia GeForce GTX 780 Ti Review: GK110, Fully Unlocked
An anonymous reader writes "Nvidia lifted the veil on its latest high-end graphics board, the GeForce GTX 780 Ti. With a total of 2,880 CUDA cores and 240 texture units, the GK110 GPU inside the GTX 780 Ti is fully unlocked. This means the new card has one more SMX block, 192 more shader cores, and 16 more texture units than the $1,000 GTX Titan launched back in February! Offered at just $700, the GTX 780 Ti promises to improve gaming performance over the Titan, yet the card has been artificially limited in GPGPU performance — no doubt to make sure the pricier card remains relevant to those unable or unwilling to spring for a Quadro. The benchmark results largely bear out the GTX 780 Ti's on-paper specs. The card was able to beat AMD's just-released flagship, the Radeon R9 290X, by margins ranging from single digits up to roughly 30% — depending on the variability of AMD's press and retail samples."
heh (Score:5, Funny)
Offered at JUST 700 dollars. Nice try anonymous Nvidia.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Amazing how suddenly there's a problem because of heath.
Unless you're talking about The Dark Knight, you may have misspelled.
Re: (Score:2)
Re: (Score:3)
For a $150 price premium over the 290X, I'd expect more than "single-digit percentages." I know there's always a tax on the high end cards, but 27% pricier for (up to) 9% speed doesn't seem like a great trade-off.
Re: (Score:3)
Aaaaaand I just looked back at the summary and noticed the "up to 30%." I should really double-check before I post these things.
Re:heh (Score:5, Informative)
Tom's Hardware is known to be biased, as they take ad money and have partnerships with Nvidia and Intel. They put in x87 non-IEEE FPU tests where Intel's own chips win and declare anything AMD/ATI a loser as a result, rather than measuring real-world performance. As an example, they do not benchmark the later versions of Skyrim, which have proper FPU support.
For a more accurate benchmark, click here [extremetech.com].
Re:heh (Score:5, Insightful)
No, you were right the first time. Tom's Hardware is the only site claiming these benchmark victories for the nVidia card. I'm not saying they allow their advertising department to influence their reporting and rankings, but it's a bit fishy that they're such an outlier regarding the flagship video cards of the two manufacturers.
It's also worth noting that comparing these cards without taking AMD's Mantle technology into account is, to say the least, incomplete.
Re: (Score:1)
This sounds a bit biased to me. Firstly, Tom's isn't the only site. Secondly, taking into account something which currently doesn't exist to test is pretty tricky.
What performance advantage are you going to give the Mantle API without knowing anything about it? The best they can do is give the figures they currently get for both cards and add an addendum which states that Mantle may improve the performance for the AMD cards. They should probably add that in real world situations the ATI may have performance
Re: (Score:3)
These are both reference cards. Once Gigabyte and MSI and Sapphire get hold of them, I think the AMDs will run cooler and the nVidia cards will become a little more cost-effective.
All in all, it's great to have them actually competing.
Re: (Score:2)
Re: (Score:1)
But aren't their drivers also slower? Because that's why I _don't_ want to go AMD on Linux.
Re: (Score:2)
Re: (Score:1)
As far as I know the Nvidia drivers have always been better.
Linus and friends may have liked AMD more for providing more information and hence enabling a better open-source driver.
Personally I wouldn't want to use an open-source driver if it gave half performance, even if that would be better than say 10% performance =P. And at least previously I was under the impression not even the closed driver from AMD was competitive against the Nvidia one. But that could have changed I suppose.
Also Nvidias one also existed f
Re: (Score:1)
It's also worth noting that comparing these cards without taking AMD's Mantle technology into account is, to say the least, incomplete.
If it's game benchmarks, what are they supposed to do?
For anything except Battlefield 4 I think it's correct, because that's the only title I know of which makes use of it so far.
Maybe it would be nice to see a demo which really uses it vs. some more traditional method, and in the future more games, but for now this is what we have.
Re: (Score:1)
Mantle won't appear in Battlefield 4 until later, but it'll be very interesting to see how it benefits performance
Re: (Score:2)
Re: (Score:3)
So stop using CUDA? (Score:4, Insightful)
Re:So stop using CUDA? (Score:5, Insightful)
Ironically, in the industry I work in (computer vision, embedded signal processing, etc.) we've slowly been moving AWAY from OpenCL - it's dead/stagnant, still hasn't caught up to where CUDA was 3 years ago, and to put it in general terms, there's fragmentation in support for even STANDARD (as in, the specification) features of OpenCL, and worse, gaps that prevent you from using simple things like images or barriers properly.
On top of that, radically different implementations obviously have radically different optimization processes - for one particular kernel, we were looking at maintaining 13 different optimized versions of the same kernel/function to target different devices.
Nowadays we use CUDA - compiled to PTX (for nVidia, x86_64, and ARM NEON), CAL for legacy AMD, HSAIL for GCN AMD, and GLSL for Intel.
Say what you will about CUDA/PTX vendor lock-in: PTX is far more device-agnostic than OpenCL, just as portable (thanks to great open source efforts like LLVM (and the vendor support nVidia/AMD/Intel/etc. provide) and GPUOcelot), and far more mature than HSAIL.
That's my basis for NOT using OpenCL - where's yours?
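To make the PTX point concrete, here's a minimal sketch of loading a PTX module at runtime through the CUDA Driver API; the driver JIT-compiles the same PTX for whichever GPU is actually installed. The file name scale.ptx, the kernel name scale, and its parameter list are made-up placeholders, and error checking is omitted.

    // Sketch: load a PTX module and launch a kernel from it via the CUDA Driver API.
    // Assumes "nvcc -ptx scale.cu -o scale.ptx" produced a kernel:
    //   __global__ void scale(float *buf, float factor, int n)
    #include <cuda.h>

    int main() {
        cuInit(0);
        CUdevice dev;
        cuDeviceGet(&dev, 0);
        CUcontext ctx;
        cuCtxCreate(&ctx, 0, dev);

        CUmodule mod;
        cuModuleLoad(&mod, "scale.ptx");          // driver JITs the PTX for this device
        CUfunction fn;
        cuModuleGetFunction(&fn, mod, "scale");   // look the kernel up by name

        int n = 1 << 20;
        float factor = 2.0f;
        CUdeviceptr buf;
        cuMemAlloc(&buf, n * sizeof(float));

        void *args[] = { &buf, &factor, &n };
        cuLaunchKernel(fn, (n + 255) / 256, 1, 1,   // grid
                           256, 1, 1,               // block
                           0, 0, args, 0);          // shared mem, stream, params, extra
        cuCtxSynchronize();

        cuMemFree(buf);
        cuModuleUnload(mod);
        cuCtxDestroy(ctx);
        return 0;
    }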
Re: (Score:2)
OpenCL caches the binaries after the first compile. Your NVIDIA driver will be doing the same thing, except it's compiling from PTX binaries for the specific card rather than straight from source. There are debuggers and profilers for OpenCL. Plus LLVM also supports OpenCL.
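For the manual side of that caching, a rough sketch (single device assumed, error handling omitted): pull the built binary out of the cl_program so it can be written to disk and fed back later through clCreateProgramWithBinary.

    // Sketch: retrieve a built OpenCL program binary so it can be cached on disk
    // and reloaded later with clCreateProgramWithBinary. One device, no error checks.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    void save_program_binary(cl_program program, const char *path) {
        size_t size = 0;
        clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, nullptr);

        std::vector<unsigned char> binary(size);
        unsigned char *bins[] = { binary.data() };
        clGetProgramInfo(program, CL_PROGRAM_BINARIES, sizeof(bins), bins, nullptr);

        FILE *f = fopen(path, "wb");
        fwrite(binary.data(), 1, size, f);
        fclose(f);
    }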
Re: (Score:2)
Also Adobe seems to disagree with you since they have been shifting from CUDA to OpenCL for some time now.
Re: (Score:1)
That is marketing fluff.
Every GPU supports these things in hardware. What Nvidia did was refuse to support them in DirectX, requiring proprietary CUDA code to do things that ATI does with DirectX. In other words, Nvidia is the IE of video cards.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
OpenCL "runs" on AMD with really crap real-world performance, bad drivers and absolutely retarded software requirements (such as needing X running to be able to expose the OpenCL interface...)
OpenCL "runs" on nVidia, but CUDA is better in all ways(and can be cross-compiled to various other platforms, as another poster in another thread has already shown)
OpenCL "runs" on Intel, with crap performance(and on Linux you only get the CPU as a target device...)
And that's not even going into all the faults with Ope
Re: (Score:2)
You ignore all the Android devices which use OpenCL including the ARM processors from Apple, Qualcomm, Samsung, and others. Mali, Exynos, and PowerVR have OpenCL acceleration. I have tried to use 3rd party CUDA implementations. They all suck. The reason is fairly obvious. They support the standard to various degrees and the "fragmentation" is worse than in OpenCL. The main problem with OpenCL is that the open source implementations suck and NVIDIA does not update their implementation from anything post Open
Re: (Score:2)
Re:compared (Score:2)
This card is only $700. And how much are you paying for that iPhone and its data plan that you almost never use?
Re: (Score:2)
Well, according to those who are broke, the iPhone is free of charge!
It won't cost anything to use at all compared to these silly users who pay up front ... well, off to go pay $120 a month for my 1-user phone bill. Wow, I can't imagine why it is so high and why I can't leave for 2 years?!
Re: (Score:1)
I pay $50 a month for my cell phone - $1,200 over 24 months. Multiply what you pay by 24.
Doesn't sound like I'm the sucker, if you ask me, and it explains why poor people remain poor.
Re: (Score:2)
I like the part where they said "Depending on the variability of AMD's press and retail samples."
The variability in the results was mostly caused by some last-minute driver changes that caused a performance boost in the Radeon 290 and 290X cards, but the submitter seems to make it look like some sort of "golden sample" conspiracy by AMD.
More like fully unleashed (Score:1)
Re: (Score:1)
The Titan is and always has been geared toward entry-level professional 3D graphics applications. NVidia noticed that many game/movie creators who were just starting out were using hacks to enable their gaming cards, which cost less than a thousand dollars, to be used instead of their 2-8 thousand dollar professional series. This is why it has 6GB of RAM: it's a cheap option for those just getting into that type of system.
The fact that it worked great for games was a side benefit, allowing nVidia to grab some
Re: (Score:2)
Re: (Score:2)
CUDA is beautifully designed. It is trivial to pick up -- the docs are great, there are plenty of examples, and it is not verbose like OpenCL but concise and compact. Basically, nVidia has supported CUDA significantly better than AMD has supported OpenCL.
At the end of the day, sure, eventually both of them reach feature parity, but you'll probably get up to speed with CUDA faster. I keep checking out OpenCL from time to time and it is just easier to use CUDA.
Re: (Score:2)
The main difference is you cannot inline OpenCL in C like you can with CUDA, because it's done differently. The rest is fluff, and I do not consider it beautifully designed. In fact it is a lot more cryptic than OpenCL.
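For reference, the "inline" point is roughly this: in CUDA the kernel sits in the same .cu file as the host code and is launched with the <<<>>> syntax, while in OpenCL the kernel source is typically a separate string compiled at runtime through clCreateProgramWithSource. A trivial sketch of the CUDA side (names made up, error checking omitted):

    // Trivial CUDA example: device and host code in one source file, built with nvcc.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add_one(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main() {
        const int n = 1024;
        float *d;
        cudaMalloc((void **)&d, n * sizeof(float));
        cudaMemset(d, 0, n * sizeof(float));

        add_one<<<(n + 255) / 256, 256>>>(d, n);   // kernel launch, inline in the same file
        cudaDeviceSynchronize();

        float h = 0.0f;
        cudaMemcpy(&h, d, sizeof(h), cudaMemcpyDeviceToHost);
        printf("%f\n", h);                          // expect 1.000000
        cudaFree(d);
        return 0;
    }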
Re: (Score:3)
> Basically the Titan was a publicity stunt.
Right, you want to tell that to the world's #1 supercomputer, which has 18,688 Tesla K20X GPUs.
http://en.wikipedia.org/wiki/Titan_(supercomputer) [wikipedia.org]
Second, you are glossing over the fact that:
Titan = 6 GB VRAM, 780 Ti = 3 GB
Titan FP64 performance = 1/3 of FP32 rate, 780 Ti = 1/24 of FP32 rate
Gamers couldn't give a shit about that. The fact that the Titan could also game was a bonus. It was never primarily targeted at gamers, just budget scientific computing.
Sticking with ATI (Score:1)
At least I do not have driver issues that plague Nvidia cards, like the 320.x drivers which are known to brick their cards and mess around with Aero on Windows 7.
Before I get modded down or flamed galore for this ... I am talking about 2010 - 2013 ATI drivers vs Nvidia ones. I kept having quality issues when I was an nvidia fanboy, then switched to an ATI 5750 and now a 7850 with HDMI, and things have been great since! 2002 with ATI Rage Pros ... well, that is a different story altogether compared to today.
If I
Re: (Score:2)
Re: (Score:2)
So what? (Score:3)
Re: (Score:2)
What does it matter, since no game that will be released in the next 10 years is going to need more graphics power than the shitty Xbox One can crap out?
... uh, Crysis and anything at 4K. This card is still not fast enough at that resolution.
Graphics are still not photorealistic and we have a long way to go before that happens. At 1080p these would suffice, but Battlefield 4 barely plays at a lousy 40 fps at 4K, even with this Titan?!
Re: (Score:2)
Re: (Score:1)
4K easily... in fact, 2+ monitors are becoming the norm. People keep underestimating the need for more powerful gfx cards.
TITAN has one advantage...possibly (Score:2)
The main advantage the TITAN has over the 780 Ti is the memory; having 6GB compared to the 780 Ti's 3GB. If you're only looking at running 1080p that's not such an issue, but if you're one of those people with more money than sense and you're looking at running a 4k panel, it is.
I really don't know why they didn't stuff 6GB on the 780 Ti. My *580* has 3GB. I'd really expect, two series down the line, that there'd be a bit more RAM on it as standard (3GB wasn't the standard memory configuration for a 580; t
Re:TITAN has one advantage...possibly (Score:5, Informative)
The other advantage of the Titan is the double-precision performance. Almost all of Nvidia's cards, including the 780 Ti, run double-precision floating-point calculations at 1/24th the rate of single-precision, but for the Titan and the Tesla pure-GPGPU cards, it's 1/3rd the rate.
While I'm not sure if that's an actual hardware difference, or if it's some software limitation, or a mix of both or whatever, it's definitely real. That's the main reason a Titan is still $1000 - it's being sold as a low-end compute card, not a high-end gaming card.
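One rough way to see that ratio for yourself (a micro-benchmark sketch, not a rigorous test): time an FMA-heavy loop in float and again in double and compare. On a 1/24-rate card the double kernel should come out far slower; on a Titan or Tesla the gap should be much smaller.

    // Sketch: compare FP32 vs FP64 throughput with two FMA-heavy kernels.
    // Rough numbers only; no error checking, no warm-up run.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void fma_f32(float *out, int iters) {
        float a = 1.0001f, b = 0.9999f;
        for (int i = 0; i < iters; ++i) a = a * b + 0.5f;   // dependent FMA chain
        out[blockIdx.x * blockDim.x + threadIdx.x] = a;
    }

    __global__ void fma_f64(double *out, int iters) {
        double a = 1.0001, b = 0.9999;
        for (int i = 0; i < iters; ++i) a = a * b + 0.5;
        out[blockIdx.x * blockDim.x + threadIdx.x] = a;
    }

    int main() {
        const int blocks = 1024, threads = 256, iters = 100000;
        float  *f32; cudaMalloc((void **)&f32, blocks * threads * sizeof(float));
        double *f64; cudaMalloc((void **)&f64, blocks * threads * sizeof(double));

        cudaEvent_t start, stop;
        cudaEventCreate(&start); cudaEventCreate(&stop);
        float ms32 = 0, ms64 = 0;

        cudaEventRecord(start);
        fma_f32<<<blocks, threads>>>(f32, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms32, start, stop);

        cudaEventRecord(start);
        fma_f64<<<blocks, threads>>>(f64, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms64, start, stop);

        printf("FP32: %.2f ms  FP64: %.2f ms  ratio: %.1fx\n", ms32, ms64, ms64 / ms32);

        cudaFree(f32); cudaFree(f64);
        cudaEventDestroy(start); cudaEventDestroy(stop);
        return 0;
    }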
Re: (Score:2)
It's a limitation implemented in firmware/microcode for marketing purposes.
The Titan, GTX 780 and GTX 780 Ti all use the same physical chip.
Who's buying these cards? (Score:2)
Re: (Score:2)
Re: (Score:1)
Shit, there are people who pay $250,000 for a freaking bathtub and do not blink buying a $50,000 Ford F-350 complete with $10,000 shocks, struts and tires to show off how much money they have compared to you and me on the road.
People have money, and even if the HPs and Dells are in decline, the high-end motherboard industry is taking off as more people play PC games. So yes, $400 is cheap for a $1800 computer to MMO with their friends.
Not everyone is a poor young professional with student loan debt. However, that
Re: (Score:2)
More to the point, the poor young professionals that grew up PC gaming have now cleared their student debt, the mortgage is a pitiful percentage of their net pay and the wife's earning as much as they are.
Gaming isn't just for kids.
Re: (Score:1)
Still have 40k on mine and no house yet.
Value cards it is for me, for now. If you graduated after 2006, $40k to $100k in debt with only $10-12/hr temp jobs when you graduate seems to be the new norm in the Great Recession.
But if you graduated in the 1990s, you paid $15k to $25k and could get a house for a quarter of the cost facing someone graduating today. Economists do not count homes, services, rent, or food in inflation indexes, which is silly because those are a big problem, as is counting only income and not debt.
still lousy at hashing passwords (Score:1)
Re: (Score:2)
Linux (Score:2)
Great that there's competition. I'm sure this blows the AMD ones away in Linux, but do they still have the tearing problems for video? Video has looked like crap on both my recent nVidia cards because it's splitting in the middle, though only when using two monitors. The exception is a few video players that use VDPAU, but I can't expect all my videos to be playable on those. While I've preferred AMD before, I'm trying not to care about the brand as long as they don't do something hostile to users (Sony), s
Fully unlocked... yet limited? (Score:1)