Transcoding in 1/5 the Time with Help from the GPU
mikemuch writes "ExtremeTech's Jason Cross got a lead about a technology ATI is developing called Avivo Transcode that will use ATI graphics cards to cut the time it takes to transcode video by a factor of five. It's part of the general-purpose computation on GPU movement. The Avivo Transcode software can only work with ATI's latest X1000-series GPUs, and the company is working on profiles that will allow, for example, transcoding DVDs for Sony's PSP."
great... (Score:0, Insightful)
This would be great for MythTV.. Linux support?? (Score:5, Insightful)
My educated guess is no, there won't be Linux support.
ATI was the leader in MPEG2 acceleration, enabling iDCT+MC offload to their video processor almost 10 years ago. How'd that go in terms of Linux support, you ask? Well, we're still waiting for that to be enabled in Linux.
Nvidia and S3/VIA/Unichrome have drivers that support XvMC, but ATI is notably absent from the game they created. So, I won't hold my breath on Linux support for this very cool feature.
I'm rarely impressed... (Score:3, Insightful)
Already available.. (Score:3, Insightful)
Re:GPU or CPU? (Score:3, Insightful)
In a few years, there will be no real benefit to the GPU. Not many people write optimized assembly-level graphics code anymore, but it can be quite fast. Recall that Quake ran with software rendering on a 90MHz Pentium, and CPUs have only gotten faster since then. A second core that most apps don't know how to take advantage of will make this all the more obvious.
On another note, as polygon counts skyrocket they approach single pixel size. When that happens, the hardware pixel shaders that GPUs have so many of become irrelevant, because the majority of the work moves up to the vertex unit. At that point it actually makes a lot of sense to move to ray tracing (something I have fast code for), which is also going to be quite possible in a few more years on the main CPU(s).

Ray tracing is one application that really shows why the GPU is NOT general purpose. You need data structures and pointers mixed with fast math, preferably double precision. You need recursive algorithms. You'll end up wanting an MMU. By the time you're done, the GPU really would need to be general purpose. The problem doesn't map to a GPU at all, and multicore CPUs are nearing the point where full screen, real time ray tracing will be possible. GPUs will not stand a chance.
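To make the data-structure point concrete, here is a minimal C++ sketch (my own illustration, not the parent's "fast code") of the pointer-chasing, recursive loop a classic ray tracer is built around. The linked list of spheres, the reflectivity field, and the trace() function are all invented for the example:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };                  // doubles, not GPU floats
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere {
    Vec3 center; double radius;
    Vec3 color; double reflectivity;
    Sphere *next;                                 // pointer-chased scene list
};

// Nearest positive hit distance along a unit-length ray, or -1 on a miss.
static double intersect(const Sphere *s, Vec3 o, Vec3 d) {
    Vec3 oc = sub(o, s->center);
    double b = dot(oc, d), c = dot(oc, oc) - s->radius * s->radius;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-6 ? t : -1;
}

// Recursive shading: every reflection bounce re-walks the scene structure.
static Vec3 trace(const Sphere *scene, Vec3 o, Vec3 d, int depth) {
    const Sphere *hit = nullptr;
    double tMin = 1e30;
    for (const Sphere *s = scene; s; s = s->next) {   // pointer traversal
        double t = intersect(s, o, d);
        if (t > 0 && t < tMin) { tMin = t; hit = s; }
    }
    if (!hit || depth == 0) return {0, 0, 0};         // black background
    Vec3 p = add(o, mul(d, tMin));                    // hit point
    Vec3 n = mul(sub(p, hit->center), 1.0 / hit->radius);
    Vec3 r = sub(d, mul(n, 2 * dot(d, n)));           // mirror direction
    Vec3 bounce = trace(scene, p, r, depth - 1);      // recursion
    return add(hit->color, mul(bounce, hit->reflectivity));
}

int main() {
    Sphere back{{0, 0, -7}, 1.0, {0.0, 0.2, 0.0}, 0.0, nullptr};
    Sphere front{{0, 0, -5}, 1.0, {0.2, 0.0, 0.0}, 0.5, &back};
    Vec3 c = trace(&front, {0, 0, 0}, {0, 0, -1}, 3); // one ray down -z
    std::printf("%.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}

Note the linked-list walk, the recursion, and the doubles: a 2005-era GPU pipeline exposes none of those, which is exactly the parent's point.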
Re:But is it worth it? (Score:3, Insightful)
I don't play the sort of games that need a graphics card over $200 to look good. I never even considered looking at the high end. However, this video encoding improvement will certainly make me do a double take. I was proud of my little CPU overclock that improves my encoding rate by 20%, but the article talks about a 5x speedup! That's worth a couple of extra bucks.
Of course, by the time the software to do this actually becomes full-featured and useful, the price of ATI's X1800 cards will hopefully drop a bit. Still, I have a feeling this will be my next GPU.
Unless nVidia can produce something equally impressive, of course!
Using their own codecs (Score:5, Insightful)
I'd actually be willing to spend more than $50 on a video card if more multimedia apps took advantage of the GPU's capabilities.
Re:But is it worth it? (Score:3, Insightful)
Keep in mind (Score:3, Insightful)
While a select few individuals always buy the latest and greatest, the majority of buyers look at video cards as long-term investments, mainly because of the ridiculously inflated prices in the GPU market. All that said, I think you have to look at the card's feature set and make a decision based on that. While the Nvidia GPU may be superior gaming-wise, the dramatically reduced transcoding times definitely make the ATI card a potentially attractive purchase for people who work a lot with video. Given the amazing rise in popularity of the video iPod and the existing PSP market, the number of people interested in transcoding video is definitely on the rise, and ATI was smart to tap that market now.
5X faster than what (Score:1, Insightful)
I would rather see some reality-based claims, such as "real-time encoding of 3 VGA streams into XviD." Give me a real reason to include an X1800 in my entertainment box.
Apple does this now. (Score:3, Insightful)
Re:Already available.. (Score:3, Insightful)
Which is interesting, because I'm the author of some widely deployed cryptographic software, and I worked at an IP design company [for cryptographic cores]. I'd say I'm no longer an amateur when I make enough money to live on my own.
Apparently, according to you, FPGAs aren't made from silicon; they're made from fluffy bunny pixie dust.
There is a big price difference between PCB design and tapeout. If you're making fewer than a million devices or so, it's cheaper to just use an FPGA, because the tapeout alone will cost you millions.
"going to silicon" is a common expression which means to tapeout a design in real hardware. Sure FPGAs are "real silicon" but there is a big difference between using an FPGA and an ASIC in a fielded design.
Your last paragraph is about as far from the truth as you can get. The last design I know of where the CPU controlled the peripherals directly was the Atari 2600. Even the Game Boy had a dedicated LCD controller.
You're right, I'm not an EE. I never claimed to be. But what I speak of comes from my experience working alongside these folks [and I have quite a few friends who design FPGAs for a living].
In other words, you're trying to look all cutesy by making me look stupid, but really you haven't the foggiest clue what you're talking about.
Tom
Re:GPU or CPU? (Score:5, Insightful)
1. On another note, as polygon counts skyrocket they approach single pixel size
This is not happening. Not anywhere (except maybe production rendering). It is far too time-consuming, expensive, and labor-intensive to produce huge numbers of high-polygon-count models for games. Vertex pipes are currently under-utilized in most games and applications. Efforts are underway to allow procedural geometry creation on the GPU to better fill the vertex pipe without requiring huge content-creation efforts. See this paper [ati.com] for details.
2. A second core that most apps don't know how to take advantage of will make this all the more obvious.
This undercuts the argument you make in the next paragraph. Also, it's not true: both the PS3 and Xbox 360 have multiple CPU cores. It's true that current-gen engines aren't optimized for this, but next-gen engines will be.
3. multicore CPUs are nearing the point where full screen, real time ray tracing will be possible. GPUs will not stand a chance.
This might be true, but so what? Ray tracing offers few advantages over the current-gen programmable pipeline. I can only think of two things a ray tracer can do that the programmable pipeline can't: multilevel reflections and refraction. BRDFs, soft shadows, self-shadowing, etc. can all be handled on the GPU these days. Now, you can get great results by coupling a ray tracer with a global-illumination technique like photon mapping, but that is nowhere near real-time. Typical acceleration structures for ray tracing and photon mapping do not work well in dynamic environments, whereas the GPU couldn't care less whether a polygon was somewhere else on the previous frame.
Hate to break it to you, but the GPU is here to stay. Why? GPUs are specialized for processing 4-vectors, not single floats (or doubles) like the CPU + FPU. True, there are CPU extensions for this, such as SSE and 3DNow!, but a typical CPU has a single SSE unit, compared to a current-gen GPU with 8 vertex pipes and 24 pixel pipes. Finally, do you really want to burden your extra CPU core with rendering when it could be handling physics or AI?
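To make the 4-vector point concrete, here is a hedged C++ example of mine (nothing from the article) using the SSE intrinsics mentioned above. A CPU core issues one 4-wide float operation at a time through its SSE unit, while a current-gen GPU has dozens of such 4-wide ALUs running in parallel:

#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

int main() {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  // one 4-float vector (1,2,3,4 low-to-high)
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
    __m128 c = _mm_add_ps(a, b);                    // four adds in a single instruction
    float out[4];
    _mm_storeu_ps(out, c);                          // write the result back to memory
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  // prints 6 8 10 12
    return 0;
}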
Re:GPU or CPU? (Score:4, Insightful)
Most of that stuff can be done with OpenGL/DirectX or ray tracing. Grass is sometimes done in OpenGL by instancing small clumps; in RT you'd use procedural geometry or instancing.
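For anyone curious what "instancing small clumps" amounts to in its simplest form, here is a fixed-function OpenGL sketch of mine (the Clump struct and display list are invented for illustration): the same small clump mesh is re-drawn under many transforms, and true hardware instancing just batches these repeated draws into fewer calls.

#include <GL/gl.h>

struct Clump { float x, y, z, angle; };  // placement of one grass clump

// Assumes a GL context is current and clumpList is a prebuilt display list
// containing one small clump of grass blades.
void drawGrass(unsigned int clumpList, const Clump *clumps, int count) {
    for (int i = 0; i < count; ++i) {                  // one draw per instance
        glPushMatrix();
        glTranslatef(clumps[i].x, clumps[i].y, clumps[i].z);
        glRotatef(clumps[i].angle, 0.0f, 1.0f, 0.0f);  // vary orientation
        glCallList(clumpList);                         // reuse the same clump mesh
        glPopMatrix();
    }
}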
For the snow, both renderers would probably use similar techniques.
Sand dunes: either method needs an engine with deformable geometry, and both can support that.
Water simulation is something I don't know much about. For FFT methods of simulating waves, it's possible that a GPU has an advantage. Once the water starts interacting with objects, I don't know how people handle that.
Your questions all point toward vast, detailed worlds with lots of polygons. RT scales better with scene complexity. To get more traditional methods to work well, you get into fancy culling techniques (HZB, the hierarchical Z-buffer, comes to mind), and RT starts to look simpler - because it is.
Re:lessons of "array processors" from 1980s (Score:1, Insightful)