Transcoding in 1/5 the Time with Help from the GPU 221
mikemuch writes "ExtremeTech's Jason Cross got a lead on a technology ATI is developing called Avivo Transcode that will use ATI graphics cards to cut the time it takes to transcode video by a factor of five. It's part of the general-purpose computation on GPUs movement. The Avivo Transcode software only works with ATI's latest 1000-series GPUs, and the company is working on profiles that will allow, for example, transcoding DVDs for Sony's PSP."
What I want to see. (Score:5, Interesting)
Re:This would be great for MythTV.. Linux support? (Score:5, Interesting)
Re:I'm rarely impressed... (Score:5, Interesting)
But is it worth it? (Score:3, Interesting)
With nVIDIA's 512MB implementation of the G70 core touted to run at a 550MHz core clock, it should theoretically thrash the living daylights out of the X1800XT.
http://theinquirer.net/?article=27400 [theinquirer.net]
So the decision is between Avivo's H.264 encode and transcode abilities, or the superior performance of nVIDIA's offering?
GPU or CPU? (Score:3, Interesting)
Video cards with GPUs used to be a "cheap" way to increase the graphics processing power of your computer by adding a chip whose sole purpose was to process graphics (and geometry, with the advent of 3D accelerators).
Now that GPUs are becoming more and more programmable, and more and more general-purpose, what, really, is the difference between a GPU and a standard CPU? What is the benefit to having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?
Yawn... (Score:2, Interesting)
An Implementation of a FIR Filter on a GPU [sunysb.edu]
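The linked paper maps a finite impulse response (FIR) filter onto the GPU; the underlying computation is just a sliding dot product of the input against the filter taps, and each output sample is independent, which is what makes it parallelize so well. A minimal CPU-side sketch in Python (illustrative only, not the paper's implementation):

```python
# Direct-form FIR filter: y[n] = sum over k of h[k] * x[n-k].
# Each output sample depends only on the input, not on other outputs,
# so a GPU can compute all of them in parallel.

def fir(x, h):
    """Convolve input samples x with filter taps h (valid region only)."""
    taps = len(h)
    return [sum(h[k] * x[n - k] for k in range(taps))
            for n in range(taps - 1, len(x))]

# A 2-tap moving-average filter smooths adjacent samples.
print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))  # -> [1.5, 2.5, 3.5]
```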
Re:Already available.. (Score:1, Interesting)
I got the idea when I saw the work done on the saarcor [google.ca] hardware realtime raytracing architecture. They tested their work using FPGAs.
But I'd rather have it the other way around! (Score:2, Interesting)
So, what about it, ATI? Or will this be an NVIDIA innovation?
Re:Yawn... (Score:3, Interesting)
http://openvidia.sourceforge.net/ [sourceforge.net]
Re:Already available.. (Score:3, Interesting)
Your clock advantage is about 10x [say]; that is, a typical 400MHz PPC vs. a 40MHz FPGA.
Though the more common use for an FPGA aside from co-processing is just to make a flexible interface to hardware. E.g. want something to drive your USB, LCD and other periphs without paying to go to ASIC? Drop an FPGA in the thing. I assure you controlling a USB or LCD device is much more efficient in an FPGA than in software on a PPC.
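The parent's point can be made with back-of-the-envelope arithmetic: the CPU's 10x clock advantage is erased as soon as the FPGA does more than ten operations per cycle in parallel. A hypothetical sketch (the 64-datapath figure is illustrative, not a measured design):

```python
# Hypothetical throughput comparison: a 400 MHz PPC retiring one
# operation per cycle vs. a 40 MHz FPGA with 64 parallel datapaths.
ppc_hz, ppc_ops_per_cycle = 400e6, 1
fpga_hz, fpga_ops_per_cycle = 40e6, 64

ppc_throughput = ppc_hz * ppc_ops_per_cycle     # 4.0e8 ops/s
fpga_throughput = fpga_hz * fpga_ops_per_cycle  # 2.56e9 ops/s

print(fpga_throughput / ppc_throughput)  # -> 6.4
```

Despite the 10x slower clock, the parallel design comes out 6.4x ahead, and real FPGA pipelines can be far wider than 64 datapaths.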
Tom
Re:This would be great for MythTV.. Linux support? (Score:5, Interesting)
i would actually be shocked if there weren't linux support. the ability to do what they want only needs to be in the drivers. i've been doing a gpgpu feasibility study as an internship and did an mpi video compressor (based on ffmpeg) in school. using a gpu for compression/transcoding is a project i was thinking of starting once i finally had some free time, since a gpu seems built for it. something like 24 instances running at once at a ridiculous amount of flops (puts a lot of cpus to shame, actually). if you have a simd project with 4D or under vectors, this is the way to go.
like i said, it really depends on the drivers. as long as they support some of the latest opengl extensions, you're good to go. languages like Cg [nvidia.com] and BrookGPU [stanford.edu], as well as other shader languages, are cross-platform. they can also be used with directx, but fuck that. i prefer Cg, but ymmv. actually, the project might not be that hard, it just needs enough people porting the algorithms to something like Cg.
that said, don't expect this to be great unless your video card is pci-express. the agp bus is heavily asymmetric, favoring data going out to the gpu. as more people start getting the fatter, more symmetric pipes of pci-e, look for more gpgpu projects to take off.
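A concrete example of the kind of "4D or under" SIMD work the parent describes is per-pixel colorspace conversion, one stage of any transcode pipeline. On a GPU, each pixel would be handled by its own shader instance; here is a scalar Python sketch of the same kernel (BT.601 luma coefficients; illustrative, not ffmpeg's actual code):

```python
# RGB -> Y (luma) conversion: the per-pixel kernel a fragment shader
# would run over the whole frame in parallel. BT.601 coefficients.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Applying the kernel across a frame is an embarrassingly parallel map,
# which is exactly the shape of work GPUs are built for.
frame = [(255, 255, 255), (0, 0, 0)]       # a white and a black pixel
print([round(luma(*px)) for px in frame])  # -> [255, 0]
```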
Render farms (Score:1, Interesting)
Pixar is using Intel boxes. Since Pixar writes its own code, wouldn't it be better to write code into Renderman to shift the workload to multiple GPUs in each box in the renderfarm?
Just a thought...
funny about memory comments (Score:3, Interesting)
"This is, after all, one of the fastest CPUs money can buy, paired with very fast RAM."
"1 GB of very low latency RAM"
After the other review [techreport.com] posted today [slashdot.org] about fast memory doing almost nothing for transcoding:
"moving to tighter memory timings or a more aggressive command rate generally didn't improve performance by more than a few percentage points, if at all, in our tests."
"Mozilla does show a difference between the settings, both on its own and when paired with Windows Media Encoder. Still, the differences in performance between 2-2-2-5 and 2.5-4-4-8 timings, and between the 1T and 2T command rates, are only a couple of percentage points."
DMCA? (Score:1, Interesting)
Re:There's a CPU in my keyboard too... (Score:5, Interesting)
Re:lessons of "array processors" from 1980s (Score:4, Interesting)
A GPU is, effectively, a very wide vector unit (1024 bits is not uncommon). What happens when CPUs all include 2048-bit general-purpose vector units? What happens when they include a couple on each core in a 128-core package? Sure, a dedicated GPU will still be faster, but it won't be enough faster that people will care. For comparison, take a look at Chromium. Chromium is a software OpenGL implementation that runs on clusters. Even with relatively small clusters, it can compete fairly well with modern GPUs; now imagine what will happen when every machine has a few dozen cores in its CPU.
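For a sense of scale in the parent's hypothetical: a 1024-bit vector unit holds 32 single-precision floats per register, and 128 cores with two 2048-bit units each would have thousands of float lanes in flight per cycle. A quick sketch of that arithmetic (the core and unit counts are the parent's hypotheticals, not shipping hardware):

```python
# Lanes per vector register at single precision (32 bits per float).
floats_per_1024 = 1024 // 32   # 32 lanes, the GPU-like width cited
floats_per_2048 = 2048 // 32   # 64 lanes, the hypothetical CPU width

# Parent's hypothetical: 128 cores, two 2048-bit units per core.
total_lanes = 128 * 2 * floats_per_2048
print(floats_per_1024, total_lanes)  # -> 32 16384
```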
Re:GPU or CPU? (Score:2, Interesting)
In the meantime... (Score:3, Interesting)
Re:Apple does this now. (Score:3, Interesting)
The idea behind using your GPU in this case is even more far-reaching. While using a GPU for any visual effect is fairly logical... what about SETI@Home? What about Folding? What about running kalc?
See the difference?