
Transcoding in 1/5 the Time with Help from the GPU

mikemuch writes "ExtremeTech's Jason Cross got a lead about a technology ATI is developing called Avivo Transcode that will use ATI graphics cards to cut down the time it takes to transcode video by a factor of five. It's part of the general-purpose computation on GPU movement. The Avivo Transcode software can only work with ATI's latest 1000-series GPUs, and the company is working on profiles that will allow, for example, transcoding DVDs for Sony's PSP."

  • What I want to see. (Score:5, Interesting)

    by Anonymous Coward on Wednesday November 02, 2005 @01:33PM (#13933514)
    Maybe others have had this idea. Maybe it's too expensive or just not practical. Imagine using PCI cards with a handful of FPGAs on board to provide reconfigurable heavy number crunching abilities to specific applications. Processes designed to use them will use one or more FPGAs if they are available, else they'll fall back to using the main CPU in "software mode."
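    A minimal Python sketch of that fallback idea, with purely hypothetical names (nothing here calls a real FPGA toolkit):

        # Prefer an accelerator board if one is present; otherwise run the same
        # job on the main CPU in "software mode", as the comment describes.
        def crunch_cpu(data):
            return [x * x for x in data]        # stand-in for the heavy number crunching

        def detect_accelerator():
            # A real implementation would probe for an FPGA/GPU board and hand
            # back a handle to it; here we simply report that none is installed.
            return None

        def crunch(data):
            accel = detect_accelerator()
            if accel is not None:
                return accel.run(data)          # offload to the reconfigurable hardware
            return crunch_cpu(data)             # software fallback

        print(crunch([1, 2, 3]))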
  • by ceoyoyo ( 59147 ) on Wednesday November 02, 2005 @01:35PM (#13933533)
    This should be written in Shader Language (or whatever it's called these days), which is portable between cards. There's no reason NOT to release this on any platform. Since it only runs on the latest ATI cards, it probably uses some feature that nVidia will have in its next batch of cards as well. If ATI doesn't release it for Linux and the Mac, hopefully it won't be that difficult to duplicate their efforts. After all, shader programs are uploaded to the video driver as plain text.... ;)
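    The "plain text" point is easy to picture: the program a GPGPU app hands to the driver really is just a string. A GLSL-style fragment shader held in a Python string (illustrative only, not ATI's actual code):

        # A trivial per-pixel brightness scale, written as GLSL 1.x source text.
        # Whichever vendor's driver receives this string is the one that compiles it.
        FRAGMENT_SHADER_SRC = """
        uniform sampler2D frame;
        uniform float gain;
        void main() {
            vec4 texel = texture2D(frame, gl_TexCoord[0].st);
            gl_FragColor = vec4(texel.rgb * gain, texel.a);
        }
        """
        print(FRAGMENT_SHADER_SRC)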
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday November 02, 2005 @01:37PM (#13933544) Homepage Journal
    I'd like to see it, but I wonder what the quality is going to be like compared to the best current encoders. I mean, you can already see a big difference between CinemaCraft and j. random MPEG-2 encoder...
  • But is it worth it? (Score:3, Interesting)

    by Anonymous Coward on Wednesday November 02, 2005 @01:37PM (#13933545)
    The X1800XT ties almost exactly with the 7800GTX at its stock 430 MHz core in most gaming benchmarks.

    With nVIDIA's 512 MB implementation of the G70 core touted to run at a 550 MHz core clock, it should theoretically thrash the living daylights out of the X1800XT.

    http://theinquirer.net/?article=27400 [theinquirer.net]

    So is the decision between Avivo's encode and transcode abilities for H.264 and superior performance from nVIDIA's offering?
  • GPU or CPU? (Score:3, Interesting)

    by The Bubble ( 827153 ) on Wednesday November 02, 2005 @01:58PM (#13933712) Homepage

    Video cards with GPUs used to be a "cheap" way to increase the graphics processing power of your computer by adding a chip whose sole purpose was to process graphics (and geometry, with the advent of 3D accelerators).

    Now that GPUs are becoming more and more programmable, and more and more general-purpose, what, really, is the difference between a GPU and a standard CPU? What is the benefit of having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?

  • Yawn... (Score:2, Interesting)

    by benjamindees ( 441808 ) on Wednesday November 02, 2005 @02:04PM (#13933751) Homepage
    nVidia has been doing this for a while now. In fact, interesting implementations are finally appearing, like GNU software radio [gnu.org] on GPUs:

    An Implementation of a FIR Filter on a GPU [sunysb.edu]
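    Since every FIR output sample is an independent dot product over a small input window, the mapping to a GPU is easy to see; a NumPy sketch of the same math on the CPU (the coefficients are made-up examples):

        import numpy as np

        # y[n] = sum_k h[k] * x[n-k]: each output depends only on a short window
        # of input, so a fragment program can compute one output per "pixel".
        taps = np.array([0.25, 0.5, 0.25], dtype=np.float32)   # assumed example filter
        signal = np.random.randn(4096).astype(np.float32)

        filtered = np.convolve(signal, taps, mode="valid")
        print(filtered.shape)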
  • by Anonymous Coward on Wednesday November 02, 2005 @02:09PM (#13933814)
    Yeah, FPGAs are indeed slower than ASICs. How long would one stay useful? I was imagining that if it ever got popular (like in every gamer's computer), they'd be upgradeable like video cards and CPUs, where every year the technology gets better and frequencies go up.

    I got the idea when I saw the work done on the saarcor [google.ca] hardware realtime raytracing architecture. They tested their work using FPGAs.
  • by Macguyvok ( 716408 ) on Wednesday November 02, 2005 @02:12PM (#13933850)
    I'd rather see GPUs offloading their work to the system CPU. There's no *good* way to do this. So, why not run this in reverse? If it's possible to speed up general processing, why can't they speed up graphics processing? Especially since my CPU hardly does anything when I'm playing a game; it has to wait on the graphics card.

    So, what about it, ATI? Or will this be an NVIDIA innovation?
  • Re:Yawn... (Score:3, Interesting)

    by ehovland ( 2915 ) * on Wednesday November 02, 2005 @02:18PM (#13933902) Homepage
    To see the latest generation of this work, check out their sourceforge page:
    http://openvidia.sourceforge.net/ [sourceforge.net]
  • by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Wednesday November 02, 2005 @02:21PM (#13933932) Homepage
    FPGAs aren't always slower than what you can do in silicon. AES [sorry, I have a crypto background] takes 1 cycle per round in most designs. You can probably clock it around 30-40 MHz if your interface isn't too stupid. AES on a PPC probably takes about the same time as on a MIPS, which is about 1000-1200 cycles.

    Your clock advantage is about 10x [say], that is, a typical 400 MHz PPC vs. a 40 MHz FPGA... so that 1000 cycles is 100 FPGA cycles. But an AES block takes 11 FPGA cycles [plus load/unload time], so say about 16 cycles. Discounting bus activity [which would affect your software AES anyway], you're still ahead by ~80 FPGA cycles [800 PPC cycles].

    Though the more common use for an FPGA aside from co-processing is just to make a flexible interface to hardware. E.g. want something to drive your USB, LCD and other periphs without paying to go to ASIC? Drop an FPGA in the thing. I assure you controlling a USB or LCD device is much more efficient in an FPGA than in software on a PPC.

    Tom
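    Turning those cycle counts into wall-clock time per AES block (the numbers are the ones quoted in the comment, so treat them as rough):

        # ~1000-1200 cycles on a 400 MHz PPC vs. ~16 FPGA cycles at 40 MHz.
        ppc_cycles, ppc_hz = 1100, 400e6
        fpga_cycles, fpga_hz = 16, 40e6

        print("PPC : %.2f us per block" % (ppc_cycles / ppc_hz * 1e6))    # ~2.75 us
        print("FPGA: %.2f us per block" % (fpga_cycles / fpga_hz * 1e6))  # ~0.40 us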
  • by thatshortkid ( 808634 ) * on Wednesday November 02, 2005 @03:05PM (#13934390)
    wow, for once there's a slashdot article i have insight on! (whether it's modded that way remains to be seen.... ;) )

    i would actually be shocked if there weren't linux support. the ability to do what they want only needs to be in the drivers. i've been doing a gpgpu feasibility study as an internship and did an mpi video compressor (based on ffmpeg) in school. using a gpu for compression/transcoding is a project i was thinking of starting once i finally had some free time, since it seems built for it. something like 24 instances running at once at a ridiculous amount of flops (puts a lot of cpus to shame, actually). if you have a simd project with 4D or under vectors, this is the way to go.

    like i said, it really depends on the drivers. as long as they support some of the latest opengl extensions, you're good to go. languages like Cg [nvidia.com] and BrookGPU [stanford.edu], as well as other shader languages, are cross-platform. they can also be used with directx, but fuck that. i prefer Cg, but ymmv. actually, the project might not be that hard, it just needs enough people porting the algorithms to something like Cg.

    that said, don't expect this to be great unless your video card is pci-express. the agp bus is heavily asymmetric towards data going out to the gpu. as more people start getting the fatter, more symmetric pipes of pci-e, look for more gpgpu projects to take off.
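    One concrete example of the "4D or under" SIMD work a transcoder is full of: per-pixel RGB to Y'CbCr conversion. A NumPy sketch with BT.601 coefficients (the frame size is just an example):

        import numpy as np

        # One short, fixed kernel applied independently to every pixel: exactly
        # the shape of work a fragment shader runs across a whole frame.
        rgb = np.random.rand(1200, 1600, 3).astype(np.float32)    # example 1600x1200 frame

        bt601 = np.array([[ 0.299,  0.587,  0.114],   # Y'
                          [-0.169, -0.331,  0.500],   # Cb
                          [ 0.500, -0.419, -0.081]],  # Cr
                         dtype=np.float32)

        ycbcr = rgb @ bt601.T
        ycbcr[..., 1:] += 0.5        # centre the chroma channels around 0.5
        print(ycbcr.shape)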
  • Render farms (Score:1, Interesting)

    by Anonymous Coward on Wednesday November 02, 2005 @03:16PM (#13934495)
    If GPUs are more optimized for graphics, why can't render farms use more GPUs rather than more CPUs?

    Pixar is using Intel boxes. Since Pixar writes its own code, wouldn't it be better to write code into RenderMan to shift the workload to multiple GPUs in each box in the render farm?

    Just a thought...

  • by iamhassi ( 659463 ) on Wednesday November 02, 2005 @03:19PM (#13934515) Journal
    It's funny to read the article and see them brag about the "very fast RAM":
    "This is, after all, one of the fastest CPUs money can buy, paired with very fast RAM."
    "1 GB of very low latency RAM"

    After the other review [techreport.com] posted today [slashdot.org] about fast memory doing almost nothing for transcoding:
    "moving to tighter memory timings or a more aggressive command rate generally didn't improve performance by more than a few percentage points, if at all, in our tests."
    "Mozilla does show a difference between the settings, both on its own and when paired with Windows Media Encoder. Still, the differences in performance between 2-2-2-5 and 2.5-4-4-8 timings, and between the 1T and 2T command rates, are only a couple of percentage points."

  • DMCA? (Score:1, Interesting)

    by VisceralLogic ( 911294 ) <paul@visceral[ ]ic.com ['log' in gap]> on Wednesday November 02, 2005 @03:41PM (#13934701) Homepage
    Is this even going to be legal in a couple years?
  • by Saffaya ( 702234 ) on Wednesday November 02, 2005 @04:34PM (#13935172)
    Though I am sure you wrote that as a pure joke, this has already been done long ago. During the fierce competition on the demo scene between the ATARI ST and the Amiga, crews were exploiting every speck of power they could from their machine. The ATARI ST being a general-purpose machine compared to the Amiga (which had very advanced custom sound and graphics processors), the programmers who wanted to pull off the same graphical effects went as far as using the processor managing the keyboard (an 8-bit Motorola 68xx chip) for added computational power.
  • by TheRaven64 ( 641858 ) on Wednesday November 02, 2005 @04:58PM (#13935371) Journal
    A lot of the improvements in CPU performance recently have come from vector units. On OS X, things like the AAC encoder make heavy use of AltiVec - to the degree that ripping CDs on my PowerBook is limited by the speed of the CD drive, not the CPU.

    A GPU is, effectively, a very wide vector unit (1024 bits is not uncommon). What happens when CPUs all include 2048-bit general-purpose vector units? What happens when they include a couple on each core in a 128-core package? Sure, a dedicated GPU will still be faster, but it won't be enough faster that people will care. For comparison, take a look at Chromium. Chromium is a software OpenGL implementation that runs on clusters. Even with relatively small clusters, it can compete fairly well with modern GPUs; now imagine what will happen when every machine has a few dozen cores in its CPU.
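    To put numbers on "very wide": the float count per vector instruction scales directly with register width, which is the whole argument here. A quick calculation:

        # Single-precision floats touched by one vector instruction at various widths.
        for bits in (128, 256, 1024, 2048):
            print("%4d-bit vector unit -> %2d floats per instruction" % (bits, bits // 32))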

  • Re:GPU or CPU? (Score:2, Interesting)

    by LaPoderosa ( 908833 ) on Wednesday November 02, 2005 @05:01PM (#13935400)
    "In a few years, there will be no real benefit to the GPU" Nonsense - we're actually going in the other direction, we need more general purpose massively parallel processing units to go beyond current hardware limitations. Dual CPUs do not come close to the level of parallelism we have on GPUs. Rendering a 1600x1200 4X AA scene with full filtering on a top tier dual core system would yield perhaps 1fps with an optimized software path. That gives you an idea of the order of magnitude you gain in performance with parallelizing these tasks on the GPU. "[GPUs] need data structures and pointers mixed with fast math - preferably double precision. You'll end up wanting a MMU" Nonsense. GPUs already do everything you need for raytracing [sourceforge.net]. There are demos on the internet. Raytracing is ideally suited to GPUs - there's so much you can parallelize. "Actually at that point it makes a lot of sense to move to raytracing " Nonsense. You're off by orders of magnitude [slashdot.org]. Maybe they just haven't seen your fast code... *rolls eyes*
  • In the meantime... (Score:3, Interesting)

    by Happy Monkey ( 183927 ) on Wednesday November 02, 2005 @05:05PM (#13935437) Homepage
    Does anyone have any transcoding software recommendations? Nero for some reason keeps losing audio sync after a few minutes of video.
  • by GweeDo ( 127172 ) on Wednesday November 02, 2005 @05:11PM (#13935480) Homepage
    Well, sorta. Core Image is for video effects in real time, like window transitions, transparency, shadows, blah blah blah.

    The idea behind using your GPU in this case is even more far-reaching. While using a GPU for any visual effect is fairly logical... what about SETI@Home? What about Folding? What about running kalc? :)

    See the difference?

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...