Transcoding in 1/5 the Time with Help from the GPU

mikemuch writes "ExtremeTech's Jason Cross got a lead about a technology ATI is developing called Avivo Transcode that will use ATI graphics cards to cut down the time it takes to transcode video by a factor of five. It's part of the general-purpose computation on GPU movement. The Avivo Transcode software can only work with ATI's latest 1000-series GPUs, and the company is working on profiles that will allow, for example, transcoding DVDs for Sony's PSP."
  • by tji ( 74570 ) on Wednesday November 02, 2005 @01:28PM (#13933466)

    My educated guess is, No, there won't be Linux support..

    ATI was the leader in MPEG2 acceleration, enabling iDCT+MC offload to their video processor almost 10 years ago. How'd that go in terms of Linux support, you ask? Well, we're still waiting for that to be enabled in Linux.

    Nvidia and S3/VIA/Unichrome have drivers that support XvMC, but ATI is notably absent from the game they created. So, I won't hold my breath on Linux support for this very cool feature.
    • Hasn't nVidia been talking about using the GPU for video acceleration since the GeForce 5 came out? I don't understand why this isn't already available...
    • by ceoyoyo ( 59147 ) on Wednesday November 02, 2005 @01:35PM (#13933533)
      This should be written in Shader Language (or whatever it's called these days) which is portable between cards. There's no reason NOT to release this on any platform. Since it only runs on the latest ATI cards it probably uses some feature that nVidia will have in its next batch of cards as well. If ATI doesn't release it for Linux and the Mac hopefully it won't be that difficult to duplicate their efforts. After all, shader programs are uploaded to the video driver as plain text.... ;)
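      As a rough illustration of that last point, a fragment shader really is handed to the driver as a string; a minimal sketch using the standard OpenGL 2.0 calls (the luma-extraction shader here is only an example, and error checking is omitted):

        #include <GL/glew.h>   // assumes a GL 2.0 context is already current

        static const char* kShaderText =
            "uniform sampler2D frame;\n"
            "void main() {\n"
            "    vec4 c = texture2D(frame, gl_TexCoord[0].st);\n"
            "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n"
            "    gl_FragColor = vec4(y, y, y, 1.0);\n"
            "}\n";

        GLuint buildShader() {
            GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fs, 1, &kShaderText, NULL);  // the "program" is literally this text
            glCompileShader(fs);   // each vendor's driver compiles it for its own hardware
            return fs;
        }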
    • GPU Stream programming can be done with Brook http://graphics.stanford.edu/projects/brookgpu/ [stanford.edu]. Brook supports the nVidia series, so that is what you purchase.

      Pick up a 5200FX card (for SVIDEO/DVI output) and then use the GPU to do audio and video transcode. I have been thinking about audio (MP3) transcode as a first "trial" application.

      "Heftier" GPUs may be used to assist in video transcode -- but it strikes me that the choice of stream programming system is most important (to allow code to move to other GP
    • by thatshortkid ( 808634 ) * on Wednesday November 02, 2005 @03:05PM (#13934390)
      wow, for once there's a slashdot article i have insight on! (whether it's modded that way remains to be seen.... ;) )

      i would actually be shocked if there weren't linux support. the ability to do what they want only needs to be in the drivers. i've been doing a gpgpu feasibility study as an internship and did an mpi video compressor (based on ffmpeg) in school. using a gpu for compression/transcoding is a project i was thinking of starting once i finally had some free time, since it seems built for it. something like 24 instances running at once at a ridiculous amount of flops (puts a lot of cpus to shame, actually). if you have a simd project with 4D or under vectors, this is the way to go.

      like i said, it really depends on the drivers. as long as they support some of the latest opengl extensions, you're good to go. languages like Cg [nvidia.com] and BrookGPU [stanford.edu], as well as other shader languages, are cross-platform. they can also be used with directx, but fuck that. i prefer Cg, but ymmv. actually, the project might not be that hard, it just needs enough people porting the algorithms to something like Cg.

      that said, don't expect this to be great unless your video card is pci-express. the agp bus is heavily asymmetric, favoring data going out to the gpu over data coming back. as more people start getting the fatter, more symmetric pipes of pci-e, look for more gpgpu projects to take off.
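      For reference, the host side of such a pass has roughly the shape below; the texture upload is cheap either way, but the readback at the end is the direction AGP starves, which is the point about PCI-E above (the OpenGL calls are standard, the shader/draw step is left as a placeholder):

        #include <GL/glew.h>   // assumes a GL context is current
        #include <cstddef>
        #include <vector>

        // One frame through the GPU: upload, (shader pass not shown), read back.
        void gpu_pass(const std::vector<unsigned char>& in, int w, int h,
                      std::vector<unsigned char>& out) {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA,
                         GL_UNSIGNED_BYTE, &in[0]);            // CPU -> GPU: fine on AGP

            // ... bind a fragment program and draw a full-screen quad here ...

            out.resize(std::size_t(w) * h * 4);
            glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, &out[0]);  // GPU -> CPU: the
                                                                           // slow direction on AGP
            glDeleteTextures(1, &tex);
        }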
  • by Anonymous Coward on Wednesday November 02, 2005 @01:31PM (#13933494)
    I wonder if http://www.gpgpu.org/ [gpgpu.org] could offload some of the Slashdot effect to their GPU?
    • I heard there's a startup who have just announced a slashdot coprocessor board - it automatically searches for and downloads slashdot articles you might be interested in reading - unfortunately, it never stops and completely hogs your bandwidth connection, even with a 1 Terabit connection.
  • What I want to see. (Score:5, Interesting)

    by Anonymous Coward on Wednesday November 02, 2005 @01:33PM (#13933514)
    Maybe others have had this idea. Maybe it's too expensive or just not practical. Imagine using PCI cards with a handful of FPGAs on board to provide reconfigurable heavy number crunching abilities to specific applications. Processes designed to use them will use one or more FPGAs if they are available, else they'll fall back to using the main CPU in "software mode."
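    For what it's worth, the "use the FPGA if it's there, else software mode" dispatch could look something like the sketch below in application code (the FpgaCruncher class and its probe() are made up purely to show the shape, not any real board's API):

      #include <memory>
      #include <vector>

      struct Cruncher {                                        // one interface, two back ends
          virtual ~Cruncher() {}
          virtual void process(std::vector<float>& block) = 0;
      };

      struct CpuCruncher : Cruncher {                          // the "software mode" fallback
          void process(std::vector<float>& block) override { /* plain loops on the host CPU */ }
      };

      struct FpgaCruncher : Cruncher {                         // hypothetical PCI FPGA board
          static std::unique_ptr<FpgaCruncher> probe() {       // would ask the driver; none here
              return std::unique_ptr<FpgaCruncher>();
          }
          void process(std::vector<float>& block) override { /* DMA the block to the card */ }
      };

      std::unique_ptr<Cruncher> make_cruncher() {
          std::unique_ptr<FpgaCruncher> fpga = FpgaCruncher::probe();
          if (fpga)
              return std::move(fpga);                          // hardware path
          return std::unique_ptr<Cruncher>(new CpuCruncher()); // software path
      }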
    • That's a really cool idea. I've had ideas that were along that line, but never quite made it through the thought process to what you are suggesting. It's like having an external floating-point processor, but extremely general-purpose and reconfigurable. That'd be a great component to have on one of the new PCI-Express boards, those have tons of available bandwidth that you could use up if what you were processing required lots of I/O, even on the 1x slots.

      -Jesse
      • This already exists.
        One such company is Cyclone Microsystems. They offer i960 coprocessor based systems.
        I don't remember the other vendor I looked at, but they offered a Xilinx FPGA solution or a TI DSP solution.
        -nB
    • by LWATCDR ( 28044 )
      I have seen a combo FPGA/PPC chip for embedded applications. The issue I see with this is how long would it be useful? FPGAs are slower than ASICs. Something like the Cell or a GPU will probably be faster than an FPGA. There are a few companies looking at "reconfigurable" computers, but so far I haven't heard about any products from them.
      • FPGAs aren't always slower than what you can do in silicon. AES [sorry I have a crypto background] takes 1 cycle per round in most designs. You can probably clock it around 30-40Mhz if your interface isn't too stupid. AES on a PPC probably takes the same time as a MIPs which is about 1000-1200 cycles.

        Your clock advantage is about 10x [say], that is, a typical 400MHz PPC vs. a 40MHz FPGA ... so that 1000 cycles is 100 FPGA cycles. But an AES block takes 11 FPGA cycles [plus load/unload time] so say about 16 cy
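        Working those ballpark figures through (they are the estimates above, not measurements), the FPGA still comes out several times ahead despite the 10x clock deficit:

          #include <cstdio>

          int main() {
              double ppc_us  = 1100.0 / 400e6 * 1e6;  // ~1100 cycles/block at 400MHz -> ~2.75 us
              double fpga_us =   16.0 /  40e6 * 1e6;  // ~16 cycles/block at 40MHz    -> ~0.40 us
              std::printf("PPC %.2f us vs FPGA %.2f us: about %.0fx in the FPGA's favour\n",
                          ppc_us, fpga_us, ppc_us / fpga_us);  // roughly 7x
              return 0;
          }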
        • Ummm... Comparing a general purpose CPU to an FPGA is a bit odd. The grandparent post was talking about ASICs vs. FPGAs. An ASIC can implement exactly the same structure as an FPGA, so it can work just as efficiently, but an ASIC can be made to clock higher than an FPGA. Somebody mod the parent post "non sequitur."

          • The original post never mentioned ASICs that I saw. But in any case ASIC vs FPGA isn't all that relevant to the article, whereas FPGA vs Generic CPU is very relevant (and isn't at all odd). As the post you replied to said, if you do the math and it appears you can offload an operation you would normally do on your general purpose CPU to an FPGA and get the results back sooner than you could have calculated it on the CPU, it's a win (hell even if the net times are identical, you've freed up some general pu
            • Photon317 writes:

              The original post never mentioned ASICs that I saw.

              Ummm... Okay, here is a quote from the original post again... by LWATCDR:

              I have seen a combo FPGA/PPC chip for embedded applications. The issue I see with this is how long would it be useful? FPGAs are slower then ASICs.

              And then a quote from tomstdenis:

              FPGAs aren't always slower than what you can do in silicon.

              tom then goes on to talk about PPC versus FPGA's, as if LWATCDR weren't talking about ASICs. Since this conversation now ivolv

        • There's another problem with general-purpose FPGAs.
          (order of magnitude comparison only):
          Athlon 64 4000 (from pricewatch): $330
          Xilinx 2.4Million gate design (from digikey): $2100-$5000.

          The computing world would look a lot different if there were good $100 high-speed, high-capacity FPGAs. Now, I wouldn't argue with a good ASIC or highspeed DSP implementation for some algorithms...
          • You don't need a 2.4Mgate FPGA to host an AES core, or USB controller, or ...

            2.4Mgates can be quite a bit [depending on the measure of "gates"]. The typical AES-CCM core in Virtex gates is ~30k or so.

            Tom
          • There's another problem with general-purpose FPGAs. (order of magnitude comparison only): Athlon 64 4000 (from pricewatch): $330 Xilinx 2.4Million gate design (from digikey): $2100-$5000.

            You haven't specified which FPGAs you're talking about, but at those prices, you should be getting more like 6 million gates or so (e.g. an XC2V6000 goes for about $4000). Perhaps you're looking at something like a Virtex-4 FX? If so, you should be aware that what you're looking at not only includes FPGA gates, but al

      • >Florida Power and lights SUCKS 8 days without power and counting!

        No, they rule. I saw them working in heavy rain to get my feeder back on. It came back later that night. They could have easily postponed the job until the next day, but they did it.

        They have a lot of work on their plate; relax, they'll get to you.

        -Z
    • Well there was that joke about the SETI processing card [ 1 [slashdot.org] ] [ http://web.archive.org/web/20010413215232/http://www.krasnoconv.com/index.html [archive.org] [fn1] ], and now there is a company building the general purpose Physics card for games (I wonder what else it would work on?), so taking this to the next step by having a card filled with FPGAs or the like isn't all that new of an idea.
      Seeing someone make some money off of it would be.
      [fn1] - Bug in the HTML format posting ability - /. doesn't like two http:/ [http]
    • This might work, but the question to ask is whether it would really be faster. FPGAs are usually a lot slower than ASICs, as another replier pointed out. One FPGA emulation that I saw didn't even run half as fast (in terms of compute time for a task) as the actual ASIC. And if the FPGA becomes the critical path in your processing, it had better be fast (or at least faster than your CPU).

      So I think that this would only work if a general purpose CPU (or GPU, for that matter) has a serious architectural wea
    • I think there are a couple of issues preventing this:

      1) What does the API for this look like from the application perspective?
      2) Top-of-the-line (read: pricey) FPGAs are mostly in the 500MHz range right now, which is in the same range as a GPU. So unless a GPU doesn't solve the problem, why would you need this? GPUs have a design that solves #1.
    • Unfortunately, there are a few problems with this scenario in practice that prevent it from becoming widespread. I worked on optimizations with VHDL destined for FPGA's in a prior life.

      - Tools: FPGA tools are getting better, but still suck compared to modern IDEs and software development. This might be me being jaded (VHDL can get nasty); things like SystemC are still in their infancy, so there's a long way to go here.

      - Synthesis time: It can take DAYS on a very fast machine to run the synthesis that
    • Ding!

      http://www.tarari.com/products-cp.html [tarari.com]

      They were in Startup Alley a few years ago at N+I demoing their cards doing Perl regex's and spam checking.
    • I used to play with this idea 4-5 years ago. A small team was going to look into building FPGA PCI boards that could be used with http://www.distributed.net/ [distributed.net] to help crack DES/RC5/*insert-your-choice-encryption-here*.
  • by HotNeedleOfInquiry ( 598897 ) on Wednesday November 02, 2005 @01:35PM (#13933529)
    It's hard to get excited with tech stuff these days, but this is awesome. A very clever use of technology just sitting in your computer and a huge timesaver. Anyone that does any transcoding will have immediate justification for laying out bucks for a premium video card.
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday November 02, 2005 @01:37PM (#13933544) Homepage Journal
      I'd like to see it but I wonder what the quality is going to be like as compared to the best current encoders. I mean you can already see a big difference between cinema craft and j. random mpeg2 encoder...
      • by Dr. Spork ( 142693 ) on Wednesday November 02, 2005 @02:04PM (#13933747)
        You don't get it. ATI is not releasing a new encoder. The test used standard codecs, which do the very same work when assisted by the GPU, only 5X faster.
        • by no_such_user ( 196771 ) <jd-slashdot-2007 ... ay.com minus bsd> on Wednesday November 02, 2005 @02:22PM (#13933946)
          It looks like they're using their own codec to produce MPEG-2 and MPEG-4 material. How would you get an existing, x86-only aware application to utilize the GPU, which is not x86 instruction compatible? It's a good bet that codecs will be rewritten to utilize the GPU once code becomes available from ATI, nVidia, etc.

          I'd actually be willing to spend more than $50 on a video card if more multimedia apps took advantage of the GPU's capabilities.
          • I thought from my first read of the article that they're using the standard codecs, but on second read-through, it appears that you're right. This leaves open the possibility that they have a really pared-down MPEG4 codec which produces really crappy results, quickly. That is not very impressive. What they need is to take an open-source codec like Xvid and "port" it to their hardware. Or better, they need to release the interface so that people can code for it. Yes, this is a lot less cool than I realized a
      • Honestly, I think CinemaCraft is a little overrated. Nothing wrong with it, but I generally get better results out of both Compressor 2 and ProCoder/Carbon. And yes, this is backed up by double-blind third party quality review - I've got an article about this coming out in DV Magazine in a few weeks.
    • Anyone that does any transcoding will have immediate justification for laying out bucks for a premium video card
      Hardly - I do a lot, but I wouldn't pay three hundred quid for this even though it is impressive...
  • But is it worth it? (Score:3, Interesting)

    by Anonymous Coward on Wednesday November 02, 2005 @01:37PM (#13933545)
    the X1800XT ties almost exactly with the 7800GTX @ stock of 430 core in most gaming benchmarks.

    with nVIDIA's 512MB implementation of the G70 core touted to be at 550MHz core, it should theoretically thrash the living daylights out of the X1800XT.

    http://theinquirer.net/?article=27400 [theinquirer.net]

    so the decision is between Avivo's encode and transcode abilities for h.264, or superior performance from nVIDIA's offering?
    • Well, if you can see the difference between 150fps and 200fps, and you don't mind waiting and don't care about spending an extra $200, you really should wait for the G70.

      I don't play the sort of games that need a graphics card over $200 to look good. I never even considered looking at the high end. However, this video encoding improvement will certainly make me do a double take. I was proud of my little CPU overclock that improves my encoding rate by 20%. But the article talks about improvements of over 500%!

      • Keep in mind (Score:3, Insightful)

        by Solr_Flare ( 844465 )
        That while few people will notice the difference between 150fps and 200fps, those numbers are more or less there to help you determine the lifespan of the card itself. While, for current games, both cards will perform extremely well, a 50fps difference means that on future games, the Nvidia card will be able to last longer and run with more graphics options enabled without bottoming out on fps.

        While a select few individuals still always buy the latest and the greatest, the majority of buyers look at vid
    • Well, I'm assuming that the hope is that support for encoding/decoding h264 will be put into hardware going forward (meaning it will find its way into low-end cards as well). I know encoding h264 is the longest, most processor intensive task I do with a computer these days, and a hardware solution that would drop any time off that task would be appreciated.
    • I think if you are a video professional (like me) and you've seen how obscenely slow rendering h.264 can be (which is an amazing codec), and you spend half your time waiting for rendering, then I think the answer is a profound YES, it is worth it (if it works).
    • Yeah, I'm also having a hard time deciding whether to buy a freight train or a convertible. They can both do similar things (transport stuff), so I should consider them both and compare them directly, right?
  • Crippled? (Score:5, Funny)

    by bigberk ( 547360 ) <bigberk@users.pc9.org> on Wednesday November 02, 2005 @01:39PM (#13933557)
    But will the outputs have to be certified by Hollywood or the media industry? You know, because the only reason for processing audio or video is to steal profits from Sony, BMG, Warner, ... and renegade hacker tactics like A/D conversion should be legislated back to the hell they came from
    • Why bother? If we force ATI and the other card creators to simply give themselves over to the MPAA companies, we're guaranteed that they'll never make something that can break the rules. For that matter, why don't we just let the MPAA run anything related to video, and the RIAA run anything related to audio? It'd be the perfect solution. We wouldn't have to worry about this kind of stuff, because we know they have our best interests at heart, and aren't remotely corrupt or greedy...
  • GPU or CPU? (Score:3, Interesting)

    by The Bubble ( 827153 ) on Wednesday November 02, 2005 @01:58PM (#13933712) Homepage

    Video cards with GPUs used to be a "cheap" way to increase the graphics processing power of your computer by adding a chip whose sole purpose was to process graphics (and geometry, with the advent of 3D accelerators).

    Now that GPUs are becoming more and more programmable, and more and more general-purpose, what, really, is the difference between a GPU and a standard CPU? What is the benefit to having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?

    • Re:GPU or CPU? (Score:3, Insightful)

      by gr8_phk ( 621180 )
      "what, really, is the difference between a GPU and a standard CPU? What is the benefit to having a 3d~acellerator over having a dual~CPU system with one CPU dedicated to graphic processing?"

      In a few years, there will be no real benefit to the GPU. Not too many people write optimized assembly level graphics code anymore, but it can be quite fast. Recall that Quake ran on a Pentium 90MHz with software rendering. It's only gotten better since then. A second core that most apps don't know how to take advanta

      • Re:GPU or CPU? (Score:2, Interesting)

        by LaPoderosa ( 908833 )
        "In a few years, there will be no real benefit to the GPU" Nonsense - we're actually going in the other direction, we need more general purpose massively parallel processing units to go beyond current hardware limitations. Dual CPUs do not come close to the level of parallelism we have on GPUs. Rendering a 1600x1200 4X AA scene with full filtering on a top tier dual core system would yield perhaps 1fps with an optimized software path. That gives you an idea of the order of magnitude you gain in performance
        • we need more general purpose massively parallel processing units to go beyond current hardware limitations.

          Agreed. The current hardware limitation is power dissipation for both CPU and GPU. Hence multi-core from AMD and Intel.

          Rendering a 1600x1200 4X AA scene with full filtering on a top tier dual core system would yield perhaps 1fps with an optimized software path.

          Speculation. Besides, not many people run at that resolution with FSAA except GPU fanboys.

          "You'll end up wanting a MMU" Nonsense.

          I've al

      • Which approach is going to be most effective but economical for rendering fields of grasses or detailed jungles? How about a snowstorm with snow that gets denser and fog like into the distance? Sand dunes that give way and slide underfoot? Water that breaks around objects and coats them in a wet sheen?
        • Re:GPU or CPU? (Score:4, Insightful)

          by gr8_phk ( 621180 ) on Wednesday November 02, 2005 @06:39PM (#13936304)
          "Which approach is going to be most effective but economical for rendering fields of grasses or detailed jungles? How about a snowstorm with snow that gets denser and fog like into the distance? Sand dunes that give way and slide underfoot? Water that breaks around objects and coats them in a wet sheen?"

          Most of that stuff can be done with OpenGL/DirectX or ray tracing. Grasses are sometimes done in OpenGL by instancing small clumps. In RT you'd use procedural geometry or instancing.

          For the snow, both renderers would probably use similar techniques.

          Sand dunes - either method needs an engine with deformable geometry - both can support that.

          Water simulation is something I don't know much about. For the FFT methods of simulating waves it's possible that a GPU has an advantage. Once it starts interacting with objects, I don't know how people handle that.

          Your questions all point toward vast detailed worlds with lots of polygons. RT scales better with scene complexity. To get more traditional methods to work well, you get into fancy culling techniques (HZB comes to mind), and RT starts to look simpler - because it is.

      • Re:GPU or CPU? (Score:5, Insightful)

        by SlayerDave ( 555409 ) <elddm1@g m a i l .com> on Wednesday November 02, 2005 @06:38PM (#13936288) Homepage
        You're hallucinating, buddy. Let me count the ways.

        1. On another note, as polygon counts skyrocket they approach single pixel size

        This is not happening. Not anywhere (except maybe production rendering). It is far too time-consuming, expensive, and labor-intensive to produce huge numbers of high-polygon-count models for games. Vertex pipes are currently under-utilized in most games and applications now. Efforts are underway to allow procedural geometry creation on the GPU to better fill the vertex pipe without requiring huge content creation efforts. See this paper [ati.com] for details.

        2. A second core that most apps don't know how to take advantage of will make this all the more obvious.

        This undercuts the argument you make in the next paragraph. Also, it's not true. Both the PS3 and XBOX 360 have multiple CPU cores. It's true that current-gen engines aren't optimized for this technology, but next-gen engines will be.

        3. multicore CPUs are nearing the point where full screen, real time ray tracing will be possible. GPUs will not stand a chance.

        This might be true, but so what? Ray tracing offers few advantages over the current-gen programmable pipeline. I can only think of 2 things that a ray-tracer can do that the programmable pipeline can't: multilevel reflections and refraction. BRDFs, soft shadows, self-shadowing, etc. can all be handled in the GPU these days. Now, you can get great results by coupling a ray-tracer with a global illumination system like photon mapping, but that technique is nowhere near real-time. Typical acceleration schemes for ray-tracing and photon mapping will not work well in dynamic environments, but the GPU could care less whether a polygon was somewhere else on the previous frame.

        Hate to break it to you, but the GPU is here to stay. Why? GPUs are specialized for processing 4-vectors, not single floats (or doubles) like the CPU + FPU. True, there are CPU extensions for this, such as SSE and 3DNOW, but typical CPUs have a single SSE processor, compared to a current-gen GPU with 8 vertex pipes and 24 pixel pipes. Finally, do you really want to burden your extra CPU with rendering when it could be handling physics or AI?

        • "1. On another note, as polygon counts skyrocket they approach single pixel size

          This is not happening."

          Because a GPU is too hard to program for recursively refining nurbs or doing subdivision surfaces on the fly.

          I'll pass on #2. I'm not sure what to say - future engines using multiple cores effectively is somewhat speculative at this point. But let's say you win, so it takes another generation to catch the GPU.

          "3. multicore CPUs are nearing the point where full screen, real time ray tracing will be po

    • GPUs are highly parallel, far moreso than a CPU. This makes them even more suited to vector operations than CPUs with SIMD.

      What I want to know is whether, given the new-found programmability of the GPU, more pressure will be applied for ATI and nVidia to open up the ISAs to their graphics chipsets.
    • GPU's are designed to do parallel bulk vector processing (which is why they can transcode faster than a CPU) but this also limits what kind of applications or tasks you can reasonably offload to the GPU.

      This means that the 'general purpose GPU' code isn't really going to be general purpose; it's going to be heavily vector-oriented. On the other side, the CPU is more general purpose, good at running many tasks and handling interrupts &co, so for this reason the CPU won't replace the GPU and the GPU won't re
    • What is the benefit to having a 3d~acellerator over having a dual~CPU system with one CPU dedicated to graphic processing?

      That depends on what you mean by the "one CPU dedicated to graphic processing." If you mean something on the order of a second Pentium or Athlon that's dedicated to graphics processing, the advantage is tremendous: a typical current CPU can only do a few floating point operations in parallel, where a GPU has lots of pipes to handle multiple pixels at a time (or multiple vertexes at
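      To put numbers on "a few floating point operations in parallel": a current CPU has essentially one SSE unit working on 4 floats per instruction, where a GPU of the same generation runs dozens of 4-wide pipes side by side. A sketch of the CPU side with SSE intrinsics:

        #include <xmmintrin.h>

        // Scale n floats by k: 4 at a time through the single SSE unit, scalar tail for the rest.
        void scale(float* dst, const float* src, float k, int n) {
            __m128 kv = _mm_set1_ps(k);
            int i = 0;
            for (; i + 4 <= n; i += 4)
                _mm_storeu_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), kv));
            for (; i < n; ++i)           // leftover elements, done one at a time
                dst[i] = src[i] * k;
        }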

  • Yawn... (Score:2, Interesting)

    nVidia has been doing this for a while now. In fact, there are finally getting to be interesting implementations like GNU software radio [gnu.org] on GPUs:

    An Implementation of a FIR Filter on a GPU [sunysb.edu]
  • I'd rather see GPUs offloading their work to the system CPU. There's no *good* way to do this. So, why not run this in reverse? If it's possible to speed up general processing, why can't they speed up graphics processing? Especially since my CPU hardly does anything when I'm playing a game; it has to wait on the graphics card.

    So, what about it, ATI? Or will this be an NVIDIA innovation?
  • by peter303 ( 12292 ) on Wednesday November 02, 2005 @02:25PM (#13933975)
    In the scientific computing world there have been several episodes where someone comes up with an attached processor an order of magnitude faster than a general-purpose CPU and tries to get the market to use it. Each generation improved the programming interface, eventually using some subset of C (now Cg) combined with a preprogrammed routine library.

    All these companies died, mainly because the commodity computer makers could pump out new generations about three times faster and eventually catch up. And the general-purpose software was always easier to maintain than the special-purpose software. Perhaps graphics card software will buck this trend because it's a much larger market than specialty scientific computing. The nVidias and ATIs can ship new hardware generations as fast as the Intels and AMDs.
    • > All these companies died mainly because the commodity computer makers could pump out new generations about three times faster and eventually catch up.

      The improvements in general-purpose CPUs were mainly gained by increases in cache size, advanced pipelining, and clock speed. All these factors seem to have been exploited pretty much to the max by current CPUs, so now Intel and AMD have to fall back on multi-core CPUs, which need special-purpose software to be exploited efficiently.

      Still while NVidia and ATI can
      • by TheRaven64 ( 641858 ) on Wednesday November 02, 2005 @04:58PM (#13935371) Journal
        A lot of the improvements in CPU performance recently have come from vector units. On OS X, things like the AAC encoder make heavy use of AltiVec - to the degree that ripping CDs on my PowerBook is limited by the speed of the CD drive, not the CPU.

        A GPU is, effectively, a very wide vector unit (1024-bits is not uncommon). What happens when CPUs all include 2048-bit general purpose vector units? What happens when they include a couple on each core in a 128-core package? Sure, a dedicated GPU will still be faster - but it won't be enough faster that people will care. For comparison, take a look at Chromium. Chromium is a software OpenGL implementation that runs on clusters. Even with relatively small clusters, it can compete fairly well with modern GPUs - now imagine what will happen when every machine has a few dozen cores in their CPU.

  • by Anonymous Coward
    fyi this is already done by Roxio in Easy Media Creator 8. they offload a lot of the rendering or transcoding to GPUs that support it. for those that are older they have a software fallback. probably not an increase by such a large factor but still a significant boost on newer PCI-E cards.
  • Apple's core image (Score:4, Informative)

    by acomj ( 20611 ) on Wednesday November 02, 2005 @02:29PM (#13934022) Homepage
    some of Apple's APIs (Core Video/Core Image/Core Audio) use the GPU when they detect a supported card, otherwise they just use the CPU, seamlessly and without fuss. So this isn't new.

    http://www.apple.com/macosx/features/coreimage/ [apple.com]

  • by Anonymous Coward
    As I remember from my hardware class...there's an Intel 8051 or similar in most PC keyboards...wouldn't it be cool to somehow be able to use that CPU for something useful (aside from polling the keyboard)
    • by Saffaya ( 702234 ) on Wednesday November 02, 2005 @04:34PM (#13935172)
      Though I am sure you wrote that as a pure joke, this has already been done long ago. During the fierce competition on the demo scene between the Atari ST and the Amiga, crews were exploiting every speck of power they could from their machines. The Atari ST being a general-purpose machine compared to the Amiga (which had very advanced custom sound and graphics processors), the programmers who wanted to pull off the same graphical effects went as far as using the processor managing the keyboard (a 68xx 8-bit Motorola chip) for added computational power.
  • Linux Support (Score:3, Informative)

    by Yerase ( 636636 ) <randall...hand@@@gmail...com> on Wednesday November 02, 2005 @03:03PM (#13934377) Homepage
    There's no reason there couldn't be Linux support. At the IEEE Viz05 conference there was a nice talk from the guys operating www.gpgpu.org about cross-platform support, and there are a couple of new languages coming out that act as wrappers for Cg/HLSL/OpenGL on both ATI & NVidia, and Windows & Linux... Check out Sh (http://libsh.sourceforge.net/ [sourceforge.net]) and Brook (http://brook.sourceforge.net/ [brook.sourceforge.net]). Once their algorithm is discovered (yippee for reverse engineering), it won't be long.
  • This is cool, but if the feeds that process generates are as nonstandard as the MPEG2 their Multimedia Center puts out, it's worthless.

    I can't use the files I recorded on anything but ATI's software and Pinnacle Videostudio (go figure, it understands the codec).

  • by iamhassi ( 659463 ) on Wednesday November 02, 2005 @03:19PM (#13934515) Journal
    it's funny to read the article and see them brag about the "very fast RAM":
    "This is, after all, one of the fastest CPUs money can buy, paired with very fast RAM.
    "1 GB of very low latency RAM "

    After the other review [techreport.com] posted today [slashdot.org] about fast memory doing almost nothing for transcoding:
    "moving to tighter memory timings or a more aggressive command rate generally didn't improve performance by more than a few percentage points, if at all, in our tests."
    "Mozilla does show a difference between the settings, both on its own and when paired with Windows Media Encoder. Still, the differences in performance between 2-2-2-5 and 2.5-4-4-8 timings, and between the 1T and 2T command rates, are only a couple of percentage points."

  • GPU stands for General Purpose Unit, right? Or was it Generic Processing Unit? I can't remember.
  • by ChrisA90278 ( 905188 ) on Wednesday November 02, 2005 @03:42PM (#13934708)
    Apple does this now. "Core Image" is built into the OS, and all "correctly written" applications that need to do graphics use Core Image. Core Image will use the GPU if one is available. This is a very good idea, but the hardest part of getting this to work on a non-Apple platform will be standardizing the API so that we can use any GPU. OK, X11 did this for displays on UNIX and we have OpenGL for 3D graphics, so we can hope something will happen: an API for GPU-based image transformation. The biggest use for this will not be just simple transcoding, but editing and display programs for still and moving images. Think "gimp" and "cinelerra".
    • by GweeDo ( 127172 )
      Well, sorta. CoreImage is for video effects in real time. Like window transitions, transparency, shadows, blah blah blah.

      The idea behind using your GPU in this case is even more far-reaching. While using a GPU for any visual effect is fairly logical... what about SETI@Home? What about Folding? What about for running kcalc :)

      See the difference?
  • Another great idea that will no doubt be poorly implemented and suffer from a closed spec, stifling developer input.

    At the risk of becoming -1 redundant, many other posters have already pointed out that stuff like this should be done in a generic shader language so that it can be run across a gamut of GFX cards - I'm no programmer, but in my mind this would be like current CPU apps asking "do you support MMX? SSE? SSE2?" etc etc etc. Interesting projects like LibSh [libsh.org] offer to provide a platform-independent me
  • I've got an older AGP-based system (Athlon XP Barton), and this sounds like the perfect thing to speed up transcoding and playing of H.264 video. Too bad the irony will be that most systems with PCIe (and support for all these new cards) can play H.264 at a decent speed w/o this card (and transcode quite a bit faster than mine), while most systems that really need it would have to be upgraded in the first place. I know AGP has reached its limit for 3D performance due to its bandwidth limitations, but th
  • In the meantime... (Score:3, Interesting)

    by Happy Monkey ( 183927 ) on Wednesday November 02, 2005 @05:05PM (#13935437) Homepage
    Does anyone have any transcoding software recommendations? Nero for some reason keeps losing audio sync after a few minutes of video.
  • ... once more around the wheel of karma?

    --dave
    See Foley and Van Dam, Fundamentals of Interactive Computer Graphics, Addison-Wesley, 1982
