DirectX9 - For More Than Just Gamers? 311
Xev writes "HEXUS.net are showing a review of a new product called 3DEdit. This uses the DirectX 9 3D rendering engine, 3D transitions, and DirectX 9 shader-based filters to give you a powerful home DV editing suite. This provides a lot more value to me as a video editor than a card which just lets me play the latest games. Perhaps there is more use for these cards even at a consumer level?"
Typo (Score:4, Informative)
CoreImage (Score:5, Informative)
http://www.apple.com/macosx/tiger/coreimage.html/ [apple.com]
Re:OpenGL (Score:2, Informative)
When a DirectX game gets ported to OS X or any other platform you'll often find that the multiplayer is limited to the platform you're using.
Perhaps someone can suggest some other libraries/frameworks for input/sound/networking. There's GLUT [opengl.org] for input, but it's pretty simple so it might not work for everyone.
DirectX Website (Score:1, Informative)
You missed the point.... (Score:3, Informative)
Motion (Score:3, Informative)
Already done (Score:2, Informative)
Re:CoreImage (Score:2, Informative)
Re:DirectX Website (Score:2, Informative)
http://www.microsoft.com/windows/directx/default.
try that
Re:Sys requirements... (Score:4, Informative)
Traditionally, using software rendering, a simple 10-minute clip can take an hour to render. Just over the weekend, I created an 8-minute "moving slideshow" video clip from still photos and titles, which consisted of photos gently moving in and out and cross-fading, with titles added over them. A very simple composition task, yet it took my Athlon 2500+ over 40 minutes to render frame by frame into high-quality MPEG-2 for DVD using software rendering.
A few years ago, it was suggested that a 3D card could be used to assist with that, so the brunt of the rendering would be done on the 3D card, with each frame then captured from the frame buffer to create the final AVI/MPEG of the composition.
The presumption was that frames of video or stills could be used as textures, with the power of the graphics card used to render it all.
It can also be used for real-time compositing of effects: hook a video recorder up to the output and record directly onto tape.
This technology was used extensively in the Matrox RT2000 and beyond. The RT2000 was a professional video editing suite consisting of a modified Matrox G400 graphics card (called the G400 Flex) and an RT2000 video in/out card, which did realtime DV/MPEG encoding/decoding and had FireWire/analogue connections.
The RT system used the Matrox G400 Flex to perform the realtime compositing and rendering; it was powerful enough to do the same effects in realtime, then send the result back to the RT card to go directly to DV tape or an MPEG-2 file.
Then in 2000, ATI showed proof-of-concept software using a normal Radeon card to render two video sequences onto a spinning cube in realtime, which was really stunning to look at.
So I assume this further development is the realisation of that proof of concept.
As for WHY all this is necessary: for professional video editors, it gives the ability to have instant high-quality previews and fast rendering, which saves a great deal of time and hence increases productivity.
Re:Ugly UI, Functional UI (Score:3, Informative)
Re:Ugly UI, Functional UI (Score:3, Informative)
You're talking about Project Looking Glass [sun.com], which is still in alpha but will eventually bring a true 3D interface to the Linux desktop. It truly looks like a revolutionary interface, and you can see a video demo given by Satan himself (Jonathan Schwartz).
Some developers are already beginning to contribute to the project, which is open sourced. You can find more details and even download a developer preview of the release at this website [java.net].
I downloaded the developer preview and briefly got it up and running on my system. I'm running SuSE 9.2, and it requires an ATI or Nvidia 3D card with DRI support enabled in your X config.
Re:Yeah, maybe (Score:4, Informative)
Contrast this with doing the same thing in OpenGL:
1. (if necessary) switch to the correct OpenGL context.
2. (if necessary) switch to the correct texture stage.
3. Bind the texture.
To me, that's an obvious win for the OOP (Direct3D) version, but there you go. OK, so (1) will only be necessary in very special circumstances, but (2) is practically always necessary, and avoiding it tends to be more work than not.
The point of a production library is not to demonstrate design patterns, but to apply the most appropriate techniques to whatever it is abstracting. If you consider a library's API incomplete or inferior just because it doesn't utilise polymorphic multiple inheritance from virtual template base classes, you might want to consider a career as a computer science professor; a few of them will actually agree with you.
I'm also not really sure what C++ features you're missing. OK, so instead of exception handling they use return values, which I personally consider more appropriate in this case. Feel free to disagree on that point.
You still managed to miss the point I was trying to make: OpenGL could benefit a lot from a better API. As it is, all the newer features are bolted-on hacks that add obfuscation by introducing statefulness at the API(!) level.
by Anonymous Coward
Good work stuffing your foot in your mouth.
*chuckles* Ah, kids these days.
~phil