
AMD Launches Partnership With CAD Developer PTC

MojoKid writes "AMD is kicking off its weekend with news of a partnership with CAD software developer PTC (Parametric Technology Corporation). PTC owns and develops the Creo software family; one of the programs at its heart, Creo Element/Pro, was originally known as Pro/ENGINEER. It's not at all unusual for software developers in the CAD/CAM space to ally with hardware manufacturers, but it's typically Nvidia, not AMD, making such announcements. AMD claims that the upcoming Creo 2.0 product suite will be able to use the GPU in ways that improve performance and visual quality at the same time, without compromising either. One such feature is called Order Independent Transparency, or OIT. OIT is a rendering technique that allows wireframes and models to be partially displayed inside a solid surface without creating artifacts or imprecise visualizations."
  • by morrison ( 40043 ) on Sunday April 15, 2012 @10:28PM (#39697193) Homepage

    Wow, talk about a blatant slashvertisement. As the summary states, it's not at all unusual for CAD/CAM software developers to ally with hardware makers, so what exactly is the news for nerds here?

    With more contributors working on improving BRL-CAD's usability and features, we'd have an open source alternative without the huge recurring price tag. Lots of ways to get involved are listed here: http://brlcad.org/wiki/Contributor_Quickies [brlcad.org]

    You see what I did there.

  • Same ProE (Score:4, Interesting)

    by Alex Belits ( 437 ) * on Sunday April 15, 2012 @11:47PM (#39697503) Homepage

    Same ProE that dropped Linux support out of the blue (but kept Solaris, so it's not a matter of development effort, Unix or platform popularity)?

    gg assholes!

  • Re:OpenGL (Score:3, Interesting)

    by Anonymous Coward on Monday April 16, 2012 @12:07AM (#39697573)

    Implementation.... The CAD companies control the standards body, Khronos, forcing it to make only minor incremental upgrades to the OpenGL standard. AMD, Intel, and NVIDIA each add their own extensions to use their silicon more fully, but those extensions differ from vendor to vendor. Game engines then need to code for three implementations of tessellation, or some other bullshit technology that only games use. Game developers look at this, give up, and develop for Windows/Xbox exclusively, because Microsoft does not put up with this bullshit in DirectX.

  • by dbIII ( 701233 ) on Monday April 16, 2012 @04:40AM (#39698467)

    Gaming consoles and even handhelds like the aging Nintendo DS use OpenGL. The poster above is referring to the shrinking PC games market, not all of which uses DirectX anyway. When big guns like Blizzard use OpenGL, I really can't see how you can say it's dead on the desktop.
  • OIT isn't that new. (Score:4, Interesting)

    by ikekrull ( 59661 ) on Monday April 16, 2012 @10:05AM (#39699819) Homepage

    Note: I'm no expert in this area, this is just some stuff I have picked up along with a basic understanding of how these techniques are employed. There may be inaccuracies or incomplete information, corrections welcome.

    OIT is one area that modern graphics hardware really struggles with. A software renderer can just allocate memory dynamically and keep, for every pixel, a list of the depth and colour of each fragment that contributes to that pixel's final colour. On a 'traditional' GPU, the big problem is that there is no easy way to store more than a single 'current' colour per pixel, which gets irreversibly blended or overwritten by fragments with a lower depth value; even if you could keep a list of fragments, you would have no associated depth values and no simple way to sort them on the GPU. However, there is some clever trickery, detailed below.
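
    To make that concrete, here's a rough CPU-side sketch (illustrative only, not any vendor's actual code) of what a software renderer can do trivially: keep a per-pixel list of {depth, colour, alpha} fragments, sort it, then composite front to back. That per-pixel list is essentially an A-buffer.

        // Illustrative CPU-side A-buffer resolve for a single pixel.
        // A GPU traditionally has no equivalent of this dynamically sized,
        // sortable per-pixel storage, which is the whole problem.
        #include <algorithm>
        #include <vector>

        struct Fragment { float depth, r, g, b, a; };

        void resolvePixel(std::vector<Fragment>& frags, float out[3]) {
            // Nearest fragment first.
            std::sort(frags.begin(), frags.end(),
                      [](const Fragment& x, const Fragment& y) { return x.depth < y.depth; });
            float transmittance = 1.0f;          // fraction of light still passing through
            out[0] = out[1] = out[2] = 0.0f;
            for (const Fragment& f : frags) {    // front-to-back "over" compositing
                out[0] += transmittance * f.a * f.r;
                out[1] += transmittance * f.a * f.g;
                out[2] += transmittance * f.a * f.b;
                transmittance *= (1.0f - f.a);
            }
        }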

    Realtime OIT has been researched and published on (notably by Nvidia and Microsoft) for over a decade.

    Here's the basic technique, 'Depth Peeling', from 2001:

    http://developer.nvidia.com/system/files/akamai/gamedev/docs/order_independent_transparency.pdf?download=1

    Depth peeling renders the scene multiple times with successive layers of transparent geometry removed, front to back, to build up an ordered set of buffers which can be combined to give a final pixel value.

    This technique has severe performance penalties, but the alternative (z-sort all transparent polygons every frame) is much, much worse.
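
    Here's a rough CPU-side analogue of the peeling loop, just to show the control flow (each pass of the outer loop stands in for one full render of the scene on the GPU; the code and names are my own illustration, not NVIDIA's):

        // Illustrative depth peeling for one pixel: no sorting, just repeated
        // passes that each extract the nearest not-yet-peeled fragment and
        // "under"-blend it into the running front-to-back result.
        #include <limits>
        #include <vector>

        struct Frag { float depth, r, g, b, a; };

        void peelPixel(const std::vector<Frag>& frags, int maxLayers, float out[3]) {
            float transmittance = 1.0f;
            float peeledDepth = -std::numeric_limits<float>::infinity();
            out[0] = out[1] = out[2] = 0.0f;
            for (int pass = 0; pass < maxLayers; ++pass) {
                const Frag* nearest = nullptr;   // this pass's peeled layer
                for (const Frag& f : frags)      // one full scene pass on the GPU
                    if (f.depth > peeledDepth && (!nearest || f.depth < nearest->depth))
                        nearest = &f;
                if (!nearest) break;             // no transparent layers left
                out[0] += transmittance * nearest->a * nearest->r;
                out[1] += transmittance * nearest->a * nearest->g;
                out[2] += transmittance * nearest->a * nearest->b;
                transmittance *= (1.0f - nearest->a);
                peeledDepth = nearest->depth;    // peel this layer away
            }
        }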

    'Dual Depth Peeling' - from 2008:

    http://developer.download.nvidia.com/SDK/10.5/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf

    This works in much the same way, but it is able to store samples from multiple layers of geometry in each rendering pass, using MRT (multiple render targets) and a shader-based sort on the contents of the buffers, which speeds the technique up a lot.
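
    Very roughly, each pass peels one layer from the front and one from the back at the same time, so you need about half as many passes. Here's a CPU-side sketch of just that bookkeeping (again my own illustration, not the actual MRT shaders):

        // Illustrative dual peeling for one pixel: each pass peels the nearest
        // and the farthest remaining fragment, roughly halving the pass count.
        #include <limits>
        #include <vector>

        struct Frag { float depth, r, g, b, a; };

        void dualPeelPixel(const std::vector<Frag>& frags, int maxPasses, float out[3]) {
            float frontMin = -std::numeric_limits<float>::infinity();
            float backMax  =  std::numeric_limits<float>::infinity();
            float front[3] = {0, 0, 0}, back[3] = {0, 0, 0};
            float frontTrans = 1.0f;                    // front-to-back transmittance
            for (int pass = 0; pass < maxPasses; ++pass) {
                const Frag *nearest = nullptr, *farthest = nullptr;
                for (const Frag& f : frags) {
                    if (f.depth <= frontMin || f.depth >= backMax) continue;  // already peeled
                    if (!nearest  || f.depth < nearest->depth)  nearest  = &f;
                    if (!farthest || f.depth > farthest->depth) farthest = &f;
                }
                if (!nearest) break;
                // Front layer: "under" blend into the front-to-back accumulator.
                front[0] += frontTrans * nearest->a * nearest->r;
                front[1] += frontTrans * nearest->a * nearest->g;
                front[2] += frontTrans * nearest->a * nearest->b;
                frontTrans *= (1.0f - nearest->a);
                frontMin = nearest->depth;
                if (farthest != nearest) {
                    // Back layer: "over" blend onto everything already behind it.
                    back[0] = farthest->a * farthest->r + (1.0f - farthest->a) * back[0];
                    back[1] = farthest->a * farthest->g + (1.0f - farthest->a) * back[1];
                    back[2] = farthest->a * farthest->b + (1.0f - farthest->a) * back[2];
                    backMax = farthest->depth;
                }
            }
            // Final composite: the front result over the back result.
            for (int c = 0; c < 3; ++c) out[c] = front[c] + frontTrans * back[c];
        }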

    Refinements to the DDP technique, cutting out another pass - from 2010:

    http://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/ConstantMemoryOIT.pdf

    Reverse depth peeling was developed for cases where memory is at a premium: it extracts the layers back-to-front and blends each one immediately into an output buffer, instead of extracting, sorting, and then blending. It is also possible to abuse the hardware used for antialiasing to store multiple samples per output pixel.
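
    Again as a rough illustration of my own (not the paper's code), the back-to-front variant only ever needs the single output buffer it blends into:

        // Illustrative reverse peeling for one pixel: layers are extracted
        // back to front and "over"-blended into the output immediately, so
        // nothing has to be kept around for a later sort-and-blend step.
        #include <limits>
        #include <vector>

        struct Frag { float depth, r, g, b, a; };

        void reversePeelPixel(const std::vector<Frag>& frags, float background[3]) {
            float peeled = std::numeric_limits<float>::infinity();
            for (;;) {
                const Frag* farthest = nullptr;
                for (const Frag& f : frags)      // one scene pass on the GPU
                    if (f.depth < peeled && (!farthest || f.depth > farthest->depth))
                        farthest = &f;
                if (!farthest) break;
                // "Over" blend this layer onto everything already behind it.
                background[0] = farthest->a * farthest->r + (1.0f - farthest->a) * background[0];
                background[1] = farthest->a * farthest->g + (1.0f - farthest->a) * background[1];
                background[2] = farthest->a * farthest->b + (1.0f - farthest->a) * background[2];
                peeled = farthest->depth;
            }
        }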

    Depth peeling really only works well for a few layers of transparent objects, unless you can afford a lot of passes, but in many situations the contribution of transparent surfaces behind the first four or so rarely matters much for visual quality.
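
    As a rough worked example of why that is usually acceptable (assuming, say, every surface at 50% opacity): with front-to-back 'over' compositing, layer k contributes 0.5^k of the final colour, so the fifth layer adds only about 3% and the eighth less than 0.4%, normally well below anything you would notice.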

    AMD's 'new' approach involves implementing a full linked-list style A-buffer and a separate sorting pass on the GPU. This has only become possible with pretty recent hardware, and I guess it is 'the right way' to do OIT, very much the same as a software renderer on a CPU would do it.
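
    For the curious, the core of that linked-list A-buffer is just a big node pool, an atomic allocation counter, and a per-pixel head pointer. Here's a CPU emulation of the build step (my own sketch of the general pattern, not AMD's or PTC's actual code):

        // Illustrative CPU emulation of per-pixel linked-list construction:
        // a pre-sized node pool, an atomic allocation counter, and one head
        // index per pixel, the same bookkeeping the GPU versions do with
        // atomic counters and read/write buffers.
        #include <atomic>
        #include <cstdint>
        #include <vector>

        struct Node { float depth, r, g, b, a; int32_t next; };

        struct ABuffer {
            std::vector<Node> pool;                       // pre-sized node pool
            std::vector<std::atomic<int32_t>> head;       // one list head per pixel, -1 = empty
            std::atomic<int32_t> counter{0};

            ABuffer(int pixels, int maxNodes) : pool(maxNodes), head(pixels) {
                for (auto& h : head) h.store(-1);
            }

            // Called once per transparent fragment during the build pass.
            void insert(int pixel, float depth, float r, float g, float b, float a) {
                int32_t idx = counter.fetch_add(1);
                if (idx >= (int32_t)pool.size()) return;  // pool exhausted, drop the fragment
                pool[idx] = {depth, r, g, b, a, -1};
                // Push the new node onto the pixel's list head.
                int32_t oldHead = head[pixel].load();
                do {
                    pool[idx].next = oldHead;
                } while (!head[pixel].compare_exchange_weak(oldHead, idx));
            }
        };

    A second resolve pass then walks each pixel's short list, sorts it (a small insertion sort per pixel is typically plenty), and blends it exactly like the first sketch further up.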

    Here's some discussion and an implementation of these techniques:

    http://www.yakiimo3d.com/2010/07/19/dx11-order-independent-transparency/

    This really isn't anything new; single-pass OIT using CUDA for fragment accumulation and sorting was presented at SIGGRAPH 2009. Nor is it something PTC can claim as its own. It's possible AMD's FirePros have special support for A-buffer creation and sorting, which is why they run fast, and AMD in general has a pretty big advantage in raw GPGPU speed for many operations (let down by their awful driver support on non-Windows platforms, of course), but really any GPU that can define and access custom-structured buffers will be able to perform this kind of task. Given NVidia's long history of researching and publishing on this subject, it's pretty laughable that AMD and PTC can claim it is their new hotness.
