Introducing the PowerPC SIMD unit

An anonymous reader writes "AltiVec? Velocity Engine? VMX? If you've only been casually following PowerPC development, you might be confused by the various guises of this vector processing SIMD technology. This article covers the basics on what AltiVec is, what it does -- and how it stacks up against its competition."
  • by tabkey12 ( 851759 ) on Monday March 07, 2005 @11:50AM (#11866343) Homepage
    This highlights one of the real advantages that AltiVec has over the various SIMD instruction sets available for x86 processors: its comparative stability. Every AltiVec processor since the original G4 has had the same essential functionality, the same large register pool that isn't shared with anything, and a reasonably complete set of likely operations. This has made it easier for support to become widespread: a program designed to take advantage of the original G4 will still get a noticeable performance improvement on today's G5. x86 SIMD was frankly botched - MMX was a very odd idea, and, though SSE & SSE2 have partially fixed the problem, the fact that SSE optimised code usually runs slower on an Athlon than 'unoptimised' code has severely limited its applications.
    • by Anonymous Coward
      the fact that SSE optimised code usually runs slower on an Athlon than 'unoptimised' code has severely limited its applications.

      Oh? What's your source for that? Is that a problem with the processor or with the optimiser?
    • by Anonymous Coward
      You mean you read the article? That is .... strange.
    • by adam31 ( 817930 ) <adam31 @ g m a i l .com> on Monday March 07, 2005 @12:15PM (#11866664)
      the fact that SSE optimised code usually runs slower on an Athlon than 'unoptimised' code has severely limited its applications.

      What does this even mean? I've written a great deal of optimized SSE code, and I can promise you that it works just as well on AMD. In fact, if you look at Athlon's pipeline, it does some really amazing things rescheduling and executing operations out-of-order. Fiddling around with ordering individual instructions is basically pointless because the scheduler has gotten so good at doing it on-the-fly.

      Can you cite a specific example? I've never run into this.

      • What it means... (Score:1, Flamebait)

        by turgid ( 580780 )
        ...is that this is yet another IBM PR fluff piece liberally sprinkled with FUD and half-truths.

        We get on average one of these per month posted here to slashdot as news.

        Nothing to see here. Move along please.

      • It's a bunch of hooha. Ignore it. Altivec has added new instructions; MMX and SSE have added new instructions and registers over time. Clearly the Intel-driven stuff has changed more than Altivec. Of course, it's been around a couple of years longer, too. The Pentium MMX (P55C) came around in 1997, and the G4 in 1999. That's not an excuse really, but useful information anyway. The benefit of Altivec having changed less is that you don't need to change old code to make it run faster on newer processors.
      • The gain from using SSE on Athlons is smaller because they have decent ALUs and FPUs to begin with. They are tough to beat. Switching code to SSE on the P4 gives much more dramatic results because it's a POS CPU. He could just mean that...
        • Which also means that well-optimized floating-point code (which is more likely, since compilers have had many years to improve their floating-point code generation) would run faster on an Athlon than poorly written SSE code, whereas the P4 would run the SSE code faster simply because it runs FPU code so poorly.
          I'm sure well-written SSE code would run faster on both platforms, at least in cases where vectorization makes sense. Intel implemented the weak FPU in the P4 to try and steer users onto SSE code even when it's not really appropriate.
          • Intel implemented the weak FPU in the P4 to try and steer users onto SSE code even when it's not really appropriate.

            SSE has scalar instructions too. Since the SSE registers are in a flat register file (cf. the stack-based legacy FPU descended from the 287/387), it's actually easier for a compiler to generate efficient code for SSE. SSE only does single precision though, so for double precision you need SSE2. Incidentally, if you look at the Intel manuals you will see that SSE3 is only a minor extension.

            • Easier if you're starting from scratch, perhaps, but experience counts for a lot too, and people currently have far more experience optimizing for x87-style floating-point units. Although it may not take as long to bring compilers up to the same level with SSE as it did with x87.
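              For instance, compiling a snippet like this with gcc -msse2 -mfpmath=sse makes the compiler emit scalar SSE arithmetic (addss/mulss on xmm registers) instead of x87 stack code, no vectorization required:

              /* plain scalar float math; with -mfpmath=sse this compiles
                 to addss/subss/mulss rather than the x87 register stack */
              float lerp(float a, float b, float t)
              {
                  return a + t * (b - a);
              }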
    • by Anonymous Coward
      Have you done any PPC programming? There are some big gotchas when optimizing for the G5, and I usually have to program things twice, as the typical G4 altivec optimizations often run quite a bit slower on the G5, particularly the ones where we use the stream load/store hint instructions to help the processor schedule its memory accesses. The Apple site has a page dedicated to the differences between the G4 and G5 when it comes to optimizing. By contrast, I never found such gotchas when programming SSE on the PIII and moving to the P4.
      • by Paradox ( 13555 ) on Monday March 07, 2005 @01:46PM (#11867791) Homepage Journal
        Ahh Slashdot. First, let's mention this link [apple.com] which you were evidently too busy to provide. It links to two papers on how to tune for the G5. That way, someone can verify what I'm saying.

        The problems you're talking about are not the AltiVec's fault, and the AltiVec instruction set is still stable. Code will still run very quickly even if you don't optimize for the G5. But, let me bring a quote from one of those linked papers:

        Of course, your code may still need to be restructured to handle the increased latencies of the G5 Velocity Engine pipeline. Avoid small data accesses. Due to the increased latency to memory, the longer cache lines, and the nature of the CPU-to-memory bus, small data accesses should be avoided if possible. The entire system architecture has been designed to optimize the transfer of large amounts of data (i.e. maximize system memory throughput). As a side effect, the cost to handle small accesses can be very high and is quite inefficient.
        See, the problem you're complaining about is a problem with any port to the G5, or really any port from a slow-thin-memory-access system to a fast-wide-memory-access system. It has nothing to do with your AltiVec code. It just has to do with tuning for a larger L2 cache and a faster FSB rather than a slow FSB and a huge L3 cache.

        So let's not blame AltiVec for this. Except for a brief change in policy in the 745X G4, it seems like the AltiVec implementation has been stable for quite a while.

      • It's interesting to see how the 1.67GHz G4 chip holds its own against the G5 chips for cracking RC5, according to http://n0cgi.distributed.net/speed/query.php?cputype=all&arch=2&contest=rc572
        In fact, the 1.67GHz beats the 2GHz G5 and isn't far behind the 2.5GHz G5..
  • Altivec and OS X (Score:5, Insightful)

    by siliconwafer ( 446697 ) on Monday March 07, 2005 @11:58AM (#11866454)
    I'd like to know if Mac OS X uses the Altivec instructions to their full potential. For example, the article mentions that a heavily loaded server can benefit greatly from Altivec if the TCP checksum algorithm uses it. Does OS X TCP stack do this?
    • by Anonymous Coward
      Does OS X TCP stack do this?

      Best guess no - you have to weigh up the immediate gains against the overhead of saving/reloading the SIMD registers when you enter/leave the TCP stack. Usually OS kernels avoid SIMD/FPU for this reason.

      (Uh, why was that modded troll?)
      • Re:Altivec and OS X (Score:4, Interesting)

        by crow ( 16139 ) on Monday March 07, 2005 @12:14PM (#11866653) Homepage Journal
        But if you only need one or two of the registers for TCP, then perhaps it would be a win, since you would only be doing a save/restore on those one or two registers (and presumably a status register). And if you only need it in the TCP stack, then only do the save when the kernel calls those functions (which admittedly gets complicated if the kernel is preemptable).
        • Actually, I read about an optimisation on some Darwin mailing list where the AltiVec registers are not immediately saved on a context switch, but configured in such a way that a write to them causes an interrupt, and the service routine does the register saving then. The idea was that you are often preempted by a task that does not use the AltiVec registers, so saving and restoring those registers would be a waste.

          I can't remember if this was implemented in the end...

        • Remember that we're talking about context switch overhead, not function call overhead. It doesn't matter if you use three or thirty vector registers, the kernel still has to take the performance hit to figure out which registers are in use. The compiler will generate instructions to set the appropriate bits in the VRSAVE register, but the OS still needs to compare each bit in VRSAVE so that it knows which registers to save/restore.
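          In rough C, that walk looks something like the sketch below (read_vrsave and save_vreg are hypothetical stand-ins for the mfspr/stvx sequences a real kernel would use, not an actual API):

          #include <stdint.h>

          extern uint32_t read_vrsave(void);           /* assumed helper: mfspr of SPR 256 */
          extern void save_vreg(int reg, void *area);  /* assumed helper: stvx of register reg */

          void save_live_vector_regs(void *save_area)
          {
              uint32_t vrsave = read_vrsave();  /* one bit per vector register */
              for (int reg = 0; reg < 32; reg++)
                  /* PowerPC numbers bits from the MSB, so v0's bit is 0x80000000 */
                  if (vrsave & (0x80000000u >> reg))
                      save_vreg(reg, save_area);
          }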
    • Re:Altivec and OS X (Score:5, Informative)

      by ip_fired ( 730445 ) on Monday March 07, 2005 @12:17PM (#11866688) Homepage
      I googled around and found this article [macworld.com] on Macworld:
      According to several developers Macworld talked to who are currently working on OS X applications, anytime the OS can take advantage of the AltiVec engine, it does. This ensures that the parts of the OS that can utilize AltiVec, such as working in the new user interface, experience a significant increase in performance.

      I don't know how much of OS X has AltiVec code, but there are many other Apple apps that use it. iTunes uses it for encoding music. I'm sure the video codecs in QuickTime use it as well.

      The Mac has a really nice optimization tool called Shark [apple.com] which will help you find things that can be moved onto the AltiVec unit (it also helps with general optimization).
    • According to Apple's Xcode page, Xcode 2.0 apparently automatically "vectorizes" applications to take advantage of this. So if not now, probably in the future (unless they're afraid of their newest tools, like Microsoft's internal use of Win2K).
    • Re:Altivec and OS X (Score:5, Informative)

      by Chuckstar ( 799005 ) on Monday March 07, 2005 @12:29PM (#11866858)
      Most (all?) Apple hardware does the checksum in hardware (built into the NIC). Add to that the inefficiency of using Altivec in the kernel, especially for small data sets, and it did not make sense for Apple to develop an Altivec version of the TCP checksum code.

      The reason the article mentions the checksum case is not because Apple is missing the boat, but because there was a nice research article written about writing optimized TCP checksum code for Altivec, providing a good set of example code for aspiring Altivec coders.
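      The core of such a checksum loop is only a few lines of AltiVec C. A hedged sketch (not the paper's actual code; vec_cksum is a made-up name, and it assumes buf is 16-byte aligned and len, in bytes, is a multiple of 16): vec_msum folds eight 16-bit words at a time into four 32-bit accumulators, and the ones-complement carries are folded once at the end.

      #include <altivec.h>
      #include <stdint.h>
      #include <stddef.h>

      uint16_t vec_cksum(const uint16_t *buf, size_t len)
      {
          const vector unsigned short ones = vec_splat_u16(1);
          vector unsigned int acc = vec_splat_u32(0);

          for (size_t i = 0; i < len / 2; i += 8)      /* 8 shorts per vector */
              acc = vec_msum(vec_ld(0, &buf[i]), ones, acc);

          unsigned int lanes[4] __attribute__((aligned(16)));
          vec_st(acc, 0, lanes);
          uint64_t sum = (uint64_t)lanes[0] + lanes[1] + lanes[2] + lanes[3];
          while (sum >> 16)                            /* fold ones-complement carries */
              sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;
      }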
      • Also, AFAIK most kernel code tries hard not to use any math/vectorization coprocessor. In Linux, RAID is supposedly MMX/SSE-accelerated and tries hard not to botch everything, but most other subsystems aren't.

    • Anecdotally, I can tell you that OS X runs much better on a G4 than on a similarly clocked G3. For example, I have an iBook G3 500MHz and a first-generation G4 400MHz. The G4 runs noticeably faster. My experience pre-OS X was that G3s and G4s performed pretty much on par. It wasn't until the release of OS X that Apple really started putting AltiVec optimizations into the OS.
    • Re:Altivec and OS X (Score:3, Interesting)

      by dbrutus ( 71639 )
      Since the OS X TCP/IP stack is likely fully available in Darwin, why don't you go look and let us administrator types know?
    • Re:Altivec and OS X (Score:5, Informative)

      by bill_mcgonigle ( 4333 ) * on Monday March 07, 2005 @02:30PM (#11868308) Homepage Journal
      I'd like to know if Mac OS X uses the Altivec instructions to their full potential.

      No, at this point too much needs hand tuning for everything to fully utilize the potential of Altivec. Most serious DSP-class apps spend the effort to do this in critical code, but there's plenty of compiled code running in OSX that doesn't benefit from the parallel vectorization that the Altivec unit can offer.

      This is all about to change with GCC 4, which offers an SSA [gnu.org] tree optimizer. The SSA form is particularly useful for doing automatic vectorization of code. I'm not sure what the efficiency will be like in the first release, but it looks like good things are coming.
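      For a feel of what it targets, a loop of this shape is the canonical candidate (flags per the GCC 4.0 documentation; how much the first release actually vectorizes may vary):

      /* compile with e.g. gcc -O2 -ftree-vectorize -maltivec; the restrict
         qualifiers matter, since the vectorizer must prove the arrays don't alias */
      void saxpy(float *restrict y, const float *restrict x, float a, int n)
      {
          for (int i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];
      }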
      • Re:Altivec and OS X (Score:3, Informative)

        by edp ( 171151 )
        "No, at this point too much needs hand tuning for everything to fully utilize the potential of Altivec. Most serious DSP-class apps spend the effort to do this in critical code, but..."

        Of course it is rarely true that AltiVec instructions are used to their "full potential" in the sense that you can usually find another CPU cycle to eliminate, but neither is it necessary to use hand tuning to get big boosts from AltiVec. We do the hand tuning for you (in C with AltiVec extensions or in assembly language) and provide the results in the system libraries; even simple things like copying memory are significantly faster when done with AltiVec instructions.

        • even simple things like copying memory are significantly faster when done with AltiVec instructions.

          Do I get these with a generic compile of an XCode project (curious)?
          • "Do I get these with a generic compile of an XCode project (curious)?"

            Yes, in things like memcpy, you will get AltiVec instructions with just default switches. You could single-step through memcpy (actually a subroutine named __bigcopy) in the debugger and see the instructions.

            The compiler isn't going to automatically recognize you're doing an FFT routine and call an optimized routine instead of using your code. So, to use the optimized signal processing routines, you would add a reference to the Accelerate framework and call them explicitly.
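            To give a concrete flavour, an explicit call into vecLib's FFT looks roughly like this (a minimal sketch using Apple's vDSP names; headers of this era may spell them without the vDSP_ prefix, and error handling is omitted):

            #include <Accelerate/Accelerate.h>

            /* forward complex FFT of 1024 points, split into real/imag arrays */
            void fft_1024(float *re, float *im)
            {
                const vDSP_Length log2n = 10;    /* 2^10 = 1024 */
                FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);
                DSPSplitComplex z = { re, im };  /* realp, imagp */
                vDSP_fft_zip(setup, &z, 1, log2n, kFFTDirection_Forward);
                vDSP_destroy_fftsetup(setup);
            }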

    • OSX most definitely uses Altivec QUITE extensively.

      I switched from a 450MHz G3 to a 450MHz G4 a couple of years ago, and there's a HUGE performance difference in OSX's boot and response time.

      Of course, the GUI runs a bit faster on the G4, too, but that could be because of the AGP video card.
  • AltiVec is nice... (Score:5, Informative)

    by Grand V'izer ( 560719 ) on Monday March 07, 2005 @12:12PM (#11866636)
    I've done some altivec programming in the past, and discovered it was a very effective use of my time. Since there's no mode-switching penalty for using the vector instructions, you can use it for some very trivial-but-common tasks: replacing strlen(), vector operations on small tables, etc. I knocked a lot of computation time (25%) off one of my projects just by vectorizing three functions. Of course there's a hitch: vector processing only works for certain kinds of algorithms and requires a change in mindset. In spite of that, it's a great tool to have in your box.
    • by evn ( 686927 ) on Monday March 07, 2005 @12:29PM (#11866853)

      The other nice thing about Altivec on OS X is that Apple has done a fairly good job of making it accessible without forcing the programmer to learn and use assembly language. They have included a number of optimized libraries that use Altivec, ready to go "out of the box" with Xcode, and these libraries automatically fall back to a scalar code path if they're running on a G3, which saves you a fair bit of work too. They include:

      • vImage: for image processing
      • vDSP: for signal processing
      • BLAS: the name says it all: "Basic Linear Algebra Subprograms"
      • LAPACK: for solving systems of equations and matrix factorization
      • Vector Math Library: offloads common operations like square root, transcendental functions, division, etc. to VMX
      • vBasicOps: for simple algebra operations like integer addition, subtraction, etc.
      • vBigNum (VBN): for dealing with 256- to 1024-bit numbers easily

      Apple has documentation and source code for the libraries on their Developer Connection website [apple.com]. What good are vector units if nobody can make use of them? I can't wait for Apple to put the GPU's image-processing abilities into my hands with CoreImage/Video. A vDSP call, for example, is a one-liner; see the sketch below.
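      Adding two float arrays is a single call (a sketch using the current Accelerate names; the 1s are array strides):

      #include <Accelerate/Accelerate.h>

      /* c[i] = a[i] + b[i] for i in [0, n) */
      void add_arrays(const float *a, const float *b, float *c, vDSP_Length n)
      {
          vDSP_vadd(a, 1, b, 1, c, 1, n);
      }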

    • How is it helpful for replacing strlen()? Do you load up the string in a vector register and have it compare each byte to \0? Could you provide an example of how this is done? Thanks!
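      (For reference, that is essentially the trick: align the pointer down to 16 bytes, load a chunk with vec_ld, and test the whole chunk against a zero vector with the vec_any_eq predicate; only a chunk that actually contains a zero byte gets scanned serially. A hedged sketch, with vec_strlen a made-up name:)

      #include <altivec.h>
      #include <stdint.h>
      #include <stddef.h>

      size_t vec_strlen(const char *s)
      {
          const vector unsigned char zeroes = vec_splat_u8(0);
          const unsigned char *base = (const unsigned char *)s;
          const unsigned char *p =
              (const unsigned char *)((uintptr_t)base & ~(uintptr_t)15);
          size_t i = (size_t)(base - p);   /* ignore bytes before the string starts */

          for (;;) {
              /* vec_ld is 16-byte aligned and never crosses a 16-byte boundary,
                 so it cannot fault on a page the string doesn't touch */
              vector unsigned char chunk = vec_ld(0, p);
              if (vec_any_eq(chunk, zeroes)) {
                  for (; i < 16; i++)      /* locate the NUL byte serially */
                      if (p[i] == '\0')
                          return (size_t)((p + i) - base);
              }
              p += 16;
              i = 0;
          }
      }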
  • by rsborg ( 111459 ) on Monday March 07, 2005 @12:20PM (#11866719) Homepage
    is here [arstechnica.com]. They talk about altivec on page 3. IIRC, it's the best-designed mass-market SIMD implementation out there.
    • by adam31 ( 817930 ) <adam31 @ g m a i l .com> on Monday March 07, 2005 @01:48PM (#11867814)
      Why does no one ever talk about Sony's VU assembly in their SIMD comparisons? The parent's linked article even cites the PS2 in its very first sentence, but then ignores it completely!

      The VUs have the sweetest SIMD instruction set I've seen. 32 registers (like altivec), but you can do component swizzling within an instruction, it has MADD and also a sweet Accumulate register that can be re-written to on successive cycles (throughput is worse if you accumulate results in a normal vector register, like you have to on all other SIMDs). So you can do a 4x4 matrix/vector multiply in just 4 instructions!

      The big problem was that you didn't get any of the nice instruction scheduling/re-ordering that you get on PPC or x86 platforms, so the onus was on the programmer to NOP through latency issues (huge pain!)... They finally came out with the VCL that would process chunks of VU assembly and reschedule everything at compile time.

      The really sad thing is that Sony/IBM/Toshiba opted for AltiVec in the Cell. I guess it probably has better tools and IBM is highly leveraged into VMX, but VU was very, very clever considering that it pre-dates all these other SIMD instruction sets.

  • simdtech.org (Score:5, Informative)

    by kuwan ( 443684 ) on Monday March 07, 2005 @12:29PM (#11866849) Homepage
    If anyone is interested, simdtech.org [simdtech.org] is probably the best resource you can find for AltiVec (or any other SIMD) programming. They have a number of tutorials and technical resources and the mailing list is the best there is. Motorola, Apple, and IBM engineers frequent the list so you can get help and information directly from the guys that created AltiVec as well as from those who program for it.

  • API matters (Score:4, Insightful)

    by dugenou ( 850340 ) on Monday March 07, 2005 @12:31PM (#11866864)
    /. caught again on buzz words with a shallow article.

    Anyway, what we need is not an autovectorizing compiler, but a library with the most CPU-hungry algorithms well implemented with SIMD extensions.

    What about an open library, cross-platform, multimedia oriented, along the lines of Sun's mediaLib [sun.com]? Would Sun allow the re-use of their API?

    I'm looking for such a library, with a GPL/LGPL-compatible license. The API has to be in C, to maximise the audience. For many projects, C++ is not an option.

    The primary use will be DSP work in the GNU Radio project [gnu.org], but multimedia extensions could prove useful anywhere, from GUIs to audio/video apps, etc.

    I would take any pointers to such an already-existing API/project, or be ready to start a new one if other people are interested.

    See also this previous story [slashdot.org] for cheap recycled comments.

    • Re:API matters (Score:3, Informative)

      by Kluge66 ( 801510 )
      I think you're looking for liboil: "Liboil is a library of simple functions that are optimized for various CPUs. These functions are generally loops implementing simple algorithms, such as converting an array of N integers to floating-point numbers or multiplying and summing an array of N numbers. Such functions are candidates for significant optimization using various techniques, especially by using extended instructions provided by modern CPUs (Altivec, MMX, SSE, etc.)." http://www.schleef.org/liboil/ [schleef.org]. Th
  • tradeoffs (Score:1, Flamebait)

    by idlake ( 850372 )
    Choosing something like AltiVec involves a bunch of trade-offs:

    -- How much work do I need to do in order to take advantage of it? Some BLAS implementations may support it and some Fortran 95 compilers may generate code for it for some primitives, but other than that, it's a lot of manual work to tune code for it. (My own experience with using the AltiVec instructions can only be described as "painful", among other things because the C interface to them is poorly defined and causes name conflicts.)

    -- Wha
  • Does it matter? (Score:2, Insightful)

    by leereyno ( 32197 )
    I don't know of anyone who makes an open-standards-based system using the PowerPC architecture. IBM did release a reference design for a PPC-based motherboard, but as far as I know no one ever produced it.

    Unless and until I can go down to Fry's and buy a motherboard based off of this chip and put it into a standard case, it really doesn't matter if the CPU is better or not. It is the system as a whole that matters, not the relative performance of one of its components. I'm not going to paint myself into a corner with a proprietary system from anyone, let alone Apple.
    • Re:Does it matter? (Score:1, Interesting)

      by Anonymous Coward
      You can just buy a $499 mini and get done with it.
      • >> motherboard based off of this chip and put it into a standard case,

        > You can just buy a $499 mini and get done with it.

        uhm, did you notice the word **motherboard**?
        and how about a **standard** case?

        the grandparent is right - it's nominally open, but not in practice.

        buy a bladecenter loaded with IBM's PowerPC blades now and pray to dear god you'll have anyone but IBM and their partners give you quotations for a maintenance agreement in 2006.
        "makes you think twice who you invite to your house"
        • Well, the Mac mini is small enough that it would easily fit inside a standard case; I'm sure you could build a cluster of them in a standard case and even build in a switch, possibly even a KVM, all in one standard-ish case.
    • Re:Does it matter? (Score:5, Insightful)

      by podperson ( 592944 ) on Monday March 07, 2005 @12:51PM (#11867146) Homepage
      I don't know of anyone who makes an open-standards-based system using the PowerPC architecture. IBM did release a reference design for a PPC-based motherboard, but as far as I know no one ever produced it.

      CHRP, the PowerPC Common Hardware Reference Platform, is what you're looking for, and it's been around since before there were Apple PowerPCs. AFAIK most, if not all, of the PowerPC-based workstations shipped by IBM, the BeBox, various third-party PowerPCs such as those from Power Computing, and many of Apple's machines (even today) are either compliant or as-close-to-compliant-as-makes-sense with this standard or its evolutions (such that some fanatics were able to get Rhapsody/OS X running on AIX PowerPC workstations).

      CHRP Links [firmworks.com]

      I'm not going to paint myself into a corner with a proprietary system from anyone, let alone Apple.

      Until I can make the computer from sand, copper ore, and crude oil using recipes downloaded from the internet (i.e. "The Diamond Age"), I don't see the useful distinction between being able to build a computer out of proprietary chips from one of, count them, two CPU manufacturers, a video card from one of, count them, two graphics card manufacturers, etc. and simply buying a computer that works.
    • Unless and until I can go down to Fry's and buy a motherboard based off of this chip and put it into a standard case, it really doesn't matter if the CPU is better or not. It is the system as a whole that matters, not the relative performance of one of its components. I'm not going to paint myself into a corner with a proprietary system from anyone, let alone Apple.

      that's a slightly oxymoronic way of expressing it, isn't it? There are descendants of CHRP, such as the Power Mac. Last I checked

    • Re:Does it matter? (Score:1, Informative)

      by Anonymous Coward
      the new 'Amiga' is basically a PowerPC reference board. they make an ATX and a Mini-ITX version of the board. it will run the new AmigaOS / MorphOS / all (?) the various PPC Linuxes (Linii?).

      http://www.walibe.com/modules.php?name=News&file=article&sid=16 [walibe.com]

      http://slashdot.org/article.pl?sid=03/09/22/239215&tid=137&tid=138 [slashdot.org]

      the AmigaOne boards are either G3 or G4. no G5.
    • $499 is hardly painting yourself into a corner financially, unless a $30 KVM switch would break your bank.

      And what difference does it make if it's an already-complete system rather than one in a "generic" case? If you program for altivec-enabled processors, it's probably going to be running on a Mac anyway. It's highly unlikely that any code you may write will be running on IBM hardware, as it's even less common and much more expensive than any Mac (G4/G5) with altivec.

      Ignoring Altivec isn't a very good idea either.

    • You might want to keep an eye on these guys [power.org]. Power.org members are likely to produce those PPC standard case motherboards in the reasonably near future.
  • AltiVec instructions (Score:1, Informative)

    by Anonymous Coward
    There's a book, "Vector Game Math Processors" by James Leiterman (ISBN 1-55622-921-6), that discusses programming PowerPC AltiVec, MIPS, and 80x86 SIMD instructions. I found it pretty useful when doing vector programming with AltiVec! Some instructions that other processors have and AltiVec doesn't are simulated with what he calls PseudoVec!
  • by AeiwiMaster ( 20560 ) on Monday March 07, 2005 @01:17PM (#11867480)
    On the D programming [digitalmars.com] newsgroup we have been talking about implementing a vectorization syntax, so we can have portable vector code which approaches the speed of hand-coded vectorization.

    Here is something from the list.

    What is a vectorized expression? Basically, a loop that does not specify any order of execution. If no order is specified, the compiler can of course choose whatever order is efficient, or maybe even distribute the code and execute it in parallel.

    Here are some examples.

    Adding a scalar to a vector.
    [i in 0..l](a[i]+=0.5)

    Finding size of a vector.
    size=sqrt(sum([i in 0..l](a[i]*a[i])));

    Finding dot-product;
    dot=sum([i in 0..l](a[i]*b[i]));

    Matrix vector multiplication.
    [i in 0..l](r[i]=sum([j in 0..m](a[i,j]*v[j])));

    Calculating the trace of a matrix
    res=sum([i in 0..l](a[i,i]));

    Taylor expansion on every element in a vector
    [i in 0..l](r[i]=sum([j in 0..m](a[j]*pow(v[i],j))));

    Calculating Fourier series.
    f=sum([j in 0..m](a[j]*cos(j*pi*x/2)+b[j]*sin(j*pi*x/2)))+c;

    Calculating (A+I)*v using the Kronecker delta-tensor : delta(i,j)={i=j ? 1 : 0}
    [i in 0..l](r[i]=sum([j in 0..m]((a[i,j]+delta(i,j))*v[j])));

    Calculating cross product of two 3d vectors using the
    antisymmetric tensor/Permutation Tensor/Levi-Civita tensor
    [i in 0..3](r[i]=sum([j in 0..3,k in 0..3](anti(i,j,k)*a[j]*b[k])));

    Calculating determinant of a 4x4 matrix using the antisymmetric tensor
    det=sum([i in 0..4,j in 0..4,k in 0..4,l in 0..4]
    (anti(i,j,k,l)*a[0,i]*a[1,j]*a[2,k]*a[3,l]) );
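    For comparison, here is the dot product written by hand with AltiVec intrinsics, which is roughly what such a syntax would have to generate (a sketch; it assumes a and b are 16-byte aligned and l is a multiple of 4):

    #include <altivec.h>

    float dot(const float *a, const float *b, int l)
    {
        vector float acc = (vector float)vec_splat_u32(0);   /* four zero lanes */
        for (int i = 0; i < l; i += 4)                       /* acc += a[i..i+3] * b[i..i+3] */
            acc = vec_madd(vec_ld(0, &a[i]), vec_ld(0, &b[i]), acc);
        float lanes[4] __attribute__((aligned(16)));
        vec_st(acc, 0, lanes);                               /* spill and sum the four lanes */
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }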
    • Interesting, but how do you enforce the property of parallelism without restricting the language that is available within the loop?
      • I'm just guessing, but they probably don't try to restrict the language or "enforce" parallelism properties at all. It's probably like writing "function(x++,x++)", which could be evaluated as function(x,x), function(x,x+1), or function(x+1,x). The behaviour is undefined, and it's considered the programmer's fault if he does it and gets buggy results.

  • I wrote AltiVec code in 1999, and I even have a faded AltiVec t-shirt from the same year. AltiVec is just not new.

    "Re-Introducing" would be a better title.
