Technology

Reverse-engineered KNI Documentation

Clive Turvey has continued his investigation into KNI, and has documented the instructions he has found so far. MMX coders will be happy to see that KNI contains a 1-cycle shuffle instruction, as AltiVec does but MMX does not. As expected, it has some instructions specifically for 3D (1/x and 1/sqrt(x)). Update: 02/24 12:11 by S: As reader Christopher Thomas points out, Intel has released a manual that includes a functional description of KNI, but I can't find any instruction timings in it. It weighs a hefty 5.5 MB.
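
For flavor, here is a minimal C sketch of the headline instructions, assuming the xmmintrin.h intrinsics interface; the function name and values are invented for illustration. rcpps and rsqrtps are fast, low-precision approximations of 1/x and 1/sqrt(x), and shufps is the 1-cycle shuffle.

    #include <xmmintrin.h>

    /* Operates on four packed floats at a time. */
    __m128 kni_demo(__m128 x)
    {
        __m128 r  = _mm_rcp_ps(x);    /* rcpps:   ~1/x per lane       */
        __m128 rs = _mm_rsqrt_ps(x);  /* rsqrtps: ~1/sqrt(x) per lane */

        /* shufps: reverse the four lanes in a single instruction */
        __m128 rev = _mm_shuffle_ps(r, r, _MM_SHUFFLE(0, 1, 2, 3));

        return _mm_add_ps(rev, rs);   /* addps: packed add            */
    }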
This discussion has been archived. No new comments can be posted.

  • Or maybe not. I'm sure Intel will just enforce their will upon the innocent masses. I noticed Visual Studio 6.0 is mysteriously lacking an "optimize for 3DNow!" checkbox while having both an optimize-for-KNI and an optimize-for-MMX box. Does Intel have some secret deal with Microsoft that I'm not aware of?
  • Actually I found an incredible deal where this place in Cali was selling 'em for the same price as most other places were dealing PIIs. I landed 'em for under 500 each, including warranty and heatsink fan.


    PDG--"I don't like the Prozac, the Prozac likes me"
  • A lot of that coding is well over my head, but one thing I do agree with is that MMX is a lot of hype. There is only one game I can think of where you can tell a difference between MMX and non-MMX, and that is Rainbow Six (if you can ever get it to work in the first place).
  • A small subset of these SIMD instructions are useful in what you might consider the kernel, or the core libraries for packet handling and copying blocks of memory. Don't write them off completely.
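    A minimal sketch of the block-copy case, assuming the xmmintrin.h intrinsics, 16-byte-aligned buffers, and a length that is a multiple of four floats (all names here are mine, for illustration):

        #include <stddef.h>
        #include <xmmintrin.h>

        /* Copy a buffer 16 bytes (four floats) per iteration. */
        void copy128(float *dst, const float *src, size_t n_floats)
        {
            for (size_t i = 0; i < n_floats; i += 4) {
                __m128 v = _mm_load_ps(src + i);  /* aligned 128-bit load  */
                _mm_store_ps(dst + i, v);         /* aligned 128-bit store */
            }
        }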
  • Why reverse engineer publicly available information? Does this guy just have way too much time on his hands?

    Daniel
  • Hey, I just grabbed myself a pair of PIIIs for my dual-CPU server, where I plan to do some massive graphics streaming and reproduction. Put that together with the 500 megs of RAM and this thing is gonna scream like the devil


    PDG--"I don't like the Prozac, the Prozac likes me"
  • When are we gonna get some optimizations for 3DNow!? I have a K6-2 300 and would love to see someone enhance the Voodoo2 drivers in Linux for 3DNow!. I don't know that much about 3DNow!, but would it help to include 3DNow! support in the kernel? I would love to see this. Anyway, I just think it would be nice for people to write their apps with 3DNow! enhancements! I'm not gonna purchase a PIII; it's too expensive and the K6-III looks better (plus it's cheaper).
    NaTaS
    natas777@geocities.com
  • Would you rather go through the PDF hell of getting Intel's file, or see this guy's stuff fast and easy? Also, give the guy credit for hacking this chip to scratch his itch, in a format I find easy to read; not that I think I'm ever going to use this stuff. Allez Clive. You have my support!!
  • "...the KNI instructions won't ever get
    used because all the rendering is passed
    off to the graphics card."

    Is incorrect. All a 3D accelerator card does is speed up the DISPLAY of 3D objects; the processor still has to figure out where to PUT those objects, hence the usefulness of KNI (or SIMD, as Intel wants to call it, or even AMD's 3DNow!).
    See below for a more in-depth explanation.

    As for 2D? Again, the only thing the card does is DISPLAY pictures; it does not do ANY calculations on where anything goes. In your standard windowing OS, the processor has to figure out what is going to be displayed and where it is going to go. After it figures this all out, it passes it on to the video card, which displays it. The difference between accelerated (2D) and non-accelerated is that with a non-accelerated card, the processor has to tell it where every pixel goes, whereas an accelerated card can be passed "draw box" or "put this text in box" instead.

    3D explanation:
    There are four steps in displaying a real-time 3d scene (I'll use a first person shooter for examples):
    1. Process the scene for objects (walls, ceiling, floor, polygons for bad guys)
    2. Plot exactly where those objects are going to be, including removing hidden lines
    3. Apply textures and lightsourcing to polygons
    4. Display it on the monitor.
    In a 3D-accelerated computer, the accelerator does steps 3 and 4. That leaves the processor to figure out where everything goes (which is where about 50% of the processing power is needed).
    This is why a P2-450 without a 3D accelerator is approximately equal to a P-200MMX with one: the P2 is about twice as fast, so it can do the accelerator's share of the work itself.
    On VERY, VERY high end video cards ($2000+ workstation OpenGL cards) they can also do step 2, and THAT would take the hard work away from the processor.
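    To make step 2's grunt work concrete, here is a hedged sketch of a single vertex transform (a 4x4 matrix times a 4-vector) written with the xmmintrin.h intrinsics; the matrix is assumed stored column by column, and all names are invented for illustration:

        #include <xmmintrin.h>

        /* out = M * v, where m holds the 4x4 matrix column-major. */
        void xform(float out[4], const float m[16], const float v[4])
        {
            __m128 r = _mm_mul_ps(_mm_set1_ps(v[0]), _mm_loadu_ps(m + 0));
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(v[1]), _mm_loadu_ps(m + 4)));
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(v[2]), _mm_loadu_ps(m + 8)));
            r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(v[3]), _mm_loadu_ps(m + 12)));
            _mm_storeu_ps(out, r);  /* one transformed vertex */
        }

    A game does this for every vertex in the scene every frame, which is exactly the work KNI's packed-float instructions (and 3DNow!'s) are aimed at.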
  • Actually, the hard way is to get a Slot 1 Celeron, drill out a pin, and solder a wire or two onto the chip board. The easy way is here [kikumaru.com] and it doesn't void your warranty on the Intel chip. It may void a warranty on a $10 "Slotket" adapter, but WHO CARES?

  • Is it safe to assume this is meant sarcastically?
  • Well, it would be nice if gcc would implement a "use KNI instructions" option. Hopefully it'll happen in less time than it took GNU to implement MMX instructions.
  • Oh well..

    Playing games at a fast speed is fun..
    but coding your engine to go 1 frame/sec faster than someone else is funnier...

    Hell.. next they'll place WHOLE GAMES into 1 single processor instruction..
  • Does gcc have a command-line option to produce 3DNow! inline instructions in the executable? If not, it should.
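    In the meantime, a hedged sketch of what you can do today: gcc's inline assembler will pass 3DNow! mnemonics straight through, provided your assembler recognizes them. The function and its name are mine, for illustration only.

        /* Add two pairs of packed floats with pfadd (3DNow!). */
        static void pfadd2(float dst[2], const float a[2], const float b[2])
        {
            __asm__ volatile(
                "movq  (%1), %%mm0\n\t"  /* mm0 = a[0], a[1]         */
                "pfadd (%2), %%mm0\n\t"  /* mm0 += b[0], b[1]        */
                "movq  %%mm0, (%0)\n\t"  /* store the packed result  */
                "femms"                  /* clear MMX state (3DNow!) */
                :
                : "r"(dst), "r"(a), "r"(b)
                : "mm0", "memory");
        }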
  • "No consumer-level 3D geometry hardware (the 3DLabs Gamma chip doesn't count) has been announced by any of the major players, so if you expect to see anything shipping in '99 I think you are dreaming. Just look how long it took 3DLabs, NVidia, ATI, ... to ship the latest chips after they were announced."


    Geometry acceleration will almost certainly start showing up by 1H00, when the 0.18 micron process is in full swing and another round of cards shows up, and it may show up in 2H99 with the current round of new cards. Which you believe depends on the veracity of the rumour mill and the competence of the card manufacturers.


    Putting geometry acceleration on the cards is a Good Thing from the graphics card manufacturer's point of view. This frees up the processor in CPU-limited cases, and frees up the bus if some of your models can stay resident on the card and just be transformed instead of reloaded. Card manufacturers have stated for a while that they want to put geometry acceleration on cards, and since the middle of last year have been saying that it will show up Real Soon Now (tm). Whichever manufacturers _do_ manage to get good geometry acceleration out there with good drivers will decimate the members of the competition who put out cards without acceleration, at least in the short term. It is in their interests to implement this as quickly as possible.


    Now, the rumour mill. Rumours include, but are not limited to:

    • Matrox putting a general-purpose RISC core on one of their new cards coming out.
      They have the beginnings of this on the G200 already. Two Matrox cards are rumoured to be ready to ship, but tied up in NEC's fab lines as they work on the Dreamcast. One is the G400, rumoured to be a pair of G200s at 0.25 micron. One is the G300, which is an unknown quantity.

    • 3DLabs putting one of their high-end geometry chips on some versions of the Permedia 3 card.
      This wouldn't hurt sales of their high-end cards, which are multi-chip, have vast amounts of RAM, and are in general designed to be better cards than anything on the consumer end.
    • Vague talk about the TNT2 having some geometry acceleration.
      As opposed to just being a shrink of the TNT to 0.25 micron. I'm doubtful of this one.


    Take your pick, but it wouldn't surprise me if at least one of these turned out to be true, and these aren't the only cards coming out this year.


    "This also doesn't address the fact that for doing certain things (weighted meshes, physics, ...) a geometry accelerator isn't going to help."


    Quite true. However, there is enough geometry grunt work being done in most cases that a geometry accelerator would certainly help.

  • I thought the only way you could get a Celeron to do SMP was by soldering on some extra wires and stuff?
    > if the OpenGL and DirectX driver authors decide to write KNI-optimized drivers

    That's the problem, though. When MMX first came out it was the same old thing: it was going to be really useful whenever the authors decided to use it. Now they are going to KNI, and again we will have to wait and see if our money will be well spent. Although this time around I think it is slightly different from MMX, in that people have their eyes open and hopefully realize that it will be some time before there is software available to use KNI to its max potential. Most people don't realize that KNI and MMX won't enhance their current apps.
  • "I'm not quite clear here... if Intel wants KNI to become a new processor standard, and wants everybody to write software for it, wouldn't it behoove them to publish the instruction set themselves, not leave it to hackers to reverse engineer it???"


    It most certainly would, which is why the full instruction set manual is up on their web page in plain view for anyone who wants to look at it.


    Ideally they'd have released it before the PIII launch, but now that the PIII has been officially released, it's definitely publicly available.


    The URL is http://developer.intel.com/design/pentiumiii/manuals/ [intel.com].

  • Hmm. Well, it seems strange that they didn't include 3DNow! support in their VS while they ALREADY have KNI support.
  • For real-time 3D multimedia, people who are interested in performance will probably invest in a 3D accelerator card anyway. So the KNI instructions won't ever get used, because all the rendering is passed off to the graphics card.

    And for the 2D stuff, everyone already HAS graphics acceleration anyway. Since when is the main system CPU doing any multimedia calculations in a modern high-end PC? (Except for video decoding.)

    It seems to me that the only time these KNI instructions would help speed up a 3D app would be when you are rendering in non-realtime, such as when you use POV to raytrace an image. But that certainly doesn't seem to be the market Intel is aiming for.

    Someone please correct me if I have made some mistake in my brief analysis.

  • Microsoft's release of Windows KNI..

    run to your local dealer... WHEEEE!!
  • You mean you already blew a few thousand dollars without stopping to see if there was a better solution first. I have one thing to say: Alpha. Everyone knows the drill, 64-bit and all the rest.



    I assure you I am very biased, but that is only because I know I am right
  • I sent an e-mail to AMD about this, and they pointed me in the direction of their docs for the 3DNow! instruction set. Unfortunately I am not skilled enough to do anything constructive (like making a 3DNow! asm target...).

    They were quite nice about it, and not at all hostile to Linux; they'd presumably be only too happy for someone to do something about it, since it could only increase their sales...


    Here are the relevant links:

    Dear David,
    For the 3Dnow code instructions, check the Data book:
    http://www.amd.com/K6/k6docs/index.html

    For general information on 3Dnow, consult the following site:
    http://www.amd.com/products/cpg/3dnow/index.html

  • I ate two sticks of celery at the same time once, and I did get all those wiry things in my teeth. That's about all the good it did me, tho; probably about the same that a dual Celeron system would do anyone.
  • Blew a few thousand? I think not. But aside from buying an SGI or an even more expensive Unix system, I think building this system has been rather affordable. Besides, the software is cheaper, easier to find, and my bank account isn't UNlimited.



    PDG--"I don't like the Prozac, the Prozac likes me"
  • "Someone please correct me if I have made some mistake in my brief analysis."


    Re. 3D graphics, you are most certainly correct. By roughly 2H99, enough 3D graphics cards with geometry acceleration should exist to make SSE useless to the gamers who would otherwise have cared about it.


    Re. 2D graphics, the situation here is a bit odder. 2D acceleration has existed for a while, but most image processing programs use software filtering for better control over the output. Filtering is one thing that AltiVec will be good at (a sketch of the kind of loop involved is below), so I expect to see a horde of Mac users proclaiming that the G4 is the ultimate in computing because it runs Photoshop five times faster than a PII. If your main use of your computer is image processing in Photoshop, I guess that's a good point. If your main use is Quake, then it will be less relevant.


    I've been dabbling in rendering and ray-tracing for a while, but would be more interested in a cheap 8-way SMP system than an SSE system for the time being (regrettably, these don't seem to exist).
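    For the filtering point above, a minimal sketch of the sort of loop involved, written with the SSE intrinsics rather than AltiVec: a 4-tap FIR filter over a row of floats. All names are mine, and the input is assumed padded with three extra samples.

        #include <xmmintrin.h>

        void fir4(float *out, const float *in, const float k[4], int n)
        {
            for (int i = 0; i + 4 <= n; i += 4) {
                __m128 acc = _mm_setzero_ps();
                for (int t = 0; t < 4; ++t) {
                    /* four neighbouring outputs share tap weight k[t] */
                    __m128 x = _mm_loadu_ps(in + i + t);
                    acc = _mm_add_ps(acc, _mm_mul_ps(x, _mm_set1_ps(k[t])));
                }
                _mm_storeu_ps(out + i, acc);  /* four filtered samples */
            }
        }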

  • What we need are cheap geometry chips. The bottleneck on most 3D cards right now is the CPU and/or the bus; they just can't feed the card enough triangles to keep it happy. This would be solved if there were cheap geometry chips that could be put on the 3D card. Then your CPU could be 50% idle while running Quake :) I mean, think about it: the only reasons people buy these powerful x86 chips are games, multimedia processing (get an SGI, which are, I know, x86, but at least they're different), and maybe server stuff (get a Sun or an Alpha). With cheap geometry processors, you knock out a LOT of Intel's market and KNI becomes practically useless.
  • Bottom line: if everyone wants to tout Celerons and AMD, then find me a motherboard that can handle TWO PROCESSORS, plus Ultra Wide SCSI-2, and still handle major video streaming and encoding.



    PDG--"I don't like the Prozac, the Prozac likes me"
  • I've been researching 3D cards a bit, because I'm considering applying for a job doing OpenGL programming. Anyway, any card which implements OpenGL will do the coordinate transformations as well as the actual rendering.

    Perhaps KNI was a good idea that is about to be eclipsed by the next generation of 3D video cards.

    BTW, the Voodoo2 chipset does OpenGL and Direct3D as well as 3dfx's own Glide.

"No matter where you go, there you are..." -- Buckaroo Banzai

Working...