Star Bridge FPGA "HAL" More Than Just Hype 120

Gregus writes "Though it was mentioned or discussed in previous /. articles, many folks (myself included) presumed that the promises of Star Bridge Systems were hype or a hoax. Well, the good folks at NASA Langley Research Center have been making significant progress with this thing. They have more info and videos on their site, beyond the press release and pictures posted here last year. So it's certainly not just hype (though $26M for the latest model is a bit beyond the $1,000 PC target)."
  • uhh, (Score:5, Funny)

    by Anonymous Coward on Saturday February 15, 2003 @11:50AM (#5308842)
    uhh, so which link is the story?
    • uhh, so which link is the story?

      I have no idea - I was going to make a 'HAL9000 from 2001' comment here, but I'm worried it might actually be On Topic...

    • Take a look here:
      http://hummer.larc.nasa.gov/acmbexternal/Personnel/Storaasli/images/HALNews.html [nasa.gov]

      If you watch the "speedup" movie, the guy talks about processing speeds equivalent to "100,000 gigs" (not sure if it's GHz or GFLOPS or what though) that sounds aweful fast. The demo shows the thing calculating fractals 35x faster than a PC while consuming only 0.1% of the resources.

      Obviously, I have no clue how this thing works, other than that it's mighty fast. I'm also thinking that with a bunch of these things, cracking RSA might not be so difficult after all.

  • by Superfarstucker ( 621775 ) on Saturday February 15, 2003 @11:55AM (#5308854)
    $26M? Hah! I save that much every year pirating software and audio off the net... puh-leez!
  • by $$$$$exyGal ( 638164 ) on Saturday February 15, 2003 @11:55AM (#5308855) Homepage Journal
    Star Bridge Systems is the leading developer of truly parallel Hypercomputers. Our patent-pending hardware, software and Viva programming language are reinventing computer programmability to create the fastest, most versatile and energy-efficient computing systems available for solving many problems that require high computational density.

    That's directly from their site. I wish the /. summary would have mentioned parallel hypercomputers. And note that when you search Google for "parallel hypercomputers", you only get the one hit from Star Bridge Systems (and soon you'll get a hit for this comment on /. ;-)). No wonder people thought this was a hoax.

    --sex [slashdot.org]

    • It's amazing sometimes the things that DO get posted, considering the number of interesting stories that get rejected. Maybe it's a weekend thing...

      Not that this technology isn't interesting, but the writeup above is awful!

    • Hyper as in "hyper cube" (cube in more than three dimentions where every vertex is connected with every vertex that is parallel to it)? Hyper as in hyper velocity (mach 10 and above)? Hyper as in spastic or really, really, really neato?

      Is "hypercopmuter" a real word with a standardized definition?

      • Is "hypercopmuter" a real word with a standardized definition?
        Hypercopmuter? Perhaps a cop commuting through space and time? SCNR. I think the "hyper" is not based on any scientific or mathematical definition; they just picked it up because they thought it sounded cool. Their product overview page [starbridgesystems.com] defines what they think hypercomputers are:
        Our products include the implementation of the relatively new computer chip, the FPGA (Field Programmable Gate Array) along with our patented Viva software, to form what we term 'Hypercomputers'. These machines are capable of truly extraordinary computational feats. The result is simply the creation of a new kind of computer system that gives users tremendous power with an intuitive, state-of-the-art software tool.
      • Is "hypercopmuter" a real word with a standardized definition?

        Never heard of it. But anything to quiet those pesky, over-zealous, redneck sheriff's deputies sounds good to me!

        Sorry, couldn't pass it up...
      • Here's the google search for only the word: "hypercopmuter [google.com]"

        Your original search: hypercopmuter returned zero results.
        The alternate spelling: hypercomputer returned the results below.

        Here's a Feb'1999 Wired Article [wired.com] that explains what Star Bridge considers a hypercomputer.

        --naked [slashdot.org]

      • I think the keyword here is "Hype"
        • Too bloody right!
          """
          It is called a fractal architecture, where the structure of the lower level is repeated at the higher level.
          """

          Wow - they've reinvented the binary tree, but given it a new, modern name. I'm _sooooo_ happy for them.

          YAW.
          • Note, I did not mean to imply that the topology they use is a tree, but that a tree could also be called a fractal.

            I was just overly annoyed at them as they told me that I'd "Loaded page in 0.012 seconds", when it took about 2 fucking seconds. That means:
            a) they're liars
            b) they're tossers for making such a fucking stupid statement.

            Anger now vented. Back as you were.

            YAW.
    • Sounds like a Unisys type company setting themselves up for another bogus IP land grab.
  • by TheRaven64 ( 641858 ) on Saturday February 15, 2003 @11:59AM (#5308866) Journal
    Here we see the solution to the problem of too many comments about a /. story. Simply obfuscate the story so much that no one can figure out what it's about, or even find the link to the original. Hats off to Gregus and CowboyNeal for the idea.

    • > Here we see the solution to the problem of too many comments about a /. story. Simply obfuscate the story so much that no one can figure out what it's about, or even find the link to the original. Hats off to Gregus and CowboyNeal for the idea.

      Yeah, but it sure makes it hard to figure out who to flame for not reading the story.

    • Amen! I was expecting a story a la Stargate. Ah, traveling to other worlds via a Star Bridge Portal.
  • Heh (Score:4, Funny)

    by code shady ( 637051 ) on Saturday February 15, 2003 @12:00PM (#5308868) Homepage
    I like that little "page loaded in x seconds" blurb in the corner.

    I'm having waaay more fun than I should be refreshing the page and watching the load times get longer . . . and looooonnnnger . . . . and looooonnnnnnngggggggger.

    Hey, it beats workin'.
    • Does anyone know how they did that? The numbers change when you refresh the page. I looked at the source and all it had was...

      Loaded page in 0.011 seconds.


      So I assume that the numbers get added to the page when it is rendered on the server side. Is this some sort of Apache plugin?
      • by SB5 ( 165464 )
        That is kind of scary: "Loaded page in 0.011 seconds"? What was it like before being slashdotted, 0.001 seconds?
  • by RobertTaylor ( 444958 ) <roberttaylor1234 ... il.com minus cat> on Saturday February 15, 2003 @12:02PM (#5308878) Homepage Journal
    Well, the good folks at NASA Langley Research Center have been making significant progress with this thing

    I can see it now...

    *techie smacks the machine

    HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."

    • > HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."

      We've missed our window of opportunity for creating HAL. If we started today, once he obtained basic sentience he'd waste all his time trolling Slashdot instead of doing his homework, and never pass his qualifications for flying a spaceship.

  • ...but getting hold of raw Naqahdah must be difficult, and I don't think anyone's managed to get a machine booted beyond Chevron Six.

    More seriously, the programming language for this smells a bit snake-oilish, as do most parallel programming languages, especially those touted by hardware companies. (Occam anyone?)

  • Consumer usage (Score:2, Insightful)

    by W0lphAU ( 643363 )
    I may be way off here, but it seems to me that if you're going to market this sort of product for the consumer market, the point to emphasize would be the potential to pump out millions of dirt-cheap little processors, exactly the same in manufacturing, and then apply the relevant "program" to turn them into a control chip for a coffeemaker or alarm clock or whatever.
    • That would be a huge waste of FPGA technology. You can use a $2 microcontroller that contains a 15MHz CPU to control smart consumer electronics. The chips can be programmed in C. motorola [motorola.com] You don't need specific chips anymore. FPGAs implement massive parallelism. In what consumer electronics do you need massive parallelism? The $2 microcontroller will be a lot easier to program, since it only does one instruction at a time and you don't need to worry about reconfiguring your chip hardware and synchronizing the different parts of it like in an FPGA.
      The only place in consumer electronics where an FPGA would be useful would be in applications where space is critical, like PDAs and handhelds. There the FPGA could be reprogrammed to be used as different peripherals. For example, if you need a sound card, voila, the chip transforms into one. Then later you need a modem, and again it is programmed into the chip. It would save space by having one chip transforming into different chips. But I'm not even sure the gain would be that big compared to having one standard chip that contains video card/modem/sound card modules that can be turned on and off.
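
      To make the "$2 microcontroller programmed in C" point above concrete, here is a minimal C sketch of the kind of bang-bang control loop a coffeemaker firmware would run, simulated on a host; the thermal model and names are made up for illustration, not any vendor's API:

        #include <stdio.h>

        /* Host-side simulation of a bang-bang coffeemaker controller.  On a real
           $2 part, the two "hardware" lines below would poke an ADC and a GPIO pin;
           everything here is a made-up stand-in to show the shape of the loop. */
        int main(void)
        {
            int temperature_c = 20;        /* simulated thermistor reading */
            int heater_is_on  = 0;
            const int target_c = 92;       /* brew setpoint */

            for (int tick = 0; tick < 40; tick++) {
                heater_is_on   = (temperature_c < target_c);    /* the whole "algorithm" */
                temperature_c += heater_is_on ? 3 : -1;         /* crude thermal model   */
                printf("t=%2d  temp=%3d C  heater=%s\n",
                       tick, temperature_c, heater_is_on ? "on" : "off");
            }
            return 0;
        }

      One instruction stream, no parallelism, no reconfiguration: the sort of job where a cheap microcontroller beats an FPGA on cost and simplicity.
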
    • I could also be way off here, but I think that the point of this was to be able to compile programs in a high level language to actual hardware so that it would be faster and take less electricity and stuff. They added parallel computing to the mix, and now they have something really neat---if they can pull it off. For consumer electronics you use microcontrollers that you can program. They're basically little dinky computers that cost a few bucks apiece. But if you manage to have very parallel programs running on a bunch of FPGAs, that would be as if you had created the whole program using special-purpose hardware. Now if only more people were working on this, I'd get very excited.
  • by g4dget ( 579145 ) on Saturday February 15, 2003 @12:09PM (#5308899)
    People have been trying to use FPGAs for general purpose computing for as long as there have been FPGAs. Reconfigurable computing turns out to be pretty hard--it's hard to program these kinds of machines.

    Now, maybe someone will be able to make this go. But this company doesn't look like it. If you manage to get to their web site and look at the programming language "Viva" they have designed, it looks like you are drawing circuit diagrams. Imagine programming a complex algorithm with that.

    There are already better approaches to programming FPGAs (here [clemson.edu], here [colostate.edu], here [berkeley.edu]). Look for "reconfigurable computing" on Google and browse around.

    • FPGA experiences (Score:5, Informative)

      by goombah99 ( 560566 ) on Saturday February 15, 2003 @12:48PM (#5309035)
      I've brushed up against reconfigurable computing engineers in various applications I've had over the years. The last one was for trying to process laser radar returns coming in at gigabits per minute so we could do real-time 3-D chemical spectroscopy of the atmosphere at long range. The problem with conventional hardware was that the buses were too slow and the data rate too fast to cache, and too much to archive on disk. You could not efficiently break the task across multiple CPUs, since just transferring the information from one memory system to the next would become the bottleneck, breaking the system.

      FPGAs worked pretty well here because they could handle the fire-hose data rate from front to back. Their final output was a small number of processed bytes that could then go to a normal computer for display and storage.

      The problems the engineers had were twofold. First, in the early chips there were barely enough gates to do the job. And in the later ones from Xilinx there were plenty of transistors, but they were really hard to design properly. The systems got into race conditions, where you had to use software to figure out the dynamic properties of the chip to see if two signals would arrive at the next gate in time to produce a stable response. You had to worry about where on the chip two signals were coming from. It was ugly, and either you accepted instability or failed prototypes, or you put in extra gates to handle synchronization, which slowed the system down and caused you to waste precious gates.

      Still, my impression at the time was WOW: here is something that is going to work, it's just a matter of getting better hardware compilers. Since then Los Alamos has written a compiler that compiles C to hardware and takes into account all these details that it used to take a team of highly experienced engineers/artists to solve.

      Also someone leaked a project going on at National Instruments that really lit up my interest in this. I don't know what ever became of it, maybe nothing, but the idea was this. National Instruments makes a product called "LabView", which is a graphics-based programming language whose architecture is based on "data flows" rather than procedural programming. In data flows, objects emit and receive data asynchronously. When an object detects that all of its inputs are valid data, it fires, does its computation (which might be procedural in itself, or it might be a hierarchy of data-flow subroutines hidden inside the black box of the object) and emits its results as they become valid. There are no "variables" per se, just wires that distribute emitted data flows to other waiting objects. The nice thing about this language is that it's wonderful for instrumentation and data collection, since you don't always know when data will become available or in what order it will arrive from different sensors. Also there is no such thing as a syntax error, since it's all graphical wiring, no typing, thus it is very safe for industrial control of dangerous instruments.
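
      A rough C sketch of the "fires when all inputs are valid" rule described above; the struct and function names are invented for illustration, and real LabView/Viva objects are of course graphical rather than textual:

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy dataflow node: two input wires, one output wire.  The node may
           "fire" only once both inputs carry valid data; it then marks its own
           output valid so downstream nodes could fire in turn. */
        struct wire { double value; bool valid; };
        struct add_node { struct wire *a, *b, *out; };

        static bool try_fire(struct add_node *n)
        {
            if (!n->a->valid || !n->b->valid || n->out->valid)
                return false;                     /* inputs not ready, or already fired */
            n->out->value = n->a->value + n->b->value;
            n->out->valid = true;
            return true;
        }

        int main(void)
        {
            struct wire a = {0}, b = {0}, sum = {0};
            struct add_node add = { &a, &b, &sum };

            try_fire(&add);                       /* nothing happens: no valid inputs yet */
            a = (struct wire){ 2.5, true };       /* data arrives asynchronously...       */
            b = (struct wire){ 4.0, true };
            if (try_fire(&add))                   /* ...and the node fires on its own     */
                printf("sum wire now carries %.1f\n", sum.value);
            return 0;
        }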

      Anyhow, the idea was that each of these "objects" could be dynamically blown onto an FPGA. Each would be a small enough computation that it would not have design complications like race conditions, and all the objects would be self-timed with asynchronous data flows.

      The current state of the art seems to be that no one is widely using the C-to-hardware or flow-control languages. Instead they are still using these hideous dynamical modelling languages that don't meet the needs of programmers because they require too much knowledge of the hardware. I don't know why; maybe they are just too new.

      However, these things are not a panacea. For example, recently I went to the FPGA engineers here with a problem in molecular modeling of proteins. I wanted to see if they could put my Fortran program onto an FPGA chip. They could not, because 1) there was too much stored data required and 2) there was not enough room for the whole algorithm. So I thought, well, maybe they could put some of the slow steps onto the FPGA chip: for example, given a list of 1000 atom coordinates, return all one million pairwise distances. This too proved incompatible, for a different reason. When these FPGA chips are connected to a computer system, the bottleneck of getting data into and out of them is generally worse than that of a CPU (most commercial units are on PCMCIA slots or the PCI bus). Thus the proposed calculation would be much faster on an ordinary microprocessor, since most of the time is spent on reads and writes to memory. There was, however, one way they could do it faster, and that was to pipeline the calculations, say, 100 or 1000 fold deep, so that you ask for the answer for one array and then go pick up the answer to the array you asked about 1000 arrays ago. This would have complicated my program too much to be useful.
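
      For a sense of why the bus, not the arithmetic, was the problem, here is a plain C version of that pairwise-distance step: for 1000 atoms it reads only 3000 coordinates but produces roughly half a million distances, and it is that result traffic which a PCI- or PCMCIA-attached FPGA would have to ship back. (This is just the calculation itself, not any FPGA interface.)

        #include <math.h>
        #include <stdio.h>

        #define N_ATOMS 1000

        /* All-pairs distances: ~N*N/2 outputs from only 3*N inputs, so the
           output bandwidth dwarfs both the input bandwidth and the arithmetic. */
        int main(void)
        {
            static double x[N_ATOMS], y[N_ATOMS], z[N_ATOMS];
            static double dist[N_ATOMS * (N_ATOMS - 1) / 2];

            for (int i = 0; i < N_ATOMS; i++) {          /* fake coordinates */
                x[i] = i * 0.1; y[i] = i * 0.2; z[i] = i * 0.3;
            }

            size_t k = 0;
            for (int i = 0; i < N_ATOMS; i++)
                for (int j = i + 1; j < N_ATOMS; j++) {
                    double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
                    dist[k++] = sqrt(dx*dx + dy*dy + dz*dz);
                }

            printf("%zu pair distances from %d atoms (%zu bytes of results)\n",
                   k, N_ATOMS, k * sizeof(double));
            return 0;
        }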

      These new FPGAs are thus exciting because they are getting so large and have so much onboard storage and such fast internal buses that a lot of the problems I just mentioned may vanish.

      My knowledge of this is about a year out of date, so I apologize if some of the things I said are not quite state of the art. But I suspect it reflects the commercially available world.

      • Also there is no such thing as a syntax error, since it's all graphical wiring, no typing, thus it is very safe for industrial control of dangerous instruments.

        Ummm--that's kind of the equivalent of the panic glasses from the Hitchhiker's Guide to the Galaxy: they turn dark when there is anything dangerous around that might frighten you.

        When you get an error in a programming language, that's a good thing: it means that the language detected something you were trying to do that doesn't make sense. Error detection isn't perfect, but it's there for a reason. If you want as little error detection as possible, program in assembly language.

        FPGAs are probably one of the worst ways in which you could try to build reliable systems: they are hard to program and they lack error checking. Your best bet for building reliable systems is a very mature, simple microprocessor running a bulletproof, verified language implementation that has extensive built-in error checking and support for error recovery.

        • Also someone leaked a project going on at National Instruments that really lit up my interest in this.

          As a previous poster has replied, LabView is in wide distribution. It's aimed at the scientific and engineering markets, and, like AutoCAD and similar products, allows the user to enter program descriptions either in graphical or textual form.

          But, here's the kicker: every single heavy LabView user I know of (even the ones without extensive previous programming training or experience) drops the graphical interface in favor of the textual one. Further, I am familiar with one large project developed under LabView, and the opinion of the programmers involved was that it pretty much sucks eggs.

          What's the point or relevance here? National Instruments (not to be confused with National Semiconductor) has put a lot of time and effort into developing a graphical language that seems *perfect* for capturing data-flow like algorithms, just the kind of thing you'd want to run on an FPGA, and they've pretty much failed. (People continue to use LabView because, for that community, there currently is little better that has such wide support.) The problem of programming interface for things like this is hard, mostly because our algorithms are by-and-large not stateless.
      • "Also someone leaked a project going on at National Instruments that really lit up my interest in this."

        Labview has been available for quite some time now. It's very specialized software with almost no use in the mainstream that I can think of, but it's out there.
      • Eeek! (Score:5, Insightful)

        by Snork Asaurus ( 595692 ) on Saturday February 15, 2003 @02:53PM (#5309577) Journal
        the systems got into race conditions, where you had to use software to figure out the dynamic properties of the chip to see if two signals would arrive at the next gate in time to produce a stable response

        Precisely one of the reasons that I shriek in horror when I hear that some hardware was 'designed' by a clever software guy. What you describe "figure out the dynamic ... stable response" (a.k.a. timing analysis) is not done in debugging - it is part of the design from square one, and is part of proper hardware design practices.

        The fact that FPGA's are "programmable" does not move their development into the domain of software engineers.

        A whole spectrum of skills is required to do proper hardware design (being a good 'logician' is only one of them), and FPGA's are raw hardware, not a finished product like a motherboard. Timing and many other 'real-world' factors that must be considered bore the hell out of many 'logicians', but are critical to a reliable design.

        A frightening number of Rube Goldberg machines exist out there that were designed by people who know something of logic and nothing of hardware design. I've had to redesign several of these "works only under ideal conditions but it's brilliant" pieces of junk.

        Before you dismiss me as a hardware snob, let me tell you that I have spent many years on both sides of the street and have dedicated my most recent 10 years to learning the art of good software design (there was supposed to *cough* be a bigger future in it). Each requires a set of skills and abilities that do intersect, but many of which exist entirely outside of that intersection. The fact that "logic" is one of the intersecting skills does not make a good hardware designer good at software, nor does it make a good software designer good at hardware.

    • One interesting project I found a few years back is the RAW project at MIT [mit.edu]. It does pretty much the same thing, but they are no longer using FPGAs. (They use chips which are similar to FPGAs but geared towards computation.) Their first prototypes used FPGAs, though.

      Seems like the "programming language" is similar to LabView and such schematic programming languages. (Eg in Matlab you have Simulink.) Apparently there's quite a lot of people who find that easier to work with.

      Oh well, it's an interesting field. Let's just hope they don't get a bunch of ludicrous patents that stifle other research in the area.
  • No magic -- sorry (Score:5, Insightful)

    by Anonymous Coward on Saturday February 15, 2003 @12:13PM (#5308914)
    For a start: chip designers everywhere use FPGAs to prototype their designs. No magic; they are reasonably fast (but not as fast as custom designed chips), and way more expensive. Having a large array of them would indeed make it possible to run DES at a frightening speed -- but so would a mass of standard computers. The sticking point is that the collection of FPGAs emulating a standard CPU would be way slower for any given budget for CPUs than a custom chip (like the PII, PIII or AMD K7) -- and way more expensive.

    Think about it: both Intel and AMD (and everybody else) use FPGAs for prototyping their chips. If it was so much more efficient, why have they not released chips with this technology already?

    As for the reprogramming component part of this design: translating from low-level code to actual chip surface (which it still is very much about) is largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.

    Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process? :)
    • No magic; they are reasonably fast (but not as fast as custom designed chips), and way more expensive... although it would be really cool to run Magic (ASIC chip design software) on these things. Probably bitching fast, too, as you have a prototype board just sitting there. And you could use it to design those "custom chips" that are so much more efficient :-)

      BTW, you're right, context-switch would be a bitch, probably take 10 milliseconds.
    • I still think it would be cool to have a device that could play mp3s for a while, and then when a call came in, simply load in the wireless phone processor and go from there. I don't know if it would ever actually be cheaper than simply having both devices though.
    • by seanadams.com ( 463190 ) on Saturday February 15, 2003 @12:52PM (#5309050) Homepage
      For a start: chip designers everywhere use FPGAs to prototype their designs.

      Xilinx/Altera would not be in business if this were the only thing people used FPGAs for. There are some things you can do in an FPGA exceptionally well, e.g. pumping lots of data very quickly, and doing repetitive things like encryption, compression, and DSP functions. Generally speaking, the simpler the algorithm and the more it can be parallelized, the better it will work in hardware as compared to a CPU (yes, even a 4GHz Pentium might be slower per $).

      As for the reprogramming component part of this design: translating from low-level code to actual chip surface (which it still is very much about) is largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.

      I think it's a language problem more than a limitation of the synthesis/fitting tools. VHDL and Verilog are horrific. They are designed for coding circuits, not algorithms.

      Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process

      With today's FPGAs this is a real problem. They're designed to be loaded once, when the system starts up. What we need is an FPGA that can store several "pages" of configurations, and switch between them rapidly. The config would need to be writeable over a very fast interface, of course.
      • What we need is an FPGA that can store several "pages" of configurations, and switch between them rapidly.

        I was just thinking the exact same thing. When the reconfiguring process speeds up to the point where it loses only a few cycles instead of thousands, it could speed up certain processes considerably. Suppose the FPGA would start out in a 'basic' general purpose config, while a preprocessor would scan ahead and create several circuit schemes based on the code it finds. Something leaning towards compiler based optimisation, but in real-time. This would be a tricky task, but the boost could be significant.
        • MIT has a project like this called PipeWrench, I believe.
        • There is research going on in this field. E.g. it would allow you to reconfigure part of a pipeline while data is flowing through the chip.

          This is in fact already possible, but the reconfiguration time for large parts of a chip is generally way too slow for it to be usable. But if you have a design which allows you to reconfigure only a very small part of the chip, then it's doable during runtime. (Although you may need special boards to do it; I'm not sure how many developer boards actually support reconfiguration while running.)

          The idea of having small premade parts is already in use by, e.g., the RAW project at MIT. Doing runtime optimizations is probably never going to happen, though, because doing routing on a large FPGA can take days to complete.
    • Although FPGAs may be used to emulate CPUs etc., that does not maximize their potential speed and flexibility. Traditional CPUs are severely restricted to only one (or several) operations/cycle. Thus, most silicon (gates) on general-purpose CPUs is wasted during each cycle, with less than 1% active/cycle. FPGAs are inherently parallel, allowing orders of magnitude more operations/cycle. You can pack applications to maximize the operations/cycle and, if you exceed the 6 million gates/FPGA chip, even extend to additional FPGAs & FPGA boards. This allows tailoring FPGAs to applications in a reconfigurable way to optimize silicon use. Viva simplifies coding of large-scale applications in a 3-dimensional way (x & y screen axes plus drilling in for the 3rd dimension), which is more intuitive than traditional 1-dimensional sequential line-by-line ASCII coding. The next generation [cox.net] seems to adapt well to graphic (iconic) coding, perhaps better than many of us who may have our tradition in 1-D ASCII coding.
    • The introduction to this article addresses most of your points: "Iterative Matrix Equation Solver for a Reconfigurable FPGA-Based Hypercomputer" [starbridgesystems.com]. I'm certainly no expert in chip design, but what they are saying makes some sense:

      Your point about speed:

      "... the collection of FPGA:s emulating a standard CPU would be way slower ..."

      Their point is that you aren't emulating a standard CPU. Their approach is for applications that involve "Solving systems of simultaneous linear equations ...". The traditional approach is many generic CPUs in parallel. From the article:

      "However, this type of parallelism is inefficient, using only a small fraction of CPU resources at any given time, while the rest of the silicon lies idle and wastes power. CPUs are designed to be general and capable of performing any function they will ever need to perform. Therefore, they contain many resources that are rarely used. In addition, the inter processor communication time required by traditional matrix equation solvers seriously limits the number of processors that may operate efficiently in parallel optimize chips is normally a long and tedious process not available or feasible to most programmers."

      You argue cost:

      ... for any given budget for CPUs ... and way more expensive.

      The article argues that a single FPGA can probably replace a whole lot of CPUs (because it can process as much in parallel as you can cram on the chip). One could also point out that if this type of technology becomes more prevalent, higher production volumes would lower FPGA costs. I guess we'd have to see some ROI analysis - how many CPUs can they replace with an FPGA? Could you get one workstation-class device to replace a cluster or mainframe? Most of their articles discuss a technology in the proof-of-concept stage, so it will be a while before we can talk about which situations it pays off to use this in.

      Your third point, that it's hard to code FPGAs:

      "...translating from low-level code to actual chip surface ... is largely a manual even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job."

      A major thrust of StarBridge Systems seems to be creating easy-to-use and effective tools to do exactly this. Read the sections about their Viva technology. Even if it doesn't do it perfectly, it may do it well enough.
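
      For readers who have not met an iterative solver: below is a plain textbook Jacobi iteration in C for a tiny diagonally dominant system. This is a generic method, not Star Bridge's algorithm; the point is only that each row's update within a sweep is independent of the others, which is the kind of parallelism the paper wants to map onto gates.

        #include <math.h>
        #include <stdio.h>

        /* Textbook Jacobi iteration for A x = b on a small 3x3 example.
           Every row update in a sweep uses only the previous iterate, so all
           rows could be updated simultaneously in hardware. */
        #define N 3

        int main(void)
        {
            double A[N][N] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
            double b[N]    = {15, 10, 10};
            double x[N]    = {0, 0, 0}, xn[N];

            for (int iter = 0; iter < 50; iter++) {
                for (int i = 0; i < N; i++) {            /* rows are independent */
                    double s = b[i];
                    for (int j = 0; j < N; j++)
                        if (j != i) s -= A[i][j] * x[j];
                    xn[i] = s / A[i][i];
                }
                double err = 0;
                for (int i = 0; i < N; i++) { err += fabs(xn[i] - x[i]); x[i] = xn[i]; }
                if (err < 1e-10) { printf("converged after %d sweeps\n", iter + 1); break; }
            }
            printf("x = %.4f %.4f %.4f\n", x[0], x[1], x[2]);
            return 0;
        }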

  • ...Does anybody else find it remotely unnerving that NASA is working with a computer system named "HAL"?!

    Arthur C. Clarke might have been on to something... First the geosynchronous satellite, now this!?
    • Better than that, a key NASA researcher attempting to convert Cray weather codes is Dave. However, we're targeting the new Starbridge HC system with Xilinx FPGA chips with 6 million gates rather than the 62K gates of HAL.
  • by wirelessbuzzers ( 552513 ) on Saturday February 15, 2003 @12:16PM (#5308924)
    I looked at this site several years ago, and thought, "whoa, cool idea, FPGAs would make a really fast computer." Then for two years, nothing to show for this idea. And after I programmed some FPGAs, I realized (at least partly) why: they're too slow to program. It takes on the order of milliseconds to reprogram even a moderate-sized FPGA.

    And even a very large FPGA would be pretty lousy at doing SIMD, vector ops, etc. Basically, they would suck at emulating a computer's instruction set, which is (fairly well) optimized for what software actually needs to do. I can't think of many algorithms used by software today that would work much better in an FPGA, except for symmetric crypto. And if you need to do that, get an ASIC crypto chip, 10s of dollars for umpity gigs/second throughput. SPICE might also run a bit faster on these (understatement), but those types already have decent FPGA interfaces.

    Furthermore, the processor programming these FPGAs must have some serious power... if you have to do many things on an FPGA at once (which you do if there are only 11 of them), you basically have to place & route on the fly, which is pretty slow.

    So, I don't think that these "hypercomputers" will ever be any better than a traditional supercomputer in terms of price/performance, except for specialized applications. And even then, it won't be any better than an application-specific setup. And not many people need to go back and forth between specialized tasks. (Who am I to complain about price/performance? I'm a Mac user.)

    That said, if they *can* put a hypercomputer on everyone's desk for $1,000.00, more power to them!
    • I looked at this site several years ago, and thought, "whoa, cool idea, FPGAs would make a really fast computer." Then for two years, nothing to show for this idea. And after I programmed some FPGAs, I realized (at least partly) why: they're too slow to program. It takes on the order of milliseconds to reprogram even a moderate-sized FPGA.

      "This site" is hardly the forerunner in reconfigurable computing. Look for "reconfigurable computing" on Google, and you will find that academic research labs have been looking at it for as long as there have been FPGAs.

      There are probably better tradeoffs than FPGAs for reconfigurable computing: rather than reconfiguring gates, it may make sense to reconfigure arithmetic circuits. There has been some work in that area. The point is that FPGAs are nice because they are commodity hardware, but they are probably a pretty suboptimal choice for reconfigurable computing.

  • by Anonymous Coward
    How long will it take their server to reconfigure (or melt)? I guess it can't handle the Slashdot effect.
  • How about in the slashdot summary, we give some idea what we're talking about from now on.
  • News for the curious. Stories that you can't really find.
  • by Anonymous Coward on Saturday February 15, 2003 @12:31PM (#5308972)
    According to this presentation [nasa.gov], the NSA is involved with two projects.

    Going from 4 GFLOPS in Feb '01 to 470 GFLOPS in Aug '02 for ten FPGAs, that's about 120 times faster in little over a year. Not bad.

    Any thoughts on what this means for crypto cracking capability?

    • Yes, and the theoretical peak rate of the GeForce 4200 is 1 TFLOP. Anyway, FPGAs have a little problem: they're too slow. That's balanced by the fact that they can be reprogrammed (even by themselves). ASICs are almost always faster.
    • ASICs are much faster, and if you're going to throw a lot of horsepower at brute-forcing a problem, then a low-volume ASIC run is a better bet. Here's the EFF's DES cracker [eff.org], which used ASICs. If you want to crack DES, you can use the mask sets they've paid for. Or, you can use upcoming one-mask technology that allows you to build a mostly-custom chip at 1/10 the cost.
  • by Dolphinzilla ( 199489 ) on Saturday February 15, 2003 @12:32PM (#5308979) Journal
    We started using FPGA's in our HPC designs where I work several years ago - the designs are faster, more reliable, and quicker to design. StarBridge's graphical development environment is a lot like another product sold by Annapolis Micro called Corefire [annapmicro.com].
    Corefire is a Java-based graphical (iconic) development environment for Xilinx FPGAs. Like anything else, though, sometimes programming in VHDL will be a better choice; it depends on the complexity of the design and the desired end result. But all in all we probably saved at least 6 man-months of design time using Corefire.
  • More information (Score:5, Informative)

    by olafo ( 155551 ) on Saturday February 15, 2003 @12:38PM (#5308991)
    More technical information is found in MAPLD Paper D1 [klabs.org] and other reports [nasa.gov]. NASA Huntsville, NSA, USAF (Eglin), University of South Carolina, George Washington University, George Mason University, San Diego Supercomputer Center, North Carolina A&T and others have StarBridge Hypercomputers they are exploring for diverse applications. The latest StarBridge HC contains Xilinx FPGAs with 6 million gates, compared to the earlier HAL-Jr with only 82,000 gates. Costs are nowhere near $26 million; NASA spent approximately $50K for two StarBridge systems.
  • ... you'd think they would use it to host their web page [starbridgesystems.com] har har!

    Of course, the slashdotting it is starting to succumb to might be because they spent so much on developing the machine that they could only afford hosting off a single little DSL connection. After all, they certainly haven't spent much on PR either, as they do not garner many search hits on the net or widespread press...
  • by KingPrad ( 518495 ) on Saturday February 15, 2003 @12:53PM (#5309055)
    There is a lot of work being done with adaptive computing involving a combination of a general CPU and an FPGA. The CPU takes care of normal work, but processing-intensive repetitive tasks can be pushed onto the FPGA. The FPGA is basically reconfigured as needed into a dedicated signal processor which can churn through a set of complex instructions in a single step rather than a few dozen clock cycles on a general purpose CPU.

    The way it works then is that a board is made with a normal CPU and an FPGA next to it. At program compile time a special compiler determines which algorithms would bog down the processor and develops a single-cycle hardware solution for the FPGA. That information then becomes part of the program binary so at load time the FPGA is so configured and when necessary it processes all that information leaving the CPU free. The FPGA can of course be reconfigured several times during the program, the point being to adapt as necessary. The time to reconfigure the FPGA is unimportant when running a long program doing scientific calculations and such.

    It's a pretty nifty system. Some researchers have working compilers, and they have found 6-50x speedups with many operations. The whole program won't speed up that much of course, but it leaves the main CPU more free when running repetitive scientific and graphics programs.

    You can find information in the IEEE archives or search google for 'adaptive computing'. It's a neat area with a lot of promise.
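
    As a hedged illustration of the kind of kernel such a compiler would push onto the FPGA, here is an ordinary FIR filter loop in C; the pragma comment is a purely hypothetical marker, not any real tool's syntax:

      #include <stdio.h>

      #define TAPS 64
      #define SAMPLES 4096

      /* A FIR filter: the kind of repetitive, data-parallel inner loop an
         adaptive-computing compiler would map onto the FPGA while the CPU
         handles everything else.  (The pragma below is illustrative only.) */
      int main(void)
      {
          static float coeff[TAPS], in[SAMPLES + TAPS], out[SAMPLES];

          for (int i = 0; i < TAPS; i++)           coeff[i] = 1.0f / TAPS;
          for (int i = 0; i < SAMPLES + TAPS; i++) in[i] = (float)((i % 16) - 8);

          /* #pragma offload_to_fpga  -- hypothetical marker for the hot loop */
          for (int n = 0; n < SAMPLES; n++) {
              float acc = 0.0f;
              for (int k = 0; k < TAPS; k++)       /* 64 multiply-accumulates that   */
                  acc += coeff[k] * in[n + k];     /* dedicated hardware can overlap */
              out[n] = acc;
          }

          printf("out[0]=%g  out[%d]=%g\n", out[0], SAMPLES - 1, out[SAMPLES - 1]);
          return 0;
      }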

  • From what I have read about reconfigurable chips at Sci American and other websites, while they can do wonders for certain applications they still can't match the wiring of a 'vector processor'. Vector chips are very efficient. I have always wondered why the industry has turned its back on them. The Linux/Intel solution is not as efficient as everyone thinks. Too much heat, and networking the chips has its difficulties. The Japanese NEC vector supercomputer is now way ahead of the USA's. If you don't believe me, then go here and learn what top US scientists say; good article. Go down 3 articles. NewsFactor Portal [newsfactor.com]
    • Vector processors are very much in use today. All current processors support them in some way through SSE and similar instructions, and the G4's AltiVec has a lot more to offer in the same area. Furthermore, if you have a reasonably current graphics card then it uses vector processing as well.

      Nobody said vector processors are dead. They just tend to be overkill for most applications. (And hence they are instead used as a kind of co-processor.)
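
      A minimal C example of the SSE point, using the <xmmintrin.h> intrinsics (generic x86 SSE usage, nothing specific to the hardware discussed in this thread): four single-precision multiplies issue as one vector instruction.

        #include <stdio.h>
        #include <xmmintrin.h>   /* SSE intrinsics: 4-wide single-precision vectors */

        int main(void)
        {
            float in[4] = {1.0f, 2.0f, 3.0f, 4.0f};
            float out[4];

            __m128 v = _mm_loadu_ps(in);            /* load four floats        */
            __m128 s = _mm_set1_ps(2.5f);           /* broadcast a scalar      */
            _mm_storeu_ps(out, _mm_mul_ps(v, s));   /* four multiplies at once */

            printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
            return 0;
        }
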
  • by OoSync ( 444928 ) <wellsed@gmailBOHR.com minus physicist> on Saturday February 15, 2003 @03:42PM (#5309847)
    I cannot remember the name of the project, but two years ago a Chinese group published a paper where they used a Xilinx FPGA on a custom circuit plugged into a PC SDRAM slot. The idea was to limit the communication bottlenecks of other PC buses and also to present a simple way to communicate with the FPGA. All in/output to/from the FPGA was done with simple mmap() routines. Their test application was a DES code breaker that could run as fast as the memory subsystem could take it. Exciting stuff. And it has to be said: I wish I had a Beowulf cluster with these.
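
    A hedged sketch of the mmap() idea described above: map a device window and treat FPGA registers as ordinary memory. The "/dev/fpga0" path and the register layout are invented for illustration; only the open/mmap/munmap calls themselves are standard POSIX.

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      #define FPGA_WINDOW_BYTES 4096

      int main(void)
      {
          /* "/dev/fpga0" and the register offsets are hypothetical stand-ins for
             whatever window a memory-slot FPGA board would expose. */
          int fd = open("/dev/fpga0", O_RDWR | O_SYNC);
          if (fd < 0) { perror("open"); return 1; }

          volatile uint32_t *regs = mmap(NULL, FPGA_WINDOW_BYTES,
                                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

          regs[0] = 0xDEADBEEF;            /* write an input word to the FPGA */
          uint32_t result = regs[1];       /* read a result register back     */
          printf("FPGA returned 0x%08x\n", (unsigned)result);

          munmap((void *)regs, FPGA_WINDOW_BYTES);
          close(fd);
          return 0;
      }
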
  • Was the paper too simple? I mean, I was able to understand every single thing in the paper, as if it was written for a layman. Since when did NASA guys start writing papers like that? But anyway, the other day I was discussing the use of FPGAs in an embedded system and we reached the conclusion that an FPGA consumes too much power to implement a whole CPU out of it. That said, an FPGA is surely an interesting concept for specialized computation, as the article mentions. But this is very much limited by how frequently you can reprogram the FPGA, and if I am not wrong, this is not a very fast process, not comparable to the number of instructions executed by a modern-day CPU. So effectively, massively parallel applications are fine, but if you have to reprogram your FPGAs too often then you won't get the proposed performance boost.
  • FPGA problems (Score:4, Interesting)

    by wumpus2112 ( 649012 ) on Saturday February 15, 2003 @04:54PM (#5310216)
    FPGAs are great for complex control logic in hardware. They can also be used for DSP functions. Using FPGAs for general-purpose computing efficiently is difficult. You essentially start out with a 100:1 handicap against a commodity CPU (this comes from the number of transistors per gate, using VHDL vs. custom design, and the way routing wires have to be universal). Programming such a beast becomes an exercise in finding huge amounts of parallelism that doesn't require memory accesses (FPGAs have limited RAM on board, but don't expect much, and you have to share it across hundreds of functions). Supposedly there are C-to-hardware compilers out there, but I can't see Joe Software Designer chugging out code that carefully checks to see how every line affects the clock rate (remember: in software you have 10% of the code executed 90% of the time; in hardware you have 100% of the code executed 100% of the time. The FPGA can only clock as fast as the slowest path).

    The economics are probably the worst problem. These sorts of things are most likely to go into government or military installations where the contract says the hardware has to do (impossible thing) and be maintained for x years. The device gets made, the customer changes the requirements, spin, repeat, until it ships as an expensive system that a simple desktop could handle by ship date. If you build a Beowulf cluster to do something, with a little foresight you can turn around in two years and double the power. With an FPGA design, you may be able to buy an off-the-shelf board that has advanced (if the company is still in business, not guaranteed), but then you have to dig into the source and modify it for the new chips. This gets expensive fast.
  • While FPGAs are definitely very useful for specific problems they are way too difficult to use for replacing a general purpose CPU. At the company I work for we use FPGAs for prototyping our ASIC. Each such Xilinx FPGA costs about $1000! Sure, it beats the hell out of any DSP but it takes an entire development team and a substantial amount of time to design the FPGA equivalent of whatever algorithm you're trying to implement! It would be nice if we had a C->FPGA compiler but... just take a look at how much difficulty Intel is having with their VLIW Itanic! Good parallelizing compilers are still a good research project rather than a proven product. Nonetheless, I would love to get my hands on that machine...
  • These days anything that isn't related to Linux or Windows or a new video card is considered hype or a hoax. It's sad how closed-minded and unexciting computers have become. We have huge debates about whether X11, which dates from 1984, should be retired, and the end result is always "well, we've gotten this far with it, so let's keep going." And so it goes.
  • Hmm... computing platform heavily dependent on runtime-configurable FPGAs. Doesn't that sound like the Commodore-One (MSRP 250 Euros)?
