Star Bridge FPGA "HAL" More Than Just Hype
Gregus writes "Though mentioned or discussed in previous /. articles, many folks (myself included) presumed that the promises of Star Bridge Systems were hype or a hoax. Well, the good folks at NASA Langley Research Center have been making significant progress with this thing. They have more info and videos on their site, beyond the press release and pictures posted here last year. So it's certainly not just hype (though $26M for the latest model is a bit beyond the $1,000 PC target)."
uhh, (Score:5, Funny)
Re:uhh, (Score:1)
I have no idea - I was going to make a 'HAL 9000 from 2001' comment here, but I'm worried it might actually be On Topic...
Re:uhh, (Score:2)
http://hummer.larc.nasa.gov/acmbexternal/Personne
If you watch the "speedup" movie, the guy talks about processing speeds equivalent to "100,000 gigs" (not sure if that's GHz or GFLOPS or what, though). That sounds awfully fast. The demo shows the thing calculating fractals 35x faster than a PC while consuming only 0.1% of the resources.
Obviously, I have no clue how this thing works other than that it's mighty fast. I'm also thinking that with a bunch of these things, cracking RSA might not be so difficult after all.
$26M ...just a drop in the bucket (Score:5, Funny)
Re:$26M ...just a drop in the bucket (Score:1)
And Kevin Mitnick did more than five times that in damages in the mid-nineties, so it can't be that hard.
What is Star Systems? (Score:5, Insightful)
That's directly from their site. I wish the /. summary had mentioned parallel hypercomputers. And note that when you search Google for "parallel hypercomputers", you only get the one hit from Star Bridge Systems (and soon you'll get a hit for this comment on /. ;-)). No wonder people thought this was a hoax.
Re:What is Star Systems? (Score:2)
Not that this technology isn't interesting, but the writeup above is awful!
What is a "hypercomputer"? (Score:2)
Is "hypercopmuter" a real word with a standardized definition?
Re:What is a "hypercomputer"? (Score:2)
Re: "hypercopmuter" (sic) (Score:2)
Never heard of it. But anything to quiet those pesky, over-zealous, redneck sheriff's deputies sounds good to me!
Sorry, couldn't pass it up...
Re:What is a "hypercomputer"? (Score:3, Informative)
Your original search: hypercopmuter returned zero results.
The alternate spelling: hypercomputer returned the results below.
Here's a Feb. 1999 Wired article [wired.com] that explains what Star Bridge considers a hypercomputer.
Re:What is a "hypercomputer"? (Score:1)
Re:What is a "hypercomputer"? (Score:1)
"""
It is called a fractal architecture, where the structure of the lower level is repeated at the higher level.
"""
Wow - they've reinvented the binary tree, but given it a new, modern name. I'm _sooooo_ happy for them.
YAW.
Re:What is a "hypercomputer"? (Score:1)
I was just overly annoyed at them, as they told me that I'd "Loaded page in 0.012 seconds" when it took about 2 fucking seconds. That means:
a) they're liars
b) they're tossers for making such a fucking stupid statement.
Anger now vented. Back as you were.
YAW.
Re:What is Star Systems? (Score:1)
Re:What is Star Systems? (Score:1)
(It was probably more like Occam than C, to be honest, as parallelism was a given. However, C. A. R. Hoare was not involved in this spin-off.)
YAW.
Re:What is Star Systems? (Score:3, Insightful)
Finally a solution! (Score:5, Funny)
Re: Finally a solution! (Score:1)
> Here we see the solution to the problem of too many comments about a
Yeah, but it sure makes it hard to figure out who to flame for not reading the story.
Re:Finally a solution! (Score:1)
Heh (Score:4, Funny)
I'm having waaay more fun than I should be, refreshing the page and watching the load times get longer . . . and looooonnnnger . . . . and looooonnnnnnngggggggger.
Hey, it beats workin'.
Re:Heh (Score:1)
So I assume that the numbers get added to the page when it is rendered on the server side. Is this some sort of Apache plugin?
Re:Heh (Score:1)
Daisy Daisy... (Score:3, Funny)
I can see it now...
*techie smacks the machine*
HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."
Re: Daisy Daisy... (Score:1)
> HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."
We've missed our window of opportunity for creating HAL. If we started today, once he obtained basic sentience he'd waste all his time trolling Slashdot instead of doing his homework, and never pass his qualifications for flying a spaceship.
Well, a working Starbridge would be cool... (Score:2)
More seriously, the programming language for this smells a bit snake-oilish, as do most parallel programming languages, especially those touted by hardware companies. (Occam, anyone?)
Re:Well, a working Starbridge would be cool... (Score:1)
Consumer usage (Score:2, Insightful)
Re:Consumer usage (Score:1)
The only place in consumer electronics where an FPGA would be useful is in applications where space is critical, like PDAs and handhelds. There the FPGA could be reprogrammed to act as different peripherals. For example, if you need a sound card, voila, the chip transforms into one. Then later you need a modem, and again it is programmed into the chip. It would save space by having one chip transform into different chips. But I'm not even sure the gain would be that big compared to having one standard chip that contains video card/modem/sound card modules that can be turned on and off.
Re:Consumer usage (Score:1)
reconfigurable hype (Score:3, Insightful)
Now, maybe someone will be able to make this go. But this company doesn't look like it. If you manage to get to their web site and look at the programming language "Viva" they have designed, it looks like you are drawing circuit diagrams. Imagine programming a complex algorithm with that.
There are already better approaches to programming FPGAs (here [clemson.edu], here [colostate.edu], here [berkeley.edu]). Look for "reconfigurable computing" on Google and browse around.
FPGA experiences (Score:5, Informative)
FPGAs worked pretty well here because they could handle the fire-hose data rate from front to back. Their final output was a small number of processed bytes, which could then go to a normal computer for display and storage.
The problems the engineers had were twofold. First, in the early chips there were barely enough gates to do the job. And in the later ones from Xilinx there were plenty of transistors, but they were really hard to design properly. The systems got into race conditions, where you had to use software to figure out the dynamic properties of the chip to see if two signals would arrive at the next gate in time to produce a stable response. You had to worry about where on the chip two signals were coming from. It was ugly: either you accepted instability or failed prototypes, or you put in extra gates to handle synchronization--which slowed the system down and caused you to waste precious gates.
Still, my impression at the time was WOW: here is something that is going to work; it's just a matter of getting better hardware compilers. Since then Los Alamos has written a compiler that compiles C to hardware and takes into account all these details that used to take a team of highly experienced engineers/artists to solve.
Also, someone leaked a project going on at National Instruments that really lit up my interest in this. I don't know what ever became of it, maybe nothing, but the idea was this. National Instruments makes a product called "LabVIEW", which is a graphics-based programming language whose architecture is based on "data flows" rather than procedural programming. In data flows, objects emit and receive data asynchronously. When an object detects that all of its inputs are valid data, it fires, does its computation (which might be procedural in itself, or might be a hierarchy of data-flow subroutines hidden inside the black box of the object), and emits its results as they become valid. There are no "variables" per se, just wires that distribute emitted data flows to other waiting objects. The nice thing about this language is that it's wonderful for instrumentation and data collection, since you don't always know when data will become available or in what order it will arrive from different sensors. Also, there is no such thing as a syntax error, since it's all graphical wiring, no typing; thus it is very safe for industrial control of dangerous instruments.
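To make that firing rule concrete, here is a minimal Python sketch of the semantics described above (the Wire/Node names are illustrative inventions, not LabVIEW's API): a node fires only once every input slot holds valid data, consumes the inputs, and emits its result to whatever wires are listening.

```python
class Wire:
    """A wire distributes an emitted value to every connected input slot."""
    def __init__(self):
        self.listeners = []          # list of (node, input_slot) pairs

    def emit(self, value):
        for node, slot in self.listeners:
            node.receive(slot, value)


class Node:
    """Fires when ALL inputs are valid; consumes them and emits the result."""
    def __init__(self, n_inputs, func, out_wire):
        self.inputs = [None] * n_inputs      # None means "not yet valid"
        self.func, self.out = func, out_wire

    def receive(self, slot, value):
        self.inputs[slot] = value
        if all(v is not None for v in self.inputs):    # the firing rule
            args, self.inputs = self.inputs, [None] * len(self.inputs)
            self.out.emit(self.func(*args))


# Wire up (a + b) * c; data may arrive in any order.
w_sum, w_out = Wire(), Wire()
add = Node(2, lambda a, b: a + b, w_sum)
mul = Node(2, lambda x, y: x * y, w_out)
sink = Node(1, lambda v: print("result:", v), Wire())
w_sum.listeners.append((mul, 0))
w_out.listeners.append((sink, 0))

add.receive(1, 4)    # b arrives first: nothing fires yet
mul.receive(1, 10)   # c arrives: mul still waits on its other slot
add.receive(0, 3)    # a arrives: add fires (7), then mul fires: prints 70
```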
Anyhow, the idea was that each of these "objects" could be dynamically blown onto an FPGA. Each would be a small enough computation that it would not have design complications like race conditions, and all the objects would be self-timed with asynchronous data flows.
The current state of the art seems to be that no one is widely using the C-code or the flow-control languages. Instead they are still using these hideous dynamical-modelling languages that don't meet the needs of programmers because they require too much knowledge of the hardware. I don't know why; maybe they are just too new.
However, these things are not a panacea. For example, recently I went to the FPGA engineers here with a problem in molecular modeling of proteins. I wanted to see if they could put my Fortran program onto an FPGA chip. They could not, because 1) there was too much stored data required and 2) there was not enough room for the whole algorithm. So I thought, well, maybe they could put some of the slow steps onto the FPGA chip: for example, given a list of 1000 atom coordinates, return all 1 million pairwise distances. This too proved incompatible, for a different reason. When these FPGA chips are connected to a computer system, the bottleneck of getting data into and out of them is generally worse than that of a CPU (most commercial units are on PCMCIA slots or the PCI bus). Thus the proposed calculation would be much faster on an ordinary microprocessor, since most of the time is spent on reads and writes to memory! There was, however, one way they could do it faster, and that was to pipeline the calculations, say 100 or 1000 fold deep, so that you ask for the answer for one array and then go pick up the answer to the array you asked about 1000 arrays ago. This would have complicated my program too much to be useful.
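A back-of-envelope illustration of why that offload loses to the bus. The kernel below is the computation in question (the ~500,000 unique pairs behind the "1 million" figure, which counts both orders), and the bandwidth number is an assumption for a circa-2002 PCI bus, not a measurement:

```python
import math

def pairwise_distances(coords):
    """The kernel to offload: all N*(N-1)/2 unique pair distances."""
    out, n = [], len(coords)
    for i in range(n):
        xi, yi, zi = coords[i]
        for j in range(i + 1, n):
            xj, yj, zj = coords[j]
            out.append(math.sqrt((xi-xj)**2 + (yi-yj)**2 + (zi-zj)**2))
    return out

# Why the bus kills it (numbers are illustrative assumptions):
N          = 1000
bytes_in   = N * 3 * 8               # 24 KB of coordinates go in
bytes_out  = N * (N - 1) // 2 * 8    # ~4 MB of distances come back out
pci_bw     = 100e6                   # ~100 MB/s, an assumed PCI-era bus
transfer_s = (bytes_in + bytes_out) / pci_bw
print(f"I/O per call: {transfer_s * 1e3:.1f} ms")   # ~40 ms just moving data
```

Four megabytes out for 24 KB in: the result set is roughly 170x the input, so the FPGA would idle on the bus no matter how fast its arithmetic is - which is just the conclusion the engineers reached.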
these new FPGAs are thus exciting because they are getting so large and have so much onboard storage and fast internal busses that a lot of the problems I just mentioned may vanish.
My knowledge of this is about a year out of date, so I apologize if some of the things I said are not quite state of the art. But I suspect it reflects the commercially available world.
Re:FPGA experiences (Score:2)
Ummm--that's kind of the equivalent of the panic glasses from the Hitchhiker's Guide to the Galaxy: they turn dark when there is anything dangerous around that might frighten you.
When you get an error in a programming language, that's a good thing: it means that the language detected something you were trying to do that doesn't make sense. Error detection isn't perfect, but it's there for a reason. If you want as little error detection as possible, program in assembly language.
FPGAs are probably one of the worst ways in which you could try to build reliable systems: they are hard to program and they lack error checking. Your best bet for building reliable systems is a very mature, simple microprocessor running a bulletproof, verified language implementation that has extensive built-in error checking and support for error recovery.
Re:FPGA experiences (Score:2)
And the IDE for a programming language like Java will not let you compile programs with syntax errors or type errors.
There aren't IDEs or compilers that flag bad algorithms.
But the error checking that exists for programming languages is still vastly superior to anything that exists for hardware or circuit programming: making circuits work correctly is still a lot harder than making equivalent software work correctly.
Re:FPGA experiences (Score:2)
As a previous poster has replied, LabView is in wide distribution. It's aimed at the scientific and engineering markets, and, like AutoCAD and similar products, allows the user to enter program descriptions either in graphical or textual form.
But, here's the kicker: every single heavy LabView user I know of (even the ones without extensive previous programming training or experience) drops the graphical interface in favor of the textual one. Further, I am familiar with one large project developed under LabView, and the opinion of the programmers involved was that it pretty much sucks eggs.
What's the point or relevance here? National Instruments (not to be confused with National Semiconductor) has put a lot of time and effort into developing a graphical language that seems *perfect* for capturing data-flow like algorithms, just the kind of thing you'd want to run on an FPGA, and they've pretty much failed. (People continue to use LabView because, for that community, there currently is little better that has such wide support.) The problem of programming interface for things like this is hard, mostly because our algorithms are by-and-large not stateless.
Re:FPGA experiences (Score:2)
Labview has been available for quite some time now. It's very specialized software with almost no use in the mainstream that I can think of, but it's out there.
Eeek! (Score:5, Insightful)
Precisely one of the reasons that I shriek in horror when I hear that some hardware was 'designed' by a clever software guy. What you describe ("figure out the dynamic ... stable response", a.k.a. timing analysis) is not done in debugging - it is part of the design from square one, and is part of proper hardware design practices.
The fact that FPGAs are "programmable" does not move their development into the domain of software engineers.
A whole spectrum of skills is required to do proper hardware design (being a good 'logician' is only one of them), and FPGAs are raw hardware, not a finished product like a motherboard. Timing and many other 'real-world' factors that must be considered bore the hell out of many 'logicians', but they are critical to a reliable design.
A frightening number of Rube Goldberg machines exist out there that were designed by people who know something of logic and nothing of hardware design. I've had to redesign several of these "works only under ideal conditions but it's brilliant" pieces of junk.
Before you dismiss me as a hardware snob, let me tell you that I have spent many years on both sides of the street and have dedicated my most recent 10 years to learning the art of good software design (there was supposed to *cough* be a bigger future in it). Each requires a set of skills and abilities that do intersect, but many of which exist entirely outside of that intersection. The fact that "logic" is one of the intersecting skills does not make a good hardware designer good at software, nor does it make a good software designer good at hardware.
Re:relax dude (Score:2)
It was not intended to be a rip at him and if it seems that way, I'm sorry.
But I am seriously disturbed by the rate of propagation of the utter falsehood that because FPGAs are programmable, "Joe Programmer" can properly implement a reliable design with them (just as the simple fact that I can type and develop logic does not automatically mean that I have the skills to be a great programmer).
Assuming that "Joe Programmer" can is VERY dangerous, and indeed in some cases could be a life-threatening assumption. God forbid that you should ever depend on a critical system where any portion of the hardware was based on programmable logic implemented by a programmer because he could program them. There's a hell of a lot more to proper electronics hardware design than implementing logic (and designing with FPGAs is electronics design, not programming). It requires a vastly different set of skills than "Joe Programmer" will have, and the solution does not simply boil down to "proper tools" like you suggest. There is a reason why Electrical Engineers study a lot of different courses than Software Engineers do.
To be balanced the same applies in the reverse direction. Electrical Engineers do not automatically have the skills to be good Software Engineers. But that is not what is at issue here.
Please read and remember: The fact that FPGAs are "programmable" does not move their development into the domain of software engineers.
Forget SW engineers, how about plain ol' h4x0rz (Score:2)
Now that's not meant to be a personal attack, just a little joke from someone who comes from a family littered with engineers. But it also goes straight to my point, which is that FPGAs do begin to move chip functionality into the realm of the hacker.
By definition, these are people who tend to flout conventions. That's not to say that they might not have impressive technical backgrounds; on the contrary, it's usually one of the defining elements of such an individual. But this is a category of people who are looking for answers to problems that might not even attract the attention of a more rigorous professional engineer.
For these people, the FPGA is truly a revolutionary advance. Take OpenCores, for instance. Here we see black-box IP cores that enable people who have neither hardware nor software design skills to begin tinkering with FPGAs. I know a guy who is a total computer idiot who works downloading video encoders onto FPGAs for video production units. This guy couldn't program a DOS batch file or re-wire a broken lamp, but his job description makes him an FPGA Engineer. Given such realities, I think the assertion that a typical programmer type is somehow dangerous at the controls of an FPGA is a bit of an overreaction.
Re:Forget SW engineers, how about plain ol' h4x0rz (Score:2)
> It seems the logic of this thread
Good one.
> relies a bit too heavily on strict definitions of software engineers and hardware engineers.
That's a matter of convenience and nothing else. If you'd prefer, you can preface each use of the term with 'people that have the typical skills of'.
Personally, I think that one of the biggest problems that the engineering community has is the "Berlin Wall" approach to software and hardware and that we'd all have much better systems if each had a better understanding of what the people on the other side do, how, and why. Speaking as one who has dodged the bullets and crossed that wall more than once (and sometimes perches on top of it shouting obscenities at each side for their snobbery), I can tell you quite authoritatively that each side consists of good, honest, bright, hard-working people who do vastly different things and generally don't know much about how the people on the other side work, yet think they know it quite well.
> What about those of us who think there are two kinds of engineers: those who drive trains, and those with sticks up their asses?
There's a difference between being an elitist and being a realist. Let me assure you that I am the latter, not the former.
> Now that's not meant to be a personal attack, just a little joke from someone who comes from a family littered with engineers.
We have something in common.
> But it also goes straight to my point, which is that FPGAs do begin to move chip functionality into the realm of the hacker. By definition, these are people who tend to flout conventions.
Look, I'm the first guy to flout conventions and thumb my nose at people who rigorously adhere to them for no good reason. But I'm also very strongly of the opinion that while there are conventions that should and must be challenged, there are others that must not. The convention that all North Americans drive on the right side of the road is one, for example, that is better not to flout. The same is true of the conventions that lead to the design of safe and reliable systems.
> That's not to say that they might not have impressive technical backgrounds; on the contrary, it's usually one of the defining elements of such an individual. But this is a category of people who are looking for answers to problems that might not even attract the attention of a more rigorous professional engineer. For these people, the FPGA is truly a revolutionary advance. Take OpenCores, for instance. Here we see black-box IP cores that enable people who have neither hardware nor software design skills to begin tinkering with FPGAs. I know a guy who is a total computer idiot who works downloading video encoders onto FPGAs for video production units. This guy couldn't program a DOS batch file or re-wire a broken lamp, but his job description makes him an FPGA Engineer. Given such realities, I think the assertion that a typical programmer type is somehow dangerous at the controls of an FPGA is a bit of an overreaction.
Context, sir, context. I have no problem with hobbyists and hackers dabbling in their own environs - I actually think that it's a good thing. But one must not make the leap of faith that because the "hacker" gets something to "work" on his bench, he is capable of and follows the proper design practices required to create products for use in the real world. I can make a steel box, fasten some cable to it, run the cable over some pulleys mounted on a high platform and hook the cable up to a motor. Are you ready to ride my elevator?
Now, if that "hacker" understands that the FPGA does not exist in isolation, but is rather part of a real-world system; understands, has analyzed and has designed for things such as worst-case timing, voltage and temperature conditions, noise (and all the other boring considerations that lead to safe and reliable systems); and has done proper simulation, prototyping and testing under degraded conditions, then we may have some basis for discussion. How many "hackers" or hobbyists or "people who have the skills of Software Engineers" can and do do that?
There is a place for the type of freewheeling experimentation that you describe, and indeed it helps to advance technology - I have no argument with that. But that is experimentation, not implementation, and the danger is that many people do not understand the difference, which can lead to disastrous results.
I have had to rework designs in the past where the people who implemented them had a very clever logic concept but no idea about hardware design. Such designs would typically work only under ideal lab conditions, but fail or behave erratically once moved to the real world, because the 'designer' failed to understand the real-world issues that exist outside of the abstract and ideal world in which pure logic exists. Therein lies one of the major differences between an Engineer and what I like to call a "Logician".
Re:Forget SW engineers, how about plain ol' h4x0rz (Score:2)
I know we're not to this point yet, but what intrigues me so much about FPGAs is the idea that you might someday be able to buy a hardware product intended to be used in one way and use it in another way, much as an Intel PC loaded with Microsoft at the time of purchase can be repurposed as a Linux PC.
This seems like such a powerful political issue. This could be THE political issue of the 21st century. The seduction of such thoughts was the inspiration for my comment. I certainly didn't mean to suggest that hardware design no longer remains a very sophisticated realm of engineering. But to be fair, there are still elements of, for instance, kernel programming that are not accessible to the novice or even the apprentice programmer. However, that doesn't stop millions of end users from reformatting their Windows drives with Linux distros.
I think that the more we get people taking the total banzai approach to programming their own FPGAs, the better, because it creates a knowledge base that comes from this out-in-left-field orientation. That kind of knowledge base eventually makes it into blogs, discussion boards, mailing lists, and newsgroups, and becomes the basis of a FAQ or a How-To that addresses the issues a novice is most likely to face, based on the real-world experience of legions of such individuals.
While there is a snowball's chance in hell that such work will result in well-engineered designs, it's still important. Well-engineered designs will come from individuals such as yourself who have the dedication and competence to make them happen. What concerns me is not so much who will create the good designs, but how those designs will be used by the public, who will make the decisions about how those designs are used, and how those decisions will be enforced.
Having said that, I'm sure you're right that there are dangers in letting incompetent people play with powerful toys. One great danger is that shoddy products will be marketed by irresponsible jokers. In fact, the irresponsibility we've seen from so many software designers has led us to this point where it's almost impossible to sell software, partly because users insist on trying it first, so wary are they of being ripped off yet again. That may be a gross oversimplification of an issue that is complicated in many ways, but poor design and outright scams certainly haven't helped matters.
For now, the hardware and software worlds are still far apart, but it seems that the FPGA is drawing them closer. This may be ominous, but it may also be liberating in many senses of the word.
Re:reconfigurable hype (Score:2)
Seems like the "programming language" is similar to LabVIEW and other schematic programming languages. (E.g., in Matlab you have Simulink.) Apparently there are quite a lot of people who find that easier to work with.
Oh well, it's an interesting field. Let's just hope they don't get a bunch of ludicrous patents that stifle other research in the area.
No magic -- sorry (Score:5, Insightful)
Think about it: both Intel and AMD (and everybody else) use FPGAs for prototyping their chips. If it were so much more efficient, why have they not released chips with this technology already?
As for the reprogramming part of this design: translating from low-level code to actual chip surface (which this still very much is about) is largely a manual process, even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.
Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process?
Re:No magic -- sorry (Score:1)
BTW, you're right: a context switch would be a bitch, probably taking 10 milliseconds.
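That guess is about what a first-principles estimate gives, assuming a few-megabit bitstream and a 33 MHz, 32-bit configuration path (both numbers are assumptions, not specs of any particular part):

```python
# Rough sanity check of the ~10 ms figure (all numbers are assumptions):
bitstream_bits = 8e6          # a few Mbit, plausible for a large FPGA of the era
config_bw      = 33e6 * 32    # 33 MHz x 32-bit config/PCI path, in bits/s
print(f"full reconfig: {bitstream_bits / config_bw * 1e3:.1f} ms")
# ~7.6 ms -- the same order as the 10 ms guess, versus roughly a
# microsecond for a CPU context switch: four orders of magnitude worse.
```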
Re:No magic -- sorry (Score:2)
Re:No magic -- sorry (Score:5, Insightful)
Xilinx/Altera would not be in business if this were the only thing people used FPGAs for. There are some things an FPGA does exceptionally well, e.g. pumping lots of data very quickly and doing repetitive things like encryption, compression, and DSP functions. Generally speaking, the simpler the algorithm and the more it can be parallelized, the better it will work in hardware as compared to a CPU (yes, even a 4GHz Pentium might be slower per $).
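A toy model of why that holds, with illustrative numbers rather than benchmarks: a fully pipelined datapath retires one result per clock once the pipeline fills, so even a 100 MHz FPGA can outrun a 4 GHz CPU that needs hundreds of cycles per item, before you add any parallel pipelines at all.

```python
# Why a slow FPGA can beat a fast CPU on streaming work (assumed numbers):
cpu_hz,  cpu_cycles_per_item  = 4e9, 200    # e.g. a crypto round in software
fpga_hz, fpga_items_per_cycle = 100e6, 1    # fully pipelined: 1 result/clock

cpu_rate  = cpu_hz / cpu_cycles_per_item        # 20 M items/s
fpga_rate = fpga_hz * fpga_items_per_cycle      # 100 M items/s per pipeline
print(f"one pipeline beats the CPU {fpga_rate / cpu_rate:.0f}x")
# ...and an FPGA can hold many such pipelines side by side, so the gap
# widens with whatever parallelism the algorithm exposes.
```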
> As for the reprogramming part of this design: translating from low-level code to actual chip surface (which this still very much is about) is largely a manual process, even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.
I think it's a language problem more than a limitation of the synthesis/fitting tools. VHDL and Verilog are horrific; they are designed for coding circuits, not algorithms.
> Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process?
With today's FPGAs this is a real problem. They're designed to be loaded once, when the system starts up. What we need is an FPGA that can store several "pages" of configurations and switch between them rapidly. The config would need to be writeable over a very fast interface, of course.
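A sketch of the economics behind that "pages" idea, with assumed timings: flipping to a page already stored on-chip could plausibly cost microseconds, while pulling a fresh bitstream over the bus costs milliseconds, and that difference decides whether offloading a short job pays at all.

```python
# Illustrative model of paged FPGA configs (all timings are assumptions):
page_switch_s = 1e-6      # hoped-for on-chip context flip
page_load_s   = 10e-3     # loading a new bitstream over the bus

def worth_offloading(job_cpu_s, job_fpga_s, page_resident):
    """Offload only if FPGA time plus switch overhead beats the CPU."""
    overhead = page_switch_s if page_resident else page_load_s
    return job_fpga_s + overhead < job_cpu_s

print(worth_offloading(5e-3, 1e-3, page_resident=True))    # True
print(worth_offloading(5e-3, 1e-3, page_resident=False))   # False: the
# bitstream load alone costs more than just running the job on the CPU.
```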
Re:No magic -- sorry (Score:1)
I was just thinking the exact same thing. When the reconfiguring process speeds up to the point where it loses only a few cycles instead of thousands, it could speed up certain processes considerably. Suppose the FPGA started out in a 'basic' general-purpose config, while a preprocessor scanned ahead and created several circuit schemes based on the code it finds. Something leaning towards compiler-based optimisation, but in real time. This would be a tricky task, but the boost could be significant.
Re:No magic -- sorry (Score:1)
Re:No magic -- sorry (Score:2)
This is in fact already possible, but the reconfiguration time for large parts of a chip is generally way too slow for it to be usable. But if you have a design which allows you to reconfigure only a very small part of the chip, then it's doable at runtime. (Although you may need special boards to do it; I'm not sure how many developer boards actually support reconfiguration while running.)
The idea of having small premade parts is already in use by, e.g., the RAW project at MIT. Doing runtime optimizations is probably never going to happen, though, because routing on a large FPGA can take days to complete.
Emulating CPUs with FPGAs??? A better way... (Score:1)
Re:No magic -- sorry (Score:1)
The introduction to this article addresses most of your points: "Iterative Matrix Equation Solver for a Reconfigurable FPGA-Based Hypercomputer" [starbridgesystems.com]. I'm certainly no expert in chip design, but what they are saying makes some sense:
Your point about speed:
"... the collection of FPGA:s emulating a standard CPU would be way slower ..."
Their point is that you aren't emulating a standard CPU. Their approach is for applications that involve "solving systems of simultaneous linear equations ...". The traditional approach is many generic CPUs in parallel. From the article:
"However, this type of parallelism is inefficient, using only a small fraction of CPU resources at any given time, while the rest of the silicon lies idle and wastes power. CPUs are designed to be general and capable of performing any function they will ever need to perform. Therefore, they contain many resources that are rarely used. In addition, the inter processor communication time required by traditional matrix equation solvers seriously limits the number of processors that may operate efficiently in parallel optimize chips is normally a long and tedious process not available or feasible to most programmers."
You argue cost:
The article argues that a single FPGA can probably replace a whole lot of CPUs (because it can process as much in parallel as you can cram onto the chip). One could also point out that if this type of technology becomes more prevalent, higher production volumes would lower FPGA costs. I guess we'd have to see some ROI analysis - how many CPUs can they replace with an FPGA? Could you get one workstation-class device to replace a cluster or mainframe? Most of their articles discuss a technology in the proof-of-concept stage - so it will be a while before we can talk about which situations it pays off to use this in.
Your third point, that it's hard to code FPGAs:
"...translating from low-level code to actual chip surface ... is largely a manual even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job."
A major thrust of Star Bridge Systems seems to be creating easy-to-use and effective tools to do exactly this. Read the sections about their Viva technology. Even if it doesn't do it perfectly, it may do it well enough.
Ummmm.... (Score:1)
Arthur C. Clarke might have been on to something... First the geosynchronous satellite, now this!?
Re:Ummmm.... (Score:1)
I still don't entirely believe it... (Score:4, Insightful)
And even a very large FPGA would be pretty lousy at doing SIMD, vector ops, etc. Basically, they would suck at emulating a computer's instruction set, which is (fairly well) optimized for what software actually needs to do. I can't think of many algorithms used by software today that would work much better in an FPGA, except for symmetric crypto. And if you need to do that, get an ASIC crypto chip: tens of dollars for umpteen gigs/second of throughput. SPICE might also run a bit faster on these (understatement), but those types already have decent FPGA interfaces.
Furthermore, the processor programming these FPGAs must have some serious power... if you have to do many things on an FPGA at once (which you do if there are only 11 of them), you basically have to place & route on the fly, which is pretty slow.
So, I don't think that these "hypercomputers" will ever be any better than a traditional supercomputer in terms of price/performance, except for specialized applications. And even then, it won't be any better than an application-specific setup. And not many people need to go back and forth between specialized tasks. (Who am I to complain about price/performance? I'm a Mac user.)
That said, if they *can* put a hypercomputer on everyone's desk for $1,000.00, more power to them!
Re:I still don't entirely believe it... (Score:2)
"This site" is hardly the forerunner in reconfigurable computing. Look for "reconfigurable computing" on Google, and you will find that academic research labs have been looking at it for as long as there have been FPGAs.
There are probably better tradeoffs than FPGAs for reconfigurable computing: rather than reconfiguring gates, it may make sense to reconfigure arithmetic circuits. There has been some work in that area. The point is that FPGAs are nice because they are commodity hardware, but they are probably a pretty suboptimal choice for reconfigurable computing.
Reconfigurable Server? ;) (Score:1, Funny)
etiquette, efficiency (Score:2)
New /. slogan (Score:2)
Crypto cracking applications? (Score:3, Informative)
Going from 4 GFLOPS in Feb '01 to 470 GFLOPS in Aug '02 for ten FPGAs - that's about 120 times faster in a little over a year. Not bad.
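A quick check of those numbers (the doubling-time framing and Moore's-law comparison below are editorial, not from the posts):

```python
import math
# 4 GFLOPS (Feb '01) -> 470 GFLOPS (Aug '02) is 18 months.
ratio    = 470 / 4                        # ~117x, so "about 120x" holds up
months   = 18
doubling = months / math.log2(ratio)      # implied performance doubling time
print(f"{ratio:.0f}x in {months} months = doubling every {doubling:.1f} months")
# ~2.6 months -- far faster than Moore's-law doubling (~18-24 months),
# which suggests a one-off architectural jump rather than a sustainable trend.
```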
Any thoughts on what this means for crypto cracking capability?
Re:Crypto cracking applications? (Score:2)
Re:Crypto cracking applications? (Score:2)
This is the future of High Performance Computing (Score:5, Interesting)
CoreFire is a Java-based graphical (iconic) development environment for Xilinx FPGAs. Like anything else, though, sometimes programming in VHDL will be a better choice; it depends on the complexity of the design and the desired end result. But all in all we probably saved at least 6 man-months of design time using CoreFire.
More information (Score:5, Informative)
If starbridge was ready with their snazzy machine, (Score:2)
Of course, the slashdotting it is starting to succumb to might be because they spent so much on developing the machine that they could only afford hosting off a single little DSL connection. After all, they certainly haven't spent much on PR either, as they do not garner many search hits on the net or widespread press...
adaptive computing has great promise (Score:3, Informative)
The way it works, then, is that a board is made with a normal CPU and an FPGA next to it. At compile time a special compiler determines which algorithms would bog down the processor and develops a single-cycle hardware solution for the FPGA. That information then becomes part of the program binary, so at load time the FPGA is configured accordingly, and when necessary it processes all that information, leaving the CPU free. The FPGA can of course be reconfigured several times during the program, the point being to adapt as necessary. The time to reconfigure the FPGA is unimportant when running a long program doing scientific calculations and such.
It's a pretty nifty system. Some researchers have working compilers, and they have found 6-50x speedups on many operations. The program won't speed up that much of course, but it leaves the main CPU more free when running repetitive scientific and graphics programs.
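The "won't speed up that much" caveat is just Amdahl's law. A quick worked example (the 6-50x kernel range is from the post above; the offloaded fractions are illustrative assumptions):

```python
# Amdahl's law: why a 50x kernel speedup doesn't give a 50x program speedup.
def amdahl(offloaded_fraction, kernel_speedup):
    """Overall speedup when only part of the runtime is accelerated."""
    return 1 / ((1 - offloaded_fraction) + offloaded_fraction / kernel_speedup)

# Say the compiler moves 60% of the runtime onto the FPGA at 50x:
print(f"overall: {amdahl(0.60, 50):.2f}x")   # ~2.43x, not 50x
# Even offloading 90% of the runtime at 50x only yields:
print(f"overall: {amdahl(0.90, 50):.2f}x")   # ~8.47x
```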
You can find information in the IEEE archives, or search Google for 'adaptive computing'. It's a neat area with a lot of promise.
4 IBM PowerPCs onboard a Xilinx FPGA (Score:1)
Reconfigurable vs Vector (Score:1, Informative)
Re:Reconfigurable vs Vector (Score:2)
Nobody said vector processors are dead. They just tend to be overkill for most applications. (And hence they are instead used as a type of co-processor.)
chinese fpga processor (Score:3, Interesting)
Re:chinese fpga processor (Score:1)
Is it just me or...... (Score:2)
FPGA problems (Score:4, Interesting)
VLIW / Reconf. computing (Score:1)
Hype or hoax (Score:2)
Deja vu (Score:2)