Sacrificing Accuracy For Speed and Efficiency In Processors 499
Skudd writes "Modern computing has always been reliant on accuracy and correct answers. Now, a professor at Rice University in Houston posits that some future applications could be revolutionized by 'probabilistic computing.' Quoting: 'This afternoon, Krishna Palem, speaking at a computer science meeting in San Francisco, will announce results of the first real-world test of his probabilistic computer chip: The chip, which thrives on random errors, ran seven times faster than today's best technology while using just 1/30th the electricity. ... The high density of transistors on existing chips also leads to a lot of background "noise." To compensate, engineers increase the voltage applied to computer circuits to overpower the noise and ensure precise calculations. Palem began wondering how much a slight reduction in the quality of calculations might improve speed and save energy. He soon realized that some information was more valuable than other information. For example, in calculating a bank balance of $13,000.81, getting the "13" correct is much more important than the "81." Producing an answer of $13,000.57 is much closer to being correct than $57,000.81. While Palem's technology may not have a future in calculating missions to Mars, it probably has one in such applications as streaming music and video on mobile devices, he said.'
Sacrifices are expected (Score:4, Interesting)
I have spent the last 9 months coding up a dynamic, scalable UI for the Nokia tablets.
I have had to make huge compromises to accuracy to obtain the desired performance.
I had the choice of using the full-featured (but slow) widget sets and graphical primitives that already existed, or finding a way to make it work the way I expected it to.
The results have left people breathless :)
take a look here: http://www.youtube.com/watch?v=iMXp0Dg_UaY [youtube.com]
Re:Nothing new here... (Score:2, Interesting)
Re:gfx (Score:2, Interesting)
I agree. In fact, with real-time photorealistic rendering, these slight deviations would probably make frames look more accurate, since real video is full of low-level random noise.
The film-grain shader in Left 4 Dead wouldn't be necessary anymore.
Re:Nothing new here... (Score:4, Interesting)
There have been dedicated MPEG encoding and decoding chips for many years. DXR3 comes to mind.
I think the only new twist is applying the idea to general calculations as a whole as opposed to a specific function or set of functions in software. An interesting idea. Maybe we'll end up with double-precision, single-precision and ballpark floats.
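As a rough illustration of what a "ballpark float" might mean, here's a minimal C sketch that emulates reduced precision by masking off low mantissa bits. The real chip saves power by tolerating noise rather than masking bits, so this is only an analogy; the function name and bit counts are invented.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* "Ballpark" float: keep only the top keep_bits of the 23-bit mantissa.
   This is a software analogy only -- the probabilistic chip saves power by
   tolerating noise, not by masking bits -- but it shows how coarse a
   ballpark result gets. */
static float ballpark(float x, int keep_bits) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);                 /* grab the IEEE 754 bits */
    uint32_t mask = ~((1u << (23 - keep_bits)) - 1u);
    bits &= mask;                                   /* zero the low mantissa bits */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void) {
    printf("%.2f\n", 13000.81f);                    /* full single precision */
    printf("%.2f\n", ballpark(13000.81f, 10));      /* keeps the "13", loses the ".81" */
    return 0;
}
```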
My bank would love this (Score:2, Interesting)
Re:Bank balance (Score:3, Interesting)
With regards to music, they're not talking about skips and pops, they're talking about extremely slight modulations in pitch or, in the case of video, a very slight difference in color. If these differences are random and small enough (say, 300 per second), they average out to the same thing and our minds can't tell the difference. Hell, if the differences are small enough you wouldn't really notice them anyway. If I could get 30x the battery life out of my laptop by accepting imperfections in the video it displays and in the audio it plays (and I know it wouldn't, but this is a hypothetical), then I'd gladly go for it.
Re:Bank balance (Score:5, Interesting)
High accuracy is required for encoding music and video, though.
Maybe we could have selective accuracy, where programmers set their needs via registers or direct work to different CPU cores: accurate cores for banking transactions, AV-stream encoding and 2D GUI operations, while inaccurate cores handle AV-stream decoding, 3D drawing in games and AI decisions.
There's a whole lot we calculate now that doesn't need more than 3 significant digits - and a whole bunch where we intentionally use random numbers, sometimes even with strong hardware entropy gathering.
These are all cases where we could just trade the accuracy for faster processing or longer battery life. No one cares about single-bit errors in portable audio decoders or in high-fps 3D gaming.
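A purely hypothetical sketch of what that selective accuracy could look like to a programmer - none of this API exists, the names are invented - just to make the idea concrete:

```c
#include <stdio.h>

/* Hypothetical only: imagine the OS exposing "exact" and "approximate"
   core classes the way it exposes thread affinity today, and letting the
   programmer pick per task. The names and API here are invented. */
typedef enum { CORE_EXACT, CORE_APPROX } core_class_t;

static void run_on(core_class_t c) {               /* placeholder scheduler call */
    printf("running on %s core\n", c == CORE_EXACT ? "exact" : "approximate");
}

static void post_bank_transaction(void) { /* must be bit-exact */ }
static void decode_video_frame(void)    { /* a few wrong low bits are fine */ }

int main(void) {
    run_on(CORE_EXACT);  post_bank_transaction();
    run_on(CORE_APPROX); decode_video_frame();
    return 0;
}
```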
A Little Bit of History Repeating (Score:2, Interesting)
So basically he's advocating fuzzy logic [wikipedia.org], which was big in AI research in the '80s?
Re:Primality testing (Score:3, Interesting)
Even more so: we intentionally gather entropy to improve our pseudorandom numbers. With intentionally inaccurate CPU cores, we could scrap all that, gather entropy en passant, AND be much faster anyway.
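Since the subject line mentions primality testing: probabilistic primality tests are a nice existing example of trading a bounded chance of error for speed. A minimal Miller-Rabin sketch (assuming a GCC/Clang-style compiler with __uint128_t; rand() is used only for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

/* Modular exponentiation: (base^exp) mod m, using 128-bit intermediates. */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m) {
    __uint128_t result = 1, b = base % m;
    while (exp) {
        if (exp & 1) result = result * b % m;
        b = b * b % m;
        exp >>= 1;
    }
    return (uint64_t)result;
}

/* Miller-Rabin with random bases: a composite n survives one round with
   probability at most 1/4, so k rounds leave at most a 4^-k chance of a
   wrong "probably prime" answer -- a deliberate accuracy/speed trade. */
static int probably_prime(uint64_t n, int rounds) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return 0;
    uint64_t d = n - 1;
    int r = 0;
    while (d % 2 == 0) { d /= 2; r++; }            /* n - 1 = d * 2^r, d odd */
    for (int i = 0; i < rounds; i++) {
        uint64_t a = 2 + (uint64_t)rand() % (n - 3);   /* random base in [2, n-2] */
        uint64_t x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        int witness = 1;
        for (int j = 0; j < r - 1; j++) {
            x = (uint64_t)((__uint128_t)x * x % n);
            if (x == n - 1) { witness = 0; break; }
        }
        if (witness) return 0;                     /* definitely composite */
    }
    return 1;                                      /* probably prime */
}
```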
Well, it's a *little* more important (Score:3, Interesting)
For example, in calculating a bank balance of $13,000.81, deliberately risking getting the "13" incorrect is fraud that risks $13,000 in damages and $1,000,000 in statutory penalties, and risking getting the "81" incorrect is fraud that only risks $0.81 in damages and $1,000,000 in statutory penalties. Surely saving a couple watt-microseconds is worth that!
Re:Use in MP3 Players (Score:5, Interesting)
If you bought a popular artist recently, your music is autotuned already.
Anyway, this means that less than 0.1 percent of your decoded audio samples will be less than 0.1 percent off. That is probably very acceptable outside concert halls and living rooms if it delivers big savings in battery life or big gains in calculation speed.
For example, we could use a much beefier compression algorithm than MP3, or run current algorithms even longer on even smaller devices.
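A quick back-of-the-envelope check of those numbers (the error model below is invented for illustration, not taken from the actual chip): perturbing 0.1% of the samples of a test tone by up to 0.1% of full scale still leaves the signal-to-noise ratio on the order of 90 dB, roughly CD-quality territory.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Back-of-the-envelope check of the parent's numbers: hit ~0.1% of the
   samples of a 440 Hz test tone with an error of up to 0.1% of full scale
   and measure the resulting SNR. The error model is invented for
   illustration; it is not how the chip actually misbehaves. */
int main(void) {
    const double pi = 3.14159265358979323846;
    const int n = 44100;                       /* one second at CD rate */
    double signal = 0.0, noise = 0.0;
    srand(1);
    for (int i = 0; i < n; i++) {
        double s = sin(2.0 * pi * 440.0 * i / 44100.0);
        double e = 0.0;
        if (rand() % 1000 == 0)                /* roughly 0.1% of samples */
            e = 0.001 * (2.0 * rand() / RAND_MAX - 1.0);  /* error up to 0.1% */
        signal += s * s;
        noise  += e * e;
    }
    printf("SNR: %.1f dB\n", 10.0 * log10(signal / noise));   /* ~90 dB */
    return 0;
}
```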
Re:wll, (Score:5, Interesting)
This whole thing is old and silly.
Seymour Cray is known for asking "Do you want fast or accurate?" Back then there was no IEEE 754 spec for floating point (and even IEEE 754 is not infinitely precise), machines were pretty primitive, and his machines used Newton's-method approximations for many numeric calculations that were accurate only to a point - much like the famous fast inverse square root done in software for id's Quake III Arena (usually, if perhaps wrongly, credited to John Carmack).
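For reference, that software trick looks like this: a bit-twiddled initial guess refined by one Newton-Raphson step, good to roughly 0.2% instead of exact.

```c
#include <stdint.h>
#include <string.h>

/* The classic fast inverse square root from Quake III Arena: an integer
   "magic constant" guess refined by one step of Newton's method. The
   result is within roughly 0.2% of 1/sqrt(x) -- deliberately inexact in
   exchange for speed on the hardware of the day. */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);            /* reinterpret the float's bit pattern */
    i = 0x5f3759df - (i >> 1);           /* crude first guess at 1/sqrt(x) */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - half * x * x);       /* one Newton-Raphson refinement */
    return x;
}
```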
The moral of the story is that in 2009 and beyond it's probably best to have hardware continue to be accurate. This is why we have digital 1s and 0s instead of some other base of computation.
Now, in software, feel free to make things as sloppy as you want. If your bank (not mine) wants to round 13,000.83 to some other value, then by all means go for it. But I think that most of us are OK with accurate hardware.
Re:Bank balance (Score:3, Interesting)
Given that, I would expect this hardware - if it proves useful - would primarily be in the "entertainment" sector of the market. Of course making this judgment is pretty much impossible at the moment as there's no real information on just how bad the accuracy loss is with these chips.
Re:Bank balance (Score:3, Interesting)
Well, the odds are you probably wouldn't even notice if a few bits here and there were wrong in your audio stream. I'm not sure what the error rate is, but if it's less than 17 times what we have now, it'd be worth considering for some applications.
I figure if they do use this technology, they'd more than likely build on the multi-core systems currently in place and make one core a high-accuracy CPU while the other 2-4 cores are high-speed CPUs. Like someone said, it'd be used for gaming and streaming video/audio, where "accuracy" isn't as important.
Re:What a waste of grant money... (Score:2, Interesting)
It seems to me that we have already been designing things in a probabilistic manner; all these new guys are doing is allowing the error tolerance to be much more liberal.
Re:Bank balance (Score:3, Interesting)
There are a few areas where games care a little bit about accuracy. You've got to be really careful about it in any kind of flight game with playing fields more than a few miles in size. It's amazing the kind of graphical artifacts you can get if you don't take floating-point error into account when you handle that sort of thing. In our first naive implementation of the engine, at around 50 kilometers out every character would jitter constantly whenever they moved, because the last floating-point bits were inaccurate.
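For anyone curious why that happens: the spacing between adjacent 32-bit floats grows with magnitude, so far from the origin the representable positions form a coarse grid. A small C sketch (the distances are illustrative):

```c
#include <math.h>
#include <stdio.h>

/* Why world-space positions jitter far from the origin: the gap between
   adjacent 32-bit floats grows with magnitude, so fine movement gets
   snapped to an ever-coarser grid. */
int main(void) {
    const float distances_m[] = { 1.0f, 1000.0f, 50000.0f };
    for (int i = 0; i < 3; i++) {
        float d = distances_m[i];
        float step = nextafterf(d, INFINITY) - d;   /* smallest representable move */
        printf("at %8.0f m the position grid is %.6f mm\n", d, step * 1000.0f);
    }
    return 0;   /* at 50 km the grid is roughly 4 mm wide */
}
```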
Fudgy-Dah! (Score:2, Interesting)
I live in the Washington, DC, area. One of the big perks of living here is the Library of Congress. You have to undergo a security clearance to get permission to be there, but once you get that photo id, hoo boy, there sure is a lot of information there.
I'm a computer programmer (currently working in Web development, as my handle here suggests). Long, long ago, I researched fuzzy logic at the Library of Congress from the original studies that proposed it. (I think that the original papers were from an Air Force study.) It did NOT revolve around lookup tables. It was more fundamental than that.
Fuzzy logic assigns a floating-point number to data that represents how confident you are that it's true. In the early days, the number ranged from 1.0 (absolute certainty) down to 0.0 (not a clue, wild guess), or even -1.0 (absolutely sure that it's not true). The exact range used is unimportant except that it's well known. Of course, almost all of these numbers fall between whatever limiting values you set, representing shades of gray. That's where the term fuzzy came from.
We like to believe that knowledge is a deterministic process, but to believe that, we have to ignore our fundamental assumptions, which force us to admit that we don't really know for sure. Fuzzy logic seeks to improve decision-making by acknowledging that fuzziness and quantifying it.
We all know that we have to multiply percentages to derive percentages based on multiple criteria. If roughly 1 in 2 people are male and exactly 1 in 2 males have intelligence above the median for males, we derive that roughly 1 in 4 of all people are males with above-median intelligence. But how confident are we in knowing that?
Say, for a given population, the 1-in-2 statistic for males is about 97% likely to be true. The 1-in-2 statistic for being above the median is 100% likely to be true, because that's the definition of median. So when you combine the 2 criteria, the 1-in-4 statistic still has a 97% likelihood of being true. In other words, fuzzy logic is a completely independent, parallel computation based on the confidence factors that travel with the data.
The fact that confidence factors travel with the data is an important part of fuzzy logic. You never throw the confidence factors away and say "That's it. We got the inference we wanted. It's over 95%. That's close enough, so we're going to treat it as true." If you do that, you've gained no wisdom whatsoever from tracking confidence factors. You have to remember that there's still a 5% chance that you're wrong. So all inferences also have their own confidence factors, which are also not thrown away.
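A minimal sketch of "the confidence factor travels with the data," using the 97% x 100% example above. Multiplying confidences is just the simple convention described here; real systems use fancier combination rules, and the type and function names are invented.

```c
#include <stdio.h>

/* Minimal sketch of "confidence travels with the data": every fact keeps
   its own confidence factor, and combining facts combines the confidences
   too (here by simple multiplication, as in the 97% x 100% example). */
typedef struct {
    double value;        /* the datum itself, e.g. a proportion       */
    double confidence;   /* how sure we are it's true, 0.0 .. 1.0     */
} fuzzy_t;

static fuzzy_t fuzzy_and(fuzzy_t a, fuzzy_t b) {
    fuzzy_t r = { a.value * b.value, a.confidence * b.confidence };
    return r;            /* the result carries its own confidence on  */
}

int main(void) {
    fuzzy_t male         = { 0.5, 0.97 };   /* ~1 in 2, 97% confident        */
    fuzzy_t above_median = { 0.5, 1.00 };   /* exactly 1 in 2, by definition */
    fuzzy_t both = fuzzy_and(male, above_median);
    printf("value %.2f at confidence %.2f\n", both.value, both.confidence);
    return 0;            /* prints: value 0.25 at confidence 0.97 */
}
```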
To take an Air Force example, in deference to my vague recollection that they sponsored the original study, satellite surveillance reveals that a bunch of bombers are heading north in Russia. Is it just an exercise, or first evidence of an attack? You have to factor in a lot of unknowns. What's their air speed? How likely are they to be carrying a payload at that speed? Even if they appear to be carrying a payload, how likely is it to be a full dress rehearsal, with ballast for realistic plane handling? The list goes on and on. As part of standard readiness, the confidence factors of these questions have been calculated and recalculated on an ongoing basis. NORAD sees the scenario and looks it up in a scenarios book. As a result, they decide not to change the DEFCON, but they read ahead to see what to do if the bombers leave Russian air space. Crossing the border has a different confidence factor that goes into the overall calculations of whether or not it's likely to be just an exercise.
So no, fuzzy logic is not simply using lookup tables. Lookup tables are just a convenient way to organize inferences based on changing conditions in reality.
Finally a use for fuzzy logic? (Score:1, Interesting)
Hmm, I wonder. Most slashdotters may never have heard of fuzzy logic. It was a fad, a sort of bad Boolean algebra, that went nowhere over many years.
Early Pentium 60 fiasco (Score:3, Interesting)
You probably remember when some scientists noticed the early Pentium generating math errors in their applications, so they contacted Intel about it.
Intel's response was, "Well, these are early processors in design and this shouldn't affect 99% of the population." Of course the scientists raised awareness of the issue and the public made a stink about it, even though 99% of them might never run into a case where the bug affects their applications.
So now, 15+ years later, this pops up saying it is OK to have these errors in non-critical applications for the sake of speed. People are going to wonder about these chips in their products when something isn't working right, and may cry foul.
Re:Not completely correct (Score:1, Interesting)
The IEEE FP standard defines a predictable specification. Whoever uses it can predictably understand the outcome of applying an FP operation to two operands. If there IS going to be inaccuracy in some least-significant digits of the result, it is known and expected in advance.
In a probabilistic setting, however, a given outcome may be right or wrong on a case-by-case basis (possibly even affected by environmental factors such as temperature or noise). This is not an easy "spec" (if you can call it that) to program against.
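To make the contrast concrete, here's a toy model (entirely invented, not the chip's actual behavior) of an "add" whose low result bits occasionally flip; unlike an IEEE operation, the same operands need not give the same answer twice, which is exactly what makes such hardware hard to write a spec for.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy model only: an add whose low-order result bits occasionally flip.
   Unlike IEEE 754, the same operands need not give the same answer twice,
   which is what makes such hardware hard to program against. */
static float noisy_add(float a, float b, double error_rate) {
    float sum = a + b;                    /* exact IEEE single-precision add */
    if ((double)rand() / RAND_MAX < error_rate) {
        uint32_t bits;
        memcpy(&bits, &sum, sizeof bits);
        bits ^= 1u << (rand() % 3);       /* flip one of the 3 lowest mantissa bits */
        memcpy(&sum, &bits, sizeof sum);
    }
    return sum;
}
```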
Re:Bank balance (Score:2, Interesting)
I don't think I agree. This processor could bring more lifelike movies to my house, for less money. I'm guessing that with this chip popping and rattling, I could get rid of the guy I hired to cough loudly and fumble with crinkly bags of chips in my "true movie experience" home theater room.
Already here (Score:3, Interesting)
We already have accuracy issues with the processors available today. I've worked with quant teams who will insist on only having machines with Intel processors in their compute farm, because they get a different result from the same code running on AMD machines. As the business has signed off the Intel numbers, Intel it is.
And nonsensical applications again.... (Score:3, Interesting)
In order to stream audio or video meaningfully, you need to play or display the content, and that consumes enough energy that savings in the computation will hardly matter in the end. In addition, such a chip will always be very special-purpose, and most IT experts will not be able to program it. I call this a dead end.