
Sacrificing Accuracy For Speed and Efficiency In Processors

Skudd writes "Modern computing has always been reliant on accuracy and correct answers. Now, a professor at Rice University in Houston posits that some future applications could be revolutionized by 'probabilistic computing.' Quoting: 'This afternoon, Krishna Palem, speaking at a computer science meeting in San Francisco, will announce results of the first real-world test of his probabilistic computer chip: The chip, which thrives on random errors, ran seven times faster than today's best technology while using just 1/30th the electricity. ... The high density of transistors on existing chips also leads to a lot of background "noise." To compensate, engineers increase the voltage applied to computer circuits to overpower the noise and ensure precise calculations. Palem began wondering how much a slight reduction in the quality of calculations might improve speed and save energy. He soon realized that some information was more valuable than other information. For example, in calculating a bank balance of $13,000.81, getting the "13" correct is much more important than the "81." Producing an answer of $13,000.57 is much closer to being correct than $57,000.81. While Palem's technology may not have a future in calculating missions to Mars, it probably has one in such applications as streaming music and video on mobile devices, he said.'
  • Re:Bank balance (Score:5, Insightful)

    by CastrTroy ( 595695 ) on Sunday February 08, 2009 @02:17PM (#26774003)
I agree. The whole problem with the example given in the summary is that your bank balance should never be wrong. There is no room for error in calculating bank balances. I also don't want to hear skips and pops in my music because they thought it would be more energy efficient to use a processor that produced errors. I already get 26 hours of charge out of my MP3 player. I'd rather have them focus on providing more storage for cheaper so I can carry lossless audio on my portable MP3 player.
  • Re:Bank balance (Score:5, Insightful)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Sunday February 08, 2009 @02:21PM (#26774039)

    If you are listening to music on a portable media device, it's safe to say that you aren't going to be able to hear the difference between the lossy format and the lossless format.

It's like drinking from a well. Connoisseurs may claim to be able to taste the difference between it and tap water, but that's just the extra tang from all the bullshit.

  • by rolfwind ( 528248 ) on Sunday February 08, 2009 @02:23PM (#26774071)

For critical calculations, like financial things, you compare the end result with the result of the same calculation run through the chip again, or through another chip (or, increasingly likely, another core).

It would still be faster, too.
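    A minimal sketch of the redundancy scheme this comment suggests, with a hypothetical unreliable_compute standing in for the fast-but-flaky unit:

    ```python
    import random

    def unreliable_compute(x, err_prob=0.001):
        """Stand-in for a fast-but-occasionally-wrong functional unit."""
        result = x * x
        if random.random() < err_prob:
            result ^= 1 << random.randrange(16)  # inject a random bit error
        return result

    def checked_compute(x):
        """Run the cheap computation twice (or on two cores) and retry
        on disagreement -- the scheme the comment above describes."""
        while True:
            a, b = unreliable_compute(x), unreliable_compute(x)
            if a == b:      # two independent runs agreeing is strong evidence
                return a    # of correctness when errors are rare and random

    print(checked_compute(12345))
    ```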

  • DSP's? (Score:3, Insightful)

    by Zantetsuken ( 935350 ) on Sunday February 08, 2009 @02:24PM (#26774075) Homepage
Isn't that the point of using a DSP? So you can use a slower CPU to run the firmware and let the DSP do the grunt work of decoding, thus saving power with the low-voltage CPU?

    My question is, if it's just as well to use a DSP, why not just use a damned DSP?
  • by chthon ( 580889 ) on Sunday February 08, 2009 @02:26PM (#26774099) Journal

If you read the chapter on the history of the IEEE floating-point standard in Computer Architecture: A Quantitative Approach, you will see that accuracy has been sacrificed for speed in supercomputers before.

  • gfx (Score:5, Insightful)

    by RiotingPacifist ( 1228016 ) on Sunday February 08, 2009 @02:27PM (#26774117)

Can't this be used in graphics cards? I mean, with anti-aliasing and high resolutions it doesn't really matter much whether half a pixel is #ffffff or #f8f4f0; hell, you could probably even get a pixel entirely wrong for one frame and nobody would care (as long as it doesn't happen too often).
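    A toy version of that idea, perturbing only the two lowest bits of each 8-bit color channel (the probability and bit positions here are illustrative):

    ```python
    import random

    def perturb_pixel(rgb, flip_prob=0.005):
        """Randomly flip one of the two lowest bits of each 8-bit channel.
        #ffffff can come back as, say, #fffeff or #fdffff -- differences
        that anti-aliasing and the eye mostly hide."""
        return tuple(
            (c ^ (1 << random.randrange(2))) if random.random() < flip_prob else c
            for c in rgb
        )

    # Usually prints #ffffff; rarely, a channel drops by 1 or 2 levels.
    print('#%02x%02x%02x' % perturb_pixel((0xff, 0xff, 0xff)))
    ```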

  • by chill ( 34294 ) on Sunday February 08, 2009 @02:33PM (#26774197) Journal

    Isn't that essentially what JPEG, MPEG and every other lossy codec or transform does?

  • Games? (Score:4, Insightful)

    by ndogg ( 158021 ) <the@rhorn.gmail@com> on Sunday February 08, 2009 @02:40PM (#26774267) Homepage Journal

    I could definitely foresee this being used in game systems, especially for graphics.

    As long as it mostly looks right, that's all that really matters.

  • Re:DSP's? (Score:1, Insightful)

    by Anonymous Coward on Sunday February 08, 2009 @02:45PM (#26774333)
    Presumably, the idea is to use this technology to make good-enough DSPs that are much more efficient.
  • by gavron ( 1300111 ) on Sunday February 08, 2009 @02:46PM (#26774335)
    Processors that provide different output for the same input cannot be used for anything that wants predictable output.

They cannot be used for ANY result that is later used by anything else -- after all, calculations based on bad data yield bad results.

Next thing you know, some offshore manufacturer will use the "imprecise" (cheaper) chips instead of the "accurate" ones, and simple things we depend on every day will fail in wonky ways.

A bit-flip in a microwave could make a 30-second timer fail to expire at 0, wrap around to 99:99, and burn the food.

A bit-flip in a home heating circuit could make a target of 70F look like 6F, so the heat never turns on.

A bit-flip in an MP3 player could make it skip from 65 seconds into the song to 134 seconds in.

These are all results of just ONE BIT flipping JUST ONCE. The processor described would UNPREDICTABLY and RANDOMLY do worse than flipping a single bit.

    What's next -- they'll put three processors in each device and when two of them agree they'll go for it? "Voting Processing" is bs.

    E
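    For scale, here is a quick sketch of how far a single flipped bit can move a value, depending on which bit it hits (the positions chosen are illustrative):

    ```python
    def flip_bit(value, bit):
        """Return value with one bit inverted."""
        return value ^ (1 << bit)

    position = 65                          # e.g. 65 seconds into a song
    for bit in (0, 3, 7):                  # low, middle, high significance
        print(bit, flip_bit(position, bit))
    # bit 0 ->  64  (off by one second -- inaudible)
    # bit 3 ->  73  (off by eight -- a noticeable skip)
    # bit 7 -> 193  (off by 128 -- the wild jumps described above)
    ```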

  • by Lordfly ( 590616 ) on Sunday February 08, 2009 @02:52PM (#26774423) Journal

    ...is gaming applications.

Programmers spend a lot of time coming up with algorithms that simulate randomness for AI or cloud generation or landscapes or whatever... if the processor were just wonky to begin with, it'd make certain things look a lot more natural.

    It's interesting that AI in games is always touted as being "ultra-realistic", but always ends up being insanely easy to trip up. Having something "close enough" would add just enough realism/randomness to situations to perhaps make games and environment more dynamic.

    I wouldn't want these things processing my bank balance, though, unless it rounded up.

  • Re:DSP's? (Score:5, Insightful)

    by Firethorn ( 177587 ) on Sunday February 08, 2009 @02:52PM (#26774425) Homepage Journal

    I think the point would be using a DSP/math coprocessor that uses 1/30th the power in exchange for a .001% loss in accuracy for non-essential tasks like music decoding.

I mean, combined with the lousy earbuds most people use, who'd notice? Especially if it makes their MP3 player last three times as long as one with a more traditional, technically accurate DSP/decoder?

  • Re:Bank balance (Score:5, Insightful)

    by Hojima ( 1228978 ) on Sunday February 08, 2009 @02:56PM (#26774467)

There are countless applications for a computer that don't depend on accuracy but do depend on speed: gaming, stock analysis, scientific/mathematical research, etc. Just about every use of a computer can benefit from this. Bear in mind that these applications can take the hit of inaccuracy, if not benefit from it, depending on the situation. Yes, there are some instances where accuracy is crucial, but that's why they will continue to make both kinds of processor. It's what they call a free market, and there will always be a new niche to fill.

  • for simulation (Score:2, Insightful)

    by kilraid ( 645166 ) on Sunday February 08, 2009 @03:09PM (#26774615)
For simulation, Monte Carlo, and other statistical sampling, this would be perfect. There is already random error -- sampling error -- so introducing a lesser source of error -- computational error -- makes sense if the speedup lets you run more samples and shrink the first.
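    A small illustration of that trade-off: estimating pi by Monte Carlo with a hypothetical Gaussian "computational error" injected into each distance calculation. As long as that noise stays well below the sampling error (~0.016 at 10,000 samples), the estimate barely moves:

    ```python
    import random

    def pi_monte_carlo(n, noise=0.0):
        """Estimate pi by sampling; optionally perturb each distance
        calculation to mimic an imprecise arithmetic unit."""
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            d = x * x + y * y
            d += random.gauss(0, noise)   # hypothetical computational error
            if d <= 1.0:
                hits += 1
        return 4.0 * hits / n

    print(pi_monte_carlo(10_000))               # sampling error dominates
    print(pi_monte_carlo(10_000, noise=0.001))  # arithmetic noise is lost in it
    ```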
  • Bad example... (Score:4, Insightful)

    by Firethorn ( 177587 ) on Sunday February 08, 2009 @03:30PM (#26774843) Homepage Journal

It looks like you got sucked into bad-example land. Later on, the article mentions that it's intended for applications where accuracy isn't paramount and perfect precision isn't really necessary: multimedia, rather than Mars missions or bank calculations.

I mean, there are 1.764 million pixels on the screen I'm typing this post on at the moment. Does it really introduce much error if I round that to 1.8M? I'm also running at 60Hz. Do you think I'd really notice if there were a .01% chance that instead of getting white(255) I get white(254)? That'd be an average of 176 pixels per refresh, assuming an all-white screen. Thing is, those pixels wouldn't be the same every time. Then again, logically each pixel would tend slightly towards red, green, or blue, depending on which channel takes the error. But only slightly. In an HD movie, are you really going to notice?
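    The arithmetic behind those numbers, assuming a 1680x1050 panel:

    ```python
    pixels = 1680 * 1050          # = 1_764_000, the "1.764 million" above
    error_rate = 0.0001           # the .01% chance per pixel
    per_frame = pixels * error_rate
    per_second = per_frame * 60   # at a 60 Hz refresh
    print(per_frame, per_second)  # 176.4 wrong pixels per frame, ~10,584/s
    ```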

  • by Anonymous Coward on Sunday February 08, 2009 @03:31PM (#26774857)

    It's related to the story because it's an example of increasing performance at the cost of reducing accuracy.

  • by cnettel ( 836611 ) on Sunday February 08, 2009 @03:44PM (#26774995)

Any system which fails permanently due to a single bit error is unstable and not robust (in the numeric sense). If the system is really critical, you had better be ready for bit errors anyway.

This approach is basically similar to what would be required in analog systems, and analog engineering was, after all, quite workable. The meat of decoding an MP3 is not seeking in the stream; it's a lot of Fourier transforms and postprocessing of the waveform -- loss there can be completely acceptable.

  • Re:Bank balance (Score:4, Insightful)

    by memco ( 721915 ) on Sunday February 08, 2009 @03:48PM (#26775035) Journal
OK, so not only do I give up accuracy in the chip itself, but my own efficiency suffers because I now have to determine which chips are suitable for which applications. I don't want to have to start thinking about whether I plan to use my new laptop for anything requiring accuracy greater than such-and-such a percentage. I suppose this might be effective for niche markets, but it seems messy if you try to make it part of all computing platforms.
  • by Dahamma ( 304068 ) on Sunday February 08, 2009 @04:05PM (#26775233)

    Yeah, but there is a big difference between "random" and "incorrect".

    The errors resulting from undesirable interactions between transistors are probably a lot less random than a good pseudorandom number generator for these purposes.

  • by Hognoxious ( 631665 ) on Sunday February 08, 2009 @04:28PM (#26775471) Homepage Journal

"Random algorithms are used all the time."

    I'm not even sure such things exist. Even if you meant pseudorandom, that still has zero to do with the point under discussion.

  • by osu-neko ( 2604 ) on Sunday February 08, 2009 @04:36PM (#26775579)
    Sorry, I missed the part back in the 80's where using fuzzy logic caused my processor to consume 1/30th the power.
  • Re:Bank balance (Score:1, Insightful)

    by Anonymous Coward on Sunday February 08, 2009 @04:37PM (#26775587)

    "There are countless applications for a computer that don't depend on accuracy, but do depend on speed. For example: gaming, stock analysis, scientific/mathematical research etc."

Any mathematical calculation whose result is used to decide program flow will turn errors into wrong branches. That directly affects gaming, stock analysis, and scientific/mathematical research.

Chaos theory also applies: some errors are amplified over time as the values propagate, which greatly complicates matters in real-world use.

Arguing that accuracy doesn't matter doesn't hold up in the real world. In simple lab experiments it's much easier to control the chaotic behavior; achieving predictable, repeatable results in real-world conditions is a lot harder.

  • Sooo... (Score:3, Insightful)

    by thetzar ( 30126 ) on Sunday February 08, 2009 @05:16PM (#26776031) Homepage

    He's invented analog?

  • Re:Bank balance (Score:2, Insightful)

    by Hojima ( 1228978 ) on Sunday February 08, 2009 @05:33PM (#26776195)

"Given that, I would expect this hardware - if it proves useful - would primarily be in the 'entertainment' sector of the market."

Not really. Any simulations that are influenced by natural chaos would greatly benefit from this. Examples include engineering simulations of products that must robustly prevent system failures, pharmaceutical simulations of chemical reactions that may have natural anomalies, statistical research that depends on many unknowns, stock market predictions that depend on such statistical research, and evolutionary algorithms that thrive on error; it could even serve as supplemental computation for AI learning to deal with new and unexpected occurrences. Like I said, this processor has a whole niche of its own, and it may come standard in a PC, like the GPU.

  • Re:Bank balance (Score:3, Insightful)

    by shaitand ( 626655 ) on Sunday February 08, 2009 @05:34PM (#26776203) Journal

Don't forget there are errors in the hardware anyway, and error-correction algorithms running on the software side already take care of them.

  • Re:Bank balance (Score:4, Insightful)

    by ZombieWomble ( 893157 ) on Sunday February 08, 2009 @06:25PM (#26776733)
    I don't think any of these things particularly benefit from this type of processor. All of the situations you describe involve some degree of randomness, but this CPU doesn't sound like a source of useful random values at all.

The randomness in these processes occurs in particular places, in particular quantities. This processor presumably produces some characteristic amount of randomness in each calculation, but the odds of that being a meaningful amount for whatever arbitrary calculation you're doing are vanishingly small -- and given that it's apparently designed to allow different amounts of randomness in different bits, it's almost certainly non-uniform as well.

In almost all simulations you want extremely well-controlled random numbers -- something that noise added to your floating-point calculations is not.

  • Analog? (Score:3, Insightful)

    by w0mprat ( 1317953 ) on Sunday February 08, 2009 @06:35PM (#26776823)
Transistors are natively analog. It's ironic that we use them for digital logic locked to a clock frequency.
  • Re:Bank balance (Score:3, Insightful)

    by wisty ( 1335733 ) on Sunday February 08, 2009 @07:12PM (#26777263)

    Accuracy might not matter for some steps in an implicit, iterative numerical scheme.

  • Re:Bank balance (Score:3, Insightful)

    by Firethorn ( 177587 ) on Sunday February 08, 2009 @07:47PM (#26777621) Homepage Journal

In all probability this would never see use as a general CPU; more likely as a dedicated section of silicon performing the function of a GPU/FPU/DSP. Actual control instructions would be high-integrity; just the low-significance digits would get the 'cheap but a bit more unreliable' treatment.

I'd like to see how much an image would change if you imposed a 1% chance of a 10% error in either the chroma or luminance of each pixel. I'm willing to bet you'd have to take a magnifying glass and put the images next to each other to tell. Change it to 30-60 Hz video, and the perceived differences would go away.
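    A sketch of that proposed experiment on a flat list of 8-bit luminance samples (PIL or numpy would do the same to a real image; plain lists keep the example short):

    ```python
    import random

    def degrade(pixels, err_prob=0.01, magnitude=0.10):
        """Give each 8-bit sample a 1% chance of a +/-10% error --
        the experiment proposed above."""
        out = []
        for p in pixels:
            if random.random() < err_prob:
                p = min(255, max(0, round(p * (1 + random.choice((-1, 1)) * magnitude))))
            out.append(p)
        return out

    frame = [128] * 1000                  # a flat mid-gray test strip
    noisy = degrade(frame)
    changed = sum(a != b for a, b in zip(frame, noisy))
    print(changed, "of", len(frame), "samples moved, each by ~13 levels")
    ```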

  • by 3seas ( 184403 ) on Sunday February 08, 2009 @08:18PM (#26777875) Homepage Journal

    .... of artificial intelligence.

  • Re:Bank balance (Score:3, Insightful)

    by Pseudonym ( 62607 ) on Sunday February 08, 2009 @09:24PM (#26778345)

"Think about a computation such as X - Y. If X and Y have the same first 3 digits, then you lose significant digits in the result."

    Any numeric analyst worth their pay will have thought hard about every calculation. If there's a subtraction, then either it won't be two floating-point numbers of similar magnitude, or the result won't be crucial (e.g. it might be an error estimate rather than the actual result).

    If the characteristics of the hardware are known, then algorithms can be designed to suit them. This is just another tool in the toolbox.

"If you do anything more complicated than adding and multiplying, then you need accurate computation."

    Because, of course, IEEE-754 floating point numbers are renowned for their accuracy.
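    For anyone who hasn't run into it, the cancellation problem being quoted is easy to demonstrate with perfectly standard IEEE-754 doubles:

    ```python
    # Catastrophic cancellation: subtracting nearly equal numbers throws
    # away leading digits even on exact IEEE-754 hardware -- the point
    # above that "accurate" floating point already needs careful design.
    x = 1.23456789
    y = 1.23456780
    diff = x - y
    print(diff)   # near 9e-08, but the trailing digits are garbage:
                  # only about half of the ~16 significant digits survive
    ```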

  • Re:Bank balance (Score:3, Insightful)

    by arminw ( 717974 ) on Sunday February 08, 2009 @09:55PM (#26778555)

"...In the US, cents in my checkbook are important..."

Would that not depend on whether the errors made were equally in your favor and against it? If you did a large number of transactions and half of them added a penny or two to your account and half of them subtracted a similar amount, it would all even out in the end, would it not? Only if the errors were unidirectional would there be a long-term problem.
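    A quick simulation of that intuition: with symmetric one-cent errors the average drift is indeed near zero, but the spread of the final error grows like the square root of the number of transactions, so "evening out" is only approximate:

    ```python
    import random

    def final_error(n_transactions=10_000):
        """Net balance error, in cents, from symmetric +/-1-cent mistakes."""
        return sum(random.choice((-1, 1)) for _ in range(n_transactions))

    trials = [final_error() for _ in range(5)]
    print(trials)   # typically within a dollar or two of zero, rarely exactly 0
    ```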

  • Re:Bank balance (Score:3, Insightful)

    by ozphx ( 1061292 ) on Sunday February 08, 2009 @10:04PM (#26778613) Homepage

    You wha?

    I think hardware and software designers already have that covered when they perform processing on different parts of your system, such as your CPU, or GPU...

Specialization is a good thing, unless you have a preference for the performance of the DirectX reference rasterizer...

  • by Aphoxema ( 1088507 ) on Monday February 09, 2009 @02:01AM (#26780157) Journal

You say what you say, but for the life of me, I have never been concerned with the supposed quality of audio. I just can't tell the difference between 128kbps and 256kbps, MP3 and FLAC, 22kHz and 44kHz.

I also cannot tell the difference between HD and upscaled SD when in motion.

It just feels so ridiculous; I swear people are only fooling themselves with this high-def nonsense. Then again, I can't be sure -- maybe there really is something I'm missing.

  • Re:Bank balance (Score:2, Insightful)

    by durrr ( 1316311 ) on Monday February 09, 2009 @05:33AM (#26781013)
    Inaccurate = my darts miss the target by up to a meter.
    Random = my darts may be assigned any possible movement vector with equal probability.

Please understand that there's a difference between these two. If I'm inaccurate, we can compensate by altering the mechanics of our 'game' (make a huge dartboard, triangulate from multiple throws), whereas if I'm random we can't really do much to help.

Accuracy (or the lack of it) is not very challenging to measure either, and once you have it measured, it should be trivial to compensate until the rate of significant errors drops below the chance of being hit by lightning, or whatever is considered within safe bounds.

Consider that you could spend 85% of your CPU cycles on error correction to achieve parity with precision circuits while still keeping the 1/30th power cost advantage. That advantage doesn't only (or necessarily) translate into smaller utility bills. It also means less heat, which means not just quieter fans but also bigger, more powerful chips. Scale your current CPU to 10x the transistor count at current tech and your computer would rival your microwave in heating power; at 1/30th the power, that would still be only a third of what you're using now.

Saying this technology will have no use at all is a bit unimaginative -- unless perhaps you have a lot of stock in the current industry?
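    The "huge dartboard / triangulate from multiple throws" compensation is essentially averaging, which is cheap to sketch (the noise model here is an assumption, not a measured property of Palem's chip):

    ```python
    import random

    def noisy(x, sigma=1.0):
        """An inaccurate-but-unbiased computation, like the wobbly darts above."""
        return x + random.gauss(0, sigma)

    def compensated(x, repeats=30):
        """Spend extra (still cheap) cycles repeating the computation and
        averaging: error shrinks like 1/sqrt(repeats)."""
        return sum(noisy(x) for _ in range(repeats)) / repeats

    print(noisy(100.0))        # off by ~1
    print(compensated(100.0))  # off by ~0.18, i.e. 1/sqrt(30) of that
    ```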
