Math Technology

New Method for Random Number Generation Developed 395

Posted by ScuttleMonkey
from the lots-of-coin-flipping dept.
Science Daily is reporting that a German team has developed a new method of random number generation that they hope will improve security. "The German team has now developed a true random number generator that uses an extra layer of randomness by making a computer memory element, a flip-flop, twitch randomly between its two states 1 or 0. Immediately prior to the switch, the flip-flop is in a 'metastable state' where its behavior cannot be predicted. At the end of the metastable state, the contents of the memory are purely random. The researchers' experiments with an array of flip-flop units show that for small arrays the extra layer makes the random number almost twenty times more 'random' than conventional methods."
This discussion has been archived. No new comments can be posted.

  • by AtomicDevice (926814) on Monday February 22, 2010 @03:38PM (#31233934)
    I'd say that, based on the fact that all your characters were lowercase and on the overwhelming proportion of letters to digits, there are significantly fewer bits of entropy in your so-called random comment than you would have us believe.
    • Here's a question about bits of entropy:

      If they can mathematically calculate how random something is, can't they just mathematically determine what would be the most random series of numbers, and just use that?

      • by blueg3 (192743)

        No. Neither a number nor a sequence of numbers has, by itself, any entropy.

        • Re: (Score:3, Insightful)

          by tepples (727027)
          The entropy of a sequence of numbers is its Kolmogorov complexity [wikipedia.org]. It can't be calculated, but compression programs like 7-Zip give upper bounds.
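The compression trick the parent mentions can be sketched in a few lines of Python, with zlib standing in for 7-Zip (the compressed size is only an upper bound on the true Kolmogorov complexity):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Upper bound (in bytes) on the Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 5000    # highly regular, 10000 bytes
noisy = os.urandom(10000)    # incompressible with overwhelming probability

# The regular string compresses far better than the random one.
print(compressed_size(structured))   # tiny
print(compressed_size(noisy))        # close to 10000
```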
          • by blueg3 (192743)

            No, that's really a measure of complexity. It can be used as *a* measure of "randomness", but it is not the same as entropy.

      • If they can mathematically calculate how random something is, can't they just mathematically determine what would be the most random series of numbers, and just use that?

        Then all that's needed is legislation that requires everyone desiring a random series of numbers to use the one that was pre-calculated for them. Problem solved!

    • by TheCarp (96830)

      He never said what the encoding was

    • by Sir_Lewk (967686)

      That doesn't make it any less random, though. I can make a random sequence using nothing more than 1s and 0s; including the digit 2 wouldn't make it any more random, it would just increase the entropy per character.
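"Entropy per character" here is just bits of Shannon entropy per symbol, which for uniformly chosen symbols is log2 of the alphabet size. A quick sketch:

```python
import math

def bits_per_symbol(alphabet_size: int) -> float:
    """Entropy per symbol when each symbol is drawn uniformly at random."""
    return math.log2(alphabet_size)

print(bits_per_symbol(2))   # 1.0 bit per '0'/'1' symbol
print(bits_per_symbol(3))   # ~1.585 bits once '2' is allowed as well
```

Both sequences can be perfectly random; the ternary one simply packs more entropy into each character.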

  • by Anonymous Coward on Monday February 22, 2010 @03:39PM (#31233978)

    The generation of random numbers is too important to be left to chance.

  • XKCD Bait (Score:5, Funny)

    by jgtg32a (1173373) on Monday February 22, 2010 @03:42PM (#31234050)

    Let's play a game: which XKCD am I thinking of?

  • by fuo (941897) on Monday February 22, 2010 @03:43PM (#31234060)
    always been one of my favorites... http://xkcd.org/221/ [xkcd.org]
  • Hardware? (Score:4, Insightful)

    by e2d2 (115622) on Monday February 22, 2010 @03:44PM (#31234084)

    TFA fails to state whether they used existing memory types or if they intend to use a custom piece of hardware on board.

    • Re:Hardware? (Score:4, Interesting)

      by eldavojohn (898314) * <eldavojohn@gmFREEBSDail.com minus bsd> on Monday February 22, 2010 @03:58PM (#31234382) Journal

      TFA fails to state whether they used existing memory types or if they intend to use a custom piece of hardware on board.

      My guess would be custom, though not completely different from everyday parts. I knew "metastability" from my college courses, where it was presented as a classic problem in electronics [wikipedia.org]. I suppose there could be a way to harvest this data from hardware before it gets corrected. I never thought of this before, but if you had a run of optical fiber longer than its rated length, you could send messages through it and collect them on the other end. We implement parity to remove these random bit flips during transmission; couldn't we also use them to increase the randomness of random numbers? I think I've read of the network guys fighting metastability [acm.org], so their incorrectly implemented hardware could probably be exploited as a source of random bits.

  • by onionman (975962) on Monday February 22, 2010 @03:47PM (#31234156)

    From TFA:

    The team adds that the efforts of a cracker attempting to influence the array will be wholly obvious to a simple statistical analysis as -- depending on the type of attack -- either the whole array or single elements will be disturbed, whereas these are again selected randomly. So this true random number generator can protect systems against third-party snooping, potentially making private and sensitive transactions on the Internet more secure.

    Now I'm really skeptical. A cracker who is able to "influence" the array might be able to influence it with a pseudorandom number generator that he/she can predict.

    I think that hardware-based RNGs, such as those detecting radioactive isotope decay, have been around for a while. I'm not sure how this one can provide more security, especially if the attacker has access to the hardware. I think most gate transition thresholds can be influenced by simple things like temperature anyway.

    What exactly does "more random" mean in the summary? I think something is either random or it isn't. Perhaps this claim should just make us "more skeptical".

    • by dissy (172727)

      What exactly does "more random" mean in the summary? I think something is either random or it isn't. Perhaps this claim should just make us "more skeptical".

      True random means that each item in your possibility list has equal chances of occurring.

      If your possibility list is the numbers 1-10, then each number would have exactly a 10% chance of occurring, in order to be truly random.

      If instead some numbers have a 10.001% chance of being chosen and others a 9.999% chance, then while the result might appear just as random, it is less random than the first case.

      Of course anything else that adjusts the outcome and enables further predi
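One way to quantify this is Shannon entropy: a slightly biased 10-outcome distribution carries marginally less entropy than a uniform one. A sketch (the exact percentages above are illustrative; these are chosen so they sum to 1):

```python
import math

def shannon_entropy(probs) -> float:
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.1] * 10
biased  = [0.10001] * 5 + [0.09999] * 5   # sums to 1.0

print(shannon_entropy(uniform))  # log2(10) ~ 3.3219 bits, the maximum
print(shannon_entropy(biased))   # very slightly less than the maximum
```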

      • Re: (Score:3, Informative)

        What exactly does "more random" mean in the summary? I think something is either random or it isn't. Perhaps this claim should just make us "more skeptical".

        True random means that each item in your possibility list has equal chances of occurring.

        No, true random means the outcome cannot be predicted with certainty. What you're describing is one particular type of randomness known as the "uniform distribution". Gaussian or binomial random variables, for example, don't have equal likelihood for the outcomes but are still truly random.
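The distinction can be illustrated with a binomial variable built from fair coin flips: each trial is unpredictable (given a real entropy source), yet the distribution over sums is far from uniform. A sketch using Python's `random` module, which is itself a PRNG and is seeded here only so the demo is repeatable:

```python
import random
from collections import Counter

random.seed(12345)  # for a repeatable demo; a real source would be unseeded

# Sum of 10 fair coin flips per trial: binomially distributed, so
# 5 heads is far more likely than 0 or 10 -- but still random.
counts = Counter(sum(random.randint(0, 1) for _ in range(10))
                 for _ in range(100_000))

print(counts[5], counts[0])  # the middle outcome dominates the extremes
```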

      • True random means that each item in your possibility list has equal chances of occurring. If your possibility list is the numbers 1-10, then each number would have exactly a 10% chance of occurring, in order to be truly random.

        perl -le '$v = 0; sub random { $v = ($v + 1) % 10; return $v } print random() for 1..10'

        Perfectly random?

    • Re: (Score:3, Interesting)

      by ticklemeozmo (595926)

      What exactly does "more random" mean in the summary? I think something is either random or it isn't. Perhaps this claim should just make us "more skeptical".

      Nothing can ever be considered random. If it is, it's just in a state of "we just don't have a means of measuring its next value."

      You can call me guessing a "number between 1 and 10" random, but that's just because you don't know my method of choosing. If you did, it wouldn't be random at all. If you knew the order of the deck of cards, and precisely each transition of the shuffle, then the next card could easily be predicted. Since you don't have that power, it's considered "random".

      Same thing with

      • by blueg3 (192743)

        Quantum mechanics would like to have a word with you.

      • Re: (Score:3, Informative)

        by Bruce Perens (3872) *
        You seem to be missing quantum mechanics. The noise from a noise diode, a good way of getting real randomness, is a quantum phenomenon and you can only explain it with statistics. There is a probability that any little bit of the junction will avalanche within a certain time, but there is no way for you to say when.
    • by joggle (594025) on Monday February 22, 2010 @04:06PM (#31234560) Homepage Journal

      In Numerical Recipes in C they list several benchmarks for determining how good one random number generator is compared to another (based on various statistical measures), so it certainly is possible for one method to be more random than another. Read chapter 7 of that book for all the details you could possibly want on this subject (with references to even more information).

      One way of generating a good random number in Linux is using /dev/random (which uses hardware-derived noise as its source; I don't recall the details). However, it isn't fast enough for most applications, outputting only a few bytes per second of random information, although it can serve as a useful seed for other random number generators. Just run 'cat /dev/random | xxd' to watch its output.

      I'm curious what rate random information can be generated using the method in the article. I'm presuming it's fast enough that an application could rely solely on this data without having to use it as a seed for a pseudo-random number generator. The question is how long does it take for the hardware to get to the state where its next value is unpredictable--in the case of /dev/random it's relatively long.
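The "slow hardware source seeding a fast PRNG" pattern described above might look like this in Python, where `os.urandom` draws on the kernel's entropy pool much like `/dev/urandom`:

```python
import os
import random

# Pull a few bytes from the kernel's entropy pool (slow-ish, high quality)...
seed = int.from_bytes(os.urandom(16), "big")

# ...and use them to seed a fast pseudo-random generator for bulk output.
rng = random.Random(seed)
bulk = [rng.random() for _ in range(1000)]
print(len(bulk))
```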

      • Re: (Score:3, Informative)

        by BlaisePascal (50039)

        /dev/random is slow because it maintains an entropy pool filled by sources of randomness in the hardware -- things like mouse movements, keystroke timings, disk timings, etc. If reading from /dev/random drains the pool faster than it's filled, then /dev/random blocks until there is enough entropy. /dev/urandom uses the same techniques (same pool, even), but it doesn't block when the pool is drained of entropy. Theoretically this means that there could be enough information in the output of /dev/urandom to

    • Dedicated hardware random number generators are expensive and therefore aren't found in run-of-the-mill consumer electronics. This is a simple, easy-to-manufacture, solid-state device that improves randomness considerably. It's almost impossible to have a true random number generator, so we generally use pseudo-random number generators instead, generally software-based ones. The problem is that given a certain seed value, a random number generator will always produce the same outputs. You might

    • by jittles (1613415)
      There is no such thing as a random number generator, only a pseudo-random number generator. Therefore these numbers appear to be more random than, for instance, software-based techniques to generate a pseudo-random number.
      • by vlm (69642)

        There is no such thing as a random number generator, only a pseudo-random number generator.

        If you allow special hardware this is almost too easy, listen to a geiger counter click using a microphone, etc.

        If you insist on off the shelf PC hardware, simply record the sound input (better with a microphone attached, but just the hisssssssssss is OK too) then hash it or otherwise stir well.

        The LSB of the timer for each keyboard interrupt works OK too.

        There is probably a theoretical proof, that over a long enough congested enough internet path, you can get bits of randomness out of the least sig bits of
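The "record noise, then hash or otherwise stir it" recipe can be sketched as follows. Note the noise here is simulated with `os.urandom` purely to keep the sketch self-contained; real code would capture actual microphone samples, and hashing many weakly random samples condenses them into a well-mixed digest:

```python
import hashlib
import os

def distill(noisy_samples: bytes, n_bytes: int = 32) -> bytes:
    """Condense weakly random input into n_bytes of well-mixed output."""
    return hashlib.sha256(noisy_samples).digest()[:n_bytes]

# Stand-in for recorded microphone hiss (assumption: real code would
# read audio hardware; os.urandom just makes this runnable anywhere).
hiss = os.urandom(4096)
print(distill(hiss).hex())
```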

  • 20 times more random?

    umm.. errr... wha?
    • 20 times more random?

      I don't get it either. First they claim it's a true random generator that generates "purely random" numbers.

      Then they proceed to explain that

      ... The degree of randomness possible depends on the size of the array ...

      Can anybody tell me how this works?

    • by Strilanc (1077197)

      I'm just going to assume they meant "can generate 20 times more entropy per second per cost-of-hardware than existing methods".

  • by Arimus (198136)

    Would this beat methods such as leaky diodes or radio noise which some systems use to get random data?

  • WiFi (Score:3, Interesting)

    by hey (83763) on Monday February 22, 2010 @03:50PM (#31234218) Journal

    I always thought the WiFi radio in laptops would be a good thing for generating random numbers.

  • reproducibility (Score:3, Insightful)

    by domulys (1431537) on Monday February 22, 2010 @03:53PM (#31234270)
    While this new technique may improve security, it seems to lack one important property of pseudo-random numbers that is required by many applications: reproducibility.

    Good luck finding the bug in your program with a stream of randoms you'll never be able to reconstruct again.
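This is why debuggable programs usually take an explicit PRNG seed: log it once and the exact "random" stream can be replayed on demand. A minimal sketch:

```python
import random

def run_simulation(seed: int) -> list:
    rng = random.Random(seed)   # private generator, fully determined by seed
    return [rng.random() for _ in range(5)]

# Log the seed; any failing run can then be reproduced exactly.
seed = 42
assert run_simulation(seed) == run_simulation(seed)
print("replayed identically")
```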
  • 9...9...9...9...9...9

  • by wiredlogic (135348) on Monday February 22, 2010 @03:59PM (#31234410)

    There is no way they can prove that these flip-flops don't have a bias one way or the other. Even if you could design a perfect circuit, it would be subject to imbalances between p-type and n-type transistors and to process variations. This makes it impossible to create a perfect Gaussian metastability function, or to place a device at the apex of that function such that the probability of switching to 1 or 0 is 50/50. Hence, you will not achieve truly random results. Metastability is also affected by the power supply voltage and current. A cryptographic device employing this technique could be subject to attack by lowering or modulating the power supply in such a way as to create predictable "random" numbers, e.g. forcing all the flip-flops to transition to the same state.

    • Ratio sensitivity (Score:4, Interesting)

      by overshoot (39700) on Monday February 22, 2010 @04:05PM (#31234528)

      Even if you could design a perfect circuit it would be subject to the imbalances between p-type and n-type transistors and process variations.

      That's one problem it won't have, since the initial condition is at the balance point of P vs. N. The bias would show up in the curvature of the gain function around the bias point. It's not a large bias, and it's likely to vary from one device to the next -- so the prudent designer would have to correct for each bit's history. Still, thermal noise is easier to work with than radioactive decay.

    • by ooooli (1496283) on Monday February 22, 2010 @04:47PM (#31235364)

      You're confusing Shannon entropy and true randomness. If you have a string of bits that are created by a process that is truly random but has a bias, it's easy to transform it into an unbiased (but shorter) string.

      The problem with pseudo-random generators is that they're really not random at all: they're deterministic functions that map a seed onto a sequence of random-looking bits. If you know the function and the seed, you can predict all of it, which leads to potential vulnerabilities. The point of truly random numbers is that there's no possible information you could have that would enable you to predict them.
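The classic transformation alluded to above is von Neumann's debiasing trick: read the biased bits in pairs, emit 0 for "01" and 1 for "10", and discard "00" and "11". As long as the bits are independent, the output is exactly unbiased, just shorter. A sketch:

```python
def von_neumann(bits):
    """Debias an independent-but-biased bit stream (von Neumann's trick)."""
    out = []
    it = iter(bits)
    for a, b in zip(it, it):   # consume the stream two bits at a time
        if a != b:
            out.append(a)      # '01' -> 0 and '10' -> 1 are equally likely
    return out

biased = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]   # pairs: 11, 01, 10, 00, 10
print(von_neumann(biased))                # [0, 1, 1]
```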

    • Re: (Score:3, Informative)

      by Alef (605149)
      Hardware random number generators are often biased, and there are well known ways to deal with that. (See for example Wikipedia [wikipedia.org].)
  • Every x years, someone will find and publish a way to cure cancer... in mice.

    Every y years, someone will invent and publish a way to treat phase velocity as if it were group velocity.

    Every z years, someone will discover and publish a way to use metastable flip-flops to produce random numbers.

  • Link to actual paper (Score:3, Informative)

    by scovetta (632629) on Monday February 22, 2010 @04:15PM (#31234744) Homepage
