Technology

Future Computers

jethro200 writes "Popsci.com has an interesting story on the up-and-coming silicon replacements, ranging from DNA to a little molecule called thiol to using atoms in a quantum state. Obviously, these are a long way from being your next desktop, but an interesting article nonetheless."
This discussion has been archived. No new comments can be posted.

  • ...a bright orange solution of a billion billion molecules.

    In other news, astronauts have discovered a new use for tang.
  • Great minds reading /. who are interested in this article should definitely make their way to Pasadena this summer for the Computing Beyond Silicon Summer School [caltech.edu]. The deadline for applying has passed, but you can always gatecrash, or monitor the site to read the lecture notes online (let's hope they're made available).
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Thursday May 23, 2002 @02:59AM (#3570756) Homepage Journal
    We have this 3-5 pound computer sitting in our heads that is so powerful that we can't emulate it with any success. To boot, it doesn't use hardware as we normally think of it.

    When the claim comes up that someday we will use biological computers, custom-grown neurons that will do calculations for us and grow beyond our own puny brains, I can only nod my head in agreement. Our hardware can't be that difficult to figure out; we've got the raw components, we just need to know the schematics.

    It's only a matter of time.
    • This sort of stuff bothers me.

      We don't have a 3-5 pound computer sitting in our heads. We have a 3-5 pound brain emulator sitting on our desk.

      The point of the computer (originally) was to do complex tasks that took the human brain too much time. It does slave-like replication. It's an emulation of something we can already do, in theory.

      Furthermore, the human brain is far from puny. We have about 10^15 synapses, which is far more connections than there are genes in our genome, or even stars in the galaxy. 10^15 is an incredibly large number to imagine. A synapse is a neuronal connection, a data-transfer point.

      I urge the above poster to consider the fact that life and thought have been debated for thousands of years. We *can* be so difficult to figure out.
        • I think we're all missing the easy phrase: life will find a way. We don't have to create a computer that will mimic a human brain; that would be redundant. Instead, we need to give a computer free thought and let it develop on its own. How many millions of years did it take the human brain to get to where it is now? I would imagine a computer could evolve at twenty times the rate if we give it the chance, but are we willing to take that risk?
          • I agree with Child. Basically, if you create AI that can evolve, you might end up with a Matrix or Terminator 1 and 2 kind of world where machines rule man.
    • The human brain takes about a decade to boot. Everything that happens to it in that decade affects its performance. Not a very good example for computing.

      I'd never buy one of those for production, maybe for fun.

    • ....sure, but try recompiling the kernel!
    • by Anonymous Coward
      Yeah and an uptime averaging 75 years!
    • We have this 3-5 pound computer sitting in our heads that is so powerful that we can't emulate it with any success.

      Yeah, and just try computing factorial 200 with your brain, or solving systems of 5000 equations, or indexing the web.

      The brain and computer hardware are two different things, each with its own strengths.
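      To put a number on the parent's example, here is a quick editorial sketch (plain Python, not from the thread): 200! is effortless for a machine and hopeless for an unaided brain.

```python
import math

# 200! -- trivial for silicon, unthinkable by hand.
n = math.factorial(200)
print(len(str(n)))  # 375: a 375-digit integer, computed instantly
```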
    • We have this 3-5 pound computer sitting in our heads that is so powerful that we can't emulate it with any success.

      And just look at what most people do with theirs. I can see the fine print now: Luser, PHB and Evil modes installed by default. :-)

      -- D

      Ceci n'est pas une sig.
  • "The ENIAC was a 4-bit computer that ran at a now-paltry 20,000 cycles per second--about the computing power found in an electronic greeting card that plays a silly song when opened."

    Man, how far have we progressed? How far do we have to go? This is some cool stuff, wonder how long before I'll be able to walk down the street and tap the street sign to get directions, locate restaurants, etc.....hmmm...come to think of it....

    I am applying for a patent on this new found technology. I'll call it National Universities Technology Streetside Assistance Center, or NUTSAC for short.

    Yes, soon I will take over the world, one stretch of dirty litter-ridden pavement at a time!!!! MUWHAHAHAHAHA!!!!!!
  • Encryption? (Score:1, Interesting)

    by BitHive ( 578094 )
    The author makes reference to quantum computers speeding up crypto operations. I thought (from various stories here on /.) that quantum crypto was supposed to be the end-all for unbreakable encryption (unless quantum state cloning is perfected)...now, from my understanding of the way quantum-based crypto works, this doesn't require a whole lot of math.

    I find it interesting that the problems that this new research is solving are changing along with the solutions.

    • Quantum security is theoretically infallible, thanks to Heisenberg's uncertainty principle: the key you use is only the message you actually receive (intercepted bits are altered). I imagine the author is referring to the generation of primes (and the factorisation of large numbers) for generating shared keys and breaking RSA encryption, i.e. quantum computers can be used against conventional security.
    • There are two distinct concepts here:

      Quantum Computing
      A method of computing that encodes data in the superposition of states of quantum bits. When this superposition collapses, the answer falls out (more or less).
      This technique can be used to solve the Integer Factorisation Problem. If this occurs, all public-key cryptography is useless.

      Quantum Cryptography
      A technique of sending single photons polarized in different directions. This is secure because it is not possible to determine the polarization of a photon without changing its state (i.e. it is impossible to eavesdrop without being detected).
      This is a means of ensuring secure two-party communications.

      If Quantum Computing was viable today, it would invalidate all existing public key encryption systems. The good news is that Quantum Encryption would then be available to replace it.

      David Jackson
      --
      Incorrigible punster - Do not incorrige.
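  The threat described above, quantum computing invalidating public-key cryptography, comes down to how hard factoring is classically. A toy editorial sketch (hypothetical code, not from the thread; 3233 = 53 × 61 is a textbook-sized RSA modulus):

```python
def trial_factor(n):
    """Classical trial division: cost grows exponentially in the
    bit-length of n, which is what keeps RSA moduli safe today."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

# A toy RSA-style modulus. Real moduli are hundreds of digits long;
# Shor's algorithm would factor them in polynomial time.
print(trial_factor(3233))  # (53, 61)
```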
  • by wbav ( 223901 ) <Guardian.Bob+Slashdot@gmail.com> on Thursday May 23, 2002 @03:11AM (#3570783) Homepage Journal
    I thought that this might be interesting for a few of those who don't know much about quantum computing.

    The idea is to have a bit that can be 1 and 0 at the same time. This means that with 50 of these bits, called qubits, you can represent every number from 0 to about 10^15 (2^50), all at the same time.

    What's really cool, is with this you can use what's called a bogo sort. Imagine a set of cards, that is shuffled. Now to sort them in order, most people would go through 1 by 1 and put some in front and some in back. A bogo sort creates a new universe and then throws the cards into the air. If they land in order, great, else destroy the universe.

    All these universes are created at the same time, making it 1 step to sort 52 cards. Like I said, it's interesting.
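    The "destroy the universe" sort described above is the quantum-bogosort joke; classically you just reshuffle instead of branching universes. A sketch of the classical version (hypothetical code, usable only on tiny inputs since the expected number of shuffles grows as n!):

```python
import random

def bogosort(items):
    """Shuffle until sorted -- the classical, single-universe version."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # [1, 2, 3], eventually
```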
    • But then who has the fun of creating new and wonderfully complex encryption methods? If it only takes one step to sort 52 cards, will it also only take one step to try, say, a few million different attempts to break an encryption? To use the previous example... Has the encryption been cracked? If not, destroy the universe! :) Sounds kinda evil. ;)
      • Actually, this means we need to change our encryption methods. Most encryption is based on factoring large numbers rather than on new and wonderfully complex encryption methods. Also, for communications, quantum cryptography adds extra security: if someone else is listening to the message, they alter it, thus rendering the whole stream useless.
        • Forgive me for my limited understanding of quantum theory but....
          wouldn't the same argument apply to the recipient?
          By trying to read the message would you alter it?
          • No, it has to do with the number of listeners.

            If there is only one transmitter and one receiver, the data only flows one way; when another receiver is added, it changes the message because it's there. It's difficult to explain, but you should find some good books listed on the website above to help you out.
      • Quantum is base three, not base two?
        Top, bottom, up, down, strange and charm.
        This enables you to represent 1 and 0 at the same time (but not 1, 0 and -1, or whatever you'd have in a base-3 system), so you can still encrypt.
    • Here is a paper I had to write on the subject; it is in draft form as the class did not call for a final version.
      Quantum Computing [orst.edu]
    • One thing that popular science magazines fail to mention is that quantum computing can never fully replace conventional computing. Quantum computers are marvelous at prime factorisation (cracking the RSA problem in polynomial time; lucky for us, quantum security is guaranteed to be safe, but that is another topic), but they're not so good for things which need to give feedback. The quantum computer of the future will likely be a "black box" plugged into a port (not unlike the external ray-tracers currently being flogged) which will do all your quantum number crunching; for everyday processing, good old serial silicon is irreplaceable. Think about a chess game: a simple serial PC will happily take in your current move, produce a game tree, and return the next best move. A quantum computer would be fairly useless in processing every single move; its true advantage would be to process a batch of, say, 10 moves ahead and give you the results after your 10 moves. Not particularly interactive.
      • This is true, so the real future of computers will be a hybrid between quantum and conventional computing. It still will be a pain to write the software for it.
        • Hah, you think humans will still be writing software in a future that complex, or designing the even more complex architectures it runs on? You've got to be crazy. :)

          The future belongs to various flavors of AI and evolved solutions, with the rare human playing the role of big picture conductor.
          --

      • What you've said is sort of true, but more importantly, quantum computers must contain classical components to operate. A quantum computer will have silicon like a skeleton supporting its foamy quantum flesh.

        Take as an example simple quantum error correction (which is much more important in quantum computers than classical ones): it must involve classical routines such as syndrome detection, and these are critical for the basic storage and movement of quantum information, which otherwise decoheres in picoseconds. Basically, you'll have classical error-correction subroutines running like arteries around qubits as they flow through the computer. So you can bet that silicon will run throughout the quantum computer.

        Shor's quantum algorithm for factoring integers is based on period finding, and it has many, many classical parts.

    • All these universes are created at the same time, making it 1 step to sort 52 cards. Like I said, it's interesting.
      Ok, so far this is what everybody and his dog knows by now about quantum computing.

      Unfortunately, like everybody else's descriptions I've seen so far, you left out one minor detail: how do you pick the right result? With your card-sorting example, I would end up with ca. 8*10^67 (52!) results simultaneously. Which is the right one? Do I go through them one by one? Do I build another quantum computer to check them all at once?

      Can somebody please explain?

      • This, from what I understand from the sources listed on the website given, is supposedly a "feature" of the software. The universe itself checks the order of the cards and, if it's wrong, destroys itself. Thus you only have one correct universe, and it's the only one left standing.
  • Is its ruggedness. It has a fairly wide range of operating temperatures, and it's fairly easy to make shockproof enclosures (mobile phones, for instance.)

    Anything based on a more delicate technology like DNA (which is only stable over a narrow temperature range) would need some kind of homeostatic enclosure. Potential candidates are cheaply available.

    That isn't a herd of cows, that's the Pentagon's decryption engine farm. And Bin Laden's successors are really going to have to worry about what the cockroaches might be up to, especially if Dolly the supercomputing sheep is better at processing operational intelligence than the FBI and the CIA - not, on the face of it, difficult.

    • Having grown up in Iowa, I don't think I'd want to sort through the "output" of a herd of supercomputing cows or sheep. :)


  • There have been many mentions of "future computers" involving "DNA" or "molecular structures" and so on.

    Well ...

    Aren't we made up of "molecules" ?

    Don't we have "DNA" ?

    Methinks the REAL future computer will be the DRASTICALLY ALTERED HUMAN BEINGS (if they can still be called "humans"), with their molecular structures perfectly aligned to carry out not only bodily functions but also computational needs, their DNA become ultra-computational devices ... PLUS, QUANTUM COMPUTER TECHNIQUES operational inside their domain (body).

    Yep, that may be scifi like, but who knows ?

    Instead of the computer being out of body (wearable, or whatever), the future computers will be THE BODY WE HAVE - don't need to wear anything or carry any batteries.

    • their molecular structures perfectly alligned to carry out not only bodily functions but also for computational needs

      'Scuse me for a minute. I really gotta take a memory leak.
    • Every second of every day, our genetic material and its supporting machinery regulate an unimaginable number of complex chemical pathways by carrying out the entire range of sensing, analysis and control. If they didn't, we'd just be a mush of amino acids.

      The machinery that regulates chemical processes in our bodies is an inherent part of the processes themselves. In fact, it's productive and enlightening to think of biological systems as computational, and the chemical processes within them as algorithms. Researchers like Prof. Erik Winfree [caltech.edu] at Caltech are beginning the difficult process of applying this insight to research. [caltech.edu]

      Due to the difference between the environment that DNA computers require and the environment supported by the modern infrastructure we have built for computing, the type of DNA computers studied in today's laboratories will never replace the silicon chip. Also, unlike quantum computing, DNA computing does not offer exponential growth in computing power with the number of elements used. However, DNA computing may find a niche in bioinformatics by offering a way to probe, analyze and ultimately control complex biological processes in vitro.

      Hence, research into DNA computing may offer us a way to understand, interact with, and ultimately control nature's algorithms in biological systems.

      The challenge for computation over the next century is to overcome barriers in the shrinking of circuit size for conventional computers, create practically useful quantum computers, apply conventional and quantum computers along with experimentation to understand the role of computation in complex processes (notably biological systems), and use the understanding gained to create a unified architecture for computation that will allow us to embed synthetic algorithms into every complex dynamic system we design and create and extend our control to the atomic level. When that happens, nanotechnology [foresight.org] will finally fulfill its promise.

      Stephen Wolfram [wolframscience.com], Erik Winfree [caltech.edu], Hideo Mabuchi [caltech.edu], Jeff Kimble [caltech.edu], John Preskill [caltech.edu], Bill Goddard, [caltech.edu] Isaac Chuang [stanford.edu] are leaders on the bleeding edge of computation. There are many many others I don't know about.

      On that note, I will end my foray into wild speculation.

      • Due to the difference between the environment that DNA computers require and the environment supported by the modern infrastructure we have built for computing, the type of DNA computers studied in today's laboratories will never replace the silicon chip.

        Not to say that there won't be other niches [slashdot.org]. What I mean is that there will be huge issues with reliability, durability, and interfacing if one ever tries to replicate the functionality of silicon chips with DNA computing based on base-pair recombination. It is probably easier to shrink conventional circuits' size, power consumption, and cost of manufacture to scales comparable to those of computing via DNA base-pair recombination. It's possible that the materials we would use to make those circuits would be chemically similar to DNA, but the way they compute would be closer to electrical or magnetic switching than base-pair recombination.

    • What you are describing sounds a lot like the "mentats" in the "Dune" series.


      In reality, however, I doubt if we are ever going to overcome the human & legal issues that would certainly arise from creating a race of supercomputing humans just to use them as our calculators. Or something.

    • Mentats, anyone?
  • by ahfoo ( 223186 ) on Thursday May 23, 2002 @03:30AM (#3570823) Journal
    It seems like making nano FPGAs would be the easy way to go, but never having made one myself I wouldn't really know, would I? I have done a bit of research on the subject, though, and apparently the current king [xilinx.com] of FPGAs, Xilinx, has been criticized [rug.ac.be] for using an inefficient and non-standard design that would supposedly work better in a much simpler layout. Obviously simplicity of design could be helpful when dealing with nanoscale materials.
    On a totally separate note, I thought the DNA experiment about the party guests was a bit suspicious. I've written GRE study guides in the past and so I've spent quite a bit of time analyzing those kinds of analytical questions. From a test writer's perspective, their experiment raises some interesting issues. The GRE frequently uses seven or more entities with special requirements in the analytical section and most of the questions can be solved with a piece of paper and pencil in a few minutes using simple logic. If that wasn't the case, then how would the test writer be sure what the correct answer is if they couldn't verify it?
    So, if they've got all these special-case situations with perhaps dozens of variables for each party goer, then how do they know what the right answer is, and that there is not more than one right answer (the bane of test writers)? And if they do know how to accurately calculate this data, then is it really as complicated as they make it seem?
  • by millisa ( 151093 ) on Thursday May 23, 2002 @03:31AM (#3570824)
    Up and coming silicon replacements? But they've been around for decades! [breast-imp...ations.com]

    I thought saline was the newest thing? But hey, if they can get 'em to run quake, I'm all for it. Oh the wonders of technology! . . . Maybe I missed the point . . .
    • Up and coming silicon replacements? But they've been around for decades! [breast-imp...ations.com]
      I thought saline was the newest thing? But hey, if they can get 'em to run quake, I'm all for it. Oh the wonders of technology!


      In this case, anything that gets you closer to the hardware will give a better gaming experience.

      Whatever next ... force feedback?

  • Errata (Score:1, Informative)

    by Anonymous Coward
    A thiol is not a molecule but a -SH residue so if someone talks of thiols, a whole group of molecules featuring that residue is meant. That's much like alcohols with their -OH residue.
  • by HiQ ( 159108 )
    I vote for the first quantum processor to be named the 'Schrodinger'.
  • Here is a paper I had to write on the subject; it is in draft form as the class did not call for a final version.
    Quantum Computing [orst.edu]

    I know, I posted this in my thread too. Oh well, I have karma to burn. Besides, I feel this is important enough to bring it to the top.
  • Think to make sure this was okay with

    I'd hate to think such innovations are right around the corner, only to find they haven't been approved.

  • "Well, sure, the Frinkiac-7 looks impressive, don't touch it, but I predict that within 100 years, computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings of Europe will own them." - Professor John Frink
  • Not really that much of a spread of technologies, mostly just small-scale molecular/DNA computing and quantum computing. If you ask me the real front runners for next gen computing are RSFQ [sunysb.edu], spintronics [sciam.com], and massively parallel "quasi-processors" / reconfigurable computers (such as RAW [mit.edu] and "smart memory" [stanford.edu]). More the kind of thing you'll see on your desktop 5-10 years from now rather than in the lab and still needing another decade to fully develop.
  • by iangoldby ( 552781 ) on Thursday May 23, 2002 @04:08AM (#3570910) Homepage
    Who is going to write the software for these little beasties? I mean, how many of us are currently even a quarter tapping the potential of the machine on our desktop? (Yes, I know some are - generally those doing massive calculations. I'm talking about the typical user.)

    The most pressing limitation of current computing, to my mind, is the software we have available. Either it has bugs in it, or it doesn't quite allow us to do what we want, or the user interface is clunky and non-intuitive.

    Ideally, we'd like computers to work out what we are really trying to do. There are some tasks that can be described in just a few words of English, yet to write a script that current computers could understand would be a significant undertaking.

    I remember being impressed the first time I used MacDraw and found that if I duplicate a shape, drag it to a new position and duplicate it again, the next shape automatically appears in an analogous position. But this is just one tiny little example of a program being a bit intuitive and helpful. There are millions of other things programs could do like this, but so few are actually implemented.

    Advances in computer hardware make it more possible to run complex AI algorithms in a short time, but someone has still got to write those algorithms. I think currently there is a bigger gap between the software we want and the software we have than there is between the hardware we want and the hardware we have.
  • One problem with all of this is separating fact [sciam.com] from hype [wired.com] when it comes to nanotechnology [techtarget.com].

    The money may come in [redherring.com], but the market has to correct sometime.

    I predict a "nanotechnology" version of the web economy bullshit generator [dack.com] in the not-so-near future!

    Dot-con business plans were hard enough to understand; I can only imagine how badly these nanotech ones read...
  • It is concepts like "qubits" which remind us how primitive our current computers really are.

    Even our most sophisticated machines are based in classical physics as the electrons are simply shunted around a maze of transistors. Technically it is possible to rebuild any current CPU using balls and rails in place of bits and circuits.
  • Then, in October, Bell Labs scientists Hendrik Schon, Zhenan Bao, and Hong Meng designed a molecular transistor even tinier than a nanotube--one that's one-millionth the size of a grain of sand. Schon and colleagues sandwiched a thiol molecule--a mixture of carbon, hydrogen, and sulfur--between two gold electrodes, then used the thiol to control the flow of electricity through it. What's important about this nanocircuit is not merely its size. In a discovery that baffles even its creators, the molecule also acts as a powerful signal amplifier--an essential part of a transistor that boosts the electronic signal (or gain). "We were amazed to be able to (operate) at low voltage and achieve such high gain," says Schon. "It was a very pleasant surprise."

    The thiol-based systems are the same ones from Bell Labs that are being questioned [slashdot.org] as based on potentially fraudulent data.

  • As usual, these methods try to replicate silicon, and most are abandoned because they don't have performance as good as silicon's on a few properties.

    VLSI designers can't conceive of a world where you don't have a global clock, or where a transition can take a varying amount of time.

    Each silicon replacement should have its own design style rather than trying to reuse the things that worked for silicon.
  • by bedessen ( 411686 ) on Thursday May 23, 2002 @07:22AM (#3571317) Journal
    I would like to thank the webmasters at popsci.com for such a well done site.

    I'm so tired of those "old school" web pages that use a readable font like the default 10pt Times. I love it so much when I get the opportunity to read an article in a minuscule 6-point sans-serif font in a narrow column that takes up about a fifth of the width of the screen. I'm tired of all these websites that actually flow text to the size of the window I've chosen. It's so refreshing to have all that nice white space.

    And I hate those sites that actually put the related content on one page. It's time more webmasters realized how much I appreciate having an article arbitrarily split into seven different pages. And it's so nice of them to save the screen space taken up by those pesky "Next Page" buttons. I really enjoy clicking on those tiny page numbers to flip pages. I thought for a minute that they'd made a mistake and that red rectangle image with the ">" symbol was the page flipper, but after clicking it about ten times it's apparent it doesn't do anything. Phew, that was close.

    It's a good thing it was split up into so many pages; I was really looking forward to seeing that insightful poll question "Will the Segway change transportation? Yes/No/Maybe." I thought I'd only get to see it once, but instead it was on each page, in case I missed it the first six times. Well done!

    Now, usually most webmasters go soft and have a "print this" link that shows the entire article text in the default font, wrapped to the screen size. popsci does include this link, but they get it! They realize that should I wish to print an article, I don't want to print the whole thing at once. Rather, I enjoy clicking the "print this" link on each page and sending off a different print job for each page. After all, why should my printer driver decide where to break up page boundaries? Is that really its job? Why would I possibly want to have all the article text in one place?

    Finally, a webmaster that "gets it"!
  • In 1994, Leonard Adleman (the A in RSA) showed that it was possible to solve a particular computational problem using standard molecular-biology techniques. His experiment solved an instance of the Directed Hamiltonian Path Problem (a close relative of the Travelling Salesman Problem) entirely by manipulating strands of DNA. The instance he solved was only 7 nodes, which is easily computable by hand in about 20 minutes, but it was a great achievement in molecular computing.

    There are two compelling advantages to using molecular biology to solve computational problems. Firstly, DNA has a much greater information density than almost any other media: using DNA it is possible to store data in a trillion times less space than with an electronic computer. Presently, it is possible to contain 10^21 DNA molecules in less than 1 litre of water, (with each molecule encoding potentially 400 bits of information).
    Secondly, biological operations performed on DNA are massively parallel. All operations that are executed are performed on each strand of DNA simultaneously.

    Adleman's experiment encoded a 7-node Hamiltonian Path Problem. Each node of the graph was encoded as a random 20-base strand of DNA, and these were randomly annealed into long potential 'paths' through the graph. Paths were selected (and extracted) based on the length of the strand, which nodes were encoded in the strand, and whether the path visited every node. At the end of this selection process, the remaining strand(s) should encode a Hamiltonian path through the graph.

    As can be imagined, this experiment has potentially huge consequences for large computational problems.

    The problem remains to make this process reliable.

    David Jackson
    --
    Incorrigible punster - Do not incorrige.
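    What Adleman did chemically corresponds to an exhaustive search over node orderings. An editorial sketch of the classical equivalent (the 4-node graph here is a made-up example, not Adleman's actual 7-node instance):

```python
from itertools import permutations

def hamiltonian_path(nodes, edges):
    """Brute-force search for a directed Hamiltonian path: try every
    ordering of the nodes, keep one whose consecutive pairs are all edges."""
    for order in permutations(nodes):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            return order
    return None

nodes = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3), (0, 2)}
print(hamiltonian_path(nodes, edges))  # (0, 1, 2, 3)
```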
    • I'm writing a report on DNA computing for my masters at the moment, but instead of actually working, I'm browsing Slashdot. Never mind...

      Anyway, what you described is perfectly true. NP-complete problems and other 'hard' problems can scale exponentially in time on a normal computer, but linearly in time on a DNA computer due to the massively parallel nature of working with DNA. Unfortunately, however, the mass of DNA required to represent every possible answer in the one glass beaker simultaneously grows exponentially instead.

      Even Adleman realises that DNA computers won't be able to outperform electronic computers unless some radically different algorithms can be found. From his research website, he doesn't feel that it was a waste of time, since the principles learned in trying to develop DNA computers may be applicable to other areas of DNA research; this view is also held in microfluidic computing research: see P NATL ACAD SCI USA 98 (6) 2961-2966 for the discussion.

      And now I've written more for slashdot on the subject than for my supervisor. Good work, eh?
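      The exponential-mass point above is easy to see numerically: the number of candidate node orderings a brute-force DNA mixture must represent grows factorially with the node count (illustrative figures only):

```python
import math

# Candidate node orderings for an n-node graph: n!
for n in (7, 20, 50):
    print(n, math.factorial(n))
# 7 nodes is a beaker-sized experiment; 50! (~3e64 candidates)
# would demand astronomically more DNA than any lab could hold.
```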
  • There's an article in the June '02 issue of "Scientific American" on computers using electron spin to compute.
  • ... before all of the allegations of fraud by Hendrik Schon surfaced. The picture of him on page 2 of the article is the same one as on Lucent's webpage, and the discussion of his groundbreaking molecular-transistor work (p. 3) is presented straight. It throws a pall over the whole article, at least for those susceptible to cynicism.
  • I am not sure if I remember correctly, but I believe it was in a Stephen Hawking book. They talked about using the 8 natural states of an electron to hold one byte of information. I found this link, but it isn't exactly what I am talking about: http://www.eetimes.com/story/OEG20000831S0019
  • This PopSci article covers quantum computation, DNA computation, and molecular electronic digital logic devices built of nanotubes, thiol, and DNA.

    I wrote an article [canonical.org] on this same topic last month. It's almost exactly the same length as the PopSci article, but it covers a broader range of topics: all of the above (except for thiol), but also inkjet-printed semiconductors, rod logic, buckling-spring logic, optical computing, spherical integrated circuits, fluidics, and Josephson junctions. It's also a little less confused than the PopSci article.

    However, its style is not nearly as engaging, and I didn't interview any researchers for it, so it's limited by my own limited knowledge.

    I hope you find it interesting.

  • Ever since I read the prediction from an old Popular Science: "Computers of the future may only weigh 2 tons."

    Somehow I don't think we can begin to comprehend what we will see in the future. Try telling someone from 1995 that in 5 years he would be able to buy a 30-gig hard drive for $100, and that in 7 years 80-gig HDs would ship in computers. Try telling someone from 1990 that in 10 years computers would run at 1 GHz standard. Or try telling someone from 1970 that in 30 years he would be able to hold a supercomputer in his lap and it would weigh about 5 pounds.
  • further advances in memory/data-storage technology that can combine all CPU/system memory functions in one rewritable storage device, like volume holographic optical storage. That is: small, extreme data-transfer bandwidth, colossal amounts of memory, and non-volatile.
  • Come on, if we want silicon replacements, fully doped gallium-arsenide transistors (using doping agents ONLY, without any base) are already here. I don't know how good they'd be for computing, but they are really great for radio.
  • Yet again, someone [this column] lists the "first modern computer" as ENIAC--yet England and Iowa State University both built predecessors:

    Colossus [umass.edu]
    Atanasoff-Berry Computer [ameslab.gov]
