Technology

Towards Molecular Computing 122

pq writes "The NY Times has a progress report on molecular computers: the results are finally rolling in. This July, HP and UCLA reported molecular logic gates; now Yale and Rice are reporting the ability to cycle those gates on/off and HP is announcing conducting wires less than a dozen atoms across. Interesting review - to quote, `this should scare the pants off anyone working in silicon.' " Mmmm...nano.
This discussion has been archived. No new comments can be posted.

Towards Molecular Computing

Comments Filter:
  • It seems to me that newer isn't always better

    From everything I've heard, read about, and dreamt of, this is far better. Silicon chip makers aren't going under today; chances are the existing companies will have a large stake in the new technologies. Nanotechnology is the next step, and I find it very unlikely that it will just be some "useless, new" technology no one will care about. IMHO, we're in the beginning stages of a major paradigm shift, starting with the microcomputer revolution of the late 70s through the PC explosion of the mid/late nineties, and ending (with a new paradigm beginning) with the AI and nano revolution of the 2020s-2030s. That's just my guess; we have a few years yet =)

    The Good Reverend
  • Ok, sure building a really tiny gate is cool and all, but the question is: of what use is this?

    Sure it may be tiny, but if it can't make a state transition fast enough to keep up with today's silicon - forget it.

    Yes, smaller may be better for Place and Route, but what about the REALLY important stuff, like timing? (OK, the guy does P&R for a living; begin your rant.) I'll argue that there are more time-critical hardware designs than size-critical hardware designs, but correct me if I'm wrong. Faster is better than smaller.
  • "They were using silicon wafers for what?"
    200 years? You're a hopeless pessimist. Try 30, 50 at the outside. Hell, there are people today who don't know what to do with a phone with a dial on it.
    I guess the valley will have to come up with a new name.
    You're assuming that the valley would remain the center of the new technology, like the old. However, the Valley's fortunes grew as its offerings displaced those of other industries. It may be that the biomolecular revolution will be centered someplace else, like Austin or Minneapolis.

    If it does manage to remain the place to be, I'd suggest Assembler Alley, Polymer Pass (too chemical, maybe) or Nanogate Notch.
    --

  • I'm just pondering over the problems inherent in Quantum Computing.

    The role of a technician could change drastically. Instead of fixing problems, a tech would just sit and observe the system at every given moment so that it runs properly. If the tech stops observing for any length of time the system would go haywire.

    Then again... maybe it's not so different after all. :)

  • There won't be a difference between human and robot. Robots will never happen, because nanotech allows people to be modified to suit the purpose you were going to build the robot for. What would you use a robot for? Mining? War? Service jobs? We have millions of poorly educated and more-or-less self-maintaining people out there who will do the same job with a couple of nanotech tools and some quality propaganda.

    I just saw Three Kings and thought it was the best movie I've seen in a very long time. Go watch it, and then ask yourself if the powers that be would or wouldn't chop-n-channel your sorry ass instead of building a robot from scratch.

    SP
  • Yes, smaller may be better for Place and Route, but what about the REALLY important stuff, like timing? (OK, the guy does P&R for a living; begin your rant.) I'll argue that there are more time-critical hardware designs than size-critical hardware designs, but correct me if I'm wrong. Faster is better than smaller.


    For the most part, smaller IS faster. If the signal only has to travel a 10,000th of the distance in the nano-processor as opposed to the silicon one, then it can move 1,000 times slower and still be 10 times faster. And if it moves at the same speed, then it's 10,000 times faster. Distance, time, and speed are directly linked: changing one changes the others.

    Kintanon
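
Kintanon's arithmetic above checks out; here is a minimal sketch of the delay = distance / speed reasoning, using normalized illustrative values rather than real device figures:

```python
# Signal delay = distance / speed. Units are normalized: the silicon
# baseline is distance 1.0 at speed 1.0. Values are illustrative only.
def delay(distance, speed):
    return distance / speed

si = delay(1.0, 1.0)            # baseline silicon gate
nano_slow = delay(1e-4, 1e-3)   # 1/10,000th the distance, signal 1,000x slower
nano_same = delay(1e-4, 1.0)    # 1/10,000th the distance, same signal speed

print(si / nano_slow)   # 10.0  -> still 10x faster despite the slower signal
print(si / nano_same)   # 10000.0 -> 10,000x faster at the same signal speed
```

The ratios are exactly the commenter's 10x and 10,000x figures.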
  • Actually lifespans could be much longer. I heard that, if it were possible to prevent aging, people might be able to live on the order of one hundred thousand years each on average.
  • Once we reach this scale I think we will not use electric gates as we do today with our semiconductor technology.


    Rather, we will have to use mechanical computers, if only to avoid the weirdness of QED. A return to Babbage's original ideas. Babbage would be rather pleased.


    Of course this will nullify some of the speed gains.

  • Haynes. Snapper. McDonald's. With computers this cheap, the primary purpose they're put to will be "smart" materials and "smart" products. Your hamburger wrapper will flash and sparkle an ad for whatever product you didn't buy. Your socks will beep at you if you put on a mismatched pair.... All for an extra fifty cents (and an extra cost of a penny or two).
  • As I said, the interpreter needs to be ported - then you're done. "all the C programs" is another thing entirely. You can port the compiler - that's one job. Then, you probably still need to port all those individual programs. Thousands, millions of them. But the interpreted source code doesn't need to change at all.
  • Now the rub on THAT whole idea is:
    Will we have figured a way around population growth by that point in time? There IS something to be said for people dying by natural means. If things were to continue at that rate, we would overpopulate our galaxy. Of course I suppose some people will still die in stupid ways... drunk driving/flying and the like. I don't know that I would WANT to live that long, honestly. Of course I have severe moral issues with life extension in the first place. And futurist technology as a whole. That's for me to work out, though. =)

    "We hope you find fun and laughter in the new millenium" - Top half of fastfood gamepiece
  • The question of interfacing is an extremely important one. Just because you have the means to make the connection does not necessarily mean you have the capability to understand the 'language' of signalling. I will admit that the visual system is one of the most studied parts of human physiology, but there are still many questions to be answered. Nanotechnology is not the ultimate solution to any one problem, particularly this one...
  • "It may be that the biomolecular revolution will be centered someplace else, like Austin or Minneapolis."

    ...or Toronto, Hong Kong, Dublin, Naples, etc.

  • Pardon the long quote.


    "Manufacturing might involve assembling trillions of circuits and then identifying and mapping out the bad ones -- much as faulty sectors are declared off limits in today's disk drives.


    "It's a very biological approach. Everyone's brain is the same, but the pathways are all unique."


    Actually, I interpret these two statements as suggesting a very strong connection between molecular computing and AI (why not?). Imagine "pouring out" a motherboard and allowing it to form itself throughout its productive usage time.

    Will bad circuits be isolated only in production, or on a continual basis throughout the circuit mass's service tenure? If the latter, then would this circuit not gain "experience" in addition to data? I would at first compare this to the TCP/IP system, but on a vastly condensed scale. Does this not, at least, simulate a learning process?

    What about this statement:
    "But researchers in molecular electronics are optimistic that they will be able to ... 'self-assemble' vast numbers of molecular-scale circuits at infinitesimal cost."

    In addition to an evolving system that maps out its own bad sectors, consider self-assembly (I know this is a stretch, but...) A circuit mass that can map out its own bad sectors, self-assemble (reproduce?)

    How distant, then, would be a self-assembly process that detects the cause of bad circuits and implements processes to avoid these causal situations?

    At which point will we have stopped witnessing product evolution vs. genetic Darwinism?

  • For those who really care, 10^-18 is 'atto'. But the prefix 'nano' seems to be already accepted for the purpose of molecular computing. I assume that the resulting processors would be 'nanoprocessors'.

    But there will probably one day be better technology...
    picocomputing - using electrons for computing (pretty much quantum computing)
    femtocomputing - computing in base 6 using quarks
    attocomputing - computing using whatever the hell quarks are made of (superstrings? energy patterns? green cheese?)
    --
  • And of course, once you have a nano robot, you can use it to assemble a smaller nano robot... until we start manipulating individual molecules.
  • Does anyone see potential in using nanobots to fix corrupted chips, boards, etc. once such an error is detected and diagnosed? Store a number of 'bots somewhere in the motherboard, in a little compartment, and send them out to fix stuff within the computer?
  • Who are you talking about: the polygon pushers designing chips at the gate level, or the people in Tyvek suits working in the fabs?

    The digital engineering work will require exactly the same skill set (routing power around the die will be different).

    As for the people working in the fabs, they probably don't make a whole hell of a lot, relatively speaking.

  • Here's a link [rpi.edu] to a project that already has some function at 2GHz, and is planning to step to 4GHz with some of the new technologies. Cool stuff, and a great group of guys.

    This includes some chip shots, as well as descriptions of several of the new technologies that are being used for the project.
  • nah - one useful molecule does not a technology make - there are lots of interesting things people have proposed that you can do at the molecular level. Much more interesting are things like:
    • how do you fab them? in bulk, with what defect densities?
    • what are their operating temp ranges?
    • how do you hook them together to do useful stuff?
    • how do they talk to the outside world? (do they have to be fabbed in a Si world in order to get hooked up to useful things like PCI buses etc)
  • No, not unless they use organic molecules to create actual replicating cells. Viruses require a complete and functioning cell with replicating DNA in order to do their deed. On another note, however, if there were replicating-cell computers, you would then have to worry about your monitor giving your computer cancer.
    --
  • Well I'm 25 as well and that was my biggest question. Will I see it in my lifetime? Thank you for such a great response.
    A bit of definition, if I may:
    by assembler do you mean mass production or just single unit fabrication?
    "We hope you find fun and laughter in the new millenium" - Top half of fastfood gamepiece
  • Well, yes, brute force could be faster. But then we'll just have to use those "micro-micro-micro" processors to find bigger primes. A processor can always encrypt better than it can decrypt. Currently encrypted stuff would suddenly be very... vulnerable, though. Makes you think differently about all those catch-phrases like "using current technology..." or "barring some incredible new advancement..." doesn't it?
  • I've lived through some changes, and all I can tell you is it's hard to see very far ahead. But if I were you I'd try to get some coursework in neural networks, genetic algorithms and fuzzy logic.
    --
  • This is true, but could you imagine "growing" computers? And computers giving "birth"? I'd be all for that...it'd be weird, but talk about the end of WinTel....
  • Regardless of the hardware, they'll always need software jocks since advances in this area occur very slowly. So start out in hardware and migrate to software engineering as the technology changes.
  • I will be the first to admit that I'm not clued in about either technology so I'm hoping the more clueful here can offer some insight. How does the performance benefit derived from such a device compare with what could be achieved from the research being done into optical devices (ie: those using light rather than electrons). It seems to me that if one of these two technologies gained favor, the research being done into the other would be lost. Is this the case? Is it possible to use the processes mentioned in this article to replicate an optical device? Besides eliminating heat emissions problems, I'm sure there are plenty of other benefits. Does anyone know?
  • Okay, so I lied, that stuff does interest me and I did look :).

    Yeah, some of that stuff looks like what my friends and I looked at and played with at RIT. Well okay, they played and I got to BS with them and see the wafers. Not really the same as doing it is it? :)
  • I could be wrong on this, but from what I understand the problems with replacing a retina are more concerned with the interface to our nervous system than any lack of ability to create an artificial means of input.

    I'm not even sure that it would be possible to improve on the optic interface (by making more receptors with small components) since I'm pretty sure most of the way the brain processes optical information is pretty much "set". Maybe if you did it for a baby and they could grow up with it.
  • I think this has been around for a while; but for all of you who don't feel like signing up for the free registration:

    Login: slashdoteffect
    Password: slashdot
  • Current software will be trashed along with the Von Neumann machine architecture since the economics of molecules vs. silicon are significantly different. For instance, a fast pattern matcher could replace the ALU as the basic building block. We have no software to capitalize on this type of structure.
  • Bell's theorem is kinda like this.

    Basically, if you take electrons spinning in the same direction from the same atom, split them up, and change the spin on one electron, the spin on the other will change at the same time.

    So you have a (theoretical) way to transmit binary from anywhere in the universe to anywhere the corresponding electron is.

    The first step to FTL communications?
  • Why the focus on molecular computing? (I guess to answer myself, it'd be an intermediate step).

    I think that a more productive branch of research would be in quantum computing. (bye-bye binary. ON/OFF/BOTH will become far more interesting).

    They have had logic gates for some time in quantum computing. (One of the left-coast universities, I believe).

    A good starting link to find out about this stuff is http://www.quantumcomputing.com. You will also find information about quantum cryptography there. I haven't looked at the crypto stuff yet, but given that they have a section on it, it would seem to answer people's questions about current encryption technology as threatened by molecular computing.

    There was also a show about it on Science Friday some months back. (http://www.sciencefriday.com), although I couldn't find it in the archive.
  • actually using the LSBs of music CD data as a one-time pad is an interesting possibility .... after all the pad distribution problem is solved for you :-)

    Isn't this sort of thing just an elaboration on the "book code" concept?
    --
    "HORSE."

  • The number of CDs in the world is probably not more than 1 billion. That's roughly a 30-bit key. The measure of a good key is how many bits of information it contains. If the key is in any way derivable from a smaller set (such as CDs or English words), then it is equivalent to using that smaller set.

    The Netscape problem back in the day was caused because the keys were generated from only 40 bits of information; it didn't matter that there were more bits in the key, attackers just needed to check those 2^40.

    Generally anything that makes a key easy to manage also makes it useless. sigh.

    Admittedly, getting the list of codes from all CDs would be tricky....
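
The entropy argument above is easy to verify; a quick sketch, assuming the commenter's one-billion-CD estimate (the counts are illustrative, not measured):

```python
import math

# A key drawn uniformly from N equally likely choices carries log2(N) bits
# of entropy, no matter how many bits the key itself is written in.
def key_bits(n_choices: int) -> float:
    return math.log2(n_choices)

cd_key = key_bits(1_000_000_000)   # ~29.9 bits for a billion possible CDs
netscape_trials = 2 ** 40          # work needed to exhaust a 40-bit keyspace

print(round(cd_key))       # 30
print(netscape_trials)     # 1099511627776
```

So a "pick a CD" key is even weaker than the 40-bit keys that sank Netscape.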
  • Actually lifespans could be much longer. I heard that, if it were possible to prevent aging, people might be able to live on the order of one hundred thousand years each on average.

    And by then we'll be able to port ourselves from carbon to something more long-lasting...perhaps to whatever's supplanted whatever's supplanted whatever's supplanted (and so on...) silicon by then. :-)
    --
    "HORSE."

  • That has crossed my mind also. I have concluded that it will make my most favorite fantasies come true (except the one about the blonde)... we will have to find other places to live... fast. I see huge ships full of New Worlders heading out to all corners of this uni/multiverse. This is what we are here for... to see the continuation of ourselves. The only way we can do that is to get off of this single rock before it disappears. Overpopulation will quickly push us there. But I am not saying anything a hundred fantasy scifi writers haven't said better, and before me.

    It's fun just to think about.
  • Pop over to http://www.foresight.org [foresight.org] and check out Robert Freitas' work on nanomedicine in the "what's new" section. Having processing power like that in a watch will be child's play. You could have them replacing your red blood cells, and still doing better at oxygen and CO2 transport than the original blood by an order of magnitude. I have artwork on my high-bandwidth homepage [ihug.co.nz] of nanotech blood cells at work.

    Vik :v)
  • And why would it be unchristian? And should unchristian things be forbidden? Buddhism is unchristian. Atheism is unchristian... planning to ban those too? Well, go ahead. I don't live in the US, but I found your note funny.

    //rdj
  • I think the New York Times is really dumb for requiring a free login/password just to read stories. Anyway, here's an easy one to remember...
    login: wheredoyou
    password: wanttogotoday

    Mark "Erus" Duell
  • Get a login and only select linux news. You happy: you only see linux news. We happy: we don't get more posts like this.

    //rdj
  • As I said, the interpreter needs to be ported - then you're done. "all the C programs" is another thing entirely. You can port the compiler - that's one job. Then, you probably still need to port all those individual programs. Thousands, millions of them. But the interpreted source code doesn't need to change at all.

    What makes you think we'll need more change to the C-source than to the interpreted source???

    A very good port of the compiler will compile anything that compiled before, just as a good port of the interpreter will interpret anything. And an ill-ported interpreter will have just as many failures as an ill-ported compiler.
  • "...the way the brain processes optical information is pretty much 'set'."
    I don't think this is actually true. I seem to recall people who'd had their hands/feet stitched back on following a mechanical injury recovering use of the limb. Immediately following the injury and operation, the hand would be incorrectly "wired", so that trying to move your little finger would result in your index finger moving. But over time (with plenty of physioterrorism :) the brain could be taught to accept the new nerve input. Quite how you'd do that for retinal implants I don't know, but at least it shows that the brain can be dynamic in the way it operates.
  • There are two differences.

    (1) You're talking about repairing previously existing functions, whereas he was talking about actually adding new ones. The former is simple enough, given time. The latter, however, is not. This is the reason that, e.g., 'feral' children never learn to speak, or people born blind can't suddenly 'regain' their sight.

    The brain is extremely flexible at birth, and grows progressively less so as you age. This is necessary: performance is improved through specialization, which inevitably results in a more rigid system. An adult brain is still tremendously flexible, but seldom sufficiently so to make any drastic changes. Adding new features is very definitely a drastic change. Repairing damage, OTOH, is a matter of reversing drastic changes, so naturally it works quite a bit better. (For a related example, phantom limb is (partially) the result of the brain not being able to adapt to its new state, sans limb. I'm not sure if/how you can recover from PL, but it takes a very long time to do so.)

    (2) It's not a matter of new input in the case of super-retinas. There's no input whatsoever, because there are no optic nerves connecting the added features to the brain. You'd have to totally rebuild the visual system, at least up to the LGN, to make it work.
  • To me, this was the most interesting line of the story:

    The Clinton administration is now considering the possibility of a National Nanotechnology Initiative as early as next January to set up financing and help organize diverse research activities in nanotechnology.

    Not that government involvement is always a good thing, but the field can only benefit if a government with the resources of the United States decides to provide official backing. Also, it will help to lend more credibility to the field, in the eyes of Universities and other governments that might not be as convinced of the possibilities as we are.

    Of course, if this happens, in a few years we'll have to listen to Al Gore tell us how he invented Nanotechnology, just like he invented the Internet.

    darren

  • C is not 100% portable. Most C programs that currently don't need any source code changes to go from, say, Linux to Windows are either A) 100% ANSI C (and thus, no GUI), or B) full of extensive preprocessor directives. If a brand new architecture comes out, the ANSI C programs can probably be made to work without change, but the preprocessor directives will be useless. A new set will be necessary for the new platform. Maybe this isn't particularly difficult, but it's probably significant. Plus, you add in all the code that's not so well written to make porting easy...
  • Okay, so... if they're using organic molecules, does that mean they could be self-sufficient and considered alive? Hmmmm... what happens when your computer can catch a cold?

    Abort Retry ACHOO?

    Also, on a side note, anyone know the current status of organic neural nets? I.e., artificial organic brains?
  • Before the revolution takes place on the nano-technological scale, they need sufficiently advanced tools to automate the processes inherent to nano-technology. If the building of machines is going to rely on a team of scientists using scanning-tunneling microscopes to put the individual atoms in their place, we'll never get anywhere, and machines built on that scale will cost a buttload. The first nano assembler, though... whew. Whoever creates the first decent one has got a fortune for themselves.
    --
    Gonzo Granzeau
  • Ok. We have hard drive size getting smaller and cheaper. Chip/etc size getting smaller. So just when am I gonna get my own 500mhz watch?

    Oh yah. How long will it be until somebody writes the patch to cluster them?
    I can see it now. All the geeks are wearing watches up and down both arms. All of them are running Seti@home. Team slashdot could kick some serious butt.


  • I realize that CPU speeds are supposed to double after a given amount of time.

    Correct me if I'm wrong, but chip speeds have a limit to them. For example, right now, speeds are held back by the materials they're made from, copper and silicon. Wouldn't it be true that the fastest a chip could ever go would be a chip made from a couple of thousand atoms, with the electrical impulses in the chip running at the speed of light? Which brings up the question of how close we are to that limit.

    And once we hit that limit, would the only way to boost performance be to create parallel processing units?

    So, I guess, really, what I'm trying to say is...

    ...WHEN CAN WE USE BEOWULF ON IT!?!?!

    ( OK, maybe that's not what I wanted to say. Maybe something along the lines of "When will we see any of this cool nanotech which seems to exist...but not really." There, that's better. )
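
A rough sketch of the light-speed ceiling the comment above asks about; the 2 cm die and the one-crossing-per-cycle rule are illustrative assumptions, not real chip parameters:

```python
# Upper bound: a signal cannot cross the die faster than light in vacuum.
# Real chips are limited well below this by gate delays and RC effects.
C = 3.0e8                  # speed of light, m/s
die_width = 0.02           # assume a 2 cm die

crossing_time = die_width / C            # ~67 picoseconds per traversal
max_clock_ghz = 1.0 / crossing_time / 1e9

print(round(max_clock_ghz))   # 15 -> ~15 GHz if each cycle needs one crossing
```

Shrink the die by 10,000x, as molecular gates might, and this ceiling scales up by the same factor.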
  • It seems to me that newer isn't always better, in the job market that is. Look at what a Cobol or RPG programmer makes today and what a C++ programmer makes today. The new stuff is in the news - the old stuff is what gets things done. Plus there are thousands of kids right out of school willing to work all day and all night on the new stuff for nothing while the old stuff is in great demand. I should have stayed with RPG on AS/400...
  • This is definitely a step forward in the nanotech department. I, as well as others I'm sure, am eagerly waiting for all that is nano to arrive.
  • Could someone with a bit of information in this area provide what they see as a reasonable timeline for nanotech? I have been doing a lot of pondering on technology growth and futurism lately, and wondering how much I'll see in my lifetime, especially considering how much I've ALREADY seen in my lifetime.
    "We hope you find fun and laughter in the new millenium" - Top half of fastfood gamepiece
  • Brings a whole new definition to viral attacks on computers. Will we have to babysit and feed the poor machine Vicks just so it will feel better?

    Where do machines end and living things begin?
  • They won't have to assemble each gate. They are using self-assembly techniques, which work in ways similar to the ways that cells make proteins which fold themselves (sometimes with help) into the proper configurations, or DNA automatically pairs with complementary strands.

    When the techniques are refined sufficiently, it will be just about as easy as mixing the ingredients and stirring. This is where the claims of "dirt cheap" come from, and yes, they're quite serious about it.
    --

  • I wish I could be around 200 years from now to hear someone go "They were using silicon wafers for what?"

    Speaking of silicon, I guess the valley will have to come up with a new name. Free beer for the best guest :>)
  • Currently pundits are predicting the end of Moore's law at 0.09-micron interconnects. In my opinion, we'll see this limit as soon as 2002. If it takes till 2014 to ramp up manufacturing processes for molecular CPUs, this means that the CPU you buy in 2003 might actually remain state-of-the-art for an unprecedented 10 years!
  • The people working in silicon can just buy
    the new technology.

  • I'm a bit familiar with the research at Rice, and basically they are building gates by deforming nanotubes in certain ways to alter their electrical characteristics. They've been playing with it and playing with it until they can build structures which act like logic gates. The implications boggle the mind.

    I say it's high time for nanotech logic to start ramping up, so this is very exciting. Silicon is only going to carry us so much further. Once you get down to a semiconductor gate that is 20 molecules across, the physics gets much more interesting, and electromigration starts eating your lunch.

    A very interesting field to follow...

    --Lenny
  • Free beer for the best guest

    I always have free beer for the best female guests ;>) I should have said free beer for best guess

  • I like this idea... I think this will be the next revolution (after the silicon one). We might be looking at a new era in computing history... Kinda neat, what what?

    ---
  • >I wish I could be around 200 years from now to
    >hear someone go "They were using silicon
    >wafers for what?"

    ...And that someone will be a robot taking a big bite of a silicon wafer at a robot party... catered by human slaves...
  • Every other year someone shows up with the wonderful new technology that's gonna eat silicon's lunch... but there's just so much invested in Si R&D and so much practical process knowledge about how to build Si structures that it has just so much momentum, and nothing else has managed to catch up with it. Plus there's a whole bunch invested in CAD tools, and in training chip designers like me.

    One day there will be something, and it will almost certainly be at the nanotech level, where all the existing stuff breaks down... but it's going to be a long, ugly transition... Si will hold on as long as it possibly can (kind of like how modem tech kept coming back after people predicted its demise); I bet longer than most people predict... and molecular-level stuff is going to be initially unreliable (and will probably get a bad rep as a result).

    Me, I'm still hoping for the non-electrical nanotech stuff and assemblers, so we can get away from this 2D chip paradigm we're currently stuck in.

  • I was reading in a science magazine a number of months ago (Discovery, I think, before the new editor) about the so-called "spooky nature of photons." I am not exactly sure how the experiment worked or how the scientists got one photon to be tied to another photon, but here is the gist of what happened.

    They sent two photons along fiber optic cables at 90 degrees from each other for approx 20 kilometers (is this right?). Then they ran experiments with these photons, such as stopping one photon in transit. The spooky thing that happened was that when one of the photons was stopped, the other photon also stopped, hundreds of kilometers away, at the same instant (in the mathematical sense).

    What this has to do with computer speeds (if anything) is that if we could harness this power then we would have near _instantaneous_ data transfer and processing speed. Of course there would still be a little bit of light speed overhead if we made the chips transfer any light, but the possibilities are amazing. Imagine zero latency downloading! It would probably change the entire way that architectures are built.

    If anyone has any more info on the subject, please post it. My memory may be flawed and so may my data, but this is what I remember. I'm not a physicist, just a programmer.
  • "When the techniques are refined sufficiently, it will be just about as easy as mixing the ingredients and stirring."

    Kind of like the first compiler? IOW, they had to hand code the assembler in binary to assemble the real compiler so that the compiler could compile.

    I foresee a massive warehouse full of vats and white coats working for years to create the "nanotech compiler". Once that's completed, the creation of nano-anything will be a (relatively) trivial task. Therefore the price will drop.

    Someone correct me if I'm wrong...
  • To quote Conan O'Brien's "In the Year 2000":

    "Computers will become so small that millions of them will fit in a tablespoon. They will be used to enhance the flavor of soup."

    Hee hee hee hee

    P.S. Don't kill my Karma, I marked it off-topic in the subject!

    "Software is like sex- the best is for free"
    -Linus Torvalds
  • Light doesn't move at an infinite speed, so the number of instructions executed cannot be infinite either.

    Or, at least that what my "common sense" says.
  • Some of the interesting stuff happening at Rice involves getting the buckytubes to behave as either metals or semiconductors, depending on the tiling pattern of hexagons [eurekalert.org]. To build circuits with this, you'd need to be able to join tubes of different tiling patterns. This was an active area of research two or three years ago, I haven't heard more since. At the same conference, I heard that it was pretty easy (from a chemist's pov) to "functionalize" buckytubes, i.e. stick little molecules on there that did useful things. Probably that would be useful in joining tubes with different tiling patterns.
    You could. This sort of thing is done by recording white noise. If I have a secure channel and send you hours of white noise, I can then XOR my data against my copy of the noise and send you the result; you can then get the data back. Without a secure channel it would be security through obscurity, but if I wanted to email you something secret and called you to say I'd XORed it against a piece of music, the chances are no one could decrypt it. Of course, the way significant bits go in 16-bit music, I'd suspect that the first several bits of every other byte would be the same 90% of the time, so it wouldn't help much...
    It could be done, though.
    --Ben
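    The XOR-against-a-shared-pad scheme described above can be sketched in a few lines of Python (the message and the random pad here are illustrative stand-ins, not anything from the post):

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of data against the corresponding pad byte."""
    return bytes(d ^ p for d, p in zip(data, pad))

# Stand-in for the "hours of white noise" shared over the secure channel:
pad = os.urandom(32)
message = b"meet me at the usual place"  # hypothetical plaintext
ciphertext = xor_bytes(message, pad)

# The receiver, holding the same copy of the noise, undoes it for free,
# because XOR is its own inverse: (m ^ p) ^ p == m.
recovered = xor_bytes(ciphertext, pad)
assert recovered == message
```

    This is only a true one-time pad if the pad is genuinely random, at least as long as the message, and never reused; reusing a pad (or using predictable music bits, as the post notes) breaks the guarantee.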
  • Electrical impulses will not be the most efficient method in a computer that consists of only a few thousand atoms. These types of CPUs will probably be simple mechanical computers vibrating at the frequency of the atom (or something like that), i.e. Drexler's "rod logic," since quantum effects will make electrical devices that small very difficult.
  • actually, using the LSBs of music CD data as a one-time pad is an interesting possibility .... after all, the pad distribution problem is solved for you :-)

    The down side, of course, is that while there are an awful lot of CDs in circulation .... there really aren't that many .... I can imagine if it got to be common there'd be some poor schmo at the NSA who spent all day breaking through the impossible cellophane that imprisons CDs and tossing them in a CD drive to be read

    Of course eventually the work of some bands would be prized for how close their music comes to random noise .... then finally we'd just give up on any pretense of music .... "excuse me, could you please direct me to the white noise bins, thank you"

  • I guess the valley will have to come up with a new name.

    Nanopolis

  • Smaller is better. The distance signals have to travel will directly impact the rate at which useful computation can be achieved. It is for this reason that supercomputers are typically torus-shaped, with direct, straight connections through the hub, avoiding indirect bus connections which give signals further to propagate.
  • Here's one vote for Minneapolis! ;)
  • Well, at the speed of light, would it not be possible to execute an infinite amount of instructions in no time what so ever?

    From the point of view of the photon, yes. Unfortunately we're moving so much slower than the photon that there's a significant difference in our perception of time... :-)

    We'd see c / [circuit length] cycles per second, but a conventionally-shaped CPU could be printed in the width of the traces on a current CPU, so as Feynman says, there's plenty of room (at the bottom) for improvement.
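    The c / [circuit length] figure can be plugged in directly (a rough sketch: it treats the trace as a straight run at vacuum light speed, which real interconnect never achieves, so these are ceilings, not predictions):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def cycle_ceiling_hz(circuit_length_m: float) -> float:
    """Upper bound on cycle rate if one cycle = one signal traversal."""
    return C / circuit_length_m

print(cycle_ceiling_hz(0.03))    # ~1e10 Hz for a 3 cm signal path
print(cycle_ceiling_hz(0.0003))  # ~1e12 Hz if the path shrinks 100x
```

    Which is the point of the comment: shrinking the circuit raises the light-speed ceiling in direct proportion.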

  • Well, I'm a long-term optimist, so to me, being born is like buying computers... It'll always be better to do it later, but you have to do it some time. :P
  • Technology is neither Christian nor un-Christian. The moral and ethical implications of what people do with technology, on the other hand, is a different story.
  • You know, ten years is a long time to work at McDonald's -- just take a job that will pay for your Physics PhD...
  • In order to build the quantum computers, you need to machine the gates fine enough (and regular enough) to force the quantum states to "decide" at the right time.

    Sounds like this tech will be more likely to be able to provide that. Also, quantum computing may well have limited applications (good at searching and sorting, lousy at running a Window Manager). We still need quicker, smaller, cheaper processors that work with understood logic to solve these macro-scale problems.

    Still, I can see quantum processors combined with Turing-style memory readers and semi-mechanical procedural processors to make kick-butt systems that store and process their terabytes in the space of a shirt-button.

    How many buttons can you sew on a shirt, anyway?

  • Well, increasing evidence points to the brain being more plastic than previously thought as far as forming new synapses. I believe you are correct that some training could be accomplished, but probably not as much as would be required to make full use of such an interface.

    To take another approach, perhaps it would be possible to examine the existing interface and come up with something that mimics it well enough that the brain doesn't have to do a lot of rewiring to figure it out. For a damaged eye (using your example) if some of it was still working then perhaps it will someday be possible to observe the way the undamaged portion works and then improve upon that.

    Creating a completely new and improved interface might be difficult. To do something like Geordi's visor in ST:TNG would probably not be possible without training from birth.
  • Myself I'm letting someone else work on the first matter compilers. I'm waiting for them to get the bugs out and then I'll invent a bio-chip that'll let me control one mentally and embed the sucker in my body somehow. :)
  • ABC Nightly News (Nov 3 or 4) had video with interviews of Mark Reed and James Tour. They showed what looked like an animated version of the HP picture of a molecular wire shown in the progress report article. They had the quote about making molecular computers becoming as simple as making photographic film. Transcripts may become available at http://abcnews.go.com/onair/worldnewstonight/transcripts/wnt_transcripts_index.html or ABC News could be contacted. Here is a link to a good paper by others working in the field of molecular computing: http://www.mitre.org/technology/nanotech/Arch_for_MolecElec_Comp_1.html
  • Why is it that the analysts keep saying that we can't continue to shrink transistors fast enough anymore to keep up with Moore's law? The relentless quest to make Quake run faster is enough for most engineers to make the impossible happen. This is a big step in microprocessor design, and when combined with the insulating properties of aerogels, will allow us to pack more and more transistors into tighter areas. Before too long, we'll be manipulating the state of a quark to turn switches on and off. It's a wonderful time.
    Well, the quantum computer won't be available this quarter, unfortunately. :) However, one day.... In the mean time, sit tight and wait for the 10 cent processor.

    Mike
  • by Saige ( 53303 ) <evil...angela@@@gmail...com> on Monday November 01, 1999 @08:58AM (#1572122) Journal
    Could someone with a bit of information in this area provide what they see as a reasonable timeline for nanotech? I have been doing a lot of pondering on technology growth and futurism lately, and wondering how much I'll see in my lifetime, especially considering how much I've ALREADY seen in my lifetime.

    All the estimates I've seen, and this is from many areas, including top researchers, are 20-30 years for the first assembler. And once one of those is built, things should explode in quick succession.

    They've said they will be very suprised if it is not here in 50 years.

    I know people have always liked to quote the predictions about how AI would be here by now, etc. But this figure is arrived at from many different directions. The shrinking amount of material used for memory and the shrinking size of computer chips are two trends that will hit the nanotech level around then. Convergence from the chemistry, biology, and engineering/physics directions all points to about that timeframe.

    I've seen more than enough to convince me that the odds are very good I'll see it in my lifetime. And I'm 25.
    ---
  • like this?

    http://www.calmec.com

    Waye
  • Hmm... in the space of a single Pentium [23] we could have thousands of micro-micro-micro (that's 10^-18, for those who care) processors... what does this mean for cryptography? Brute forced keys in fractions of seconds.... *shudder*

    b underscore geiger at hotmail dot com
  • While the capabilities available by driving smaller devices ever-faster are interesting, there is another side to the speed/power progression: as the devices get smaller, they need less and less power to do the same thing.

    I'm interested in the possibilities of these minuscule gates to run on the tiny bits of power from a glucose/oxygen fuel cell. With some molecular photodetectors, gates like these could be used to make an artificial retina and restore sight lost due to age, injury or disease; with the tiny size of the gates, they could be made smaller than the cells they replaced.

    Biomolecules don't seem to like heat very much, so really high-speed (and high-power) operation might not be the best use for these. But making up for it with massive parallelism, and taking hints from (or outright copying) natural systems, could lead us to a whole new world of technology that we might have trouble recognizing as computing. Still, I'm game for it!
    --

  • Trust me, there is *plenty* of room for silicon to still grow. As I recall, the highest frequency you can get silicon to work at is roughly 1.6 GHz. Yes, this figure is being rapidly approached... but speed also depends on how many cycles a CPU needs to perform a given operation.

    For some quick math, assume your average instruction takes 10 cycles on this processor; that means you can perform 160,000,000 instructions in one second. That's quite a few, but if you can get your average instruction down to 9 cycles, you can now run 177,777,777 instructions per second. That is roughly a ten percent speed increase due to optimization of the instruction set.

    If you can manage to get a FISC CPU running all instructions in 1 clock cycle, then you can run 1,600,000,000 instructions in a second.

    Other areas of technology can also increase this number. Adding execution pipes to the processor so you can execute multiple instructions at the same time can greatly increase the speed of the processor (to what factor I have no idea; even with a good compiler I doubt it gets anywhere close to double). I think the PowerPC has three execution pipes, which *could* effectively triple your throughput, but I highly doubt that.

    Increasing cache speed and reducing memory lookup times will cut down the time a processor has to sit around idle, twiddling its thumbs waiting for the next instruction or the next piece of data to reach it.

    "Best guess" compilers can start executing the code they think has the higher probability of running, so by the time the decision on a branch is actually made, you're ahead of the game.

    I'm sure there is a lot more technology being looked at to increase the speed of a processor, and I really doubt that the most research is going into non-silicon based products.

    Sorry to all you computer engineers out there for not using proper terms and probably bungling concepts, I am a lowly computer scientist and don't really care so much how it works as that it works consistently. So if you want to tell me my head from my ass you are more than welcome to :)
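    The cycles-per-instruction arithmetic above works out like this (a quick sketch; the 1.6 GHz ceiling is the poster's recollection, not an established limit):

```python
CLOCK_HZ = 1.6e9  # the poster's assumed silicon frequency ceiling

def instructions_per_second(cycles_per_instruction: int) -> float:
    """At a fixed clock, throughput = clock rate / cycles per instruction."""
    return CLOCK_HZ / cycles_per_instruction

print(instructions_per_second(10))  # 160,000,000: the 10-cycle baseline
print(instructions_per_second(9))   # ~177,777,778: roughly a 10% gain
print(instructions_per_second(1))   # 1,600,000,000: one-cycle-per-instruction design
```

    The same clock rate yields a 10x throughput difference between the 10-cycle and 1-cycle cases, which is the poster's argument for architectural headroom beyond raw gate speed.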
  • Does this mean that soon I'll be able to code with just a mere thought? Woohoo!!!

    But does this mean I need to buy twice as much Mountain Dew? Pretty soon overclocking will just be an IV of Dew into your computer... *snicker*

    A sip for you.. a sip for me..

    Ahh.. what a future!
  • A bit of definition if i may:
    by assembler do you mean mass production or just single unit fabrication?


    By assembler I mean the device that has the ability to be programmed to build things at a nanotech level. Once one of these is created, it will be programmed to build a copy of itself. At this point, you can just supply the materials and the energy and wait a while and you've got all the assemblers you need. Then you've got all you need to experiment with and create products with.

    It doesn't matter how long it takes to build just one assembler by hand, as long as it can create a copy of itself in a small amount of time.
    ---
  • What really has me curious is the chemical/electrical interfacing. From the Times article I'm assuming that what they have right now is inorganic, so are the gates still transferring electrons? Does anyone know of work in this area?
  • Strictest definition of organic: containing carbon. Until now, almost all serious carbon-based chemistry has been related to life. The only exception I can think of is elemental carbon crystals: graphite and diamond.

    But the cool thing about organic circuitry is that there's no heat requirement. The energy required to impose order can be chemical. And one of the upshots of THAT is that the substrate doesn't need to resist high temperatures. So your LCD could go on mylar or some other plastic. Thus, unbreakable laptop displays. Or freaky lightshows as the inherent grid of the LCD is distorted.

    Additionally, it would be theoretically possible to make circuits that could almost be painted on. For instance, what if UPC symbols were also radio transponders, so you really could just push your groceries through the checkout thingy. You Will...

  • An assembler is the basic unit of nanotechnology. It's quite simply nothing more than a little robot that takes atoms and places them into molecular structures in a precisely defined way. Assemblers can make more assemblers, and billions of assemblers will make larger structures such as food, computers, houses, whatever.
  • Come now, you're being way too hard on fluff by comparing it to Roblimo's drivel!

    W S B Knocking on heaven's backdoor...
  • See Nanosystems by Drexler and Merkle.
  • Naw. What you get out of a computer-related degree is the ability to think analytically about problems. This is something which will always be useful to you.
  • > See Nanosystems by Drexler and Merkle.

    Yeah, that's what I was referring to when I mentioned 'non-electrical systems' ..... rod-logic computers and that sort of stuff .....

  • Well, when you think about it, you may not have to worry about this at all. The stuff these folks are figuring out may make the term "lifetime" less meaningful. Combine nano with regrowing/cloning and we may be looking at a few hundred years for each of us, maybe at least till we get sick of this world and want to move on to the next (if there is one; who is going to go check if they don't have to?)
    Personally, I think we may have a chance to see more than we can imagine or count on right now, and it's going to be one hell of a ride. Glad I started Rogaine now so I won't have to be bald for 200+ years.

    And I thought HDTV was cool.
  • Faster computers make things more secure, not less. If I add one bit to my key, the extra cost of encrypting with that slightly longer key is hardly noticeable. Encryption will still go really fast.

    But, decrypting the key with one more bit will require twice as many attempts in a brute force search.

    See how that relationship works to make things more secure as computers get faster? Just make your keys longer. The only problem is remembering a passphrase long enough to make a long key.
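    The doubling relationship is easy to check numerically (a minimal sketch; the key sizes are illustrative, not from the post):

```python
def worst_case_attempts(key_bits: int) -> int:
    """Brute force must, at worst, try every possible key."""
    return 2 ** key_bits

# Each extra key bit doubles the attacker's work...
assert worst_case_attempts(129) == 2 * worst_case_attempts(128)

# ...so even a thousand-fold speedup in the attacker's hardware is
# cancelled by adding just ten bits (2**10 = 1024) to the key.
assert worst_case_attempts(138) > 1000 * worst_case_attempts(128)
```

    The defender's encryption cost grows only polynomially in key length while the attacker's search grows exponentially, which is the asymmetry the comment is pointing at.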

  • There are plenty of architectural improvements yet to be made in processor design. Once feature shrinking becomes even more difficult, clock speeds may begin to change more slowly, but chips will still improve in performance. More effort will be thrown into developing advanced architectural features around existing gate technology.

    And don't forget the software side of technology! As more and more software is written towards a multi-threaded architecture, the speed advantages presented by multi-processing and multi-threading architectures will become even greater.

    One development I am watching with rapt attention is the transition to Simultaneous Multi Threading Processors (SMT). This is still in the works, but processors such as the Alpha 21464 will be built around this design in the near future.

    SMT procs move some of the process table down onto the processor itself, so the processor can fill time while waiting for a cache miss to be serviced by task-switching to a separate thread. Further, SMT allows simultaneous dispatch of instructions from *multiple* instruction streams. This sort of architecture makes much more efficient use of parallelism in hardware than current superscalar processors. And, executing on the same chip, the different threads can synchronize *in cache*, which is far more efficient than hitting memory like current SMP systems are forced to do. Very exciting...

    There's no way a processor company would sell the same part for 10 years. If they can't shrink their gates any further, they'll just find new ways to exploit parallelism in hardware with more advanced architectures.

    --Lenny

A complex system that works is invariably found to have evolved from a simple system that works.
