Technology

Progress Toward Single Molecule Transistors 74

Fungii writes "There is an amazing story over at sciencedaily.com saying two research teams have managed to create single molecule transistors; looks like we don't have to worry about limitations on feature sizes for a while."
This discussion has been archived. No new comments can be posted.

  • Someone translate this into plain language for me, perhaps? It sounds like this is some kind of important breakthrough, but what does it actually mean on a practical level?
    • Practicality? (Score:5, Informative)

      by DoctorFrog ( 556179 ) on Sunday June 16, 2002 @03:58PM (#3711823)
      ...but what does it actually mean on a practical level?

      This means very little on a practical level at the moment; it's more an indication of what's possible than anything we're going to see actually used in the next few years (IMHO). It's an ongoing question just how small a transistor can get and still be functional, and this seems to be an answer to that: it can get molecule-sized. Whether a molecule-sized transistor can or will actually be usefully incorporated into any practical device is another question (well, technically it's two other questions).

      At the very least a practical device using transistors that small would have to have a radically different design from present-day circuits, including vastly larger error-checking capabilities and probably some self-repairing abilities. Heat is a problem even now, and in circuits on this scale it wouldn't take much for the circuitry to literally shake itself apart. Quantum effects, which are negligible on today's scale, would introduce all kinds of errors into both the input and output of such small circuits if you tried to simply copy the same structure onto the smaller scale.

      Speaking of which, the issue of actually hooking in I/O at such a scale is both a major hurdle for some applications, and a major possibility for practical use in others. For example, this is the kind of scale you'd want if you're going to try to splice more-or-less traditional electronic circuitry directly into fine nerves; when the electronic eyes currently just coming into being become fine-grained enough to support normal vision, they'd probably need extremely fine connections to individual nerve fibres in the retina.

      This is a real wowser of a breakthrough, and major kudos rightfully go to both teams. It shows that there's a long way to go before transistor-type circuits can't be made smaller. By the time we actually get that far down the Rabbit Hole it's likely that we'll also have other information-processing techniques available, such as quantum computing (and this technology, once developed, might be just what is needed to usefully access the output of qubit-based systems).

    • by Compuser ( 14899 )
      Current scale for transistors is about 90 nm (current production technology is 130 nm). Single molecule transistor scale would be 1 nm. So, oversimplifying a bit, this is 100 times smaller than current tech.
        • So oversimplifying a bit, this is 100 times smaller than current tech.

        100 times linear ... which means that if they make chips using this technology in a similar way to current 2-dimensional chips, there will be 10,000 times as many transistors on a chip of the same area.

        If they learn how to stack them (3D), that would be really nice.
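The parent's arithmetic is easy to check. A quick sketch, using the thread's own round number of a 100x linear shrink:

```python
# Density gain from a linear shrink: the thread's round number is 100x
# per side (roughly 130 nm production vs. ~1 nm molecular devices).
linear = 100.0          # linear shrink factor (the comment's estimate)
area = linear ** 2      # same-area 2D chip: 10,000x the transistors
volume = linear ** 3    # hypothetical 3D stacking: 1,000,000x

print(f"linear {linear:.0f}x -> area {area:,.0f}x -> volume {volume:,.0f}x")
```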

  • by Myriad ( 89793 ) <myriad@the[ ]d.com ['bso' in gap]> on Sunday June 16, 2002 @03:19PM (#3711706) Homepage

    ...Hand soldering SMT's was a bitch!

    • by PacoTaco ( 577292 ) on Sunday June 16, 2002 @03:44PM (#3711787)
      Now someone will need to invent the "soldering ion."
    • This is one of the big problems. People have been coming up with switching devices for a while now. It's been done with rotaxane [hp.com] , it's been done with nanotubes [nature.com]. As you point out, the really tricky problem is specific wiring.

      Some programmable logic technologies handle wiring with a uniform sea of logic gates connected by fuses, and you create a particular logic circuit by selectively blowing fuses. The HP/UCLA rotaxane work involves essentially the same idea, using molecular switches at the intersections of a 2D grid of molecular wires. In addition to some discussion here on Slashdot [slashdot.org], there is more at Nanodot [nanodot.org], and a fairly extended discussion on sci.nanotech [google.com].

      Solving the problem of routing specific wires to specific gates, and doing it in a way that's reliably manufacturable in mole quantities, will pretty much relegate today's foundries to niche markets. But that's probably a long way off, numerous problems to solve to get there. Interesting times ahead.
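The fuse-programmable "sea of gates" idea described above can be sketched as a toy model: switches sit at the intersections of input and output wires, and you program a circuit by selectively opening crosspoints. The class and names here are illustrative, not from the HP/UCLA work:

```python
# Toy model of fuse-programmable logic: a grid of switches at the
# crossings of input and output wires. Output j is the wired-OR of every
# input whose crosspoint is still intact; "blowing a fuse" opens one.

class Crossbar:
    def __init__(self, n_in, n_out):
        # start with every crosspoint connected (all fuses intact)
        self.intact = [[True] * n_out for _ in range(n_in)]

    def blow_fuse(self, i, j):
        # permanently disconnect input i from output j
        self.intact[i][j] = False

    def evaluate(self, inputs):
        # each output is the OR of the inputs still wired to it
        return [any(inputs[i] and self.intact[i][j] for i in range(len(inputs)))
                for j in range(len(self.intact[0]))]

xb = Crossbar(3, 2)
xb.blow_fuse(0, 1)   # program: output 1 no longer sees input 0
print(xb.evaluate([True, False, False]))   # [True, False]
```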

  • I read the article but couldn't find actual sizes anywhere (bacteria vary widely in size), but I wanted to know how much smaller these are than the current transistors inside CPUs.

    Anyone know? (BTW: which proc has the smallest right now?)
  • by RyanFenton ( 230700 ) on Sunday June 16, 2002 @03:25PM (#3711726)
    The molecules involved in making the transistors, metal vanadium, are individually the size of golf balls.

    ;^)

    Ryan Fenton
    • What's weird is my boss kept telling me to make the vanadium atoms appear bigger (for the Nature cover), as the trimethyltriazacyclononane ligands were covering them up.

      Nate Crawford
      Long Group
      Dept. of Chemistry
      UC Berkeley
  • by Anonymous Coward
    It took twenty-odd years for power supplies to catch up with the power stability requirements of the Feild Effect Transistor (yes, the lowly FET). It wasn't a viable device (it was rediscovered when its proof-of-concept fell out of a cabinet) until supplies were rock-stable. I wonder how many years until power supplies will be stable enough to support these beasts.. Looks like we're going to be looking at pico- to nano-amp/volt stability requirements.

    I wonder what the cooling requirements for a 60 GHz, 0.5 volt CPU are going to be? :)
    • ...the power stability requirements of the Feild Effect Transistor...

      FETs can be operated under a much wider voltage range than junction transistors.

      In logic ICs, my "TTL Data Book" (Texas Instruments, 1976) gives the voltage requirement for bipolar chips as 4.5 to 5.5 volts for the 54 (military) family and 4.75 to 5.25 volts for the 74 family. On the other hand, for FETs, I have the 1976 "RCA Integrated Circuits" handbook, which mentions a 3 to 12 volt operating range for most of the chips in the 4000 family.

      As for discrete devices in analog circuits, FETs and bipolar transistors are more or less equivalent in power supply needs, except that FETs behave as variable resistors at very low drain-to-source voltages, so they are sometimes operated at close to zero or negative voltages, while bipolar transistors need at least 0.5 volts collector-to-emitter to operate linearly.
    • Well, the Feild Effect Transistor may have that problem, but 30 years ago Field Effect Transistors were being used to replace vacuum tubes. It was basically a tube base with the same pin layout as the tube, and the FET with appropriate voltage-dropping resistors. As a direct plug-in replacement they had to work with the same power supply as the tube being replaced.
  • Bah (Score:2, Funny)

    by Anonymous Coward
    Everyone knows bigger is better.. What's with this obsession with making everything small?

  • This article is refusing to load for some reason, but I've read many articles on molecular computing before. And if this article follows the same strategy as the others, when current passes through the molecule, it bends, causing the other current to flow. But to me, this seems as though it could be a hindrance. Think about it this way: you put millions upon millions of these things together to make a processor. But since they all rely on contortion to change state (0 to 1 and vice versa), how would they all stay together without breaking apart? Molecules have a certain amount of play, of course, but if just the right number switch states at once, bonds could break, or it could become impossible for other molecules to switch states. Would they be attached to something, and, if so, would the thing they're attached to keep them from operating at top speed, if at all? Maybe you could use carbon nanotubes to secure them, because they can spring open and closed, but that would add much unneeded complexity at this point, barring even greater advances in self-assembly technology.
    • Actually, these guys work by using the voltage on the gate electrode to tweak the molecular orbital energy levels. This brings a partially filled orbital down in energy close enough to those in the gold tips to provide a path for electron flow. What's cool about the V2 cluster is that, depending on the gate voltage, it will undergo redox chemistry that switches on antiferromagnetic coupling between the vanadium atoms, producing what I'm told are Kondo effects. Also, MO energy levels in this guy move around with changing magnetic field.

      Nate Crawford
      Long Group
      Dept. of Chemistry
      UC Berkeley

  • What next? Two transistors in a single molecule? The article doesn't say how well that device actually functioned as a semiconductor; I guess that isn't the point.

    In any case, I don't think anyone should rush out and buy their stock. Semiconductor fabs are so expensive that even evil multinational corporations have to team up to build them. I don't think this technology will compete within the next ten years (tm)
  • As with most quantum work, it may work well when isolated, as with most CERN experiments, but I am sure that decoherence will cause problems. Still, it is a fantastic breakthrough and I look forward to seeing how they overcome the manufacturing problems.
  • As processing power increases to 10-100 times what it is now, what will the role of software developers become? Will the additional power allow developers to ignore performance more and more, and focus on correctness and features? Or will developers tackle bigger and bigger problems, and therefore require better asymptotic performance from their algorithms? I don't think computer science will ever die, but it's something to think about.
    • Will the additional power allow developers to ignore performance more and more, and focus on correctness and features?

      They already ignore performance and focus on (unneeded) features. Now there's only correctness to go.
    • If anything, although we may have another 1000x of performance coming our way, some sort of end may be near.
      If that's the case, eventually we'll come around to having to code with efficiency in mind rather than speed of development.
      And processors will have to be more efficient, since there won't be extra GHz to pump out to get Wal-Mart patrons to buy your processors.
      Although it has been a while since any killer software application has really stressed processor performance outside of scientific computation, unless you consider MPEG4 compression... hmmm, maybe I'm wrong there.
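A back-of-envelope way to frame the question above: if hardware gets k times faster, how much larger an input fits in the same time budget depends on the algorithm's growth rate. A sketch with illustrative numbers:

```python
# If hardware gets k times faster, how much larger an input fits in the
# same wall-clock budget? Find the largest n_new with
# cost(n_new) <= k * cost(n_old), for a few growth rates.
import math

def max_growth(cost, n_old, k):
    # crude linear search for the largest n within the scaled budget
    budget = k * cost(n_old)
    n = n_old
    while cost(n + 1) <= budget:
        n += 1
    return n / n_old

k = 100          # the thread's "10-100x" upper estimate
n = 1000
print("O(n):      ", max_growth(lambda n: n, n, k))               # 100x
print("O(n log n):", max_growth(lambda n: n * math.log(n), n, k))
print("O(n^2):    ", max_growth(lambda n: n * n, n, k))           # only 10x
```

So a 100x faster machine buys an O(n^2) algorithm only a 10x larger problem, which suggests asymptotic performance stays relevant no matter how small the transistors get.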
  • yeah just mask cost (Score:2, Interesting)

    by johnjones ( 14274 )
    who cares if you can't actually use the damn thing

    mask costs are about $2 million for .13 or .09

    so it's not the same, and relatively few people can afford it

    unless you share and then you can only really get engineering samples

    AMD made the smart move; UMC and TSMC are just ARM/MIPS prod lines with some custom Philips stuff

    IBM, Intel and maybe TI are the only people who can afford to do this anymore....

    if you wanted me to put money on it I would bet IBM and the rest wither (yes Intel will outsource eventually)

    regards

    john jones
  • None of the stories about "nanotechnology" do all of:

    1. Avoid conflating nanotech with chemistry.
    2. Avoid conflating nanotech with biology.
    3. Describe economical fabrication techniques that are not far more advanced than the nanotech being described.
    We can be thankful that the referenced article succeeds on the first 2 counts. Unfortunately, and quite predictably, it fails on the third count.
    • Re:Fabrication? (Score:3, Interesting)

      by Animats ( 122034 )
      Right now, the mask makers are ahead of the transistor designers. I went to a talk [stanford.edu] recently where images were shown of lines fabbed using subwavelength interference masks. This wasn't extreme UV; this was stuff you could do in an existing fab. They could lay out lines and transistor geometries an order of magnitude smaller than current production. But the transistors don't work. Just scaling down existing transistor designs doesn't work electrically. That problem can probably be overcome, though. The people giving the talk were just making better masks, leaving the device physics problem to be addressed by others. This new result indicates that we're not out of room on the device physics end.

      Despite all this, everyone agrees that some time around 2015, plus or minus a few years, we hit the fundamental limit on flat silicon wafers: the atoms are too big.

      There may be ways around that, but remember that the real limit is cost per gate. A technology that provides higher density at higher cost per gate isn't going anywhere. After all, even now, the physical space taken up by ICs isn't a problem.

      • Lithography is chemistry, ie: it operates on large enough numbers of atoms/molecules to achieve statistical stability. I see no evidence it can be used to fabricate nanotech devices.
        • Lithography is chemistry, ie: it operates on large enough numbers of atoms/molecules to achieve statistical stability. I see no evidence it can be used to fabricate nanotech devices.

          You can get surprisingly close. Laying down monomolecular layers by chemical means is common, for example. Lines with edges smooth to a few atoms are possible.

          The limits are in sight, though. Read the SIA Roadmap [itrs.net].

  • Integration? (Score:2, Interesting)

    This is indeed grand news, but there are many obstacles between developing a single molecule transistor and building microprocessors out of same. The difficulty with integrated devices is not whether or not a transistor of a given size will switch, but making the lithographic process of printing the things on the die accurate enough that they can be made that size in the first place. Also, last time I looked, the transistors on a microprocessor were not suspended between gold electrodes.

    Noise may be another issue, since now we must be talking about handfuls of electrons, so that a small number of rogue noisy electrons could push the signal across the noise margin and flip the logic.

    Alternatively, with devices that size we could design with high redundancy rather than relying on accuracy; a whole new design paradigm could open up.
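The "handfuls of electrons" point above is easy to quantify: the charge on a gate is Q = C*V, so dividing by the electron charge gives the electron count. The capacitance figures below are illustrative guesses for the two scales, not measured values:

```python
# How many electrons sit on a transistor gate as it shrinks?
# N = Q / e = C * V / e. Capacitances here are order-of-magnitude guesses.
E = 1.602e-19   # elementary charge, coulombs

def electrons(cap_farads, volts):
    return cap_farads * volts / E

print(electrons(1e-15, 1.0))   # ~1 fF classic CMOS gate: ~6200 electrons
print(electrons(1e-18, 0.5))   # ~1 aF molecular scale: ~3 electrons
```

With only about three electrons representing a logic level, a single stray electron is a sizable fraction of the signal, which is exactly the noise-margin worry above.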
  • Summing it up with a roleplay...

    Rich Man: "Damn, my computer just broke!"
    Passer By: "What's up with it?"
    Rich Man: "Well, with the heat of my hand on it the dang thing browned out"
    Passer By: "Oh... weird... well, open it up and I'll have a look"
    Rich Man hands it over
    Passer By: "Next time get a brand name wristwatch"
    Rich Man: "No, no, that's my palm top"
    Passer By slips it down sleeve...
  • A list of a few corollaries from the experiment:

    1) Computers will be less stable: literally. Quantum tunneling will eventually screw up enough of your circuit to the point of "beyond repair", really soon, even with error checking / repairing enabled -- I still haven't seen any self-repairing technology where the chip would be able to insert one vanadium (whatever) between two pieces of gold electrodes. Today's large quantities of error checking are designed to correct only a few predictable errors -- I don't even think there are any self-repair functionalities on logic chips (memory chips have redundant rows / columns, but this would be REALLY hard to implement on a logic chip -- and if it were done it would cost TONS of area, which offsets the benefit of the small size). Computers based on molecular technology will probably have a "half life" -- within 5 years half of all chips made will fail despite all the error checking -- so you are absolutely required to keep buying chips -- and it is also likely that a chip will simply break from sitting on a shelf (quantum tunneling, etc). Ha... that will be the day =)

    2) Voltage levels -- not really a problem but somewhat interesting -- small transistors operate on small voltages -- crosstalk and other interference / power-supply noise, etc will totally screw up your chip, real fast. (Differential signals will help.) You will need tons of amplifiers to actually be able to transmit the signals from this low, low voltage chip to the other components.

    3) Heat -- wow, this sucker packed this tight will be a furnace!! It probably reaches melt-down temperature in no time... this is already a problem in today's chips -- imagine how bad it will get with small transistors like that (smaller chip; highly defined, discrete areas) -- local thermal expansion (the part of the chip doing stuff) will put stress on the rest of the chip -- and if the heat itself does not pop a transistor / molecule out of place via quantum tunneling / molecular vibrations, the physical stress sure will. It will be interesting to see how they figure this out.

    4) Not so related: just because somebody comes up with some technology does not mean it's production ready or shows how far the transistor can be pushed! Moore's law, as it stands today, still has a realistic barrier a couple of years down the line, and a single transistor does not make a viable industrial process -- it took a LONG time to figure out the details of today's photolithography -- the masks and CMP (chemical mechanical polishing) took a LONG time to get right.
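The heat worry in point 3 can be roughed out with the standard dynamic-power formula P = a*C*V^2*f. Every number below is an illustrative guess, not a figure from the article; the punchline is that even a modest total wattage becomes brutal when the die area shrinks by orders of magnitude:

```python
# Back-of-envelope dynamic switching power: P = a * C * V^2 * f per
# device, scaled by device count. All inputs are illustrative guesses.
def switching_power(n_devices, cap, volts, freq, activity=0.1):
    # activity = fraction of devices switching each cycle
    return n_devices * activity * cap * volts**2 * freq

# 10^10 molecular devices, ~1 aF each, 0.5 V, 10 GHz, 10% activity:
p = switching_power(1e10, 1e-18, 0.5, 10e9)
print(f"{p:.2f} W")   # small in watts -- but the die is ~10,000x smaller,
                      # so the power *density* (W/cm^2) explodes
```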
  • Not to discredit the scientific work of the people here, but about a year ago another group from Eindhoven (Netherlands) also made a single electron transistor (SET).
    They used a pair of tweezers from an atomic force microscope to make a dent in a carbon nanotube.
    Details can be found in Science 2001 July 6; 293: 76-79; also online, but it requires a subscription.

    I think that the switching of a transistor by one electron is more important than a transistor made with a single molecule. The article never states how many electrons are needed to switch the transistor between its on and off states. The size is not mentioned either; they speak about clusters and a single molecule, but a single molecule could be quite large....

    So the carbon nanotube used by Postma et al could be even smaller, and uses only one electron.

    • This has been in development for a long time, and quantum theory holds that it is possible, however impractical. The problem that will occur is the fact that maintaining superposition is damn near impossible. Temperature, magnetic fields, etc. can very easily cause an electron to flip rotation, such that the 1/2 spin and internal backspin will easily slip into one of two states. The fact that decoherence is so common means that you very well could have a single-atom transistor, but there would have to be extreme controls around each such transistor so that the valence shell of any one transistor doesn't inadvertently tamper with its neighbors. Even besides that, you may very well have to keep your monitor halfway across the room to keep from b0rking your processor. It's neat, in theory, but still at least 10 years down the pipe from being near practical in even a scientific or academic setting.
    • They only describe a single transistor, not a storage cell. A storage cell requires either several transistors (SRAM) or a capacitor (DRAM). You are probably thinking of DRAM, in which the transistor is simply used to charge up and discharge the capacitor; so it is the size of the capacitor which matters -- the transistor only gates access to it.

      I could imagine a memory cell which uses such a transistor to swap a single electron between two other atoms, thus making the whole memory cell out of three atoms. However, I don't see how you can shrink the wiring as much as the cell itself.
    • These molecules should only require one electron to populate a conducting molecular orbital energy level. Also, the long dimension of V2 cluster and the shorter version of the Co(terpy)2(RS)2 molecule is around a nanometer, about the width of a nanotube. Though, for practical devices at room temperature, the nanotube route seems much more feasible.

      Our group made the (Me3tacn)2V2C4N4 cluster and is more interested in the magnetic and redox info extracted from this device. One of our main goals is to build bigger clusters with many metal atoms with strong ferromagnetic coupling. These are chemically bonded through some ligand that allows the electron spins on each metal atom to communicate. If you can get enough spins on a molecule with enough magnetic anisotropy, you can create a double well potential that separates the all spin-up state from the all down with a barrier higher than room temp. These clusters could be fixed on a surface to create an extremely high density magnetic recording medium, with molecule-sized domains.

      Nate Crawford
      Long Group
      Dept. of Chemistry
      UC Berkeley
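The room-temperature barrier mentioned above is roughly U = |D| * S^2 for a single-molecule magnet. As an illustration, here are ballpark literature values for the well-known Mn12-acetate cluster (not this V2 device), showing why bigger clusters with more coupled spins are needed:

```python
# Anisotropy barrier of a single-molecule magnet: U ~ |D| * S^2.
# D and S below are ballpark figures for Mn12-acetate, used only as
# an illustration of the scale of the problem.
CM_TO_K = 1.4388   # 1 cm^-1 expressed in kelvin (hc / k_B)

def barrier_kelvin(d_cm, spin):
    return abs(d_cm) * spin**2 * CM_TO_K

u = barrier_kelvin(0.5, 10)   # D ~ -0.5 cm^-1, S = 10
print(f"barrier ~{u:.0f} K vs. room temperature ~300 K")
```

Even with S = 10, the barrier sits far below 300 K, which is why the goal stated above is clusters with many more strongly coupled spins and higher anisotropy.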
  • ... yeah, but I bet Radio Shack will still sell them 2 to a pack for $2.99.
  • I wouldn't expect to see any circuitry built directly from these technologies - though I could be wrong. But what it does say is that there is no absolute theoretical limit above the size of a single atom at which transistor operation is no longer possible. We will continue to progress along the lines we are already travelling - finer and finer masks, more sophisticated optical processing, probably electron beam writing, or maybe direct electron masking. But we now know that there is no quantum bogey man going to jump out and say that it is no use shrinking feature sizes any more because transistors just don't work at that size.

    This contrasts with hard-disk technology, where there is almost certainly a minimum size to a magnetic domain (though it may be smaller than we now think -- see the latest "pixie dust" enhancements, which shrink the stable size of a domain). Somebody who works for a disk-drive manufacturer told me that their R&D people reckoned they would be hitting brick walls erected by the laws of physics around 2012. Contrast semiconductors, where on one side a senior honcho at TSMC was reported as saying that he could see the engineering advances continuing to at least 2020, while these results say that the physics carries on even further.

    Well before we get to the single atom transistor or the single atom memory, we are going to have problems wiring such chips. I cannot see such high densities being achieved with the wiring for true random access. I think either the wiring density will mean that larger (and hence easier) cells will fit under the wiring, or that some kind of shift-register type scheme will have to be used, slowing random access time.

    Which in turn reflects on computer architectures -- we could be adding even more levels to the current hierarchy of register - L1 cache - L2 cache - main memory - disk. Could we usefully use a few Gbytes of (volatile) RAM disk on a chip? Say, transfer speeds the same as current disks (100 Mbyte/sec, compared to the 1 Gbyte/sec of PCI-X and several Gbyte/sec of main memory) but zero rotational and seek latency?
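The RAM-disk hypothetical above mostly wins on latency rather than bandwidth; a quick comparison, with illustrative 2002-era numbers:

```python
# Access time for one small request: fixed seek/rotational latency plus
# streaming transfer time. Figures are illustrative, not benchmarks.
def access_ms(bytes_, mb_per_s, seek_ms=0.0):
    return seek_ms + bytes_ / (mb_per_s * 1e6) * 1000

req = 4096  # one 4 KB page
print(f"disk:     {access_ms(req, 100, seek_ms=8.0):.3f} ms")  # seek dominates
print(f"ram disk: {access_ms(req, 100):.3f} ms")               # transfer only
```

At the same 100 Mbyte/sec streaming rate, dropping the ~8 ms of seek and rotational latency makes small random reads roughly 200 times faster, which is where an on-chip RAM disk would actually pay off.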
  • Ok, this is really just a theoretical (and bad) idea.
    Maybe there would be some way to control quantum tunneling where one transistor 'tunnels' to a different one for true and another for false. Theoretically it would cut down on heat issues and speed up processing.

    I know there are no hints yet as to how to go about this, but it was just a thought.
