
First Ever Nanotube Transistors On A Circuit 216

btsdev writes "Researchers at the University of California at Berkeley and Stanford University have developed the first ever integrated silicon circuit with nanotube technology. According to the article on UC Berkeley's site, this brings researchers one step closer to developing memory chips with carbon nanotubes - chips that could hold approximately 10,000 times more data than those we have today."
This discussion has been archived. No new comments can be posted.

  • Seven... (Score:5, Funny)

    by John Seminal ( 698722 ) on Tuesday January 06, 2004 @10:24PM (#7899470) Journal
    I guess this means the Ferengi do not have to abduct Seven of Nine after all.
  • Always Impressive (Score:4, Interesting)

    by Elpacoloco ( 69306 ) <elpacoloco.dslextreme@com> on Tuesday January 06, 2004 @10:24PM (#7899476) Journal
    Berkeley has made some great stuff over the years. But this is truly cool. You could make a supercomputer the size of your current computer tower today. Or maybe even smaller with some other control method.

    Or even maybe implant it in your body.

    • by grub ( 11606 ) <slashdot@grub.net> on Tuesday January 06, 2004 @10:54PM (#7899739) Homepage Journal

      You could make a supercomputer the size of your current computer tower

      But... but.. Steve Jobs said my current computer tower is a supercomputer!
      • Re:Always Impressive (Score:5, Informative)

        by Frymaster ( 171343 ) on Tuesday January 06, 2004 @11:11PM (#7899854) Homepage Journal
        Steve Jobs said my current computer tower is a supercomputer!

        sigh. when the g4 was introduced, the united states defined "supercomputers" or "high performance computers" for the purpose of export as any machine that could do 2000 MTOPS (million theoretical operations per second).

        any machine that met this definition was under strict export control to "tier 3" countries (n. korea, iran, pretty much all of s. america &c.). hence the "supercomputer" appellation from jobs & co.

        now the export control for computers has been raised to 6500 MTOPS - so iranians can merrily get their g5's.

    • by NanoGator ( 522640 ) on Tuesday January 06, 2004 @11:29PM (#7899982) Homepage Journal
      "Or even maybe implant it in your body."

      I'll pass on the Kray Suppository, thank you.
    • Or even maybe implant it in your body.

      My prediction is that the first high-tech consumer product implants will be cell phones. But this does raise interesting questions about producing reasonably sized implant electronics for blind and deaf people, as well as other human systems failures.

    • Re:Always Impressive (Score:4, Informative)

      by webtre ( 717698 ) <webtre&hotmail,com> on Wednesday January 07, 2004 @12:12AM (#7900217) Homepage Journal
      My course in VLSI design was many, many years in the past, but what I do remember is that early integrated circuits used metal gates in the fabrication process. That process was later abandoned in favor of polysilicon because poly was much easier to work with at smaller feature sizes (I'm a bit foggy on this one). Gee, so now we're going back to metal gate processes, and we'll have real metal-oxide-semiconductor field effect transistors again?

      If this is becoming easier to do at deep submicron level, I suppose processes for making deep submicron feature-sized Gallium-Arsenide MESFET's also got easier? Now wouldn't we just love to have such GaAs chips on our desktops... (I do know I'm forgetting another difficulty in working with GaAs, anyone care to remind me why GaAs is not as common as silicon today?)
    • I'm surprised that the Berkeley people grew the tubes on the semiconducting substrate (and skeptical that this is the way to go). Unless I am misreading the article (always a possibility), they have created a very expensive way to evaluate only thousands or millions of tubes per manufacturing cycle. I would think that the real key to low-cost nanotube circuits is to use bulk chemical processes.

      Using bulk chemical processes, one might grow a big batch of nanotubes, harvest them, sort them by size (centr
      • You can sort nanotubes electrochemically, but currently the enrichment factor (1.2 or something in that neighborhood) is not great enough that you can assume your sample to be pure metal type or pure semiconductor type. I do expect electrochemical methods to improve rather quickly, but since the really sexy devices depend on individual nanotubes, it would always be wise to figure out exactly what types you're dealing with in an experiment.

        Self organizing films are interesting as an assembly proces
      • Why can't the production of metalic vs. semiconducting nanotubes be controlled?
      • Are the metallic nanotubes at all useful?
      • Why are the semiconducting nanotubes useful?
      • Do the semiconducting nanotubes need uniform impurity concentrations (doping) to be useful as memory storage devices?
      • The major benefit of this device is that it can identify semiconducting nanotubes in an automated manner. Does this hardware actually do anything with the nanotubes once they are classified? If not, what potentially can be don
  • 10,000? (Score:1, Insightful)

    by AvengerXP ( 660081 )
    Take that Moore's Law.
    • Yes, Moore's Law gets a shot in the arm. I'm tired of waiting a whole 18 months for only one doubling. Or maybe we should observe Moore's Law and throttle down innovation. I'm sure all of the chapters of the Guild of Semi-Conductor Fabricators of America agree.
  • Probably 10,000 times the cost too.
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Tuesday January 06, 2004 @10:25PM (#7899488) Homepage Journal
    All the better to track you, my dear.
  • by clifgriffin ( 676199 ) on Tuesday January 06, 2004 @10:26PM (#7899496) Homepage
    Let's see.

    1. I'd like to see a Beowulf cluster of these.
    2. How long until it runs linux?
    3.

    I think that covers it all. You may proceed.

    Feel free to contribute.
  • by vpscolo ( 737900 ) on Tuesday January 06, 2004 @10:26PM (#7899498) Homepage
    If you could get lots of small chips to give high memory density, pack them into a PC, and then set up a huge RAM disk with some permanent storage, things would suddenly become a lot faster

    Rus
  • by RLiegh ( 247921 ) on Tuesday January 06, 2004 @10:26PM (#7899506) Homepage Journal
    stuck under your fingernails!!
  • iPod (Score:4, Troll)

    by pvt_medic ( 715692 ) on Tuesday January 06, 2004 @10:27PM (#7899512)
    well just wait till they pop one of these into an iPod. you'll be able to store like 1 million songs on that thing.
  • by MajorDick ( 735308 ) on Tuesday January 06, 2004 @10:27PM (#7899515)
    Ummmm. There is a pretty serious problem with heat dissipation and CARBON nanotubes, like this report shows [innovations-report.com]

    Isn't this going to cause a pretty serious problem in integrating nanotube technology into electronics?
    • by Smidge204 ( 605297 ) on Tuesday January 06, 2004 @10:59PM (#7899784) Journal
      Did you even read the article you linked to? In order for that to happen, you need to meet a laundry list of rather specific criteria:

      1) Single walled nanotubes
      2) Presence of oxygen
      3) Temperatures in excess of 1,500 C
      4) Only intense light seems to affect it (photons are absorbed by the nanotubes directly)

      We can let #1 slide since I do not know whether nanotubes can (or must) be single- or multi-walled for use in electronics. Since there hasn't been any real development of nanotube electronics yet, I don't think anyone really knows. The linked article is about a tool to analyze nanotubes, not so much about building electronic devices that incorporate them. It does make a good proof of concept, though.

      #2 is easily remedied because the devices would be hermetically sealed in opaque packages. That also takes care of #4...

      And I don't think anyone will have to worry about the 1500 degree temperatures so far as electronics are concerned. At least nobody in the private sector...

      I mean damn, it's one thing to not RTFA, but you didn't even read your own sources!
      =Smidge=
      • And I don't think anyone will have to worry about the 1500 degree temperatures so far as electronics are concerned.

        You don't own an Athlon.

      • :3) Temperatures in excess of 1,500 C

        well, you might not want to run AMD... ;)
      • by mcrbids ( 148650 ) on Wednesday January 07, 2004 @12:59AM (#7900492) Journal
        From TFA: Because extensive rearrangement of the carbon atoms occurs, the scientists estimate that the tubes reach temperatures of nearly 1,500 degrees Celsius.

        This doesn't happen *while* the nanotubes are at 1,500 C, the nanotubes heat up to 1,500 C as a result of the flash!

        You really *REALLY* should RTFA when chastising somebody else for not RTFA!

        • Why, I did in fact read the article. Both of them. While they were lax on specifics, here's what I gathered the process was:

          1) Intense burst of light is absorbed by nanotubes
          2) Absorbed energy cannot be dissipated, so temperature rises to 1500C
          3) High temperature disrupts molecular configuration; nanotubes become susceptible to chemical reactions
          4) Nanotubes react with atmospheric oxygen, burst into flames

          Based on that, high temps are required for the combustion to take place. So yes, combustion does happ
        • The problem here was that carbon nanotubes were exposed to an intense flash of light, which broke bonds and altered their atomic structure. When this happened, heat built up. Since the nanotubes were not arranged in any particular pattern, heat could not dissipate readily, resulting in combustion (presence of oxygen being a necessary ingredient in ANY type of combustion).

          Completely different process than what would happen in an integrated circuit. In that case, bonds are not broken. Instead, electrons
    • Gives new meaning to the term "Flash Memory."
  • by ActionPlant ( 721843 ) on Tuesday January 06, 2004 @10:28PM (#7899525) Homepage
    One step closer!

    We can rebuild him. We have the technology.

    So do these things have good tensile strength if you pack them in bundles? Because when they rebuild me, I want them to use nanotubes. They're definitely the "in" thing right now. Just imagine...legs that can literally "remember."

    Damon,
  • Diamond substrate? (Score:5, Insightful)

    by Stile 65 ( 722451 ) on Tuesday January 06, 2004 @10:29PM (#7899535) Homepage Journal
    It'll be interesting to see how they'll make carbon nanotubes work when they use diamond for a semiconductor (see article in Wired, referenced by another /. post, that I'm too lazy to find now).

    Also, it'd be neat if they could base some kind of flash memory technology on this stuff too. I know IBM/HP/etc. are coming out with the polymer memory, but this stuff would probably be able to hold a lot more - a nice HD's worth of data in an SD card, at least. Or am I completely off base? Could that even completely replace hard drives eventually?
    • by Saeger ( 456549 ) <farrellj@nOSPam.gmail.com> on Tuesday January 06, 2004 @10:44PM (#7899654) Homepage
      The wired article: The New Diamond Age [wired.com]

      The inevitability of artificial, perfect diamond has DeBeers white in the face. It also provides more fuel for The Law of Accelerating Returns [kurzweilai.net] (rather than "Moore's Law").

      --

    • by uglomera ( 138796 )
      Diamond substrates and nanotubes face completely different challenges, and the issues with nanotubes will probably be resolved first. In that WIRED article, it was explained that it takes years to grow ONE diamond wafer, and they still probably haven't grown anything larger than 3-4 inch wafers. It will probably take several decades until they can serially produce 12" diamond wafers.

      Carbon nanotubes, on the other hand, need to have their type (metallic or semiconductor) and doping level (if semiconductor) c
      • by Stile 65 ( 722451 )
        They actually won't have 4-inch wafers for another few years - I think they're probably at about an inch squared right now. It won't take several decades to produce 12" wafers, though, because the size of the wafer they produce using CVD depends on the size of the seed. They are using the result from each session to grow larger wafers each time: Starting with a square, waferlike fragment, the Linares process will grow the diamond into a prismatic shape, with the top slightly wider than the base.

        Still,
  • Crud... (Score:5, Insightful)

    by Roadkills-R-Us ( 122219 ) on Tuesday January 06, 2004 @10:29PM (#7899540) Homepage
    I was hoping we finally had vacuum tubes grown on a chip. Besides building ENIAC on a chip (but without the power bill and air conditioning problems) we could have every vacuum tube guitar amp ever made on a chip - just need a clean power amp after it.

    Fooey.
    • Re:Crud... (Score:5, Interesting)

      by earthforce_1 ( 454968 ) <earthforce_1@yaho[ ]om ['o.c' in gap]> on Tuesday January 06, 2004 @10:59PM (#7899778) Journal

      Actually, the idea of building "integrated vacuum tubes" isn't as silly as it sounds. Transistors don't function above 200C, and microscopic tubes would allow us to build sensors and other circuits where transistors cannot go, at least without elaborate cooling. There has already been talk of using silicon vacuum tubes to power remote sensors in jet and aircraft engines, which must operate at extremely high temperatures.

      And I always thought they would find an ideal home in robot spacecraft, where there is already a vacuum. They would also offer extreme resistance to the effects of hard radiation, such as the Io belt around Jupiter, which tends to fry semiconductor electronics.
    • I was hoping we finally had vacuum tubes grown on a chip. Besides building Eniac on a chip (but without the power bill and air conditioning problems) we could have every vacuum tube guitar amp ever made on a chip - just need a clean power amp after it.

      I do not believe that the characteristics of vacuum tubes that make them good for audio extend nicely into the micro-realm.

      It's much more likely that their different effects could be simulated by a properly adjustable tube, to sound like whatever you want
    • The whole point of vacuum tube guitar amps is that the whole signal path is tube--e.g., both preamp and power stage. Most of the nice crunch of, say, a Fender Bassman is in the power stages. Furthermore, the interaction between the power amp and the speaker is also important, which is why you typically record a guitar amp with a microphone, not a direct box.

      A low-voltage 12AX7 stuffed into a digital stomp box (with a window and an LED that makes it "glow") does not give you "real vintage tube-amp sound",

  • by ircShot_guN ( 737033 ) <{ua.ude.qe} {ta} {03telfa}> on Tuesday January 06, 2004 @10:29PM (#7899541)
    At least in a server environment, I don't see the requirement for many gigs of memory (on a single chip no less) without also having better technology to access it quickly.
    • by Jesus 2.0 ( 701858 ) on Tuesday January 06, 2004 @10:58PM (#7899771)
      I guess I don't follow your reasoning.

      First of all, I would just plain love to have many gigs of memory, even if it's only accessible at today's speeds. To be able, for example, to actually search through my immense email archive at a reasonable speed, without needing to constantly fault to disk? Even if I have a whole movie loaded into memory and playing? Terrific.

      Second of all, access speed will, of course, improve with time. It is almost a tautology - technology improves. Especially with associated technological leaps forward to drive the need for it, such as is the case with what's discussed in the article.
      • I (obviously) don't know much about CPU design, but it seems that we have about three major storage levels, processor cache, main memory and hard disk.

        If you could cram 100 gig of fast memory onto your CPU chip, would you need main memory or harddrives?

        Obviously the chips would have to be designed differently to take advantage of such a design, but it seems like not having to deal with multiple levels of slower and slower storage would be a really good thing for processors.

    • At least in a server environment, I don't see the requirement for many gigs of memory (on a single chip no less) without also having better technology to access it quickly.

      Ok.. Now imagine those many gigs of memory on-die with the CPU itself. Gets interesting, yes?
    • I don't see the requirement for many gigs of memory (on a single chip no less) without also having better technology to access it quickly.

      Maybe this technology could create a RAM chip with a capacity that matches or beats any hard drive at a similar price. If so, that would eliminate rotational and head seek latency for accessing your data, speeding up nonlocal accesses by orders of magnitude.

      The need for virtual memory and disk buffering would be essentially eliminated. It would precipitate a total rew

    • "At least in a server environment, I don't see the requirement for many gigs of memory (on a single chip no less) without also having better technology to access it quickly."

      That's because we don't have that much RAM to fill. I hate to think about how much less porn I'd have if not for JPEG.
  • Necessity? (Score:4, Interesting)

    by agent dero ( 680753 ) on Tuesday January 06, 2004 @10:31PM (#7899556) Homepage
    Ok, if you have 10,000 times the space, it all disappears when you power off, right? Or when the power goes out?

    Also what about address space?

    How many bit CPUs will we need to address 1,280,000MB of RAM?

    Nonetheless cool, even though it seems either overkill or impractical
    • Re:Necessity? (Score:4, Insightful)

      by Pyro226 ( 715818 ) <Pyro226&hotmail,com> on Tuesday January 06, 2004 @10:39PM (#7899621) Journal
      Shamelessly quoted from http://peripherals.about.com/cs/buildyourpc/a/aa031215a_2.htm

      To understand how 64-bit technology gives your computer more RAM memory, you need to do a little math. Don't worry, it's easy math. Your computer's processor uses 8-bit blocks of memory (called bytes) in powers of 2. A 32-bit processor can address up to 2^32 bytes of RAM, or 4294967296 bytes. That's 4 gigabytes (a gigabyte is 2^30 bytes).

      Theoretically, 64-bit processors can use 2^64 bytes of RAM, or 18446744073709551616 bytes. That's 17179869184 gigabytes, or 16777216 terabytes (units of 2^40 bytes).
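The figures in the quote can be checked in a couple of lines of Python (a quick sketch; "gigabyte" and "terabyte" here are the binary 2^30 and 2^40 units the quote uses):

```python
# Address-space arithmetic for 32-bit and 64-bit processors.
GiB = 2**30  # "gigabyte" in the quote means 2^30 bytes
TiB = 2**40  # "terabyte" in the quote means 2^40 bytes

addr_32 = 2**32   # bytes addressable with a 32-bit address
addr_64 = 2**64   # bytes addressable with a 64-bit address

print(addr_32)          # 4294967296 bytes
print(addr_32 // GiB)   # 4 gigabytes
print(addr_64)          # 18446744073709551616 bytes
print(addr_64 // GiB)   # 17179869184 gigabytes
print(addr_64 // TiB)   # 16777216 terabytes
```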

      • by willy_me ( 212994 ) on Tuesday January 06, 2004 @11:39PM (#7900047)
        You're assuming that a 64-bit CPU will use a 64-bit wide address for memory access. Actually, most don't. I believe 42 bits is the common size.

        For a RISC CPU, each word contains an instruction. The address is embedded inside that instruction. With 64 bits, this leaves you with a 22-bit command and a 42-bit address. The maximum memory addressed is then 2^42 bytes - or four terabytes.

        The advantage of doing it this way is that the entire memory space can be addressed in a single instruction - no complex addressing schemes are required. Simple is good.

        You don't believe me - check the literature on the G5, located here [ibm.com]. (See page 7)

        • I don't see why they can't use 64 bit addresses. 32 bit machines seem to get by with it just fine.

          For people who aren't familiar with computer architecture, I present a quick summary. If you are, skip to the last paragraph.

          There are other ways of working around the instruction word problem to use the full 64 bits. I don't know if these are done in CPUs used in PCs, even 32-bit ones, but I'll briefly describe how the MIPS deals with 32-bit addresses with 32-bit words since the MIPS is what I know from my c
        • Yeah, but...while RISC processors lack the many addressing modes of CISC processors, they all tend to have register + displacement addressing. That displacement, as you say, is necessarily less than 64 bits, but the register is not limited in that respect, so while it potentially takes an instruction or two of setup to get to an arbitrary location in a 64-bit address space, you can get there.
        • For a RISC cpu, each word contains an instruction. The address is embeded inside that instruction. With 64bits, this leaves you with a 22bit command and a 42 bit address. The maximum memory addressed is then 2^42 bytes - or four terabytes... You don't belive me - check the literature on the G5...

          Wrong! The PowerPC G5 (like all other 64-bit RISC chips) uses 32-bit instructions. These instructions don't directly encode addresses (addresses are mostly held in registers).

          True, some implementations of 64-bi
      • Re:Necessity? (Score:4, Informative)

        by megabeck42 ( 45659 ) on Tuesday January 06, 2004 @11:47PM (#7900093)
        It's something to note that while many chips can have 64-bit pointers, the chip does not necessarily support 64 address lines. For example, from the Athlon 64 FX datasheet found here [amd.com], we know that the Athlon 64 FX has 40 physical address lines. Granted, that's still a terabyte of physical address space, but it's nowhere near the numbers you quote.

        Mind you, the original 68000 was like this, with only 24 physical address lines, as were the 80486SLCs, despite being 32-bit internally. Oh, and I believe MIPS arches have 30 address lines because they do not support non-word-aligned reads/writes, but that may or may not be true.

        Oh, another thing: the Athlon 64 does clock in 64 or 128 bits per read/write cycle, so even if it uses the physical address lines for the high bits (most likely), it's still not the full 64-bit address space.
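The gap between pointer width and pin count is easy to put in numbers (a sketch, assuming the 40-line figure cited from the datasheet above):

```python
# Physical vs. virtual address space when a 64-bit chip
# exposes only 40 physical address lines (e.g. the Athlon 64 FX).
PHYS_LINES = 40
physical = 2**PHYS_LINES  # 2^40 bytes = 1 terabyte of physical space
virtual = 2**64           # what a full 64-bit pointer could name

print(physical // 2**40)    # 1 (terabyte)
print(virtual // physical)  # 16777216: 2^24 times more virtual space
```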
    • Re:Necessity? (Score:4, Informative)

      by paul248 ( 536459 ) on Tuesday January 06, 2004 @10:45PM (#7899664) Homepage
      How many bit CPUs will we need to address 1,280,000MB of RAM?

      41.
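That answer checks out: taking MB as 2^20 bytes, 1,280,000 MB is just over 2^40 bytes, so 41 address bits are the minimum. A quick sketch:

```python
# Minimum address width needed for 1,280,000 MB of RAM (MB = 2^20 bytes).
total_bytes = 1_280_000 * 2**20
# Addresses run 0 .. total_bytes - 1, so count the bits of the top address.
bits = (total_bytes - 1).bit_length()
print(bits)  # 41
```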
    • Re:Necessity? (Score:5, Informative)

      by RyanFenton ( 230700 ) on Tuesday January 06, 2004 @10:53PM (#7899723)
      "Ok, if you have 10,000 more the space, it all disappears when you power off right?"

      Actually, no. The basic technology from the last story (can't find it now - slashdot's search seems disabled) implied that the memory would not require constant charge, but would instead be based on the van der Waals effect, with many nanotubes making up one bit. It's not the most efficient method - it's just much more data-dense than current methods.

      Ryan Fenton
  • by Anonymous Coward
    I always wanted a Marshall tube stack I could carry in my pocket!!!!!
  • by Graabein ( 96715 ) on Tuesday January 06, 2004 @10:37PM (#7899604) Journal
    So, does this mean then that we can finally break that pesky Petabyte RAM barrier in personal computers?

    Not that I can see why anyone would ever need more than 640 TB anyways. Except people still using MS Windows and MS Office, of course. Sheesh!

    Ooops, wrong timeline. 'Scuse me while I duck back, er... forwards, to 2014 again.

  • Hmmm (Score:1, Interesting)

    by Anonymous Coward
    carbon nanotubes - chips that could hold approximately 10,000 times more data than those we have today

    How about carbon Megatubes that could hold (10^9)*(10^6) times the data of carbon nanotubes

  • by horati0 ( 249977 ) on Tuesday January 06, 2004 @10:40PM (#7899630) Journal
    "What the hell's this... some kinda nanotube?!"
  • by Anonymous Coward on Tuesday January 06, 2004 @10:41PM (#7899634)
    While this might be a great accomplishment, it is a bit hard to tell from what was written. This is not the first carbon nanotube transistor, but it might be the first to be integrated on silicon. This is not really important unless they have solved the principal problem with such devices, which is creating an ohmic contact. Without an ohmic contact the switching frequency (GHz) is massively limited, making them useless.
  • by superpulpsicle ( 533373 ) on Tuesday January 06, 2004 @10:50PM (#7899700)
    In 1995, there was a lot of talk about a glass cube that could store a terabyte of data. This technology was expected to be on the market by 2005. Where is it now?

    Exactly. Like 90% of great technical innovations, they either don't make it for political reasons or are heavily delayed for an eternity. Scary part is, Doom III will probably come out after this stuff.
    • ...but I'll bet they couldn't justify spending gazillions to do the research when magnetic media keeps progressing so well. LaCie sells a half-terabyte drive now. Stack two of those, encase it in glass, and you've got yourself a terabyte glass cube.
    • part is, Doom III will probably come out after this stuff.

      Naw man you mean Duke Nukem.
    • The problems with the glass cube related, I believe, to specifying a focus...you need to be quite accurate in three dimensions to get that kind of info out of a glass cube.

      The problems may have been economic, but I rather suspect that they were actually technical. (They were the last time I heard.)

  • I think it's very interesting that as we get closer to being able to reproduce the capabilities of human intelligence, we consistently return to the basics of our 7th-grade Life Sciences classes (apologies for the American-centric illustration).

    Carbon, carbon, carbon....

    For (another) example, eyes are made of carbon [spie.org].
  • by bishop32x ( 691667 ) on Tuesday January 06, 2004 @10:54PM (#7899735)
    Finally a good use for all this stupid carbon! Get out of the atmosphere and into my computer!
  • I have been hearing about this new M$ OS that will be fast and let you open more apps, and now I see how!! Just take Windows - an old version obviously, XP would still be too heavy, say NT 3.51 - run it on a machine with about 50 GB of DDR, and add proper fast SCSI disks so it can swap out all the time, and there you go! You just got Unix-like performance on a Windowish OS!!! Oops, well, we still have to get rid of that BSOD, we're working on it guys :)
  • by Crypto Gnome ( 651401 ) on Tuesday January 06, 2004 @11:26PM (#7899954) Homepage Journal
    And people these days think that Fossil Fuels are the result of a few million years of pressure and heat transforming Dead Trees.

    In fact all these "fossil fuels" we keep burning are the decomposition of a once well-known and essentially pervasive vastly superior technology [google.com]. Technology which we're only now beginning to open the doors to.
  • by Nom du Keyboard ( 633989 ) on Tuesday January 06, 2004 @11:36PM (#7900029)
    And the race is on to see which arrives first:

    1: Vastly more memory at much cheaper prices.
    -or-
    2: Such draconian DRM/DMCA/**AA lawsuits/Product Activation woes/SCO lawsuits/stupid Congressional actions and the like such that there is nothing left to put in said memory.

  • So when... (Score:3, Funny)

    by ThusandSuch ( 737882 ) on Tuesday January 06, 2004 @11:46PM (#7900080)
    does the 150,000 Gig iPod come out?
  • Just in time! They are going to need some serious capacity to store HQ images of fingerprints belonging to millions of the world's terrorists.
  • So how long? (Score:4, Interesting)

    by PetoskeyGuy ( 648788 ) on Tuesday January 06, 2004 @11:57PM (#7900142)
    until they can encode the human genome in something close to the size of the human genome?
  • by nissin ( 706707 ) on Wednesday January 07, 2004 @12:00AM (#7900158)
    First off, congratulations to all involved on this achievement. They barely beat the research group I am a part of at Caltech, which is working on the same sort of thing. Our chip is in fab right now, returning in a month or so.

    Information on the Caltech research can be found here [caltech.edu].

  • Bad Acronym (Score:5, Funny)

    by Dorf on Perl ( 738169 ) on Wednesday January 07, 2004 @12:28AM (#7900301)
    Just thought I'd point out that CNT makes a horrible acronym. No wonder materials engineers can't get dates, going on about all the really tight CNTs they're growing in the lab...
  • by erp6502 ( 725641 ) * on Wednesday January 07, 2004 @01:03AM (#7900524)
    TFA states that what they've created is a matrix of silicon islands connected by molybdenum MOS transistors to automate batch testing of carbon nanotubes (about 2000 at a time). Yes, they look for I/V curves, but the CNTs are being tested as two-terminal devices (e.g. diodes) not three-terminal devices (e.g. transistors).

    At least, they're not laying claim to it (though you can bet they would like to). Their more modest (!) goal is to characterize the fabrication process in hopes of achieving higher yields of semiconducting (vs. metallic) CNTs.

    There will definitely be a few problems with productization; molybdenum's not something you want to get anywhere near a commercial fab, and that big blob of CNT growth catalyst is a bit of a disaster. But this looks like a very nice bit of engineering.

  • by retro128 ( 318602 ) on Wednesday January 07, 2004 @01:34AM (#7900694)
    Reading the article, it looks like what they did was build a chip that can detect the types of nanotubes growing on it - conductive or semiconductive, with the nanotube actually being grown on the chip itself.

    This research is a nanotube manufacturing method, not nanotube circuit fabrication.
  • by Mister Attack ( 95347 ) on Wednesday January 07, 2004 @10:18AM (#7902749) Journal
    ...not in any computationally useful sense, anyway. Now, I'm not knocking this research, because it's a great way to make a bunch of nanotubes and examine them quickly (much faster than the usual process of making nanotubes, decorating a surface with them, hoping some of them line up with the traces you've deposited, etc.) -- but the fact remains that this is still basically an aleatoric process. You grow a bunch of nanotubes, and you know that some of them are going to be your nice metallic armchair nanotubes, some are going to be your nice semiconducting zigzags, and some are going to be junk. We don't have any way of controlling what type of nanotube we want to grow yet, nor do we have any way of getting yields high enough to make a traditional microprocessor. Right now, maybe 10 percent of the "transistors" you make out of molecules actually act like transistors. Since your Athlon is junk if even a few of its transistors or interconnects go bye-bye, and even Teramac didn't try to run with 90 percent of its transistors failed, it is clear that nanotubes for desktop-type computation are way out on the horizon.
  • What about Nantero? (Score:2, Interesting)

    by luwain ( 66565 )
    This article has the researchers at Berkeley claiming to be the "first ever" to report success in integrating nanotubes with integrated circuits. What about that company Nantero, which claims a proprietary nanotube memory chip design (NRAM), developed by Dr. Thomas Rueckes (who got his PhD in chemistry from Harvard)? They have venture capital (from Charles River Ventures, Draper Fisher Jurvetson, Stata Venture Partners, and Harris & Harris Group). Their web page (www.nantero.com) claims, "Dr. Rueckes
