Data Storage Technology

100x Denser Chips Possible With Plasmonic Nanolithography 117

Roland Piquepaille writes "According to the semiconductor industry, maskless nanolithography is a flexible nanofabrication technique, but one that suffers from low throughput. Now, engineers at the University of California at Berkeley have developed a new approach: by 'flying' an array of plasmonic lenses just 20 nanometers above a rotating surface, they can increase throughput by several orders of magnitude. The 'flying head' they've created looks like the stylus on the arm of an old-fashioned LP turntable. With this technique, the researchers were able to create line patterns only 80 nanometers wide at speeds up to 12 meters per second. The lead researcher said that by using 'this plasmonic nanolithography, we will be able to make current microprocessors more than 10 times smaller, but far more powerful' and that 'it could lead to ultra-high density disks that can hold 10 to 100 times more data than today's disks.'"
This discussion has been archived. No new comments can be posted.

  • dense? (Score:4, Funny)

    by chibiace ( 898665 ) on Sunday October 26, 2008 @03:43PM (#25520219) Journal

    what ever happened to smart chips?

  • by DragonTHC ( 208439 ) <DragonNO@SPAMgamerslastwill.com> on Sunday October 26, 2008 @03:45PM (#25520235) Homepage Journal

    The problem is this: when will it be cheap enough to be used as a process for the chips we use now?

  • Fragility (Score:5, Interesting)

    by Renraku ( 518261 ) on Sunday October 26, 2008 @03:47PM (#25520267) Homepage

    A question for the physics people out there.

    At what point does Brownian motion become a serious consideration? What about tunneling electrons and other quantum-ish effects?

    • Re:Fragility (Score:5, Informative)

      by wjh31 ( 1372867 ) on Sunday October 26, 2008 @03:51PM (#25520319) Homepage
      Brownian motion isn't really relevant at this level, but if the channels or 'wires' were close enough, tunneling could be an issue. The probability of tunneling falls off exponentially with distance, and the severity depends on the energy, so it would really only matter if there were just a few atoms between channels.
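
      A minimal sketch of that exponential falloff, in Python (not taken from the comment above: the ~3 eV barrier height and the free-electron mass are my assumptions, and real devices involve effective masses and non-rectangular barriers):

        import math

        hbar = 1.054571e-34   # J*s
        m_e = 9.109e-31       # kg, free-electron mass (effective mass ignored)
        eV = 1.602177e-19     # J
        phi = 3.0 * eV        # assumed barrier height, roughly SiO2-like

        kappa = math.sqrt(2 * m_e * phi) / hbar   # decay constant, ~8.9 per nm

        for d_nm in (0.5, 1.0, 2.0, 5.0):
            T = math.exp(-2 * kappa * d_nm * 1e-9)   # WKB-style transmission estimate
            print(f"{d_nm} nm barrier -> transmission ~ {T:.1e}")

      The point is just that transmission drops by many orders of magnitude per nanometer, which is why tunneling only becomes serious once features are a few atoms apart.
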
      • Well, that's kinda the whole point. Given that today's transistors are 45nm or so, 10 times smaller would be 4.5nm, or about 15 silicon atoms IIRC. I think we can worry about that already.
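
        A quick back-of-the-envelope check of that atom count, using textbook silicon spacings (the parent's ~15 is the right order of magnitude; exactly which spacing to count by is a judgment call):

          feature_nm = 45 / 10          # "10 times smaller" than a 45 nm feature
          si_bond_nm = 0.235            # Si-Si nearest-neighbour distance
          si_lattice_nm = 0.543         # silicon cubic lattice constant

          print(feature_nm / si_bond_nm)      # ~19 bond lengths across the feature
          print(feature_nm / si_lattice_nm)   # ~8 unit cells across the feature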

        • the researchers that make 200-400GHz transistors today DO in fact worry very much about tunneling. (I'm thinking of InP/InGaAsP transistors)

          Quantum wells are around 5-10nm wide, so anything approaching ~20nm would at least have to account for that sort of quantum effect (a toy confinement estimate is sketched below). So density may have a difficult limit to breach, but smaller lithography certainly makes high-speed transistors easier to implement on CMOS.

          (EE, not physics)
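
          The toy confinement estimate mentioned above: a particle-in-a-box calculation with illustrative values of my own (a 10 nm well and an InGaAs-like effective mass of 0.041 m0); the point is only that confinement energies at this scale are no longer small next to kT:

            import math

            h = 6.626e-34         # J*s
            m0 = 9.109e-31        # kg
            eV = 1.602e-19        # J
            m_eff = 0.041 * m0    # assumed InGaAs-like effective mass
            L = 10e-9             # 10 nm well width

            E1 = h**2 / (8 * m_eff * L**2)   # ground-state confinement energy
            kT_300K = 1.381e-23 * 300        # thermal energy at room temperature

            print(E1 / eV * 1000)            # ~92 meV
            print(kT_300K / eV * 1000)       # ~26 meV, so confinement dominates at this width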

    • Re:Fragility (Score:5, Interesting)

      by mehtars ( 655511 ) on Sunday October 26, 2008 @04:36PM (#25520713)
      Actually, with processors using 90 and 45 nanometer transistor sizes, there is a very high likelihood that a number of transistors will fail over the lifetime of the chip due to diffusion alone. Modern processors deal with this by routing data through parts of the chip that are still active, though this has the interesting effect of slowing the processor down as it gets older.
      • Re: (Score:2, Interesting)

        by ThisNukes4u ( 752508 ) *
        Do you have any source/references on techniques used to compensate for this effect?
      • Re: (Score:3, Insightful)

        by Zerth ( 26112 )

        So bit rot is real now? Argh.

        Well, at least I can put it back on the excuse calendar.

      • Not true (Score:1, Informative)

        by Anonymous Coward

        This is completely untrue. If a transistor fails on a CPU, that's it; there's no routing around the damage as you seem to imply.

        If you'd actually read the article you referenced when queried by someone else, you'd see that was a three year study initiated in 2006, so even if that study bears fruit it'd be 5-10 years at least before it showed up in the CPUs you buy from Intel or AMD.

    • Re:Fragility (Score:5, Insightful)

      by Cyberax ( 705495 ) on Sunday October 26, 2008 @04:37PM (#25520721)

      At about 5nm. Other effects should limit our current tech to about 10nm.

      If "10 times smaller" is about chip area, then it might be possible - square root of 10 is about 3 and our current best lithography processes are about 30nm.

    • Re: (Score:3, Informative)

      by drerwk ( 695572 )
      I tend to think of Brownian motion happening in a gas or liquid - which Wikipedia confirms http://en.wikipedia.org/wiki/Brownian_motion [wikipedia.org]
      Thermal diffusion of atoms in a device does cause problems and limits the temperature at which semiconductors can work. In fact, diffusion of dopants is one way a chip can 'wear out' with long-term use. No doubt the smaller the scale, the more of a problem diffusion will be, but it tends to be very temperature sensitive, so keeping the device at some reasonable temperature would pr
    • Re:Fragility (Score:5, Informative)

      by Gibbs-Duhem ( 1058152 ) on Sunday October 26, 2008 @05:29PM (#25521125)

      Tunneling electrons and other quantum effects are already in effect in current devices. We just design around those effects instead of taking advantage of them currently. When we really get the ability to make reliable 5nm size scale parts, we'll just switch to quantum dot based transistors (single electron transistors).

      Brownian motion isn't relevant here.

      A big issue is that sharp features are thermodynamically unstable (lots of dangling surface bonds), so edges tend to "soften" over time due to surface diffusion. Also, at ohmic contacts you can get pits forming which can eventually degrade features.

      Another issue is that at the size scales we're talking about, current insulators stop working. They're looking at switching to a variety of new materials for this purpose (for example, IrO2), but these are tricky. This is what they mean when they say "high dielectric constant" materials. Every MOS transistor has this oxide layer (between the Metal and the Semiconductor), and that layer's thickness defines many of the physical properties of the device (a rough capacitance comparison is sketched at the end of this comment).

      Finally, you have to worry about inductors to a lesser extent. Current inductors aren't quite good enough, but we're working on that too =) Nanoscale metallic alloys are definitely the way to go.

      In any event, this article is sort of sensationalist (surprise!). I was able to make 20nm features using physical embossing (stamping metal liquid precursors with a plastic stamp and then curing them) back in 2002. Making features of small size scale is easy; it's keeping the error rate down, making interconnects, etc., that's hard and annoying. Plasmonics is very neat though, I can imagine it working with time.

      Besides, hard disks already have magnetic domains of ~ only a few nanometers anyway.
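
      The capacitance comparison mentioned above, as a toy sketch (the k values and thicknesses are my own illustrative assumptions, not numbers from this comment or the article): capacitance per area goes as C/A = k*eps0/t, so a higher-k material lets you thicken the layer, and thereby cut tunneling leakage, without losing gate capacitance.

        eps0 = 8.854e-12   # F/m, vacuum permittivity

        def cap_per_area(k, thickness_nm):
            """Parallel-plate capacitance per unit area, in F/m^2."""
            return k * eps0 / (thickness_nm * 1e-9)

        print(cap_per_area(3.9, 1.2))   # SiO2-like (k~3.9) at ~1.2 nm: ~2.9e-2 F/m^2
        print(cap_per_area(20, 6.0))    # assumed high-k (k~20) at 6 nm: ~3.0e-2 F/m^2, about the same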

      • Finally, you have to worry about inductors to a lesser extent. Current inductors aren't quite good enough, but we're working on that too =) Nanoscale metallic alloys are definitely the way to go.

        Now my experience with electronics is quite brief at best, but I was under the impression that inductors were specifically avoided in electronic circuitry for a number of reasons, not the least of which is that they tend to be bulky. This is not a big problem because the effects of an inductor can be simulated wit

        • They avoid them as much as possible, as you say. I meant by "lesser extent" that they aren't as big of a deal because we avoid them.

          In rare situations they are necessary, and the limiting factor is one of standard magnetic materials ceasing to function as expected at very high frequencies. You wouldn't necessarily have them patterned into a circuit, but say for instance you want to use an inductor to transformer-couple AC signals into an analog to digital converter.

          I have to reach a bit to find a real reaso

      • Re: (Score:3, Funny)

        Comment removed based on user account deletion
        • Hell, it's a great phrase to drop into any conversation. I'm currently on the lookout for a reason to say "Plasmonic Nanolithography". It's right up there with Flux Capacitor.
    • by ZarathustraDK ( 1291688 ) on Sunday October 26, 2008 @05:42PM (#25521241)

      A question for the physics people out there. At what point does Brownian motion become a serious consideration? What about tunneling electrons and other quantum-ish effects?

      Depends on the fiber-content of the brownie...

    • At what point does Brownian motion become a serious consideration?

      I don't know about you, but after a few footlong chilidogs I take any Brownian motion very seriously.

  • by kitsunewarlock ( 971818 ) on Sunday October 26, 2008 @04:00PM (#25520379) Journal
    These thin chips keep breaking off in my salsa.
    • In 5 to 10 years [ref. user wjh31], scientists will eventually combine the advantages of today's chips with Chex, which benefit from breaking less easily and becoming soggy less often.
  • by tylerni7 ( 944579 ) on Sunday October 26, 2008 @04:01PM (#25520385) Homepage
    Do current chip manufacturers like Intel and AMD work on new lithography techniques, or do they focus more on architectural changes?
    It seems that they shrink their process at a fairly slow rate, and both companies seem to do it at about the same speed.

    Also, if they both have been just advancing the standard techniques using high frequency light to etch all the chips, how easily could they change their manufacturing process over to something radically different?

    Chips with 100 times more density would offer incredible benefits for speed and power savings, judging by the recent changes that the 65nm to 45nm shrink has brought. Hopefully we'll actually see this process being used within the next 10 years.
    • by freddy_dreddy ( 1321567 ) on Sunday October 26, 2008 @04:19PM (#25520547)
      You have to distinguish between fabs, which produce ICs, and companies that produce fab equipment. Of course they're intertwined, but AMD and the like are architecture companies, while companies like ASML drive fab technology. The "slow rate" is set by industry agreements - milestones - to keep the cost of fab tech R&D minimal. The shrink step is a factor of 2 for surface area, resulting in a factor of sqrt(2) for feature size. Litho tech companies use this step because the market is not viable for developing fab tech that takes a different approach: litho is just a fraction of the hundreds of steps it takes to produce an IC. If you were to implement a new fab litho technique that differs from the roadmap, you wouldn't have customers, because the technology wouldn't be in sync with the other processes. In other words: this new technology is only viable if the others jump on the bandwagon, and so far it's "only" a proof of concept. The field of fab tech R&D is filled with new concepts, but that's just a small part of the story.
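
      A tiny illustration of that cadence (the 90 nm starting point and the number of steps are just examples; real node names are rounded):

        node_nm = 90.0
        for _ in range(4):
            node_nm /= 2 ** 0.5       # halve the area per feature => shrink lengths by sqrt(2)
            print(round(node_nm, 1))  # ~63.6, 45.0, 31.8, 22.5 -> roughly the 65/45/32/22 nm nodes
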
      • by Thing 1 ( 178996 ) on Sunday October 26, 2008 @09:09PM (#25522677) Journal

        A .sig comment:

        "Violence is the last refuge of the incompetent" - Isaac Asimov

        I've always had trouble with this quote. "Last refuge" means, basically, "after trying all else, we do this."

        Therefore, I would state that violence is the last refuge of the competent, and, generally, the first refuge of the incompetent.

        • Corrected
        • by Tweenk ( 1274968 )

          No, no. Obviously you didn't read the Foundation novels.

          The quote means that the competent will always find solutions before resorting to violence, because for every possible situation there is an option better than violence. The incompetent can't find any of those options and use force, which is never the optimal solution.

          • by CAIMLAS ( 41445 )

            Obviously Asimov never had to contend with an armed robbery, rape, or violent assault before.

            Short of dying/surrendering and taking a beating/willingly participating in the rape, violence is the only option.

            Of course this could also be said as "Corollary: the above does not hold valid in the event of violence being in execution already."

        • I think that it's meant to mean, 'if you find yourself in a situation where you feel you must use violence, you have been or are being incompetent'. In other words, you've either done something wrong in the past or you aren't seeing all your current options.

          While I see your point, Asimov meant to take it even further. Your interpretation implies that violence can be an acceptable solution to a problem. Asimov is saying that it never is, and if you think it is a valid solution, you're not seeing the whole

          • by Thing 1 ( 178996 )

            While I see your point, Asimov meant to take it even further. Your interpretation implies that violence can be an acceptable solution to a problem. Asimov is saying that it never is, and if you think it is a valid solution, you're not seeing the whole picture.

            Filling one's belly involves violence, even if one is one of the strictest forms of vegan.

            Therefore, either we're all incompetent because we all eat, or there's a flaw in Asimov's logic, which I rightly pointed out in the GP.

    • It's my understanding that they work on both. It's really expensive to build the fabs to produce the chip at the smaller process so obviously they are going to profit off the ones they have as long as possible. Last I had heard AMD is one generation behind Intel right now. You can't just shrink a chip down either with the new techniques. Every time you have a process shrink you run into new problems.

      Perhaps this will make SSDs competitive now. You can get 4 GB microSD cards these days. If you could get j
    • by Kjella ( 173770 )

      It seems that they shrink their process at a fairly slow rate, and both companies seem to do it at about the same speed.

      I have no idea what definition of slow you're using. Making a new process work is absurdly complicated and expensive, and they usually do it once every four years. By any standard I can think of, the computer industry is still moving at breakneck speed, setting new performance records, creating new device classes and entering new price brackets all the time. For older definitions of supercomputer, you're probably carrying one in your pocket. At this rate, it'll be a little chip under my watch in ten

      • by mikael ( 484 )

        For older definitions of supercomputer, you're probably carrying one in your pocket. At this rate, it'll be a little chip under my watch in ten years.

        You can already get mobile phone watches (the CECT M800 and others) which have 2 gigabytes of memory and Bluetooth, can both record and play mp3/mp4 files, and offer WAP internet access. There's even a watch with Wi-Fi detection built in.

      I don't mean to say slow exactly, but the progression of lithography technology doesn't seem to be moving as fast as other areas are. This could just be because I don't really understand the whole process of photolithography -- I understand that it is complicated, but the sizes are decreasing by a constant factor of about 1.4 every 2-3ish years, while we can easily see hard drive density increasing exponentially.
        That is why (well maybe a possible reason why) companies have just been making multi-core machine
    • by Valdrax ( 32670 ) on Sunday October 26, 2008 @05:25PM (#25521093)

      Do current chip manufacturers like Intel and AMD work on new lithography techniques, or do they focus more on architectural changes?

      Yes. This research was funded by the National Science Foundation, a federal agency, but IBM, Intel, and AMD are all active in process technology research. I can't dig up much in the way of what they're currently researching, but here are a few things I was aware of in the past few years (and some things I dug up while looking for them):

      • Intel was researching extreme-ultraviolet (EUV) lithography around 2002-2004.
      • Intel is also funding research into computational lithography to avoid having to do immersion lithography, like IBM and others are doing for the next generation.
      • AMD & IBM were partnering on a test fab for EUV lithography in 2006 and had successfully demonstrated the ability to create transistors but were still working on metal interconnects at that time. I'd bet money they've gotten past that point by now.
      • IBM did a lot of pioneering work on strained silicon that they announced back in 2001.
      • Silicon-on-insulator (SOI) was another fab technology they pioneered in 1998, but it hasn't spread much in the industry beyond them, AMD, and Motorola / Freescale -- in other words, IBM and its partners.
      • And then again, back to IBM, they were the first company to come up with a viable process for laying down copper interconnects, using what's called a dual-damascene process, in the late 90's.
      • Hitachi has been actively developing electron-beam lithography for over a decade, but the technology has yet to really live up to its promise as a commercially viable competitor for photolithography AFAIK.

      Some of the above research was about commercializing "pure" research done in independent labs like this experiment, but a lot of it was directly funded by the big fabrication companies and their clients and partners. Since I'm not in the fabrication industry myself, I can't really comment any further on who has done what (and how much each of the above deserves credit). This is just news I remember from years past.

      • Intel is also at the forefront of photonic interconnects for processors. HP just jumped on board a year or two ago. Often they fund university research and then try to implement it viably in CMOS or current fab processes.

        Hybrid Si Laser by UCSB [intel.com]

  • by NerveGas ( 168686 ) on Sunday October 26, 2008 @04:05PM (#25520429)

    Just think... we'll be able to have 198 cores doing nothing, now!

    • It actually says nothing about whether or not these microprocessors would be able to operate faster.

      But assuming this is real, it's one of two things:

      Maybe we'll have 200 cores which are about as fast as single cores we have now, in which case, nothing will be slower, and people who planned ahead (like Erlang developers) will find themselves running much faster. On top of that, embarrassingly parallel applications like raytracing will be that much more viable -- consider that it only took 16 cores to make a g

      • That, or we have 200 cores, each of which is tens or hundreds of times faster than what we've got now. In which case, WTF do I care that 198 of my cores are doing nothing, when the other two are running my Ruby and Python apps as though they were hand-optimized assembly?

        All other things being equal, C or hand-optimized assembly will still be faster than Ruby or Python. Maybe the faster processors make the Ruby and Python "fast enough", but they still won't be as fast as hand-optimized assembly language o

        • All other things being equal, C or hand-optimized assembly will still be faster than Ruby or Python.

          True, and for some things, it will matter.

          But take right now -- how many apps are Ruby or Python "too slow" for, on modern processors?

          Of course that's ignoring the possibility of a big break through in interpreter and code generation technology before these chips come out.

          It seems to be pretty steadily moving along. Just look at the recent JavaScript improvements.

          Granted, none of these will be able to match hand-optimized assembly, by definition, because we can always output exactly the same program the compiler would (VM, runtime optimizations, and all), and additionally handle corner cases that the VM might be slower with.

          But that distinctio

      • by maxume ( 22995 )

        I bet hand optimized assembly would still be faster (I do understand what you are driving at, but even on the 'garbage' available today, a huge swath of programming tasks are 'fast enough', even if implemented in something like Ruby or Python).

      • by jschen ( 1249578 )

        I am making one assumption, though: That RAM keeps up. It would really suck to have 198 cores sitting idle, and the other two mostly just waiting for your RAM.

        Presumably, as chips get faster, larger caches and more intelligent caching will become ever more important. Latency for main memory access really hasn't improved much from my first computer (Mac SE) to my current computer. Happily, though, the entire contents of my first computer's hard drive can now fit in 1% of my current computer's main memory, and the entire contents of my first computer's RAM easily fits within the on-chip cache.

      • RAM is actually a good point... maybe they can put 16 tiny cores on the chip, and use the rest of the real estate for SRAM.

  • by DoofusOfDeath ( 636671 ) on Sunday October 26, 2008 @04:09PM (#25520467)

    I thought that the real problem now wasn't our ability to get feature sizes small, but rather that at those sizes, quantum effects really start to matter.

    So how does being able to produce such small features really help us?

    • by cnettel ( 836611 )
      Semiconductors are always a matter of quantum effects. The doping needed to get the desired effects is going down to single atoms, which complicates things, and tunneling can certainly also be an issue, but it's not like these things rely on the world being essentially Newtonian.
  • Plasmonic? (Score:5, Funny)

    by gsgriffin ( 1195771 ) on Sunday October 26, 2008 @04:14PM (#25520513)

    Was this developed at the Gizmonic Institute?

  • by Yarhj ( 1305397 ) on Sunday October 26, 2008 @04:17PM (#25520531)
    One of the difficulties with a scanning technology like this is throughput -- with mask-based lithography you can expose dice with great speed, while something like this will have to scan across the entire surface of the wafer. It sounds like there's good potential for parallelization (the article mentions packing ~100k of these lenses onto the floating head), so this technology won't necessarily be as slow as electron-beam lithography, but I can't imagine it'll be cheap either. Furthermore, the software and hardware involved must be much more complex than a conventional stepper; now you've got to modulate your light-source very rapidly, rotate your wafer, and keep track of the write-head's position to sub-nanometer precision. Tool design and maintenance costs will be pretty high, I imagine.
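
    As a rough feel for how much that parallelism matters, a toy estimate under assumptions of my own (a 300 mm wafer covered with a single layer of dense 80 nm lines on a 160 nm pitch, the article's 12 m/s scan speed, perfect use of every lens in parallel, and zero overhead for turnarounds, modulation, or alignment):

      import math

      pitch_m = 160e-9          # 80 nm lines with 80 nm spaces
      speed_mps = 12.0          # scan speed reported in the article
      wafer_radius_m = 0.15     # assumed 300 mm wafer

      wafer_area_m2 = math.pi * wafer_radius_m ** 2
      total_line_length_m = wafer_area_m2 / pitch_m   # ~4.4e5 m of line to draw

      for n_lenses in (1, 1_000, 100_000):
          seconds = total_line_length_m / (speed_mps * n_lenses)
          print(f"{n_lenses:>7} lenses -> {seconds:10.2f} s")   # ~10 hours, ~37 s, ~0.4 s

    Under those (very optimistic) assumptions, a single lens is hopeless and the ~100k-lens array is what makes the throughput plausible.
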
    • Re: (Score:3, Informative)

      by TubeSteak ( 669689 )

      so this technology won't necessarily be as slow as electron-beam lithography, but I can't imagine it'll be cheap either.

      You obviously didn't RTFA.

      Modern 40/45nm and the upcoming 23nm chips need very short wavelengths to get produced.
      This is expensive.

      The new technique uses relatively long ultraviolet light wavelengths.
      This is very cheap.

      The researchers estimate that a lithography tool based upon their design could be developed at a small fraction of the cost of current lithography tools.

      • Re: (Score:3, Interesting)

        by Yarhj ( 1305397 )

        Modern 40/45nm and the upcoming 23nm chips need very short wavelengths to get produced. This is expensive.

        The new technique uses relatively long ultraviolet light wavelengths.

        There's certainly a cost advantage to using longer-wavelength light for the exposure, but there's also a tradeoff in device complexity. Using longer-wavelength light for the exposure translates to cheaper lamps, mirrors, and optics, but the added complexity is going to add a lot of cost to the design and maintenance of these tools.

        A conventional stepper performs a series of mechanical and optical alignments before exposing a die on the wafer, then steps to the next die to continue the process. A lithogra

        • Here's an idea - fuck quality control of chips, just make them able to work around faults in hardware/firmware/software (hello solaris), that way there will never ever be any duds, just slightly slower CPUs, and slightly faster ones. Production costs ought to rocket down. Heck, if not self repairing, we can make them adaptable.

          It's not that the interconnect isn't there, it's just higher resistance. (for instance)

      • by julesh ( 229690 )

        This is very cheap.

        The real question is how cheap. Current generation lithography systems have become ridiculously expensive. Preparing a mask for a 65nm process costs in excess of $2M. This makes short-run production at not-even-cutting-edge technology levels extremely expensive, and basically discourages smaller chipmakers from considering any niche applications that might require higher density.

        Even if the production process is slower, if this can cut the initial preparation costs significantly, it co

  • by philspear ( 1142299 ) on Sunday October 26, 2008 @04:40PM (#25520743)

    Nano-something you say? Can it possibly be used in the production of biofuels to increase homeland security against bioterrorism? If so I have a big check for you to pick up.

  • I just had 2 fail over the weekend. I didn't lose anything vital because I had backups, but everything I considered non-essential is gone (mostly just lots of VMWare images of various distros). At some point it becomes a bitch to manage so much data.

      I just had 2 fail over the weekend. I didn't lose anything vital because I had backups, but everything I considered non-essential is gone (mostly just lots of VMWare images of various distros). At some point it becomes a bitch to manage so much data.

      How old were they? I would have thought that drives young enough to be around that capacity would be nowhere near their MTBF*. Is this a reflection of a general decline in manufacturing standards? Are manufacturing standards decreasing with increased capacity? Or is there something else about these high capacity drives that reduces their reliability? (A toy failure-rate model is sketched below.)

      * Yes I understand that the M stands for mean and that some units fail earlier than most in order to make up that particular average. Still, a few years is
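
      The toy failure-rate model mentioned above, assuming a constant hazard rate (the 1,000,000-hour MTBF and the five-month always-on window are assumed numbers of my own; early-life "infant mortality" failures are exactly what this simple model leaves out):

        import math

        mtbf_hours = 1_000_000          # assumed spec-sheet MTBF
        hours_powered = 5 * 30 * 24     # ~5 months, always on

        p_one = 1 - math.exp(-hours_powered / mtbf_hours)   # P(a given drive fails in the window)
        p_any_of_4 = 1 - (1 - p_one) ** 4                   # P(at least one of four drives fails)

        print(p_one)        # ~0.0036, i.e. ~0.4% per drive
        print(p_any_of_4)   # ~0.014, so two failures in five months is very unlikely under this model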

      • by syousef ( 465911 )

        I bought the system in June, so it was around 5 months old. One had a bad block. I pulled it out, and when I powered back on, the other one started ticking. I have no idea what happened. Static discharge? One drive affecting the other? Who knows. Anyway, they're very well ventilated (2 large case fans sitting in front of the 4 drives). Their temp had never exceeded 40 degrees (usually around 28-32 when chugging along, and they sat at 25 idle with some variation depending on the weather). This machine is always on though.

        Any

    • by Thing 1 ( 178996 )
    At the risk of veering off-topic like you were modded, I had a 1 TB (Seagate!) drive fail in the past week myself, one of a purchase of 3. In the process of RMAing it. Luckily, like you, I hadn't decided to trust any data solely to it yet, so nothing was lost. Still, that purchase more than doubled the data storage in this house; in other words, those three drives together can store more data than the 40+ other drives I have in and out of machines here. (Did I just set myself up for a burglary? :) Prob
      • by syousef ( 465911 )

        Yeah they're damn convenient to have. I just hope reliability doesn't prove to be as bad as I suspect it will.

  • Do they have a solution for controlling overlay error between processing layers to less than 1.25nm?

    If the answer is no, this technology is dead in the water as far as IC fabrication goes. (but may have very useful applications in other nanotech fields)

    As someone who works in litho, I enjoy reading about any advances in resolution, but know that any advance in resolution must be accompanied by an even larger improvement in the non-insignificant task of placing each of the 10 to 50+ patterns needed to build a

  • As usual, the industry thinks that Moore is better.

  • 80 nm line width?

    12 meters/second?

    assume 5x5 mm die size

    31250 80-nm lines spaced 80 nm apart can be laid on this die in one direction; twice that if the orthogonal direction is involved

    each of these lines is 5 mm long, so their total length is 312.5 meters

    at 12 m/s this will take 26 seconds per die

    a 450-mm wafer, on the other hand, if treated as one big die, would take 31.5 minutes to cover in crossing lines.

    • by blair1q ( 305137 )

      oops. forgot to carry the 2.

      a 450-mm wafer would actually take 3.8 days to write

      and that's for just two layers

      • by blair1q ( 305137 )

        P.S. google this to get the answer

        2 * sqrt( pi * ( 450 / 2 ) ^ 2 ) mm / 160 nm * sqrt( pi * ( 450 mm ) ^ 2 ) / 12 m/s
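
        Re-doing the estimate as a cross-check, in my own form rather than the exact expression above (treat the 450-mm wafer as a square of equal area and write crossing lines on a 160 nm pitch at 12 m/s with a single head; this reproduces the ~26 s per die and gives a wafer time of the same order as the 3.8-day figure):

          import math

          pitch_m = 160e-9      # 80 nm lines spaced 80 nm apart
          speed = 12.0          # m/s
          die_side = 5e-3       # 5 mm die
          wafer_r = 0.225       # 450 mm wafer

          # Per die, crossing lines in both directions.
          die_length_m = 2 * (die_side / pitch_m) * die_side   # ~312.5 m
          print(die_length_m / speed)                          # ~26 s per die

          # Whole wafer treated as an equal-area square of side r*sqrt(pi), about 399 mm.
          side = wafer_r * math.sqrt(math.pi)
          wafer_length_m = 2 * (side / pitch_m) * side         # ~2.0e6 m
          print(wafer_length_m / speed / 86400)                # ~1.9 days with a single head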

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...