Technology

Nanotechnology Gets Finer

An anonymous reader writes "ZDNet reports on a new level of detail found in nanotech construction." From the article: "Japan's NEC Electronics has developed a technology to make advanced microchips with circuitry width of 55 nanometers, or billionths of a meter, the Nihon Keizai Shimbun business daily reported Sunday. Finer circuitry decreases the size of a chip and cuts per-unit production costs. It also helps chips process data faster."
  • circuitry width of 55 nanometers, or billionths of a meter,

    55 of them to be exact.

    Brought to you by the Department of Redundancy Department.
  • I don't see why there needs to be... but I'm no math genius.
    • by Anonymous Coward
      It's not about maths, it's about physics.

      Of course there is a limit to how small circuitry can get. I'm no physicist, either, but I can't see how circuitry could get any smaller than an atom's width.
    • Re:Is there a limit? (Score:5, Informative)

      by Compuser ( 14899 ) on Sunday December 04, 2005 @05:35PM (#14180531)
      The hard limit is around 0.2 nanometers (the size of one atom in a crystal structure - very roughly, of course). The real limit is that it gets more and more expensive to get closer and closer to the hard limit, so don't expect anything below 10 nm any time soon.

      Oh, and did I mention that you gain less and less from going smaller, because more signal is wasted as heat? Also, solid-state physics really changes around 30 nm (e.g. the concept of carrier mobility loses meaning - you have to treat each impurity self-consistently). In short, going below even 30 nm is major money (compared with the currently developed 35-50 nm processes, which are themselves a lot of money to put into production).
      • Re:Is there a limit? (Score:2, Informative)

        by Anonymous Coward
        For gate length. Sub-15nm gate oxides are already seeing quantized effects from single-atom layers.

        It will be interesting to see if there is a break from CMOS to some substantially different integrated transistor process in the next 20 years, like there was from bipolar to CMOS in the late 80s. People seem excited about nanotubes, but I don't see how they'll play well with lithography, yet.

      • Nano... (Score:3, Interesting)

        Oh, did I mention that you gain less and less from going smaller
        because more signal is wasted as heat.


        Unless, of course, you're using optical transistors, nanotubes, spintronics, and all that nano stuff that hasn't been applied to electronics yet.
    • Re:Is there a limit? (Score:4, Informative)

      by Belseth ( 835595 ) on Sunday December 04, 2005 @05:36PM (#14180537)
      Is there a limit?

      There actually is, and it has nothing to do with math but physics. Obviously there is a limit when you start talking circuits that are made of single paths of atoms. Even before that, there's leakage that occurs, leading to errors. There'd have to be redundancy to overcome the occasional lost electron, so you get diminishing returns. There's talk of ways of avoiding the issue, but circuits a few atoms across are likely to be the limit. Anything beyond that will mean working on a subatomic level, well beyond any known technology.

    • Re:Is there a limit? (Score:5, Informative)

      by Jerry Coffin ( 824726 ) on Sunday December 04, 2005 @06:40PM (#14180868)

      I don't see why there needs to be.... but i'm no math genius.

      The hard lower limit is based on the sizes of the atoms involved, but you can't really get very close to a single atom thick without radically changing designs. For example, one of the thinner parts in a typical CMOS circuit is the gate oxide layer. In typical semiconductors, this is composed of silicon dioxide. The problem is that if that is made only a single atom thick, at a given spot you don't really have silicon dioxide anymore; you only have silicon or oxygen. With current designs, you need to maintain a layer that's thick enough to still be silicon dioxide -- i.e. molecule-sized, not atom-sized.

      Realistically, even getting close to that is pretty difficult anyway. Even at the present time, the gate oxide layers are starting to cause problems -- the gate oxide layer is supposed to act as an insulator, so no direct current flows through it. In reality, a little direct current will inevitably "leak" through, but in the past it's been pretty small. In current designs, the gate oxide layer is getting thin enough that this leakage current is becoming a substantial part of the total power drawn by the part.

      There are ways around that, such as using a different material. When you thin the oxide layer, the conductors connected to each side of it can be smaller, and still maintain the same capacitance. Another way to achieve the same objective is to use a material with a higher dielectric constant (traditionally abbreviated as "K").

      Silicon dioxide is also used to insulate between other conductors on the chip as well. Here, you generally want to reduce the capacitance between the conductors though, because increased capacitance leads to increased cross-talk (the signal on one conductor creating noise in a conductor nearby).

      Therefore, semiconductor materials people are working in both directions: low-K dielectrics for insulation, which maintain the same (or lower) capacitance between conductors with thinner insulation, as well as high-K dielectrics to allow thicker gate-oxide layers (reducing leakage) while maintaining the increased capacitance of a thinner layer. These, however, typically lead to substantially more difficult (read: costly) manufacturing.

      Of course, there are a lot of other possibilities as well, and each has its own strengths and weaknesses. For example, some designs use strained silicon -- actually "straining" the lattice of silicon atoms in the crystal so they're either closer together or further apart. Other designs change the basic wafer construction -- a traditional wafer is simply a layer of silicon. SOI is Silicon On Insulator -- a layer of insulation, with a thin layer of silicon over the top. Again, creating the wafer this way costs some extra, but more importantly (at least to the designer) a transistor built this way has something of a memory effect -- the way it acts at any given time depends not only on the voltage applied right now, but also on its previous state. While this may be usable for embedded memory [innovativesilicon.com], it can be a real PITA for everything else.
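
      To make the gate-capacitance tradeoff concrete, here's a minimal Python sketch of the parallel-plate formula C = K*eps0*A/d. The 65 nm gate dimensions, the 1.2 nm SiO2 thickness, and the high-K value of 25 are illustrative assumptions, not figures from the comment.

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def gate_capacitance(k, area_m2, thickness_m):
          # Parallel-plate approximation: C = K * eps0 * A / d
          return k * EPS0 * area_m2 / thickness_m

      area = (65e-9) ** 2                                      # hypothetical 65 nm x 65 nm gate
      c_sio2 = gate_capacitance(3.9, area, 1.2e-9)             # SiO2, K ~ 3.9, 1.2 nm thick
      c_hik = gate_capacitance(25.0, area, 1.2e-9 * 25 / 3.9)  # high-K film at ~6x the thickness
      print(c_sio2, c_hik)  # same capacitance, but the much thicker high-K film leaks far less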

      Anyway, I suspect the real limit will be mostly economic: a current fabrication facility costs a LOT of money -- around 1 1/2 billion US dollars (non-US residents feel free to assume I really meant 1 milliard Euro).

      This expense has already led to a couple of things: even large companies often can't afford to build a fab on their own anymore, so they often have to form/join some sort of consortium to build a modern fab. Another business model simply separates the companies into two halves: fabless design houses, and a few companies that just fabricate designs for various others. For an obvious example, neither nVidia nor ATI does their own fabrication -- they design chips that are then built (along with a lot of other people's) by Taiwan Semiconductor Manufacturing Company (TSMC). Of course, TSMC ha

      • Re:Is there a limit? (Score:3, Informative)

        by Helvick ( 657730 )
        Parent needs to be modded up more; it is the most coherent comment on the topic posted so far. One minor nitpick - a 65nm/45nm fab costs about $3.5 billion; see here for the investment required for Intel's Fab 28 in Israel [technologynewsdaily.com]. That's an increase of $1.5 billion over the cost of the existing 90nm/65nm Intel Fab 24 in Ireland [intel.com].
      • Re:Is there a limit? (Score:3, Interesting)

        by rbrander ( 73222 )
        " It won't come to a screeching halt at any obvious point, but expect to see smaller improvements spread further apart."

        Nearly 10 years back, before the word "blog" existed, I did a little web article called The End of Moore's Law - Thank God! [cuug.ab.ca] that used the info in two excellent Scientific American articles which hypothesized a slow levelling off of the Moore's Law exponent around ... well, a year or two ago, actually, rather than a few years from now. But close enough.

        The second Sci. Am. article stres

        $15B would not be such a big deal for Intel... it already spends billions on process research, spends many more billions building experimental fabs and production lines, it already has two working 65nm fabs with two others being upgraded from 90nm to 65nm during 2006, and many more at 90nm. With Intel's volume, a single 10nm fab may be insufficient for itself; I do not think they would share - at least not until they got a second or third one. Since unused 10nm fab capacity would be expensive, I would not be surprised
  • by Anonymous Coward
    Um? Haven't we had 65nm and 35nm processors for a while? Is this just another Slashvertisement?
    • by PsychicX ( 866028 ) on Sunday December 04, 2005 @05:44PM (#14180575)
      Intel has been building a 65nm fab and retooling existing fabs for 65nm. 35nm is planned but hasn't actually been done yet. It's unlikely to help much either, because current leakage at those levels is insane. If you save 40% power by switching to a smaller manufacturing process and lose 35% back to leakage, that leaves you 5% better. With the costs involved in switching process sizes, you would have been better off not switching in the first place. Even past 90nm is getting pretty shaky in terms of leakage. Intel and AMD are both definitely going to 65nm, but I don't know if there's much of a future for chips beyond that unless somebody comes up with some real ingenious tweak to the crystal structures.
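
      A back-of-the-envelope sketch of that tradeoff in Python (the 40%/35% figures are the comment's illustrative numbers, not measured data):

      switching_saving = 0.40   # dynamic power saved by the shrink (figure from the comment)
      leakage_penalty = 0.35    # power handed back as leakage (figure from the comment)

      old_power = 1.0
      new_power = old_power * (1 - switching_saving) + old_power * leakage_penalty
      print(f"relative power after the shrink: {new_power:.2f}")  # 0.95, i.e. only ~5% better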
      • Actually Intel is already starting the move to 45nm right now and expects to have the first foundries online in 2nd half 2007.
      • by pla ( 258480 ) on Sunday December 04, 2005 @09:13PM (#14181663) Journal
        35nm is planned but hasn't actually been done yet. It's unlikely to help much either, because current leakage at those levels is being insane.

        Although we might not gain anything by going below 30-35nm gates, don't overlook the huge fallout rate of current photolithography (if you can still call it "photo" when dealing with "soft" x-rays as the light source).

        If you can produce, at your extreme limit, a 65nm feature, then trying to produce exactly 65nm features leaves almost no room for error. If, however, you can produce down to 5nm features, then you can manage 35nm features with a huge margin of error.

        Thus, your fallout rate drops from the current rate of over 50% (or so I've heard - I don't know the exact figure) to very nearly zero.


        The practicality of clock speed increases and heat/energy reduction aside, better photolithography (or whatever manufacturing techniques we eventually move on to) means higher yields of better quality at the same size.
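
        A toy model of that margin argument, assuming (purely for illustration) Gaussian variation in printed feature size whose spread tracks the tool's resolution limit; the tolerance and sigma values are made up:

        import math

        def fallout(tolerance_nm, sigma_nm):
            # Fraction of printed features landing outside target +/- tolerance,
            # for Gaussian variation with standard deviation sigma.
            z = tolerance_nm / sigma_nm
            return 1.0 - math.erf(z / math.sqrt(2))

        # 65 nm features on a tool whose limit is 65 nm: little headroom, wide spread.
        print(fallout(tolerance_nm=5, sigma_nm=3.0))   # roughly 10% of features out of spec
        # 35 nm features on a tool that can print 5 nm: huge headroom, tight spread.
        print(fallout(tolerance_nm=5, sigma_nm=0.5))   # effectively zero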

        Also, consider the fact that some parts of a modern CPU run a LOT faster than other parts - compare addition with division, for example. Addition has taken a single clock (less, actually, but assuming a serial dependency, you can't do better than one op per clock) for several generations now, while division still brings the CPU to a crawl. If you could make a full adder "fast enough" at whatever size optimizes energy consumption (90nm seems pretty good at the moment; 65 might waste more than it saves), while chewing through power to perform a division in fewer clocks with 15nm gates - that would both improve performance and save power at the same time.
      • Why is everyone so worried about leakage?

        You can dramatically reduce leakage by tweaking the process to give you a slightly slower process. It's not the end of the world folks. It's just at this point in time, it makes more sense to have the faster process and pay for it with leakage power. In the future that may or may not be true.

        http://www.tgdaily.com/2005/09/20/new_intel_65_nm_lithography_promises_reduced_leakage_for_small_devices [tgdaily.com]

        With billions of dollars at stake - it is unwise to underestimate the
    • You're probably thinking of 0.35 and 0.65 micron... and there are a thousand nanometers to a micron, so that'd be 350 and 650 nm, respectively.
    • Samsung [samsung.com] is currently manufacturing flash memory, in at least limited quantities (don't know if it's in full production yet), on a 50nm process.
      To the best of my knowledge that is the smallest process in production; Intel and IBM are certainly producing 65nm chips that will be on the market in the next few months.
  • Nanotechnology? (Score:5, Insightful)

    by Leomania ( 137289 ) on Sunday December 04, 2005 @05:25PM (#14180478) Homepage
    We've had sub-micron CMOS processes for years now. Many of us are using computers with 90nm chips in them. But I've never heard it called nanotech before. Maybe it's not inaccurate, but in my mind that term is more descriptive of products employing nanoscale materials in ways they never did before (clothing comes to mind).
    • The most commonly used definition is "1-100 nanometers", so anything since the 90nm generation would qualify. However, I am not sure what definition researchers using the top-down, engineering approach use. I am a chemist and approach the problem from the other direction (trying to assemble lots of 0.2 nanometer atoms into organized multi-nanometer structures).
    • Re:Nanotechnology? (Score:4, Interesting)

      by GroeFaZ ( 850443 ) on Sunday December 04, 2005 @06:24PM (#14180794)
      The term has, over the years, become something of a catch-all phrase for all things below 100 nm, unfortunately also including fairly ordinary chemistry. The term was originally coined by Norio Taniguchi, but it was broadly popularized by Eric Drexler with the famous book "Engines of Creation" (available free as in beer at http://www.foresight.org/EOC/index.html [foresight.org]). "Engines" was over the top in some respects and often criticized, but even ardent opponents of Drexler's vision of nanotech, like the recently deceased Richard Smalley, admitted they were brought into nanotechnology by this very book. Back in the days of "Engines", nanotechnology was strictly confined to the not-yet-developed "mass-manufacturing of devices to atomic precision and specification".

      Note that Drexler himself has presumably ceded the term to its current usage and has called Intel's 90nm chips "nanotechnology", although they bear no resemblance whatsoever to Engines-style nanotech. He prefers "zettatech" (mega, giga, tera, peta, exa, zetta) nowadays because of the quantity of atoms involved, but I think it's rarely used. Molecular manufacturing is now the preferred term for what used to be nanotechnology. Let's see how many more rearguard actions nanotechnology has yet to fight before it becomes reality at last.
    • I wouldn't consider this nanotechnology myself. I mean, it is still using the same lithography process they have been using for decades, just scaled down further. Nanotechnology should really be defined as the ability to shape things on the molecular level with precise detail. That is, can they build a single transistor using just component atoms? That's much more impressive than shining a light through a reticle and getting your resulting chip (or however they do it, actually, it's still very impressive con
  • by Anonymous Coward
    ... comes increased RF interference and possible heat concerns, with more electrons flowing through the same amount of area.

    What we need is chips that work smarter, not harder.
    • I want chips that work smarter and harder.
    • RF interference has been a problem for a long while - I remember first reading about this when the Pentium-60 came out. An article back then mentioned that future processors would have frequencies in the FM radio range, and that this would be a huge problem for chip designers.

      Of course, chip designers coped, like they had doubtlessly coped with problems like that before, and nothing happened. The same will probably be true here, too: sure, there'll be problems, but the chip manufacturers will sort them out.
  • by janneH ( 720747 ) on Sunday December 04, 2005 @05:37PM (#14180543)
    Bottom-up construction has been a central tenet in some parts of the nanotechnology community - the idea that putting things together by controlling the position of individual atoms/molecules during fabrication will allow enormous breakthroughs in computing and other fields. But at least in the silicon-based semiconductor business, the top-down approach keeps marching mercilessly toward the bottom, while bottom-up synthesis/fabrication is still stuck at proof of concept. Might "top down" make it to the bottom before "bottom up" makes it to the top?
    • I think conventional silicon semiconductors might never see bottom-up fabrication, for a couple of reasons:
      a) There is too much money invested in the traditional top-down process, and
      b) the industry will not abandon a proven concept for at best marginal improvements in a dying technology. As we know, silicone is doomed to fail as keeper of Moore's Law, because you can only reduce features to such and such dimensions before tunneling effects kick in, heat ablation becomes an insurmountable problem, and
      • It's silicon. Silicone is a polymer. With a melting point of 1414 degC, I find it hard to believe you'll get much atomic rearrangement in silicon at 65 degC or whatever your operating temperature may be. The rule of thumb for ceramics is to sinter at about 2/3 the melting point (850 degC for Si) in order to get enough atomic movement to rearrange atoms on any reasonable timescale and densify the ceramic.

        One of the key issues in reducing CMOS transistor size is the dielectric properties of the oxide laye

  • by sidney ( 95068 ) on Sunday December 04, 2005 @05:44PM (#14180577) Homepage
    We already have 65 nanometer process chips in production. Even this article, after parroting the NEC press release, mentions that Intel is building a 45 nm process plant, which is a step further along than "NEC has developed a technology" to make 55 nm chips.

    Here is an article from two years ago [architosh.com] with an expected timetable for chip process width that exactly matches what we have seen since then: 90 nm in 2004, 65 nm in 2005-2006 and 45 nm in 2007-2008. There really isn't anything exciting about this press release from NEC.
    • Here is an article from two years ago with an expected timetable for chip process width that exactly matches what we have seen since then: 90 nm in 2004, 65 nm in 2005-2006 and 45 nm in 2007-2008. There really isn't anything exciting about this press release from NEC.

      The reason chip process widths exactly match those numbers is because those are specific targets set by an international semiconductor processing consortium. It is what the industry hopes to achieve by certain dates, not what they expect to
    • Intel is in the process of building one 45nm plant (Fab 32) in Chandler, AZ, and just announced plans to build a second 45nm plant (Fab 28) in Israel.

      See for yourself. [google.com]

  • so when are they going to get strong enough to take over the enterprise?
  • Along similar lines, Intel has announced [arstechnica.com] the opening of Fab 28 in Israel, which will be used for making processors at a 45nm scale.
  • In other news (Score:2, Insightful)

    Telescopes see farther, and batteries last longer.
  • I would have expected it to be more. But then, what do I know about what these things cost? Anyone know how much the previous generation of factories cost?
      I would have expected it to be more. But then, what do I know about what these things cost? Anyone know how much the previous generation of factories cost?

      It's been in the billion+ range for quite a while. It depends not only on geometry, but also on capacity. Based on the price (and owner) I'd guess this is quite a large, high-capacity fab. Then again, 300 mm wafers translate almost directly to fairly high capacity, and I doubt anybody's building equipment for 45 nm to work with smaller wafers.


  • BS Article (Score:3, Insightful)

    by Jason1729 ( 561790 ) on Sunday December 04, 2005 @06:03PM (#14180688)
    Chip fab size has nothing to do with nanotech.
  • Moving to finer geometries is not panning out in standard CMOS processes anymore. Currently, the Intels, AMDs, ATIs & Nvidias ship with 90nm chips. However, the transition from 130nm to 90nm has been slower than the transition from 180 to 130nm. There are several reasons for this, but primarily leakage power is becoming worse, getting good yield on 90 took the fabs years (longer than before), a lot of people got burnt when they moved too quickly from 180 to 130nm, the area savings on area & incr
    • Currently, the Intels, AMDs, ATIs & Nvidias ship with 90nm chips.

      At least the last time I noticed, nVidia was still using 110 nm. ATI's latest X1 series (R520-based) use 90 nm fabrication, but I'm not aware of these being available as real products yet. The previous generation (e.g. X800) were 110 nm, unless memory serves me poorly.

      TI [ti.com] and IBM [ibm.com] also produce 90 nm chips. IBM (same page as above) claims to have a 65 nm ASIC production capability on line as well, though I don't know whether they have

    • Slower for TSMC or UMC. Intel has no problem with yields at 65nm. Intel is after all going to launch dual core 65nm NOTEBOOK chips early next year.

      Intel is also laying down 3.5 billion dollars for a Fab in Israel which will be 45nm for 2007 production. Moore's law continues to have legs. It is ASTONISHING that this is something we take for granted! And in 2009, I predict 32nm. And in 2011, I predict 22nm. Guess where we'll be in 2013?

      the area savings on area & increase in performance is no longer
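
      As a toy extrapolation of those predictions - the classic ~0.7x linear shrink per generation; the dates are the commenter's guesses, not a published roadmap:

      node_nm, year = 45.0, 2007   # starting from the comment's 45nm-in-2007 claim
      for _ in range(4):
          print(f"{year}: ~{node_nm:.0f} nm")
          node_nm /= 2 ** 0.5      # each generation halves the area, so the
          year += 2                # linear shrink is 1/sqrt(2), roughly 0.7x
      # prints 2007: ~45 nm, 2009: ~32 nm, 2011: ~22 nm, 2013: ~16 nm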
  • Wouldn't a finer, more intricate process RAISE the production cost? Poster needs to go back to college and re-take Common Sense 101.
    • "Wouldn't a finer, more intricate process RAISE the production cost?"

      Initially. But then the chips get smaller, more can be made at a time, and costs go down.
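
      A rough sketch of the "more per wafer" effect, using the standard dies-per-wafer approximation; the 100 mm^2 die, the 300 mm wafer, and the 90-to-65 nm shrink are made-up illustrative numbers:

      import math

      def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
          # Gross dies ~ wafer area / die area, minus a correction for
          # partial dies lost around the wafer's edge.
          d = wafer_diameter_mm
          return int(math.pi * (d / 2) ** 2 / die_area_mm2
                     - math.pi * d / math.sqrt(2 * die_area_mm2))

      old_area = 100.0                      # hypothetical 100 mm^2 die
      new_area = old_area * (65 / 90) ** 2  # same design shrunk from 90 nm to 65 nm
      print(dies_per_wafer(300, old_area))  # ~640 dies per 300 mm wafer
      print(dies_per_wafer(300, new_area))  # ~1260 dies: same wafer cost, twice the chips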
      • True. But you still have to take into account that the market gets saturated. Even though the chips are small, I don't think they'll fit in an overcrowded market :-). Sorry, had to be sarcastic. Anywho, this already happened in the mobile phone market, so now it's all getting service-orientated.

        So, my point is that it does raise the production cost, which then decreases significantly over time, but it also increases the risk of overcrowding the market by lowering the cost. So it's a vicious circle, and re-taking a
    • It sure does raise cost, exactly as you say. But if you're making the components smaller, you'll be able to make the chips smaller, implying:

      1) more chips in each wafer
      2) assuming the same density of defects in the silicon crystal, a higher yield rate, as there is a lower chance of an error in each chip as its area gets smaller. (easy demonstration: take a paper, draw 10 random dots on it. If you then split the paper in 8 pieces the chance of having a dot on a specific piece of paper
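
      A quick Monte Carlo version of that dots-on-paper demonstration (the grid sizes and defect count are arbitrary toy numbers):

      import random

      def average_yield(chips_per_side, defects, trials=1000):
          # Scatter `defects` random dots on a wafer diced into an N x N grid
          # of chips; any chip with a dot on it is scrap.
          good = 0
          for _ in range(trials):
              hit = {(random.randrange(chips_per_side), random.randrange(chips_per_side))
                     for _ in range(defects)}
              good += chips_per_side ** 2 - len(hit)
          return good / (trials * chips_per_side ** 2)

      print(average_yield(chips_per_side=4, defects=10))  # 16 big chips: ~52% yield
      print(average_yield(chips_per_side=8, defects=10))  # 64 small chips: ~85% yield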

  • by Doc Ruby ( 173196 ) on Sunday December 04, 2005 @06:13PM (#14180741) Homepage Journal
    Nanochem promises to allow even tinier feature sizes. The atoms in a molecule are about half a nanometer across, but they can form structures with gaps even smaller. Benzene rings also have diameters of about 0.5nm, and can be made in regular arrays as nanotubes [umich.edu]. More complex structures can twist these feature spaces even closer, and in vast numbers of regular arrangement. Their production through chemical, rather than mechanical, engineering promises more efficiency, lower cost, and larger production yields.

    We are now looking at the nanometer from above, pulling our micrometer structures towards the new horizon. Once across it, we will still use nanometer-scale engineering to produce picometer (and smaller) scale results.
    • Nanotubes have a very tough road ahead before being used in electronics, for several reasons.
      1) Producing nanotubes of consistent chirality has proved very very difficult. Chirality is how "twisted" the nanotube is (chemists, I know that's a poor description), and depending on the nature of the chirality the nanotube can be semiconducting or metallic to different degrees. If you produce a huge amount of nanotubes but not all are semiconducting, or they're semiconducting but with different electronic properties because
      • Sure, we chemists can make all sorts of little tubes, balls, rods, pyramids, etc. Unfortunately, as you said, they are usually a mixture of many different sizes (and hence properties) as well as contaminated with all sorts of crap. The SEM and TEM pictures you see in the journals are assuredly the prettiest of the bunch.

        Worse yet, we have almost no control over the arrangement of our little tinker-toys. At best, we can get them to sort-of line up or form some sort of regular lattice on a large scale,
      • One approach to nanotube quality control is to make them cheap and dirty, then separate them chemically or mechanically (centrifuge, phoresis, etc.), especially when they have different electromagnetic properties by which to separate them. Doping nanotubes for different chirality, especially heterogeneous chirality in a single tube surface, is one of the more compelling avenues for nanocomputing research. Tubes a few dozen nanometers in diameter and dozens of centimeters long (an aspect ratio in the millions) make for a pretty long wire.
    • by Anonymous Coward
      Picometer or smaller???

      Atom-atom spacing is on the order of angstroms (0.1 nm); 100 picometers is an angstrom. In other words, with the chemistry we can do today, we _are_ at the bottom.

      The interesting goal we now face is not getting smaller, but getting bigger-- being able to exert order on larger and larger scales in interesting ways, i.e. self-assembly of these units into larger, more complicated devices.
      • Let's say you make a lattice of 1Å (100pm) atoms with bond lengths of 1Å. The 3D geometry of the lattice can bring the atoms into a proximity limited by their electrical repulsion and the angles of their bonds. That proximity can be shorter than their bond length - it can be nearly any size or shape. This is how enzymes make active sites with feature details at highly precise scales. Another analogous example, especially at these scales, is how relatively large wavelengths can combine to create diff
    • Yes, but a nanometer is a unit of spatial measurement that is 10^-9 meter, or one billionth of a meter, so we can't rush this. While commercial products are starting to come to market, some of the major applications for nanotechnology are five to ten years out. Private investors look for shorter-term returns on investment, more in the range of one to three years.
  • I can't see how this article has any connection with nanotechnology -- except in the sense that it's about something small, and nanotechnology is about something small. People are throwing the words "nanotechnology" and "nanotech" and nano-everything around without the foggiest idea what they mean.

    CLUE: We do not have nanotechnology yet. No company today, anywhere on Planet Earth, is producing working nanomachines that do something useful. The article is about computer chips: it's as ridiculous as some
    • Please, do not say such things. Nanotech does exist and I have seen it with my own eyes (aided by an electron microscope, mind you). In fact, corporations are developing technologies and some have already developed technologies integrating nanotechnologies. New tennis racquets use nanotubes to become stiffer and stronger than the older models. Samsung has developed a display utilizing nanotubes which hasn't hit the market yet, but will once some issues are resolved (the display works fine, but it is a wee bi
      • AFAIK, nanotech was originally about the construction of components from the atom up.

        Whilst we may be building small things, it's really still chemistry and lithography that we're tinkering with. Only a few scanning tunnelling microscopes are actually building anything one atom at a time.

      • Nanotubes and buckyballs aren't nanotechnology, as I see it. They are precursors of nanotechnology. They're getting ready for nanotech, working towards nanotech. . . And they can indeed be useful and profitable in their own right. But with regards to actual nanotech, we aren't there yet. And I never said that "no one" is investigating nanotech. A great many people are investigating it and working on the problem. They just haven't solved it yet.

        As for the ad hominem argument. . . ? Oh wait, I see.
  • Nanotechnology Gets Finer
    From now on, picotechnology it is.
    Or reallyreallysmalltechnology.
    You choose.
  • It used to be, back in the 90s, that you could do all kinds of cool stuff: dynamic logic, they called it -- precharge-evaluate, domino logic, zipper logic... google 'em; they're cool. Nowadays, we can't even do that. I was talking to a guy from AMD the other day; he explained that the leakage currents and noise levels are so high that everything ends up needing to be boring old AOI CMOS. "It's not as fun for the circuit designers as it used to be," he said. Ah well.

    Quantum dots!

  • I was just thinking: what drives this evolution? Is it science-driven or technology-driven? In other words, are there any scientific barriers left to overcome in reducing the size?
  • LEON's GETTING LAERGERRRR
  • Does anybody remember an old sci-fi book that talks about how the Chinese and the Japanese created miniature armies, and tried to take over the world?

    hmmm..
    • Yeah, I remember that one. Wasn't one of the Japanese divisions called the "23rd Tamagotchi Division", nicknamed "Devil Spawns of Infernal Evil" (translation) or somesuch? Also, who could forget the dreaded "Pokemon Legion". More recently, the "Hello Kitty" spec-ops have joined the fight as well. The race isn't going too well for the Chinese, eh?
    • You may be referring to Slapstick by Kurt Vonnegut, part of which described the Chinese breeding themselves over many generations to be smaller. The intent was that they could reduce their food needs, but they accidentally went too far and became microscopic. Then any normal-sized person who breathed in a bunch of Chinese people would die when they clogged up his lungs.
      • YES, thank you..
        come to think of it, I may be mixing the memory of that book with, umm, Isaac Asimov's Fantastic Voyage, I think... wasn't there a race for miniaturization in that book?
        But definitely Vonnegut.
        • Not that I remember. I'm pretty sure the miniaturization thing was entirely an accident. But I read the book around 7 years ago, so my memory might not be entirely accurate.
  • Not to start a debate, but let's say that The Utopians develop nanotechnology that eventually allows them to survive the change of climate from what we have now to significantly warmer. Most of the other humans (and species) die...

    Is this:

    * evolution?
    * progress?
    * some kind of perverted Intelligent Design where the intelligent designers were human?

    Let's say that The Utopians develop nanotechnology that eradicates, say, the Dog 'Flu (which is as effective as Ebola Zaire and contagious
  • by lop367 ( 936126 )
    Umm, how small can it get?... Will our pockets also get smaller?... Knowing all that's still to come - dual core... quad core?? - that means a BIG HEAT SINK.
  • Japan's NEC Electronics has developed a technology to make advanced microchips with circuitry width of 55 nanometers, or billionths of a meter...

    Great, we'll be seeing Japanese nano MP3 players real soon! That should give Apple's iPod Nano a run for its money.
  • Finer circuitry decreases the size of a chip and cuts per-unit production costs... NOT!

    Moore's Law is showing its age... The cheapest transistors in the world are not built at 65nm. They are built at 180nm, a much older process.

    In China, you can get 8-inch 180nm (.18u) wafers for $600. Today, a 90nm 8-inch wafer is more than 4X more expensive, and you cannot yet buy 65nm wafers. The cost per transistor is actually higher! And people wonder why we're taking our time to move to finer geometry process
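
    A sanity check of that claim using the comment's own rough figures; the 4.5x multiplier is just an illustrative stand-in for "more than 4X more expensive":

    wafer_cost = {180: 600.0, 90: 600.0 * 4.5}  # USD per 8-inch wafer (comment's figures;
                                                # 4.5x stands in for "more than 4X")
    for node, cost in sorted(wafer_cost.items(), reverse=True):
        density = (180 / node) ** 2   # transistor density scales roughly as 1/linewidth^2
        print(f"{node} nm: relative cost per transistor = {cost / density:.0f}")
    # 180 nm: 600 vs 90 nm: 675 -- the finer process really is costlier per transistor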

  • I'd be interested in hearing what the course covered with respect to environmental, health and safety issues around nanomaterials. While these new materials bring interesting properties, they could also present some interesting, unexpected health hazards. By virtue of their size, nanoparticles can cross the blood/brain barrier. For some materials this new route of entry could be the difference between toxic and nontoxic. Materials that previously were thought of as nontoxic in the micron and above particle
