Bell Labs claims to have found new limit for chip size

Nocturna writes " reports scientists at Bell Labs claim to have found a new limit on how small they can make chips, doubling the life left in silicon technology. " Essentially, what Bell Labs is saying is that you can't go any smaller then 5 atoms of silicon dioxide at the heart of the machine. As before, they are saying that this is the limit, although this time it may very well be true, with current materials.
This discussion has been archived. No new comments can be posted.


  • Five *atoms*? Even getting it that small is amazing. What are they doing, passing current with subatomic particles instead of electrons?

    Oh, yeah -- first post! :)


  • I remember reading in WIRED (The one with Woz on the cover) that there are some silicon substitutes on the way.

    Does anyone know more about this/still have the article?
  • So what are we gonna do when we finally run out of space? Parallel processors? Are there any articles out there that discuss the possibilities?
  • I find it interesting that they are basing the thickness on the number of atoms, given that the oxide is not a crystalline structure. Also, which atoms are they talking about, the silicon or the oxygen, since both are required to make the insulator? Maybe they are talking about consuming 5 layers of silicon during the oxidation?
  • Did you see that? A 10GHz chip is possible with current technology?

  • They are talking about the gate insulator, the part that you don't want to have current pass through (but is leaky as could be). Electrons (and holes) will always be the means to convey electrical current, by definition.
  • The big problem with GaAs is that it doesn't have a kind native oxide (insulator) like Si does, so those chips probably won't be using MOSFETs, but BJTs instead, and consume power in a big way.
  • If they could pass current with subatomic particles, it would seem the limit could go lower (unless, as was suggested by the previous comment, they are already using subatomic particles). Or, I was thinking, if they could possibly make a chip running at 10,000 MHz, wouldn't the cooling systems it needed be rather expensive, making it impractical?
  • daveo believes an electron *is* a sub-atomic particle, no? the article says that the layer must practically be 10 atoms, although the limit is 5. the current limit is 25, and daveo still thinks that is amazing. it says this will give them another 10-12 years to shrink chips, which puts us at 2010. didn't moore suggest problems as soon as 2017? we may see it sooner than that, although the limit is 10,000 mhz, which is, as of now, enough for any app you could possibly run (except maybe m$ office :). so if silicon goes 10-12 years, what's next? organic computers? daveo read some time ago in the ny times about carbon strings that could be randomly formed at high temperatures. does anyone have any more information on this?
  • Yeah, Hemos seems to be having some problems today...either not enough sleep or too much alcohol...or both? :) He went back and fixed most of the other errors, hopefully he'll fix this story too.
  • If I remember correctly, GaAs uses MESFETs, in which the gate is laid directly on top of the channel.

    Another problem with GaAs is that p-channel devices are horrendously slow (like a factor of ten compared to the n-channel devices -- somebody correct me if I'm wrong). With Si, the p-channel devices are only a third the speed of n-channel devices of the same size.

    Perhaps the way to go is to supercool traditional Si technology, thereby increasing the electron/hole speed ceilings (about 1.0×10^7 cm/s at room temperature). The changes to the fabrication equipment would be significantly cheaper, and the current generation of circuit designers would not need to be completely retrained.
  • That's pretty far off from now, so I wouldn't be too worried about maxing out the MHz. Then again, 640K is all anybody will ever need ;).

    On a side note, why don't designers use 3D designs? It just seems like 2D transistor grids aren't the optimum. In 3d, the clock pulse would have a much shorter path to follow, allowing higher clock speeds. Sure, it would take a 100k layer process, but you could get away with a much smaller die size.

  • Silicon dioxide (SiO2) is a crystalline structure at the macroscopic level. Saying 5 atoms of silicon dioxide makes no sense whatsoever; 5 molecules is what you're looking for.

  • by Christopher Thomas ( 11717 ) on Thursday June 24, 1999 @09:24AM (#1834540)
    This is only one of several limits to feature size, though it is a significant one. Other limits include:
    • Electromigration

      When current flows through a wire, atoms in the wire tend to be dragged along with the current. The current density - current per unit cross-sectional area of the wire - has to be kept below safe limits (dependent on temperature) to prevent this. Faster chips are made by passing the same amount of current through smaller transistors - but this means through smaller wires, too. Electromigration limits how small you can shrink the wires before your chip dies an early death. Copper helps - it is much more resistant to electromigration than aluminum - but it's still a big problem, and will keep getting bigger.

    • Capacitive coupling

      You get capacitive coupling between wires that are close together - signal leaks from one to the other. This is worse for wires that are closer together, and worse for higher frequencies. As chips shrink and are clocked more quickly, capacitive coupling becomes an ever-greater problem. Capacitive coupling also causes signal leakage between the various parts of a transistor, as well as between transistor sources/drains and the substrate (though silicon-on-insulator helps eliminate this last effect).

    • Heat Generation

      A chip's total parasitic capacitance doesn't depend that much on the size of its transistors; just on its total area. Charging and discharging this capacitance dissipates a certain amount of energy (dependent on the chip voltage). As chips are clocked more quickly, power dissipation goes up in proportion to the clock speed. Reducing the core voltage helps a bit, but the core voltage must always be considerably higher than the transistor threshold voltage. Silicon-on-insulator lowers the total parasitic capacitance, but only to a certain point. The problem remains.

    This list completely ignores fabrication difficulties at finer linewidths, though those look like they're tractable. However, electrical problems will still pose limits to how small you can shrink features on a chip. When exactly these limits will come into play remains to be seen, but they are lurking.
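    The heat-generation limit above follows the standard CMOS dynamic-power relation P ≈ α·C·V²·f. A minimal sketch of the scaling argument (the capacitance, voltage, and activity figures are made-up illustrative numbers, not values from the article):

    ```python
    # Rough sketch of CMOS dynamic power: P = alpha * C * V^2 * f.
    # All numbers here are illustrative assumptions, not from the article.

    def dynamic_power(c_total_farads, v_core, f_hz, activity=0.2):
        """Dynamic (switching) power of a chip's parasitic capacitance."""
        return activity * c_total_farads * v_core**2 * f_hz

    C = 30e-9   # assumed total switched capacitance: 30 nF
    V = 2.0     # assumed core voltage: 2.0 V
    for f_mhz in (600, 1200, 10_000):
        p = dynamic_power(C, V, f_mhz * 1e6)
        print(f"{f_mhz:>6} MHz -> {p:6.1f} W")
    # Power scales linearly with clock, so a 10 GHz part dissipates
    # ~16x more than a 600 MHz part at the same voltage and capacitance.
    ```

    Lowering V helps quadratically, which is why core voltages keep dropping, but as noted above V can't go below a multiple of the threshold voltage.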

  • Presumably the channel length geometry would also shrink -- reducing the capacitance, and consequently the power needed to drive the chip.
  • Title says it all.

    Pretty small ones, at that.
  • so if silicon goes 10-12 years, what's next?

    IMO, most likely better use of silicon at a fixed feature size. You can improve performance by making transistors with a lower threshold voltage (with better-doped silicon or by using another material). You can also boost performance by tweaking the materials used to reduce parasitic capacitance. You could also start developing true multi-layer chips that have more than one layer of transistors, to keep ramping up density (though cost per transistor will level off very quickly and stay constant). More work could also be put into cooling systems that let you clock chips more quickly without having to worry about electromigration. Several other optimizations are probably possible.

    Basically, what will happen is that integrated circuits will become a mature technology. Right now they're still in their rapid development stage (think of it as a really long adolescence :)).

  • I believe Bose-Einstein condensate and light are the theoretical switch technology that will replace silicon

    That would almost certainly be impractical, as your computing device would have to be kept extremely cold (cold enough to make liquid helium look hot).

  • by Christopher Thomas ( 11717 ) on Thursday June 24, 1999 @09:39AM (#1834545)
    On a side note, why don't designers use 3D designs? It just seems like 2D transistor grids aren't the optimum. In 3d, the clock pulse would have a much shorter path to follow, allowing higher clock speeds.

    There are two obstacles that I can think of. The first is heat dissipation; heat will have to travel farther through the chip before reaching the surface. This could be ameliorated by putting sheets of thermally conducting material between layers, but this is complicated, and they'd have to be pretty thick (unless they were thermal superconductors; IIRC these exist at room temperature).

    The other obstacle is depositing a layer of crystalline silicon to make transistors with. Current wafers are still sliced from single crystals of silicon. However, silicon that is deposited tends to be polycrystalline. This gives it poor electrical properties.

    We'd either have to figure out how to grow or place single-crystal layers of silicon on to an outer oxide layer of a chip, or else figure out how to make fast circuitry with polycrystalline silicon.

    That having been said, this is an idea that I like very much. It is one of the logical ways of extending chips once linewidth reaches its limits.

  • Electromigration is supposedly one of the damaging factors of running a cpu overclocked. The electromigration will shorten the chip's lifespan by making weak spots in the "wires". Cooling of the chip helps make the aluminium less resistive. Does this also lower the amount of electromigration? I believe it does. Perhaps all chips in the future will have to be built like Kryotech's [] computers.
  • > why don't designers use 3D designs?

    Because 3d is much, much harder to design. Right now, 2d is relatively easy for a human designer to keep track of, but 3d is very very hard to visualise without severe loss of information.

    Additionally, routing software and other design tools just aren't equipped to deal with 3-dimensional designs. Throw a third dimension in and you complicate routing dramatically.

    Then there's the problem of heat dissipation. It'll get real hot in the middle of that silicon cube.

    i.e. 3d chip designs are doable, yes, but they're so much trouble that most designers/producers don't feel that it's worth it right now. I'm sure we'll get to it eventually when we run out of other options.
  • by battjt ( 9342 )
  • Last Time I checked, electrons were subatomic
  • Dude, do you even know what you're talking about? Why would you want to use a Bose-Einstein condensate for a switch? Cause it sounds cool?
  • by Zoinks ( 20480 ) on Thursday June 24, 1999 @10:13AM (#1834554)
    ``Top-of-the line computers currently sport chips with 600 megahertz of power. Timp said a chip with the smallest features possible would allow for computer processing of at least 10,000 MHz.''

    That must mean my house is very low power - it only has 60 Hz of power! How will I be able to power one of these chips if my house doesn't have enough power?

  • When we say we've doubled the life of silicon technology, we have to remember that advancements in this field are made exponentially. We might have doubled the potential of the technology, but not how long it will be around.
  • "Five atoms is the minimum thickness possible for the silicon dioxide film at the heart of computers" in the original article might actually refer to the number of atoms in the layer, regardless of whether the atoms are silicon or oxygen (of course, oxygen atoms and silicon atoms have different atomic radii).

    On the other hand, "Essentially, what Bell Labs is saying that you can't go any smaller then 5 atoms of silicon dioxide at the heart of the machine.", as posted on makes no sense, since silicon dioxide is not an atom, but a molecule at the microscopic level (SiO2) and a crystal at the macroscopic level (as stated above).
  • H may just be having a bad day, but as high profile as Slashdot is becoming you'd think it wouldn't be too much trouble to run things through a spell checker. Or re-read what they write before posting it. Slashdot is fairly fast moving. Even if errors are corrected, it is already too late, many people have already seen the error.

    H: I like your articles. Nice dose of non-linux/geek stuff usually. Please take this as constructive criticism and proofread.

  • The structure of an IC is already 3-dimensional, with all its vias, bonds, doping, etc....
  • Duh. Everyone knows clock speed (measured in megahertz) isn't even a good measure of performance. It's all a matter of how much current is flowing through the chip. My calculations lead me to believe the industry is heading in the wrong direction--larger circuits will lower resistance, allowing massive amounts of current through to the very core of the processor!

    If you can't afford to change the frequency of your house power supply, you can always buy bigger fuses.

    200 amps of pure processing power.
  • The actual switches (the MOSFETs) are all in the same plane, on the surface of the silicon wafer.
  • In our group, the main reason 3D is not in use today has more to do with the materials and the thermal cycles required to manufacture a true 3D wafer. Currently, SOI (Silicon On Insulator) is not the technology of choice, but is required (by all that I have seen) to add a 3rd dimension. Remember, each switch needs to be electrically isolated from the others to some extent. Now, the thermal issue is even more confounding. Once you make your first level of transistors, they will undergo all the following steps of the process, so they will see all the thermal cycles as you make more layers of transistors. This will lead to dopant diffusion, which won't allow the very steep doping profiles required for a well-behaved MOSFET. Don't give up hope....we have been able to produce devices on two layers and may be close to adding a third without too much damage to the first, although the first layer may be entirely larger size devices used to drive the smaller guys.

    Then we can start to worry about the other issues people have brought up.....
  • Fractal surfaces are better insulators than flat surfaces, as the roughness impedes the circulation of the coolant. Radiative cooling would also be less effective.

    This taken into consideration, a properly sculpted surface might have improved cooling properties, at least under conditions where the coolant was coerced into running through a channel. It may also be necessary to use heat-exchanger techniques, and powered pumps for internal circulation of some high efficiency coolant.

    What has REALLY LOW!! viscosity, and yet has thermal dimensional stability and high per-unit thermal absorption capability? It would also be good if it were an electrical insulator, was a terrible solvent [i.e., didn't like to dissolve things], and had very low capacitance. I can't think of anything quite like that right now. The best that I've come up with is liquid nitrogen... but that's not very thermally stable (so although it can be used on the outside of the chip, one wouldn't want to use it on the inside, as one might crack the chip [yes, some PC boards used it on the inside, but we are talking about a different order of magnitude of dimensions here!]).
  • We'd either have to figure out how to grow or place single-crystal layers of silicon on to an outer oxide layer of a

    This is a known technology....single-crystal epitaxial growth....hence the name Epi-man. It isn't ready for mass production yet, but it is doing some nice stuff in the labs.
  • To quote Guy L. Steele (or perhaps he was quoting someone else): just because parallel processing is the only answer, doesn't mean it *is* an answer. Parallel algorithms (which are notoriously hairy to deal with) don't always speed things up, so one might be at a loss if there turned out to be no way to speed a chip up. At that point, computer makers might actually have to worry about speeding up peripherals, or -- god forbid! -- the code itself.

    But then again, there's a hell of a lot of money in this industry. Something tells me they'll find another paradigm to move to (nanotechnology, DNA computers, etc.) given enough profit potential.
  • Silicon dioxide (SiO2) is a crystalline structure at the macroscopic level.

    But when you look at MOSFET gates, there is no crystal structure to be seen, so their suggestion of using this as a measure doesn't make sense to me. Judging from their numbers, it sounds like they are saying the average "atom" is about 2 Angstroms, so why don't they just say the limit is around 1 nm? (haven't read the link to the true report, still going off the mercury story)
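    Taking the parent's rough figure of ~2 Angstroms per atomic layer, the atom counts quoted in this thread convert to thickness directly (a sketch; the per-layer spacing is the parent's own estimate, not a measured value):

    ```python
    # Convert "atoms of oxide" into a thickness, using the parent's
    # rough ~2 angstrom (0.2 nm) average per atomic layer.
    ANGSTROM_PER_LAYER = 2.0   # rough average, from the parent post

    def oxide_thickness_nm(layers):
        return layers * ANGSTROM_PER_LAYER / 10.0   # 10 angstroms per nm

    print(oxide_thickness_nm(5))    # claimed hard limit: 1.0 nm
    print(oxide_thickness_nm(10))   # claimed practical limit: 2.0 nm
    print(oxide_thickness_nm(25))   # current oxides per the article: 5.0 nm
    ```

    The 25-layer figure lands right at the 5-6 nm gate oxides another poster below reports for 250 nm devices, which at least makes the numbers self-consistent.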
  • Posted by 2B||!2B:

    There's an easy way to get around the heat issue: redesign so the heat isn't generated in the first place.

    I've seen lectures demonstrating solutions for many of the heat issues. At the University of Utah there are research projects (with a bunch of funding from Intel and IBM, where the results are being targeted at production) which tackle the issue of how to use fully asynchronous circuits within a standard CPU, and how to eliminate the refresh of the entire CPU on each clock cycle. Without getting into the specifics (they're all detailed on their web site), the result is a CPU which uses far less current for the same results, while at least doubling its speed due to the improved performance of the asynchronous algorithms. Anyway, heat will be far less of an issue as Intel and others make more use of these techniques. And CPUs will be much more appropriate for portable computers, since the power requirement drops significantly.
  • Nobody is forcing you to read the articles. You want perfect spelling, grab a dictionary. If you don't like someone's spelling, keep it to yourself, and stop bitching. Nobody else wants to hear about it.

    -- Give him Head? Be a Beacon?

  • They will use quantum processors. They can work faster and use similar materials. You don't have the same problems as with electrons (wide wires, electrical interference, etc). You can also use different wavelengths of light to trigger a quantum logic gate. There is also little heat.

    You will also move into parallel processing on the chip. Multiple execution paths etc.

    We still have a long way to go to get the most out of silicon.
  • Yes, I know electrons are technically subatomic, as are protons and neutrons. I meant particles like bosons, quarks, fruity pebbles.

    I've never heard of any research on passing current using such particles. Has it been done? That would change the playing field quite a bit; it's well beyond my practical understanding, though. If anyone could point to a URL about such research (preferably in layperson's terms), I'd love to see it.


  • by / ( 33804 )
    I could've sworn that quote was by Eisenhower, not Ford.

    does that answer the question?
  • With the death of SDI (Star Wars) came the death of research into carbon semiconductors. What company is willing to play with diamond wafers when benefits might be a decade away? Stockholders would not tolerate it.
  • Opticom [] is a Norwegian-based company developing unique all-organic and hybrid silicon/organic memory. They have a working prototype of a 1 Gb, 62 ns ROM chip. They use a hybrid design combining silicon driver circuitry and Opticom's ROM film. The ROM chips demonstrated the feasibility of multiple memory layers (2-6 layers).
  • If the semiconductor CPU hits the speed bump, there'll be one rather wrenching consequence: it'll throw off backward compatibility.

    No matter what the chip technology used today, the underlying architecture is pretty similar, and this has resulted in a highly interlinked supportive infrastructure - not only do apps stay backward compatible, but algorithms, programs, technologies (and even technologists) continue to feed off of past groundwork.

    However, quantum computing involves an entirely different form of math/algorithmic processing which is radically different from that of today's architecture. For instance, unlike sequentially forking down if/else paths, quantum machines simultaneously arrive at all solutions, which requires a different way of programming them.

    If the software/logic/algorithms that run on quantum machines cannot be backward compatible with present computers, it creates a huge gaping chasm between the two.

    The consequences should be interesting.

  • If the 5 atom rule is true for silicon, is this also true for geranium? A more interesting issue, because IBM is sitting on some very interesting Geranium technology... it will be putting it in cell phones... The 10 GHz may seem high, but 50 GHz is the outside limit of Geranium, and I am guessing here that that was based on a 25 atom limit. So all I am wondering is, does that put Geranium now at 200 to 400 GHz? Any ideas?
  • Let me ask my question here. Does the 5 atom rule apply to gallium arsenide? If so, what are the outside speed limits, given what we know?
  • then != than

    Numerous other glitches exist in today's stuff, which even a single proofread should be enough to find.
  • The topic speaks for itself. This really doesn't surprise me; Moore's Law won't just suddenly run out.

    I can imagine back in the days of vacuum tubes that people didn't expect to come up with a neat new way to shrink technology. Not until it happened, anyway.

    The lesson to be learned? Expect great things from technology. Don't bet on anything. Expect limits to be broken or avoided.

    I just hope that with an advance like this we won't stop looking into the next generation of computing (quantum).
  • Electrons are fine for the time being. Those smaller particles tend to behave, well, weird... Also, electrons are quite stable compared to these other things. I think that by the time we understand quantum mechanics thoroughly enough to really harness the power of such a system, we will be using DNA (or other organic base) computers. These will be quite a bit more powerful than our retrofitted old-style CPUs, and after 20-40 years of those we might be ready for REAL quantum computers (not systems where you learn to deal with the oddities, but where you can take advantage of the oddities).

    There may be some more intermediate systems based on photons, but that probably will not be a long lasting step but more of a stepping stone...
  • Then when we have maxed out 3d chip designs, we go 4d.
  • Top-of-the line computers currently sport chips with 600 megahertz of power. Timp said a chip with the smallest features possible would allow for computer processing of at least 10,000 MHz.

    assuming a doubling of power every 18 months (1.5 yrs)...

    1.5 yrs: 1,200 MHz
    3.0 yrs: 2,400 MHz
    4.5 yrs: 4,800 MHz
    6.0 yrs: 9,600 MHz
    7.5 yrs: 19,200 MHz

    so the time to a 19,200 MHz chip is about 7.5 yrs from this year?
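    The same back-of-the-envelope arithmetic can be done in closed form: from 600 MHz to the claimed 10,000 MHz ceiling is about four doublings, so roughly six years at one doubling per 18 months. A quick sketch:

    ```python
    # Closed-form version of the parent's doubling table: how long from
    # 600 MHz to the 10,000 MHz ceiling at one doubling per 18 months?
    import math

    start_mhz = 600.0
    limit_mhz = 10_000.0
    doubling_years = 1.5

    doublings = math.log2(limit_mhz / start_mhz)   # ~4.06 doublings
    years = doublings * doubling_years
    print(f"{doublings:.2f} doublings -> {years:.1f} years")
    ```

    So the ceiling would be crossed between the 6.0 yr and 7.5 yr rows of the table above, consistent with the article's "10-12 years of life left" only if the doubling pace slows.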
  • Liquid helium is most probably a Bose-Einstein condensate. Bose-Einstein switching devices could be practical at several degrees K.

  • Dude! Do you even know what a Bose-Einstein condensate is? Maybe you should READ before you say something stupid. BECs can be used for switching because of the incredible phase lag that they cause (READ: the speed of light in a BEC can be counted in tens of miles per hour), which is ideal for switching.

  • Before any of you pull out your AFMs and start building these things, remember that thermodynamics is going to make life really hard on you, unless you can erect massive diffusion barriers (never mind electromigration, the "wires" will interdiffuse unless you keep the stuff real cold).

    All you need are a few atoms to migrate in your 5 atom width device and voila, no more device. Migration barriers for self-diffusion in Si tend to be only a few eV high at most (some barriers are around 1 eV, if my memory serves me). The atoms will sample these barriers at around 10**12 attempts per second, so it is quite likely that at room temperature you will see effects in a short period of time.

    Does anyone remember the threading defects in blue solid state lasers when they first came out? They would work for only a few seconds, and then die from thermodynamically driven diffusion and threading defects (basically relieving strain in the lattice by displacing a line of atoms).

    I suspect the 5 atom problems will be harder to overcome.
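    The barrier-sampling estimate above is just an Arrhenius rate, rate = ν·exp(−E/kT), with the ~10**12 1/s attempt frequency the post mentions. A sketch with illustrative barrier heights:

    ```python
    # Arrhenius hop rate for atomic migration over a diffusion barrier:
    #   rate = nu * exp(-E_barrier / kT)
    # The 10**12 1/s attempt frequency is from the parent post; the
    # barrier heights below are illustrative, not measured values.
    import math

    K_B_EV = 8.617e-5        # Boltzmann constant in eV/K
    NU = 1e12                # attempt frequency, 1/s

    def hop_rate(barrier_ev, temp_k):
        return NU * math.exp(-barrier_ev / (K_B_EV * temp_k))

    for e in (0.5, 1.0, 2.0):
        print(f"{e} eV barrier at 300 K: {hop_rate(e, 300):.3e} hops/s per atom")
    # A 0.5 eV barrier is hopped thousands of times per second per atom;
    # even the much rarer 1 eV hops add up across the enormous number of
    # atoms in a device, which is the parent's point.
    ```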
  • GaAs has some problems. But Low Temperature grown GaAs is promising. Basically an MBE grown material, you can engineer in whatever defects you like... or sort of... You can drive the material As rich easily (it prefers this), or Ga rich (harder). The difficult part is understanding the correct dopants for GaAs, as the defect behaviors are different. Interstitials are highly mobile in GaAs. Also, defect complexes are very important electrically to the material. The high temperature grown material requires an overpressure of As gas to grow in the requisite stoichiometry. The low temperature grown material requires an MBE setup which is difficult to use on a mass production line. GaAs is the material of the future, and it always will be.
  • so if silicon goes 10-12 years, what's next? organic comptuers? daveo read some time ago in the ny times about carbon strings that could be randomly formed at high temperatures. does any one have any more information on this?

    They've been experimenting with silicon-on-sapphire (SOS) technology for a while (I always seem to associate this with ECL, for some reason). And I'd be surprised if there wasn't a significant amount of research into synthetic diamond substrate structures. There are also people researching carbon microtubules (strings of Bucky balls), creating circuits using atomic force microscopes to lay them out. So I'd say once the CMOS processes "mature", there'll be a new batch of technologies to pick up where CMOS leaves off.
  • i didn't know moore's law applied to clock speed?

    besides, Kryotech will sell a 1,000 MHz K7 this year and Intel has a 3,000 MHz chip (Deerfield) planned for two years from now.
  • the discussion is about the future of semiconductor materials, not the future of architecture philosophy. quantum processors made on silicon will run into the same problems!
  • I wouldn't notice with a thousand proofreads. I don't know when to use than, so I always use then. :-) I haven't had anyone not understand me yet.
  • Actually, what you've described is a nice description of water... :)

    Silicon dioxide is rather different. First, the size difference between Si and O is not that big, with the silicon atom being about 50% larger.

    Secondly, and more importantly, SiO2 forms a tetrahedral crystal form, so that rather than just having individual SiO2 molecules, each silicon atom shares each oxygen atom with another silicon atom. In fact, it ends up that each silicon atom shares 4 different oxygen atoms with 4 other silicon atoms. So, while the total amount of silicon and oxygen works out to 2 oxygens for every silicon atom, there are no actual single SiO2 molecules.
  • My Geranium-powered Beowulf cluster in front of my house processes like shit. And it's using a lot more than 25 atom channels. So unless adding more peat moss would help (mulching worked a little) I would have to say IBM is barking up the wrong tree.
  • Even if quantum computing can ever be made to work (meaning Shor-style computation - the way computers work today already depends on quantum effects) it is far too specialised to be useful for general purpose computation.

    The successor I've seen for electrical computing is fully-optical computing. Lasers carry your signals, optical gates switch them. You can cross signals over without interference, and the theoretical limits on gate performance and size are ludicrously high. Sorry, no URL - I saw it at a lecture about, uh, fifteen years ago. But I know it's still an area of active research.
    Employ me! Unix,Linux,crypto/security,Perl,C/C++,distance work. Edinburgh UK.
  • I have looked at a lot of gate-oxides on semi-current CPU's (most recent was the ppc750), and the smallest gate oxide I have ever seen was about 92 nanometers in thickness. That is roughly 920 atoms thick.

    2.5 nanometers is about the limit of a resolvable object on our SEM.

    Plus, what are they talking about, 5 atoms thick? Not all atoms are the same size, and silicon dioxide is 3 atoms per molecule, right? So wouldn't the limit be 6 atoms?


  • That was truly a great post. I am still laughing!

  • I have never heard of germanium dioxide. Not that it doesn't exist, I guess.

    This limit was applied to silicon dioxide. This is also known as GLASS. This is an insulator! Not a semiconductor.
  • 92 nm is enormously huge! 250 nm devices have typically been modeled with 50-60 A (5-6 nm) thick gate oxides. I have seen many papers discussing the reliability of 1-1.5 nm thick oxides (1.5 nm seeming to be the magic stopping point from a reliability standpoint). I am not sure how you saw 92 nm GATE oxides in any modern devices; perhaps you were looking at the field oxide?
  • There's an easy way to get around the heat issue: redesign so the heat isn't generated in the first place.

    I've seen lectures demonstrating solutions for many of the heat issues. At the University of Utah there are research projects (with a bunch of funding from Intel and IBM, where the results are being targetted at production) which tackle the issue of how to use fully asynchronous circuits within a standard CPU, and how to eliminate the refresh of the entire CPU on each clock cycle.

    This does indeed help - however, not that much on a well-designed chip.

    A lot of the focus of chip optimization nowadays has been on improving scheduling techniques to let programs take full advantage of all of the chip's facilities at any given time. The eventual goal is that if the chip has two FPUs and three integer arithmetic units, it will be performing two FP calculations and three integer calculations per clock, with no units sitting idle. Asynchronous chips give you a large power savings when you _do_ have chip components sitting idle - you are no longer clocking a module that isn't being used. However, for a chip that is using all parts of itself, all components _have_ to be clocked, which limits the savings that you get from making a chip asynchronous.

    It's still a worthwhile optimization; it just won't save you from heat problems as clock speeds rise.
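    The argument above can be put in a toy model: clock gating (or asynchronous operation) only recovers the power of units that are actually idle, so the savings vanish as scheduling approaches full utilization. The unit count and per-unit wattages below are invented purely for illustration:

    ```python
    # Toy model of the point above: gated/asynchronous designs save power
    # only on idle functional units. All wattages are made-up numbers.

    def chip_power(per_unit_watts, utilizations, gated=True):
        """Sum power over functional units; gated idle units cost ~nothing."""
        total = 0.0
        for p, active in zip(per_unit_watts, utilizations):
            total += p * active if gated else p
        return total

    units = [8.0, 8.0, 5.0, 5.0, 5.0]       # 2 FPUs + 3 integer units, watts each
    idle_mix = [0.3, 0.1, 0.5, 0.4, 0.2]    # poorly scheduled chip
    busy_mix = [1.0, 1.0, 1.0, 1.0, 1.0]    # perfectly scheduled chip

    print(f"{chip_power(units, idle_mix):.1f} W")  # big savings: most units idle
    print(f"{chip_power(units, busy_mix):.1f} W")  # equals ungated power: no savings
    ```

    Which is exactly the catch: the better your scheduler gets, the less an asynchronous design saves you.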

  • Cooling of the chip helps make the aluminium less resistive. Does this also lower the amount of electromigration?

    Yes, it does. At the suggestion of another slashdot reader, I did more research on electromigration, and it actually has a very strong dependence on temperature.

    Cooling computers to very low temperatures does solve or at least help a lot of problems, but is impractical for many applications. Heat flow problems will also be significant for chips that generate a lot of heat in very small areas.
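    The temperature dependence is conventionally captured by Black's equation for electromigration mean time to failure, MTTF = A·J^(−n)·exp(Ea/kT). A sketch using typical textbook values for aluminum (Ea ≈ 0.7 eV, n = 2; the prefactor and current density are arbitrary here, since only ratios matter):

    ```python
    # Black's equation for electromigration mean time to failure:
    #   MTTF = A * J**(-n) * exp(Ea / (k*T))
    # Ea ~ 0.7 eV and n = 2 are typical textbook values for aluminum
    # interconnect; A and J are arbitrary, as only the ratio matters.
    import math

    K_B_EV = 8.617e-5   # Boltzmann constant, eV/K

    def mttf(j, temp_k, ea_ev=0.7, n=2.0, a=1.0):
        return a * j**(-n) * math.exp(ea_ev / (K_B_EV * temp_k))

    # Relative lifetime gain from cooling a die from 350 K to 300 K
    ratio = mttf(1.0, 300) / mttf(1.0, 350)
    print(f"cooling 350 K -> 300 K extends lifetime ~{ratio:.0f}x")
    ```

    The exponential term is why even modest cooling buys large lifetime margins, and why overclocked-and-cooled setups like Kryotech's are plausible.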

  • Fortunately, there's still an awful lot of wasted space in most computers. If they reach these limits using current technology and can't use a different technology, there's a lot more packaging improvement available.

    • Even without parallel processing, more memory and more logic can create more power. I'm assuming more transistors equate to more computing power either through parallel processing ( MOSIX [], Beowulf [] ), larger caches, or more CISC-like design (multiple arithmetic units, more instruction decoders, alterable instructions...). In the 1970's I used a CDC Cyber with 2 CPUs and 14 helper processors; there's a lot more that can still be done with existing tech.
    • Existing chip packages are large. They could be made smaller [].
    • Chip packages can be actively cooled with
    • Or make your computer the size of a building and tuck it into a warp bubble [], so it can be small to our perception...although I don't know if inertia would let you put a handle on it and move it easily...

    does that answer the question?

    The article referenced does not appear to relate to the topic of making chips of any kind in three dimensions.

    Also, as was pointed out in the comments, frequency-domain multiplexing of the type described doesn't let you build a computer.

    Re. optical computers in general, there are also strong limits on how small you can shrink the feature size on optical devices, as photons will leak through the walls of the waveguides if they are made too small, and your photons will damage the device if you shorten the wavelength too much.

  • anyone, is there such a thing as a thermal superconductor??

    If I understand correctly, electrical superconductors are also thermal superconductors, though the converse isn't true (thermal superconductors don't have to be electrical superconductors).

    I could be mistaken about this, but I've seen references in a couple of places.

    Re. thermal superconductors, I remember seeing a reference to "superconductor-like behaviour" being observed at room temperature. I was told that this was thermal superconduction, though I have no way to substantiate this rumour.

    Can anyone familiar with the original article pass on what "superconductor-like behavior" means?

  • A relatively unexplored area, considering its complexity.

    Certainly, human brain cells are relatively huge (and really watery).

    I'm not talking physical media, but rather transmission speed/method.

    (i.e. as exemplified in neurotransmission.)

    Anyone out there with some relevant info ?
  • I could have been looking at the field oxide, but I don't think so. But perhaps it wasn't the gate oxide either. I have a hard copy of the picture somewhere, if I get a chance I will scan it and you can take a look.


The intelligence of any discussion diminishes with the square of the number of participants. -- Adam Walinsky