UK Scientists Make Transistor One Atom Long, 10 Atoms Wide

Bibek Paudel points out a story about the latest step forward in the development of nano-scale circuits. Researchers from the University of Manchester have created some of the smallest transistors ever, measuring only one atom by 10 atoms. The transistors are made out of graphene, which has the potential to replace silicon in the never-ending hunt for smaller computer technology. From NewScientist: "There are other kinds of prototype transistors in this size range. But they usually need supercooling using liquid gas, says Novoselov. The new graphene devices work at room temperature. Such prototypes are typically made by building one atom at a time, or wiring up individual molecules. Those approaches are complex and impractical, Novoselov says. By contrast, the graphene transistors were made in the same way that silicon devices are, by etching them out of larger pieces of material. 'That's their big advantage,' he says."
  • Science or Magic (Score:5, Interesting)

    by whisper_jeff ( 680366 ) on Friday April 18, 2008 @08:38AM (#23115952)
    Seriously, sometimes I feel the line between science and magic gets fuzzy. A transistor one atom by 10 atoms? That's a scale so small, and so hard to comprehend, that it'd almost be easier to hand-wave it and just say "it's magic."
  • Re:Science or Magic (Score:4, Interesting)

    by UnHolier than ever ( 803328 ) <unholy_&hotmail,com> on Friday April 18, 2008 @08:55AM (#23116064)
    Worse: graphene being 1 atom wide? Graphene [wikipedia.org] is a planar sheet with a honeycomb lattice, and I fail to see how you can make a honeycomb lattice 1 atom wide. Unless what they mean is 1 atom thick, but then this is a 1 atom x 10 atoms x 10^6 atoms transistor, which isn't quite the same thing.
  • by JustinOpinion ( 1246824 ) on Friday April 18, 2008 @09:51AM (#23116582)
    In Eric Drexler's [wikipedia.org] book "Nanosystems" [amazon.com], he carefully analyzes questions like this regarding the possible failure modes of atomically precise devices. The book goes through the math in detail. The short answer is that even with fairly pessimistic assumptions (e.g. that a single-atom defect created during manufacture, or afterwards by cosmic rays or other radiation, will completely destroy a particular functional sub-unit), you can still design highly robust devices.

    The most obvious way is to build in some level of redundancy. Naively, you can have dozens of redundant sub-units and use something like "majority voting" to pull the signal out of the noise; in practice there are more elegant ways to do this (e.g. error correction). Many modern chips do indeed have some redundancy, so that even with manufacturing defects the chip still runs (perhaps with some reduction in functionality). Organizing the chip so that failsafe checks occur during operation is certainly possible (a rough voting sketch follows this comment).

    Again, check out the book for more details. The point is that these questions have been thought about and they are not insurmountable. The rate of defects generated from spurious environmental damage (e.g. cosmic rays) is low enough that it can be overcome with fairly straightforward engineering.
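    A minimal Python sketch of the "majority voting" idea described above: each redundant sub-unit reports a bit, a made-up 10% defect rate occasionally flips one, and a nine-way vote recovers the intended value far more often than any single unit would. The replica count and defect rate are illustrative assumptions, not figures from Drexler's book or the article.

        import random

        def faulty_unit(true_bit, defect_rate=0.1):
            """One redundant sub-unit; a defect flips its output with probability defect_rate."""
            return true_bit ^ 1 if random.random() < defect_rate else true_bit

        def majority_vote(outputs):
            """Return the bit reported by more than half of the redundant sub-units."""
            return 1 if sum(outputs) * 2 > len(outputs) else 0

        trials, replicas, errors = 100_000, 9, 0
        for _ in range(trials):
            outputs = [faulty_unit(1) for _ in range(replicas)]
            if majority_vote(outputs) != 1:
                errors += 1

        # A single unit is wrong 10% of the time; a nine-way vote is wrong only
        # when five or more units fail at once, roughly 0.09% of the time.
        print(f"voted wrong in {errors} of {trials} trials")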
  • by somersault ( 912633 ) on Friday April 18, 2008 @12:06PM (#23118598) Homepage Journal
    Well, for the cases where performance matters (games, 3D rendering, CAD, simulation, etc.), all it would take is for one company to spend the extra money, and they would reap the benefits in better sales. Then the others would have no choice but to follow suit. Hopefully those types of companies actually do put some thought into the performance of their code anyway.

    Having said that, I do agree that it's good to try to keep the purpose of code as clear as possible through sensible variable names and constructs, and a good compiler should be capable of optimising a lot of code anyway, but that's no excuse for being lazy and using more resources than you actually need.
  • by bheer ( 633842 ) <{rbheer} {at} {gmail.com}> on Friday April 18, 2008 @01:53PM (#23120272)
    Note - I am not a car fanatic. I like cars that take me from A to B reliably, and with decent handling. The problem is, I bought a car so that I'd not have to worry about the distance between A and B.

    For work, I drive about 20 minutes - less than 20 miles. But the thing is, I'd like to use the same car for longer drives; the notion of having a commute-only car and a separate car for everything else is not very appealing to me. As for the EV1, I'd love to see an ROI analysis of the cost of putting up the fuel infrastructure for a car that only urban and suburban Americans could use 4 days a week (because they could be driving out/in on Fridays/Mondays).

    That said, you should look at the Tesla -- it's supposed to do 220 miles on a single charge. And if they can get their transmission issues sorted out, it's going to make electric cars not just eco-friendly, but *desirable*.
  • by Anonymous Coward on Friday April 18, 2008 @03:03PM (#23121176)
    Note, however, that these are transistors, not flash cells, so the ramifications are quite different from the iPod comparison. These aren't bulk storage; they're plain old flip-flops like those that make up the registers and cache in a CPU. You need to power them or they'll lose their data.

    In other words, imagine a CPU that has no cache at all... because it's got a few hundred GiB of main memory (I use that lower figure because it's more reachable; besides, the chip isn't a whole square inch and you'll want to devote a lot of it to logic other than memory). Main memory running at the same speed as the CPU. No fancy bus issues, no clock multipliers. In other words, no more von Neumann bottleneck; two layers of the modern memory hierarchy vanish in a puff of new technology. Among other things, this changes the way certain software is written: since you no longer have a cache-thrashing penalty, you no longer need to worry about locality of reference, and suddenly you find radix sort performing better than quicksort (a toy sketch follows this comment) and certain list fragmentation penalties go away.

    After that, consider the effects of having such huge amounts of high-speed memory. A very complicated aspect of OS design suddenly becomes very simple, because you no longer really need to worry about swapping stuff out to disk. You can just use a huge page size and a fast and simple (but space-inefficient) algorithm like the buddy system (also sketched below). You can also have a separate chunk of memory on the CPU die, say a modest 64 GiB, hooked into the system bus for devices to talk to, with a crossbar so the CPU can still access it at full speed, so certain I/O scheduling issues also become simpler.
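    As a concrete illustration of the radix-sort point above, here is a toy least-significant-digit radix sort in Python. It does one stable, sequential counting pass per base-256 digit, with no comparisons and no attention to locality of reference; on the hypothetical flat, CPU-speed memory the comment describes, those linear passes are the whole cost. This is a sketch for clarity only, not a benchmark of any real or proposed hardware.

        import random

        def radix_sort(values):
            """Least-significant-digit radix sort for non-negative integers, base 256."""
            out = list(values)
            if not out:
                return out
            max_value = max(out)
            shift = 0
            while (max_value >> shift) > 0:
                # Stable counting pass on the current 8-bit digit.
                buckets = [[] for _ in range(256)]
                for v in out:
                    buckets[(v >> shift) & 0xFF].append(v)
                out = [v for bucket in buckets for v in bucket]
                shift += 8
            return out

        data = [random.randrange(10**6) for _ in range(10_000)]
        assert radix_sort(data) == sorted(data)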
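    And for the buddy system mentioned above, here is a small Python sketch of the bookkeeping: one free list per power-of-two order, blocks split on allocation and merged with their buddy on free. The arena size, minimum block size, and use of plain offsets in place of addresses are illustrative assumptions, not details of any particular OS.

        MIN_ORDER = 4    # smallest block: 2^4 = 16 bytes
        MAX_ORDER = 10   # whole arena:   2^10 = 1024 bytes

        class BuddyAllocator:
            def __init__(self):
                # One free set per order; initially a single max-order block at offset 0.
                self.free = {order: set() for order in range(MIN_ORDER, MAX_ORDER + 1)}
                self.free[MAX_ORDER].add(0)
                self.allocated = {}  # offset -> order, remembered for free_block()

            def alloc(self, size):
                order = MIN_ORDER
                while (1 << order) < size:
                    order += 1
                # Take the smallest free block that fits, splitting on the way down.
                for o in range(order, MAX_ORDER + 1):
                    if self.free[o]:
                        offset = self.free[o].pop()
                        while o > order:
                            o -= 1
                            self.free[o].add(offset + (1 << o))  # give back the upper half
                        self.allocated[offset] = order
                        return offset
                raise MemoryError("arena exhausted")

            def free_block(self, offset):
                order = self.allocated.pop(offset)
                # Coalesce with the buddy block for as long as it is also free.
                while order < MAX_ORDER:
                    buddy = offset ^ (1 << order)
                    if buddy not in self.free[order]:
                        break
                    self.free[order].remove(buddy)
                    offset = min(offset, buddy)
                    order += 1
                self.free[order].add(offset)

        b = BuddyAllocator()
        a1 = b.alloc(100)    # rounded up to a 128-byte block
        a2 = b.alloc(30)     # rounded up to a 32-byte block
        b.free_block(a1)
        b.free_block(a2)
        assert b.free[MAX_ORDER] == {0}   # everything coalesced back into one block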

"I don't believe in sweeping social change being manifested by one person, unless he has an atomic weapon." -- Howard Chaykin

Working...