New Silicon-Based Memory 5X Denser Than NAND Flash
Lucas123 writes "Researchers at Rice University said today they have been able to create a new non-volatile memory using nanocrystal wires as small as 5 nanometers wide that can make chips five times more dense than the 27 nanometer NAND flash memory being manufactured today. And the memory is cheap because it uses silicon rather than the more expensive graphite used in previous iterations of the nanowire technology. The nanowires also allow stacking of layers to create even denser 3-D memory. 'The fact that they can do this in 3D makes it highly scalable. We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said."
It has been obvious for years. (Score:3, Interesting)
When we run out of possibilities in shrinking the process we go vertical and take advantage of the third dimension. Moore's law is safe for a good long time.
This tech is still several years out from production but other 3D silicon options are in testing, and some are in production.
When the Z density matches the X and Y density in fifteen years or so we'll be ready for optical or quantum tech.
Re:It has been obvious for years. (Score:5, Insightful)
We don't just go vertical without solving the heat dissipation problem. We already have a hard time dissipating the heat off the surface area of one layer. Now imagine trying to dissipate the heat off of the layer that is trapped between two more layers also generating the same amount of problematic heat. Then try to figure out how to dissipate the heat off a thousand layers to buy you just 10 more years of Moore's law.
Re: (Score:2, Interesting)
Well, at least you have a theoretical possibility of avoiding that problem in SSDs.
Since you are only going to access one part of the memory at a time, the rest could be unpowered. This gives a constant amount of heat to get rid of regardless of the number of layers.
This is of course not possible for CPUs and other circuits where all parts are supposed to be active.
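The parent's point can be sketched as a toy power model. All the numbers below are made-up illustrations, not real SSD specs: the idea is just that only the bank being accessed draws active power, so the active term stays flat no matter how many layers you stack, and only a tiny leakage term grows with layer count.

```python
# Toy model (illustrative numbers, not real SSD figures): only the
# bank being accessed draws active power; idle banks leak a tiny bit.
def active_power(num_layers, banks_per_layer, active_banks=1,
                 active_mw=50.0, idle_mw=0.001):
    total_banks = num_layers * banks_per_layer
    idle_banks = total_banks - active_banks
    return active_banks * active_mw + idle_banks * idle_mw

# The dominant (active) term is identical for 2 layers and 100 layers;
# only the small idle-leakage term scales with the layer count.
p2 = active_power(num_layers=2, banks_per_layer=64)
p100 = active_power(num_layers=100, banks_per_layer=64)
```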
Re: (Score:2)
True, that does help with the SSD, as long as you don't try to write too much data too fast.
Re: (Score:2)
Re:It has been obvious for years. (Score:4, Interesting)
We don't just go vertical without solving the heat dissipation problem
The obvious solution to that: don't generate any heat. Now, where are the room-temperature superconductors I was promised???
Re: (Score:3, Interesting)
Maybe I should patent this "idea" for a transistor; I am probably too late though.
Re: (Score:2)
Re: (Score:2)
The solution is reversible computing [wikipedia.org].
The first solution that comes to my mind (to your hypothetical limiting condition, at least ;D) would be to put leads on destructive logic gates to conserve the unused information electronically. Imagine an AND gate with a rarely used "remainder" bit, for example. Designers could glom on that if they wanted it, or if not lead most of the unutilized results off into a seeding algorithm for /dev/urandom, and the rest (those prone to entropy feedback) into controlling blinky LEDs.
Yeah, that's what they're real
Re: (Score:2)
That would hardly help reduce heat generation in CMOS. At current gate lengths, a significant portion of the heat is generated due to leakage through the channel when the transistor is "off".
Maybe there is some switching device implementable with HTSCs that I am not familiar with, but it still wouldn't apply to silicon devices.
Re: (Score:2)
Yes, it's non trivial. Such a gap would have to be more than a few air molecules wide to allow free flow (avoid turbulence against the edges). This would make the size of your third dimension grow much faster, negating a lot of the proposed benefit in terms of Moore's law scaling. Also, existing air-flow dissipation strategies just wind up heating the nearby air, and trying to dispose of that heat, which means we'd still have a growing problem to deal with ... so even if we've gotten the heat an inch awa
Re: (Score:2)
Where does your heat from the ducts GO? You reach the surface of the device, and now you still have 1000 times the heat to dissipate that you had trouble dissipating with fans/heatsinks/liquid cooling already. And that assumes you can do a PERFECT job of reaching the surface of the device with your strategy.
Well that may be problematic (Score:3, Interesting)
One thing you could run into is heat issues. Remember that high performance chips tend to give off a lot of heat. Memory isn't as bad, but it still warms up. Start stacking layers on top of each other and it could be a problem.
Who knows? We may be in for a slowing down of transistor count growth rate. That may not mean a slow down in performance, perhaps other materials or processes will allow for speed increases. While lightspeed is a limit, that doesn't mean parts of a CPU couldn't run very fast.
Also it
Re: (Score:2, Interesting)
They're all over that. As the transistors shrink they give off less heat. New transistor technologies also use less energy per square nanometer, and there are new ones in the pipe. Not all of the parts of a CPU, SSD cell or RAM chip are working at the same time, so intelligent distribution of the loads gives more thermal savings. Then there are new technologies for conducting the heat out of the hotspots, including using artificial diamond as a substrate rather than silicon, or as an intermediary electr
Re: (Score:2)
If they were making the same spec part that would be fine, but as transistors shrink they cram more into the same space, so total heat flux tends to go up. Also the leakage gets worse too - but that gets offset by lower voltages.
Re:Well that may be problematic (Score:4, Interesting)
This might be a dumb question, but why not have some sort of capillary-esque network with a high heat-capacity fluid being pumped through it? Maybe even just deionized water if you have a way of keeping the resistivity high enough.
Re: (Score:2)
The problem with this kind of proposal is that there is no means to actually build the channels. It is a great idea in theory, and quite obvious (so there is a huge amount of research on it already), but nobody has actually been able to build it.
Re: (Score:2)
Re: (Score:3, Insightful)
L1 CPU caches are shamefully stuck with the laughable 20-year-old 640K meme in rarely noticed ways. Everyone's first thought is about RAM, but remember that CPUs are less change-friendly and would benefit more from tech like 128K * 5 size at the new density improvement.
Our supposedly macho CPUs have only 128K L1 sizes and comparably, absurdly high L2 and L3 [amd.com] sizes to make up.
The current excuse is that cost and die-space constraints keep size-improvements mostly on the L2 and L3 side. Sadly, someone tagge
There's only so much you need (Score:4, Informative)
Cache is not a case where more is always better. What you discover is that it is something of a logarithmic function in terms of amount of cache vs performance. On that scale, 100% would be the speed you would achieve if all RAM were cache speed, 0% is RAM-only speed. With current designs, you get into the 95%+ range. Adding more gains you little.
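The diminishing-returns curve the parent describes falls out of the standard average-memory-access-time formula. The timings below are assumed round numbers for illustration, not measurements of any real part:

```python
# Average memory access time (AMAT) for a single cache level.
# Assumed illustrative timings: 1 ns on a cache hit, 60 ns on a miss.
def amat(hit_rate, cache_ns=1.0, ram_ns=60.0):
    return hit_rate * cache_ns + (1.0 - hit_rate) * ram_ns

# Pushing the hit rate from 95% to 99% saves far less time than
# pushing it from 80% to 95% did -- hence "adding more gains you little".
gain_80_to_95 = amat(0.80) - amat(0.95)
gain_95_to_99 = amat(0.95) - amat(0.99)
```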
Now not everything works quite the same. Servers often need more cache for ideal performance so you'll find some server chips have more. In systems with a lot of physical CPUs, more cache can be important too so you see more on some of the heavy hitting CPUs like Power and Itanium.
At any rate you discover that the chip makers are reasonably good with the tradeoff in terms of cache and other die uses and this is demonstrable because with normal workloads, CPUs are not memory starved. If the CPU was continually waiting on data it would have to work below peak capacity.
In fact you can see this well with the Core i7s. There are two different kinds, the 800s and the 900s and they run on different boards, with different memory setups. The 900s feature faster memory by a good bit. However, for most consumer workloads, you see no performance difference with equal clocks. What that means is that the cache is being kept full by the RAM, despite the slower speed, and the CPU isn't waiting. On some pro stuff you do find that the increased memory bandwidth helps, the 800s are getting bandwidth starved. More cache could also possibly fix that problem, but perhaps not as well.
Bigger caches are fine, but only if there's a performance improvement. No matter how small transistors get, space on a CPU will always be precious. You can always do something else with them other than memory, if it isn't useful.
Re: (Score:2)
Exponential growth doesn't last for ever.
Don't be too sure; the human population has been growing more or less exponentially for a million years.
Re: (Score:2)
Re: (Score:2)
That and a few other catastrophes.
Re: (Score:2)
Tell that to the First World War, the Second World War, famines, plagues, the Mongols and so on.
Re:It has been obvious for years. (Score:5, Insightful)
It's not as obvious as it sounds. Some things get easier if you're basically still building a 2D chip but with one extra z layer for shorter routing. It quickly gets difficult if you decide you want your 6-core chip to now be a 6-layer one-core-per-layer chip. Three or four issues come to mind.
First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack. For a more powerful chip, you'll have more gates, and therefore more heat. You may need to dedicate large regions of the chip for some kind of heat transfer, but this comes at the price of more complicated routing around it. You may need to redesign the entire structure of motherboards and cases to accommodate heatsinks and fans on both large sides of the CPU. Unfortunately, the shortest path between any two points is going to be through the center, but the hottest spot is also going to be the center, and the place that most needs some kind of chunk of metal to dissipate that heat is going to have to go through the center. In other words, nothing is going to scale as nicely as we like.
Second is delivering power and clock pulses everywhere. This is already a problem in 2D, despite the fact that radius (a linear function) scales slower than area and volume. There's so MUCH hardware on the chip that it's actually easier to have different parts run at different clock speeds and just translate where the parts meet, even though that means we get less speed than we could in an ideal machine. IIRC some of the benefit of the multiple clocking scheme is also to reduce heat generated, too. The more gates you add, the harder it gets to deliver a steady clock to each one, and the whole point of adding layers is so that we can add gates to make more powerful chips. Again, this means nothing will scale as nicely as we like (it already isn't going as nicely as we'd like in 2D). And you need to solve this at the same time as the heat problems.
Third is an insurmountable law of physics: the speed of light in our CPU and RAM wiring will never exceed the speed of light in vacuum. Since we're already slicing every second into 1-4 billion pieces, the amazing high speed of light ends up meaning that signals only travel a single-digit number of centimeters of wire per clock cycle. Adding z layers in order to add more gates means adding more wire, which is more distance, which means losing cycles just waiting for stuff to propagate through the chip. Oh, and with the added complexity of more layers and more gates, there's a higher number of possible paths through the chip, and they're going to be different lengths, and chip designers will need to juggle it all. Again, this means things won't scale nicely. And it's not the sort of problem that you can solve with longer pipelines - that actually adds more gates and more wiring. And trying to stuff more of the system into the same package as the CPU antagonizes the heat and power issues (while reducing our choices in buying stuff and in upgrading. Also, if the GPU and main memory performance *depend* on being inside the CPU package, replacement parts plugged into sockets on the motherboard are going to have inherent insurmountable disadvantages).
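The single-digit-centimeters claim is easy to check. The snippet below uses the speed of light in vacuum, so signals in real on-chip wiring would cover noticeably less distance per cycle:

```python
# Distance light travels per clock cycle, in centimeters.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def cm_per_cycle(clock_hz):
    return C_M_PER_S / clock_hz * 100.0

# At 3 GHz, even light in vacuum covers only about 10 cm per cycle;
# real wire propagation is a fraction of that.
d = cm_per_cycle(3e9)
```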
Re: (Score:3, Informative)
First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack.
I'm not saying your point is entirely invalid, however, heat isn't necessarily a problem if you can parallelize the computation. Rather the opposite, in fact. If you decrease clock frequency and voltage, you get a non-linear decrease of power for a linear decrease of processing power. This means two slower cores can produce the same total number of FLOPS as one fast core, while using less power (meaning less heat to dissipate). As an extreme example of where this can get you, consider the human brain -- a m
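The non-linear power claim follows from the usual first-order CMOS dynamic power model, P ≈ C·V²·f. The voltage figures below are illustrative assumptions, not real process numbers:

```python
# First-order CMOS dynamic power model: P = C * V^2 * f.
# Assumption for illustration: halving the clock lets you drop
# the supply voltage from 1.2 V to 0.9 V.
def dynamic_power(cap, volts, freq_hz):
    return cap * volts**2 * freq_hz

one_fast = dynamic_power(1.0, 1.2, 3e9)        # one core at 3 GHz
two_slow = 2 * dynamic_power(1.0, 0.9, 1.5e9)  # two cores at 1.5 GHz
# Same aggregate throughput, but the two slower cores draw less power.
```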
Re: (Score:2)
The problem is for parallel operations, we start to run into Amdahl's Law: http://en.wikipedia.org/wiki/Amdahl's_law [wikipedia.org]
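For reference, Amdahl's law fits in a few lines; the 95%-parallel, 1000-processor figures are just an example:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the processor count.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1000 processors
# yield less than a 20x speedup -- the serial 5% dominates.
s = amdahl_speedup(0.95, 1000)
```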
Re: (Score:2)
Indeed, although if the need for more processing power arises from increasing data sets, Amdahl's law isn't as relevant. (Amdahl's law applies when you try to solve an existing problem faster, not when you increase the size of a problem and try to solve it equally fast as before, which is often the case.)
I wasn't trying to say that we can parallelize every problem -- I was commenting that 3D processing structures might very well have merit, since it is useful in cases where you can parallelize. And those ca
Re: (Score:2)
You can't compare a brain to a computer; they are nothing alike. Brains are chemical, computers are electric. Brains are analog, computers are digital.
If heat dissipation in the brain was a problem, we wouldn't have evolved to have so much hair on our heads and so little elsewhere; lack of heat to the brain must have been an evolutionary stumbling block.
Re: (Score:2)
Hair is insulation against the sun. The reason why Africans have curly hair is to provide insulation while letting cooling air circulate. In colder climates, straight hair still provides enough protection from the sun while letting some air circulate.
Re: (Score:2)
I'm not sure I would call the neurons either analog or digital -- they are more complicated than that. But regardless, both the brain and a computer do computations, which is the important aspect in this case.
Not that brain heat dissipation matters for the discussion (as we already know roughly how much energy the brain consumes), but as far as I can recall, some theories in evolutionary biology assumes that heat dissipation from the head actually has been a "problem".
Re: (Score:2)
But regardless, both the brain and a computer do computations, which is the important aspect in this case
Both an abacus and a slide rule will do computations, too, but they're nothing alike, either. A computer is more like a toaster than a brain or slide rule; you have electrical resistance converting current to heat. The brain has nothing like that (nor does a slide rule or abacus, even though the friction of the beads against the wires and the rules sliding must generate some heat).
Re: (Score:2)
Re: (Score:2)
But it doesn't address the heat problem in electric circuts -- again, it's more like a toaster than a brain. And note that it takes a computer far less time to compute PI to the nth digit than it does the human brain, despite the brain's 3D model and the computer's 2D model.
Re: (Score:2)
That would be because, afaik, most of the brain's power goes into conceptualizing and all kinds of other tasks. Pure math is a very small part of its activities.
But then again, those people whose visual cortex (or whatever the area is called which handles visual data and eye movements) gets involved are simply amazing at maths. Autistic persons can do amazing things as well.
It's a matter of how the horsepower is used, not availability, in the case of brains. For brains it's a very easy task to detect objects, and attach all kinds
Re: (Score:2)
Re: (Score:2)
... consider the human brain -- a massively parallel 3D processing structure. The brain has an estimated processing power of 38*10^15 operations per second (according to this reference [insidehpc.com]), while consuming about 20 W of power (reference [hypertextbook.com])...
Good point. I believe I have solved Moore's Law in computing for some years. I need shovels, accomplices, and every Roger Corman movie.
Re: (Score:2)
Yes, but I'd like to see a human brain run the Sieve of Eratosthenes, or accurately simulate a 3-body orbit, or run a given large-scale cellular automata for more than a couple thousand steps.
There's times when parallel computing is useful, but there's also times when pure "how fast can you add 1 + 1" type calculations are incredibly useful. You can't just abandon linear computation completely.
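For concreteness, here is the Sieve of Eratosthenes mentioned above in a minimal form:

```python
# Sieve of Eratosthenes: return all primes up to and including limit.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Cross off every multiple of i starting at i*i.
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

primes = sieve(30)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```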
Re: (Score:2)
As another poster said, more efficient software / methods / (development process) is probably more important. The problem is that it's hard to balance development with slow, unreliable humans with good design.
Re: (Score:2)
Human brains are FAR from unreliable and slow. We just aren't consciously aware of the insane multitude of seemingly trivial and simple tasks that nonetheless require immense processing power to make happen.
Can you stand on one leg? Can you run, and while running jump and keep running without falling down? Are you able to very delicately almost touch your girlfriend, but not really touch, and she feels that, with all the fingers of your hand, and run your hand down her back, just almost touching?
All tasks of fine motor
Re: (Score:2)
Re: (Score:2)
I'd like to see a human brain run the Sieve of Eratosthenes, or accurately simulate a 3-body orbit, or run a given large-scale cellular automata
Those problems are all very well suited to parallel processing. I wonder if that's what you meant to imply, or if I misunderstood you.
Re: (Score:2)
For some people, quite accurate visualization in 3D space, interactively in realtime, is a very easy task. For most people it isn't.
You could say not everyone's operating system for their brain is the latest version.
Re: (Score:2)
Re: (Score:2)
For low-power but high transistor (or transistor substitute) count stuff like memory, I'm inclined to agree with you.
For processors, afaict the limiting factor is more how much power we can get rid of than how many transistors we can pack in.
Also (unless there is a radical change in how we make chips) going 3D is going to be expensive, since each layer of transistors (or transistor substitutes) will require separate deposition, masking and etching steps.
Re: (Score:3, Funny)
Re:It has been obvious for years. (Score:4, Funny)
Re: (Score:3, Informative)
2D : anything that only has connections in 2 directions. The fact that it's stacked does not change its 2Dness if the layers don't interact in a significant way (a book would not be considered 3D, nor even 2.5D, nor would a chip structured like a book).
2.5D : anything that has connections in 3 directions, but one of the directions is severely limited in what it can connect and which way the wires can run (e.g. you can only have wires straight up, with no further structure)
3D : true 3D means you can etch a
Memory crystals (Score:5, Funny)
Nope, no one saw that one coming.
Re: (Score:1, Funny)
But Captain, the memory crystals are takin' a beatin'! I don't think they'll last much longer! (Scottish accent)
Re: (Score:1, Offtopic)
Fool Me Once... (Score:1)
We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said.
Tell me when it's the head of manufacturing at XYZ Dirt-Cheap-Mass-Produced-Memory Corp saying that, then I'll care.
Re: (Score:2)
I wouldn't be so sure (Score:3, Informative)
"Dirt cheap" isn't here to stay.
Their technology requires polycrystalline silicon, and the demand is increasing much faster than the supply.
China might build more polysilicon factories, but they'll undoubtedly reserve the output for their own uses.
This isn't a new problem, since mfgs have been complaining about shortages since 2006-ish (IIRC).
Re:I wouldn't be so sure (Score:5, Insightful)
If it takes 18 months to bring a plant online, that is pretty much the limit of the market's ability to cope with surprise demand (minus any slack in existing capacity that can be wrung out). For highly predictable stuff, no big deal, the plant will be built by the time we need it; but surprises can and do happen, even for common materials (especially given the degree to which "just in time" has come to dominate the supply chain. This isn't your merchant-princes of old, sitting on warehouses piled high. Inventory that isn't flowing like shit through a goose is considered a failure, with the rare exception of "national security" justified stockpiles or the rare hedge or futures position that is actually stored in kind, rather than in electronic accounts somewhere...)
Re: (Score:2)
Sir! Sir! Yes, you! I have a package for you here; it's a plaque from the "most awesome remarks ever" voting board. Yes, that's right, sign here, initial there. Yes, you too sir. Have a good day, sir.
Re: (Score:2)
You can certainly hire better people faster by throwing more money at them; but that isn't instant either.
The exact shape of the tradeoff curve between time and money varies by enterprise; but it never passes through T=0.
Re: (Score:2)
I'm not sure if this was an economic choice, or if there is some impenetrable-to-mere-laymen solid state physics reason; but they say they are using polycrystalline slices...
Great, it's denser. (Score:4, Funny)
Great, it's denser. Does this mean it now comes in a yellow-white, almost blonde color?
Re: (Score:2)
I'd say it's a safe bet you won't insult anyone on /. with that comment
25x more dense, not 5x more dense... (Score:5, Insightful)
The big question I have for all of these technologies is whether or not it is mass-production worthy and reliable over a normal usage life.
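The headline math behind the "25x, not 5x" complaint is a one-liner: a linear feature shrink from 27 nm to 5 nm should give roughly a (27/5)² ≈ 29x gain in cells per unit area, far more than the 5x in the headline.

```python
# Back-of-envelope density check: a linear feature-size shrink by a
# factor of k yields roughly k^2 more cells per unit area.
def area_density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

gain = area_density_gain(27, 5)  # roughly 29x, not 5x
```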
But you've got to cool it ... (Score:2)
Re: (Score:2)
Math is easy, it's English that's tricky.
Re: (Score:2)
Ignoring percentages and simply focusing on "X is two times more than Y" meaning X=2*Y, I'm assuming that "X is 1.1 times more than Y" means X=1.1*Y. Does "X is 0.5 times more than Y" mean that X=0.5*Y, that X is actually less than Y? Would this mean that "X is 0 times more than Y" means that X=0?
A useful guide
Here they come... (Score:4, Insightful)
Best Buy and Amazon are both selling Intel's 40 GB flash drive for just under $100 this week... I'm building a server based around it and will likely later post on how that goes. Intel recently announced that they're upping the sizes so you're likely going to see the 40 GB model in the clearance bin soon.
It's here, it's ready... and when you don't have a TB of data to store they're a great choice, especially when you read much more often than you write.
And if you want a big SSD (Score:5, Insightful)
And if you do need a big SSD Kingston has had a laptop 512GB SSD out since May with huge performance, and this month Toshiba and Samsung will both step up to compete and bring the price down. We're getting close to retiring mechanical media in the first tier. Intel's research shows failure rates of SSD at 10% that of mechanical media. Google will probably have a whitepaper out in the next six months on this issue too.
This is essential because for server consolidation and VDI the storage bottleneck has become an impassable gate with spinning media. These SSDs are being used in shared storage devices (SANs) to deliver the IOPS required to solve this problem. Because incumbent vendors make millions from each of their racks-of-disks SANs, they're not about to migrate to inexpensive SSD, so you'll see SAN products from startups take the field here. The surest way to get your startup bought by an old-school SAN vendor for $Billions is to put a custom derivative of OpenFiler on a dense rack of these SSDs and dish it up as block storage over the user's choice of FC, iSCSI or InfiniBand, as well as NFS and Samba file-based storage. To get the best bang for the buck, adapt the BackBlaze box [backblaze.com] for SFF SSD drives. Remember to architect for differences in drive bandwidths or you'll build in bottlenecks that will be hard to overcome later and drive business to your competitors with more forethought. Hint: when you're striping in a commit-on-write log-based storage architecture it's OK to oversubscribe individual drive bandwidths in your fanout to a certain multiple, because the blocking issue is latency, not bandwidth. For extra credit, implement deduplication and back the SSD storage with supercapacitors and/or an immense battery-powered write cache RAM for nearly instantaneous reliable write commits.
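The oversubscription hint can be sketched numerically. Every figure below is an assumption for illustration, not the spec of any real drive or uplink:

```python
# Sketch of the "oversubscribe the fanout" hint: if per-request latency,
# not bandwidth, is the blocking issue, the aggregate bandwidth of the
# drives behind a controller can exceed the uplink by some multiple.
def oversubscription(drive_mb_s, num_drives, uplink_mb_s):
    return (drive_mb_s * num_drives) / uplink_mb_s

# Assumed example: 24 drives at 250 MB/s behind a 1000 MB/s uplink
# gives a 6x oversubscription ratio.
ratio = oversubscription(250, 24, 1000)
```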
I should probably file for a patent on that, but I won't. If you want to then let me suggest "aggregation of common architectures to create synergistic fusion catalysts for progress" as a working title.
That leaves the network bandwidth problem to solve, but I guess I can leave that for another post.
Re: (Score:2)
Re: (Score:3, Informative)
more IOPS than god
God doesn't need any Outputs. It's all one-way traffic with him.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
And you, sir, win the Bullshit of the Day award.
Congrats!
Re: (Score:2)
density isn't everything (Score:1)
Re: (Score:2)
No, yes, maybe.
Re: (Score:2)
So it's more dense than NAND flash (and 3D, wow!), but how does it compare on speed, reliability, and endurance?
Taking a wild-guess here.
TFA states that the 1/0 is stored as a nanowire that is continuous/interrupted (thus not requiring any electric charge).
It seems reasonable to think:
24 nanometers, not 27 (Score:1)
Toshiba has started mass production of 24nm NAND cells. Just saying...
Intel and Micron are already at 25nm in their most recent production lines, Hynix at 26nm.
Only Samsung, albeit the world's first NAND manufacturer, seems to be at 27nm.
What about performance? (Score:2)
so how wide is 5nm? (Score:5, Informative)
Will special glasses be needed to read 3D memory? (Score:5, Funny)
looking for high density ROM to stop digital decay (Score:4, Interesting)
I'm still waiting for some cheap, stable, high-density ROM, or preferably WORM/PROM. Even flash has only about 20 years' retention with the power off. Which sounds like a lot, but it's not all that difficult to find a synthesizer or drum machine from the mid-80s in working condition. But if you put flash in everything, your favorite devices may be dead in 20 years. For most devices this is OK. But what if some of us want to build something a little more permanent? Like an art piece, a space probe, a DSP-based guitar effects pedal, or a car?
Some kind of device with some nano wires that I can fuse to a plate or something with voltage would be nice if it could be made in a density of at least 256Mbit (just an arbitrary number I picked). EPROMs (with the little UV window) also only last for about 10-20 years (and a PROM is just an EPROM without a window). So we should expect to already have this digital decay problem in older electronics. Luckily for high volumes it was cheaper to use a mask ROM than a PROM or EPROM. But these days NAND flash(MLC) is so cheap and high density that mask ROMs seem like a thing of the past, to the point that it is difficult to find a place that can do mask ROMs that can also do high density wafers.
Re: (Score:2)
Time dependent flash sounds like the dream world for planned obsolescence driven corporations...
Re: (Score:2)
My parents were told that they were lucky the clutch and clutch plate in their car could be replaced, because the car is a whopping 16 years old. A different part for a different model had to be fitted by a tech who happened to be able to figure out it would work, then the adjustments needed to be twiddled. If Ford Motor Company has problems with the rate at which parts become obsolete, I don't imagine many CE companies are planning for 20-year serviceability either.
Re: (Score:2)
It is still quite easy to buy replacement parts for 1970s Fords, Chevys and Chryslers; most of them are third-party aftermarket parts. If your parents took their old car to the dealer, that is likely why there were problems acquiring the part, because the dealer was just looking in inventories of OEM parts. An independent mechanic can do a broader search and save you quite a bit of money when fixing an old car.
Re: (Score:2)
Cheap??? (Score:2)
I guess the materials alone don't determine the price, but the expertise/work to put them together. I'm also typing on a computer that's made out of cheap materials (lots of plastic, some alumin(i)um, small quantities of other stuff) - but it didn't come that cheap.
Re: (Score:2)
Sure it did. When the IBM PC came out with a 4 MHz processor and 64 KB of RAM it cost ~$4,000. My netbook has an 1800 MHz processor, 1,000,000 KB of RAM plus a 180 GB hard drive, and it cost ~$300. I'd say that's pretty damned cheap.
Oblig B5 joke (Score:3, Funny)
But when will it be able to do ... (Score:2)
... 18,446,744,073,709,551,616 erase/write cycles?
the new news sucks (Score:2)
Boy, do I miss the old news. The way they would have written this one, for example, would have been to put some large number for the thumb drives, as in:
"USB thumb drives of the future could reach 150 terabytes."
Or something.
Hmm... (Score:2)
Stripping oxygen sounds very similar to the memristor process.
Re:Is anybody writing this down? (Score:4, Insightful)
All we ever see is a drop in the price of USB sticks in the shop, but under the surface the duck is paddling as hard as ever.
Re: (Score:2, Funny)
Re: (Score:3, Funny)
Nope. Microsoft is that stupid dog that keeps laughing at you when you can't shoot the ducks.
Re: (Score:3, Insightful)
how many do we ever actually purchase?
Some. Is that not enough to make it newsworthy?
Re: (Score:3, Informative)
All of the tech we actually purchase comes out of tech published in articles like this one. Processor process technologies, bus evolutions, memory architectures, advancements in lithography are printed here and wind up in the products you buy. Not all of the articles are about successful technologies, but all of the successful technologies have articles, and the time spent reading about the failures is the price we pay to know about such things in advance. Most of us don't mind, because there are lessons in failures
Re: (Score:2)
Every breakthrough you have ever had the opportunity to purchase started out like this.
Re: (Score:2)
I think the criticism here is aimed at the university labs, where people invent stuff using outrageous amounts of money that is difficult or impossible to commercialize.
Absolutely. Commercial labs rarely do this kind of whizz-bang pre-announcement, which means that virtually any story like this is about a technology that is a) still in the lab and b) will never get out.
You have to get to the second page of the article to find out that some tiny tech company no one has ever heard of is "testing" a 1 kilo-bit chip these guys have made. That's right, a whole 128 bytes!
Unsurprisingly, the company is impressed. I was always impressed by the stuff my clients were doing too, w
Re: (Score:2)
Metamods: overrated unmoderated post.
Re: (Score:3, Insightful)
Re: (Score:2)