Interviews: Ask Technologist Kevin Kelly About Everything
Kevin Kelly has for decades been involved in some of the most interesting projects I know about, and in his roles as founding editor (and now editor at large) of Wired Magazine and editor of The Whole Earth Catalog has helped spread the word about many others. Kelly is probably as close to a Renaissance man as it's possible to be in the 21st century, with more-than-passing interest and knowledge in a range of topics, from genetic sequencing and other ways we can use measurement in pursuit of improved health to how technology is used and reused in real life. Among other projects, he's also the founder of CoolTools, which I consider (unsurprisingly) the closest current equivalent to the old Whole Earth Catalogs. (Disclaimer: I've had a few reviews published there, too.) (He's also one of the founders of The WELL, now part of Salon.) Kelly is also Secretary of the Board of Directors of the Long Now Foundation, the group that has for years been designing a clock to ring 10,000 years in the future. Below, ask questions of Kelly, bearing in mind the Slashdot interview guidelines: ask as many questions as you want, but please keep them to one per comment. He'll get back soon with his answers.
difficultiesd (Score:5, Funny)
the SeaMicro servers handled the load with no difficultiesd
Hmm... now there's a daemon you really don't want to see running...
Re: (Score:2)
This is a tried and true method to use less power per server.
vim /etc/inittab
find :
id:5:initdefault:
and change to:
id:0:initdefault:
Reboot server. - Done.
- Dan.
DEC patents bloooming after Intel stole/bought. (Score:1, Interesting)
I was looking at my DEC Alpha today when this story came up, and I pondered how Digital could ever have been replaced by Intel and AMD after being acquired and stifled by not one but two acquirers, Compaq and HP. Back in 1997, Digital already had a 64-bit 1GHz DEC Alpha on the EV6 line in lab testing, and despite its being an American-fabricated semiconductor from an all-American company, how is it that the industry is now destitute, unable to ever again fabricate its own semiconductors ever si
Re: (Score:1)
Oh, good. The Alpha fanboys have made an appearance!
What is with you old farts? The Alpha was a good chip at the time, but that was a long fucking time ago. Intel took the goodness out and it's in other, much superior designs now.
Mostly, those of us who aren't old farts are sick of fucking hearing about the god damn Alpha. Jesus Tapdancing Christ.
Re: (Score:2)
Actually, didn't AMD get the goodness out and come up with HyperTransport, a much better system bus than Intel had for years?
Re: (Score:1)
Actually, the Alpha was a ripoff of an architecture designed by California RISC Systems in 1983. There is nothing new here that I'm not fully aware of. Now if you'll excuse me, the cheese on my nachos is getting cold.
Re: (Score:2)
And now that The United States is in charge, America is being liquidated.
Watch the YouTube video "Confessions of an Economic Hitman"; it's worse than most know...
We talk about this need a lot at work. (Score:2)
We have quite a few machines in the server room, and we have constant problems keeping the room cool. But ultimately many of the boxes really don't need that much CPU power - they have a fairly simple job that they need to do. We have speculated about using an old laptop on AC power for some of the jobs that don't require a lot of CPU and don't require a lot of disk space.
These servers sound like they would work quite a bit better for this purpose, however.
Re: (Score:1)
Isn't that the use case for consolidating your servers into less hardware using virtual machines?
Re: (Score:1)
Why is virtualisation so good? We have
- Multi-tasking o/s that generally have resource allocation/partitioning/caps etc
- Database software that runs multiple instances/databases/schemas
- Web servers that run multiple sites on multiple threads
So why is virtualisation better than stuffing your physical servers to capacity, rather than adding the overhead of multiple o/s?
Seems like the logical end-point will be a single-process O/S run off a mega hypervisor, which is almost indistinguishable
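The first bullet above (a multi-tasking OS already provides resource allocation and caps, no hypervisor needed) can be sketched in a few lines. This is a minimal POSIX illustration using Python's stdlib `resource` module; the 1 GiB cap is an arbitrary illustrative figure, not anything from the thread:

```python
# A multi-tasking OS can cap a single process's resources without any
# hypervisor. POSIX-only sketch; the 1 GiB figure is arbitrary.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)

one_gib = 1 << 30
# A process may lower its own soft limit without special privileges.
resource.setrlimit(resource.RLIMIT_AS, (one_gib, hard))
capped_soft = resource.getrlimit(resource.RLIMIT_AS)[0]

try:
    # Ask for 2 GiB against a 1 GiB address-space cap.
    data = bytearray(2 * one_gib)
    print("allocation succeeded (platform ignored the cap)")
except MemoryError:
    print("allocation denied by the per-process cap")

# Restore the original limit so the rest of the process is unaffected.
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

The same idea scales up on real multi-tenant hosts via mechanisms like cgroups, which is roughly the alternative the comment is arguing for.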
Re: (Score:1)
The reasons for this vary depending on who is using them and why, but one possibility is someone wanting the mail server on a separate machine from the web server, yet working at a company small enough that it doesn't make sense to have a physical machine for each (in the sense of too much money/power consumption for a machine that will hardly do anything)... You virtualize the two machines and put them running on a single actual server.
This allows you to have different access levels and security restrictions fo
Re: (Score:1)
Anyway, thanks for the reply.
Re: (Score:2)
Also, by keeping the tasks on separate (virtual) machines you reduce the chance of a configuration change for one app having unexpected effects on another. Shared libraries can be an issue too - some commercial app vendors will not support you if you don't have exactly the right library versions, and those exact versions might no
Re: (Score:1)
For us the reasons to go virtual (since 2006 iirc), in no particular order
- reduce power usage and cooling
- less hardware to manage
- more efficient use of hardware
- faster deployment of new servers
- separation of applications (mainly a problem on Windows; applications simply don't co-exist in a nice way on the same server)
We're not a major player by any standards, but we now have 10 hosts running a total of 380 VMs. Lots of room for more VMs in there still, but it's designed for redundancy; 5 of the hosts
Re: (Score:2)
Host clustering, for one - you make a cluster of host servers, and you can move the guests between them as necessary, in most cases without a single packet lost - yes, that's live migration to another physical machine.
You can even keep a live copy of a running VM in sync on a different physical host, so that even if a physical host crashes, the service can transparently flip to the copy with everything intact, including in-flight transactions. Pretty impressive, really.
And, of course, capacity expansion is
Re: (Score:2)
Fundamentally, because it's easier to manage a bunch of machines doing one thing each, than one machine doing a bunch of things.
Re: (Score:2)
Re: (Score:2)
So why is virtualisation better than stuffing your physical servers to capacity rather than adding overhead of multiple o/s
You're forgetting you can make virtualization hosts that are not identical.
I have a box with a very elaborate disk array, another box that has a very elaborate high-speed network, and another that's just a slush server holding little stuff that takes up space. So... The web cache lives on the box with the high-speed network, and the database lives on the fast-ethernet-connected box with the elaborate disk array. Also, darn near without telling the end users, I can almost instantly and transparently move ima
Re: (Score:2)
Like RAM in the 90's, the limiting reagent for a lot of server rooms is the strength of your AC. I wonder if we'll see major leaps forward in computer cooling, or oil-bath server rooms, or server rooms perpetually doused in extra dense gas.
Re:leaps in cooling (Score:1)
I wonder if we'll see major leaps forward in computer cooling, or oil-bath server rooms, or server rooms perpetually doused in extra dense gas.
I'm pretty sure bringing liquid cooling to the server room on large scale is the next thing in this arena. Immersion in a dielectric coolant is one way to go. I've worked with Hardcore Computer and their Liquid Blades a bit, and am optimistic: http://www.hardcorecomputer.com/servers/liquid-blade/index.html [hardcorecomputer.com]
Re: (Score:2)
Virtualization can be an immediate, cheap solution. Since you don't need much CPU on each server, just stock one with RAM and put all the servers on one box. We cut our server room from 20 or more machines down to just 8, each running 4-6 virtual machines. We back up VMs between hosts in case of hardware failure. In 4 years it has run amazingly well. Combining virtualization with low-power multi-core computers would seem to be a winning combination.
Re: (Score:1)
if yer servers are getting hot just go in there with a hose and spray em down a bit.
Re: (Score:1)
Because you can often do in a laptop with 100W what you cannot do in a 1U server with 300W. A laptop is a much more efficient design -- it's the next best thing to a blade server.
Re: (Score:2)
http://www.apple.com/au/macmini/server/specs.html [apple.com]
85W. And you can get a rack for them. http://www.sonnettech.com/product/rackmacmini.html [sonnettech.com]
Re: (Score:3)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
What media-based nonsense? You sound like a bigot. The mini has the same power as any laptop, but in a better form factor for non-mobile use.
512 Atoms in 10U (Score:3)
512 Atoms in 10U doesn't compare that favourably to 480 Opteron cores in 10U (standard 1U, 4-socket 6100s). The Atoms draw (apparently) 2.5kW. That sounds a little low: that's about 4W each. That's plausible for just the chips themselves, but what about the RAM, etc.?
By contrast, the opterons will have a 1kW PSU each for a maximum power draw of less than 10kW, which is 4x as much.
So, is a 2.3GHz opteron core 4x faster than whatever atom cores they use? Quite probably. Though they might use dual core atoms, in which case there are 1024 cores which swings it in favour of the atoms again.
Basically, the article is far too light on details.
But as always, vast arrays of weak processors are likely to be popular in some applications and massively overhyped in others.
The atom isn't an especially efficient CPU. It's low power for x86, but the high end processors have to be very efficient to fit within the thermal envelope.
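The per-core power figures quoted above can be sanity-checked with a quick script. Two assumptions on my part: the 2.5 kW figure covers the whole 10U SeaMicro chassis, and each of the ten 4-socket Opteron 1U boxes can draw its full 1 kW PSU:

```python
# Back-of-envelope check of the per-core power figures quoted above.
# Assumptions: 2.5 kW covers the whole 10U SeaMicro chassis; each of
# the ten 4-socket Opteron 1U boxes draws its full 1 kW PSU rating.
atom_cores = 512
atom_chassis_watts = 2500.0
opteron_cores = 480
opteron_watts = 10 * 1000.0

atom_w = atom_chassis_watts / atom_cores      # ~4.9 W per core
opteron_w = opteron_watts / opteron_cores     # ~20.8 W per core
print(f"Atom ~{atom_w:.1f} W/core, Opteron ~{opteron_w:.1f} W/core, "
      f"ratio ~{opteron_w / atom_w:.1f}x")
```

The ~4x worst-case ratio matches the comparison drawn in the comment, though it says nothing about work done per watt, which is the figure that actually matters.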
Re: (Score:2)
The atom isn't an especially efficient CPU.
Indeed. Subjectively, my dual-core ARM Xoom delivers far more compute power while eating less power than my Atom 450 netbook (Dell mini 10). This gives me hope for the future of the desktop, which as of today usually entails large amounts of heat and noise. In my ideal scenario, Android gets a big chunk of the tablet market, so vendors come up with the novel concept of Android desktop machines. Soon they discover that users are wiping Android and putting on standard Linux in order to run real desktop apps. So th
Re: (Score:1)
I actually have more faith in Android graduating to WIMP than in Ubuntu getting things right. Android seems to be run as a business, with customers' needs, wants, and likes paramount. Ubuntu seems to be run like some hacker's project, where the latest tech shiny wins the day, regardless of whether it's actually useful, user-friendly, feature-complete, and reliable. Cases in point: Grub2, Unity...
My one worry is Google limiting Android in the desktop space to leave room for Chrome OS, even though Chrome OS seem
Re: (Score:2)
Re: (Score:2)
Android's biggest failure is the committee-style update process. Google is running a ship-early, ship-often approach and running headlong into the ultra-conservative test, test, and test some more attitude of the handset makers (who dump their crapware on it) and the phone service providers (who also dump their crapware on it). So Google's buggy code ends up on handsets and never goes away, even though Google updated and fixed the bugs months ago.
I realize the last thing the handset manufacturers want is to end up s
Re: (Score:2)
I think this is more about PC vs network: nobody cares if a handset crashes, which is what Google, and the users, deal with. Even I don't care if my telephone crashes (which happens about once a month). Carriers are more afraid of one buggy device, or a whole series of buggy devices, bringing down / overtaxing their network. This justifies the carriers' validation process. Not the crapware!
Re: (Score:2)
Yep, I'm amazed at how many times a day the Internet crashes because of buggy devices. You'd think someone would do something about it.
That's their excuse, not their reason.
Re: (Score:2)
Android has a very, very long way to go before it can be seriously considered for content creation. Not talking about applications, but just simple things like knowing that when I have an external keyboard connected, it shouldn't pop up the on-screen keyboard. Cursor navigation is also just terrible, although I'll know more tomorrow when I pick up a Bluetooth mouse to see if that helps.
And just simple things like having a "home" button on a browser... Does someone have a patent on it or something?
So we throw Ubuntu or
Re: (Score:2)
"Subjectively, my dual ARM Xoom delivers far more compute power while eating less power than my Atom 450 netbook (Dell mini 10)."
Are you sure about that? I love ARM chips as much as the next guy (heavy Android user), but I get the feeling that most of the processing (be it rendering web pages or whatever) is offloaded to the GPU and other supplementary chips... last gen's ARM chips were barely fast enough to render DivX content in software.
Are there even any tasks on tablets that are particularly CPU-intens
Re: (Score:2)
ARM chips are designed for power efficiency. They kick the crap out of Atoms in that sense. Atoms are just seriously underclocked, old processors.
Re: (Score:2)
But as always, vast arrays of weak processors is likely to be popular in some applications and be massively overhyped in others.
Tilera uses an architecture that eliminates the on-chip bus interconnect, a centralized intersection where information flows between processor cores or between cores and the memory and I/O. Instead, Tilera employs an on-chip mesh network to interconnect cores. Tilera says its architecture provides similar capabilities for its caching system, evenly distributing the cache system load for better scalability.
Interconnect speed has been a longstanding problem for networking.
Tilera is opening up the useful bandwidth on the CPU and, surprise, they can do more work per watt.
Re: (Score:1)
Re:512 Atoms in 10U (Score:5, Informative)
They're using 256 dual-core Atoms, for 512 cores at 1.6GHz. Those 2.3GHz Opterons will do roughly twice the work per core as the Atoms; with eight cores per chip and four chips per box, a 1U box will replace roughly 32 Atoms, requiring 8U to achieve the raw power of the 10U SeaMicro box. Those 32 processors run 80W ACP each, so including memory, disk, and chipset, you're looking at 3-4kW under load, versus 2-2.5kW for the SeaMicro.
So how much will this thing cost you? The CPUs are $500 apiece. $300 for a 1U case. $800 for a board. $60 for a 4GB stick of registered ECC DDR3, times eight per processor. $275 for four 120GB SSDs. You end up around $6K per box. Now, the SeaMicro uses InfiniBand for its internal networking, for communication-intensive tasks. Let's do the same: $900 each for dual-port 40Gbps cards, and another $6K for a 36-port QLogic switch.
That adds up to just over $61K, versus $150K for the SeaMicro box of roughly equivalent performance. For nearly three times the cost, you get maybe a third lower power consumption. At worst you have double the power consumption, an extra 2kW, and say one more for the AC. That's 30 YEARS before you make up the difference in initial cost. Therein lies the problem with SeaMicro's claims. They compare power consumption against hardware two and three generations old, the servers that are going to be replaced. If you actually compare them against new hardware, they're pretty mediocre.
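The 30-year figure can be reproduced with a quick, hedged calculation. The $89K capital gap and the worst-case 3 kW of extra draw (including AC) come from the comment above; the $0.11/kWh electricity rate is my assumption, not from the thread:

```python
# Hypothetical payback period for the SeaMicro's power savings, using
# the worst-case numbers from the comment above. The electricity rate
# is an assumed figure, not from the article.
capex_gap_usd = 150_000 - 61_000      # SeaMicro premium over the DIY build
extra_kw = 2.0 + 1.0                  # 2 kW extra draw + ~1 kW for the AC
usd_per_kwh = 0.11                    # assumed commercial electricity rate
hours_per_year = 24 * 365

annual_savings_usd = extra_kw * usd_per_kwh * hours_per_year  # ~$2,890/yr
years_to_break_even = capex_gap_usd / annual_savings_usd
print(f"~{years_to_break_even:.0f} years to break even")      # ~31 years
```

At a higher electricity rate, or where the power savings avoid a facility upgrade entirely, the payback period shrinks accordingly, which is the counterpoint raised further down the thread.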
Re: (Score:2)
Atom only has a very short time when it will be relevant (some argue it's no longer relevant, and some even say it never was, due to the chipset being the power hog, not the CPU).
SeaMicro tries to abstract or virtualize the whole hardware and make it 'visible' to all the CPUs. Sorta kinda.
Thing is, Atoms still suck as CPUs and the big box is still a power and noise pig.
Nice try, but not really worth the price.
How about 20w xeons? (Score:2)
I wonder how the latest Sandy Bridge-based Xeons would compare to these... The E3-1220L has two cores and a 20W TDP; I'm trying to find one for my home server for the next 5 years or so.
Re: (Score:2)
Re: (Score:2)
They're using 256 dual core Atoms, for 512 cores at 1.6Ghz.
Well, that's not great from the Atom point of view.
Those 2.3Ghz Opterons will do roughly twice the work per core as the Atoms,
That sounds a little low. The clock rate of the Opterons is higher, they are much wider (more superscalar), and they have out-of-order execution. From the old netbook benchmarks, the 1st-gen P3 900 (somewhat comparable to the Opteron core in terms of features, but older now) runs about as fast as the 1.6GHz Atom. I would expe
Re: (Score:2)
Those 2.3Ghz Opterons will do roughly twice the work per core as the Atoms,
That sounds a little low. The clock rate of the Opterons is higher, they are much wider (more superscalar), and they have out-of-order execution. From the old netbook benchmarks, the 1st-gen P3 900 (somewhat comparable to the Opteron core in terms of features, but older now) runs about as fast as the 1.6GHz Atom. I would expect the Opteron cores to go 3-4x as fast, at least.
Entirely possible. I intentionally chose a safe underestimate of the performance difference.
The full 1/2 TB would cost more like EUR16000. Of course at EUR8500 per box, you get 2TB RAM into the 8U. That would be equivalent to having 4GB per atom, which sounds reasonable.
The SeaMicro box I saw priced for $150K only had 1GB of memory per core, 512GB total. My estimate was for double that, 128GB per quad socket server.
$900 each for dual port 40Gbps cards
Every year it astonishes me how cheap cool stuff has got.
I'm not sure where I read that they used InfiniBand internally, but that seems not to be the case. That's likely just an external interface. Internally, they use some custom toroidal design similar to older supercomputers, which is likely where the bulk of their cost goes.
Re: (Score:2)
If your data center has limited power, the savings go far beyond the price of the unit.
Re: (Score:2)
Re: (Score:2)
You know what upgrading your utility service costs? Getting the capital together to replace your data center is more difficult than the case where you're replacing your servers with something that uses less power. Am I missing something? If you don't have the cash, you don't have the cash.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
"Basically, the article is far too light on details."
So is your comment, since you only detail the power consumption. You fail to mention how many Libraries of Congress these systems can process. This is the all-important benchmark on Slashdot, and if you can't tell us this then I hope you don't get modded informative.
I should be modded informative, since I will inevitably inspire someone to respond to this post with the correct answer to "How many Libraries of Congress can this system process?".
Re: (Score:2)
Have you tried the Intel i3 (i5, i7) CPUs? And their chipsets?
I doubted it until I got one. Go touch the heatsink on the i3 (any modern chipset) northbridge. (Do they still call them NBs? Maybe not.) At any rate, it's cold to the touch even with the system doing things! Totally shocked me.
I can turn the fan OFF on my i5 system. I can run 100% fanless for minutes on end, even half an hour or more if I'm doing light office/web work. Even just one gen before (the Core 2 Duos and Quads) were so much hott
Re: (Score:2)
Atom is a dead CPU.
I recently replaced my Atom MythTV server with an i5, and while the i5 is about 5x faster and only uses about 2x the power under load, the i5 CPU and motherboard alone cost more than the complete Atom system.
Re: (Score:2)
Re: (Score:2)
Did the Atom provide sufficient power to run the MythTV server?
It ran fine as an SD PVR for over two years, but it's just too slow for HD transcoding in a reasonable amount of time. It's still running as a Zoneminder server, Zabbix server and game server.
Re: (Score:2)
You can't really gauge the efficiency of Sandy Bridge (the current cycle of Intel i3/i5/i7 CPUs) by their "northbridge" heat. It's true that these chips don't draw much power, but the Sandy Bridge platform has the memory controllers and integrated graphics right in the CPU; you can guess that these two components draw the most power. It's probably still a minor win for power efficiency, but it's not like there's been a huge efficiency breakthrough.
Re: (Score:2)
It's probably still a minor win for power efficiency, but it's not like there's been a huge efficiency breakthrough.
I disagree: one reason why I decided to replace my Atom with an i5 is that the Sandy Bridge systems are extremely power-efficient compared to PCs of a few years ago. I haven't measured the power consumption of mine yet, but online benchmarks show i5 systems with integrated graphics idling at half the power usage of my Atom system.
Re: (Score:2)
Re: (Score:2)
"Have you tried the Intel i3 (i5, i7) CPUs? And their chipsets?
I doubted it until I got one. Go touch the heatsink on the i3 (any modern chipset) northbridge. (Do they still call them NBs? Maybe not.) At any rate, it's cold to the touch even with the system doing things! Totally shocked me.
I can turn the fan OFF on my i5 system. I can run 100% fanless for minutes on end, even half an hour or more if I'm doing light office/web work. Even just one gen
Re: (Score:2)
have you tried the intel i3 (i5, i7) cpus? and their chipsets?
I have an i7 desktop (first-gen 975). That thing can chuck out heat on full. I love the i7s because of the really high per-thread performance. That makes them fantastic for developing stuff on and running small quantities of batch jobs with a quick turnaround. I also love my AMD compute servers.
I've never tried disconnecting the fan. It's connected to one of those sealed all-in-one water cooling units.
Re: (Score:2)
I doubted it until I got one. go touch the heatsink on the i3 (any modern chipset) northbridge. (do they still call them NB's? maybe not.)
There isn't really a northbridge in a LGA1156 or LGA1155 system. The main functions traditionally provided by a northbridge are the memory controller, the high speed IO (usually used for the graphics card though it doesn't technically have to be these days) and the integrated graphics (if present). With LGA1156 and LGA1155 these functions are integrated into the CPU.
The chip you are feeling is probably the PCH, which is essentially the equivalent of a southbridge. Southbridges always ran much cooler than nor
Re: (Score:2)
Tilera and memory bandwidth? (Score:2)
Last I looked at Tilera's offerings, their core count to memory controller ratio was really high. It seemed to be really focused on purely streaming data applications, like packet inspection or video conversion.
Anybody know whether the way Facebook is using them is actually memory-constrained, or whether it's just low-power enough not to matter when 80 quadjillion requests are eventually handled quickly enough, regardless of actual latency?
Also, lack of ARM is disappointing.
Re: (Score:2)
ARM is not i686 compatible.
That's the ONLY real advantage of Atom: it runs binary code that 'everyone already has'.
ARM only works well if you own the source. Most data centers do not (not all of it, anyway).
Re: (Score:2)
ARM is not i686 compatible.
Neither are Tilera's processors. Being x86-compatible is often unnecessary in the non-PC, non-Windows world. If you're running a web server farm with Linux boxes and server software written in Java, Perl/Python/PHP, or even C/C++, the processor architecture matters very little. And that's the way it should be.
Re: (Score:2)
Re: (Score:3, Insightful)
Thank you! Yet another good reason to avoid Oracle.
Re: (Score:2)
Unfortunately, if you plan to run the Oracle JVM, yes it does matter. Only a few {operating system, architecture} tuples are supported. For example, no {openbsd, sparc}.
True. One of the big disappointments of Java is how few platforms this supposedly cross-platform software actually runs on. Sun really dropped the ball on that aspect. Meanwhile, scripting languages like Perl run on many more platforms, even though cross-platform compatibility was never really their main goal.
nice things (Score:1)
Price of Quanta's Tilera servers? (Score:2)
I've been googling around; their site is just a "Contact us" sort of sales deal, and no other hit seems to mention a price, as can sometimes be found in articles about headlining sales.
Anybody in the know here?
details of the CPUs? (Score:2)
That's something else I'd like to know.
It's like, "We don't know anything about pricing yet. But trust us, we'll do it right."
And, then, "We know everything you need to know about the CPU so you don't need to. Trust us. We'll do it right."
Or is it, "If you have to ask about price, you needn't bother." followed by, "If you have to ask about the CPU's instruction set, register count, etc., you needn't bother."
I know, CPU is so passe, now, but I'd still like to know. (Yeah, and money is also passe.)
Re: (Score:2)
Ah, one of the comments on the article linked to the original paper: http://gigaom2.files.wordpress.com/2011/07/facebook-tilera-whitepaper.pdf [wordpress.com]
This board uses the older 32-bit model CPU, 64 registers, 3-way VLIW, 5-deep pipeline, in-order, 64KB of L2 per core, 24KB of L1, cache coherence with an automatic subscription model and all the L2 cache combining to the effective L3 cache, 2 user-space mesh interconnect lines available between cores, full MMU with up to 64GB physical memory (still 32 bit, so 4GB per
Re: (Score:2)
Tilera describes their instruction set as "64-bit VLIW with 64-bit instruction bundle", so that doesn't sound quite like stock MIPS if they truly mean multiple independent parallel instructions per 64 bit instruction word. However, MIPS is also desirable for their low-power features, so their licensing might have been for something along those lines.
Re: (Score:2)
Is that why my firefox crashes daily ? (Score:2)
Please, Mozilla, go back to real CPUs. And to fixing bugs, instead of playing with fancy servers.
Hmm... SGI Was Right About It (Score:2)
Just a bit too early...
Proofreading... (Score:1)
Re: (Score:3)
Neither are coherent topics.
Am I the only one who sees that the blurb at the top is about asking some guy at Wired questions, yet all the comments are about low-powered chips used in servers?
Re: (Score:3)
Re: (Score:1)
It appears the comment stream got merged with the article that discussed Mozilla's use of SeaMicro servers [slashdot.org]. Oddly enough, the Mozilla article now has no comments...
This ^ . I was replying to the low-powered servers article, not anything about some technologist named Kevin. I was referring to the fact that the article synopsis misspelled the word "at", instead saying "st", hence the intentional typo in my post.
but what about.... (Score:1)
http://tech.slashdot.org/story/11/07/26/1324212/Microsoft-Suggests-Heating-Homes-With-Data-Furnaces [slashdot.org]
How Frequently Do You Kill or Leave a Project? (Score:3)
Long now clock (Score:3)
What insights can you provide to the /. crowd about building the clock?
Project management anecdotes about the clock project?
Re: (Score:2)
get someone to finance it with big big big big big big money.
Yes, I was mystified why Rolex etc. isn't paying all the bills, or, if they are, why they aren't showing up on the site.
I can almost see the magazine ads now. It's like a perfect made-for-PR puff-piece moment. Probably even get some journalist coverage that way.
I bet Rolex only has to sell a handful of those "presidents" or "submariner" things to finance the whole LN project... That's how it is with a luxury product.
The 10,000 year clock (Score:2)
how much money do technologists make? (Score:2)
and how do i become one?
So... (Score:2)
Everything is complex (Score:2)
Where did everything come from?
I feel everything is too far away; is this a common perception?
There seems to be a lot of useless stuff around; are you happy with the composition of everything?
If we could change the form of everything, what should we change it to?
Does this make me a troll?
Long-term thinking (Score:1)
In 10,000 years... (Score:1)
Will you please do a story about... (Score:2)
... the irony of technologies of abundance in the hands of those thinking in terms of scarcity?
http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
Why "exotropy"? (Score:2)
I still cite Out of Control as the most readable introduction to the oft-confused subject of complexity, and am right now wading through What Technology Wants but finding it far more forced (sleep-inducing). While I clearly don't disagree with the idea of seeing technology as a partner with humanity [meme.com.au], your newer book reads like you have invested too long in a world constructed from your imaginings and cut back your level of interest in looking at what is actually going on, an interest which seemed to pervade
Re: (Score:1)
I can understand your frustration with What Technology Wants. Having come to similar conclusions as Mr. Kelly on my own, I find the way that he relates the idea rather slow and repetitive. However, the idea that he is asking you to take in and think about is so difficult to explain without sounding like it's taking place in a dorm room that I appreciate what he's trying to do. I find even the name Technium kind of silly, but he could have puzzled over what word to use for years. Better to go with a silly na
Renaissance Man? (Score:1)
Kelly is probably as close to a Renaissance man as it's possible to be in the 21st century, having more-than-passing interest and knowledge in a range of topics from genetic sequencing...
Since when was genetic sequencing a hot topic in the Renaissance? I'd be more impressed if he was fluent in stained glass artistry, music composition, and Latin and Greek.
Serious Question (Score:1)
OK, I'll take TFA seriously and ask a serious question. I have a good idea of the answer, or parts of it anyway, but I'm interested in other viewpoints.
Q: Why don't we have fusion power yet? What are the specific technical, political, economic and social obstacles to replacing dirty fossil fuel and potentially catastrophic nuclear fission power plants with nuclear fusion plants? I know this is kind of a "where's my flying car" question, but I feel that if our society really wanted affordable, practical
value of travel (Score:2)
Tool philosophy for software tools (Score:2)
Re: (Score:3)
Instead of the not-production-ready salt silo, hydroelectricity seems a more realistic solution - either as pumped storage to balance the fluctuating production of photovoltaics/wind, or run-of-the-river as a 24/7 supplier. The latter is the only source of energy for one quite big hosting company here in Germany.
Re: (Score:2)
If your load isn't very computationally expensive, then you can have weaker cores running them "fast enough" at a fraction of the power, rather than a slice of a very fast machine churning through it very quickly while taking lots of power. It's all about performance ratios against power.
But I don't think the product's intent is along the same lines as what people here are discussing, namely consolidating unused computing potential. Rather, it's more about getting machines taking less power while hav