
Interviews: Ask Technologist Kevin Kelly About Everything

Kevin Kelly has for decades been involved in some of the most interesting projects I know about, and in his roles as founding editor (and now editor at large) of Wired Magazine and editor of The Whole Earth Catalog has helped spread the word about many others. Kelly is probably as close to a Renaissance man as it's possible to be in the 21st century, having more-than-passing interest and knowledge in a range of topics, from genetic sequencing and other ways that we can use measurement in pursuit of improved health to how technology is used and reused in real life. Among other projects, he's also the founder of CoolTools, which I consider to be (unsurprisingly) the closest current equivalent to the old Whole Earth Catalogs. (Disclaimer: I've had a few reviews published there, too.) (He's also one of the founders of The WELL, now part of Salon.) Kelly is also Secretary of the Board of Directors of the Long Now Foundation, the group which for years has been designing a clock meant to run for 10,000 years. Below, ask questions of Kelly, bearing in mind the Slashdot interview guidelines: ask as many questions as you want, but please keep them to one per comment. He'll get back soon with his answers.


  • by bbk ( 33798 ) on Monday July 25, 2011 @06:36PM (#36877698) Homepage

    the SeaMicro servers handled the load with no difficultiesd

    Hmm... now there's a daemon you really don't want to see running...

  • by Anonymous Coward

    I was looking at my DEC Alpha today when this story came up, and I pondered how Digital could ever have been replaced by Intel and AMD after being acquired and stifled by not one but two acquirers in Compaq and HP. Here it was 1997, and Digital already had a 64-bit 1GHz Alpha on the EV6 line in lab testing, and despite it being an American-fabricated semiconductor from an all-American company, how is it that the industry is now destitute of the capacity to ever again fabricate its own semiconductors ever si

    • Oh, good. The Alpha fanboys have made an appearance!

      What is with you old farts? The Alpha was a good chip at the time, but that was a long fucking time ago. Intel took the goodness out and it's in other, much superior designs now.

      Mostly, those of us who aren't old farts are sick of fucking hearing about the god damn Alpha. Jesus Tapdancing Christ.

      • Actually, didn't AMD get the goodness out and come up with HyperTransport and a much better system bus than Intel had for years?

      • Actually the Alpha was a ripoff of an architecture designed by California RISC Systems in 1983. There is nothing new here that I'm not fully aware of. Now if you'll excuse me, the cheese on my nachos is getting cold.

    • And now that The United States is in charge, America is being liquidated.

      Watch the youtube video" confessions of a economic hitman", its worse than most know...

  • We have quite a few machines in the server room, and we have constant problems keeping the room cool. But ultimately many of the boxes really don't need that much CPU power - they have a fairly simple job that they need to do. We have speculated about using an old laptop on AC power for some of the jobs that don't require a lot of CPU and don't require a lot of disk space.

    These servers sound like they would work quite a bit better for this purpose, however...

    • by Anonymous Coward

      Isn't that the use case for consolidating your servers onto less hardware using virtual machines?

      • Genuine question:
        Why is virtualisation so good? We have
        - Multi-tasking OSes that generally have resource allocation/partitioning/caps etc.
        - Database software that runs multiple instances/databases/schemas
        - Web servers that run multiple sites on multiple threads
        So why is virtualisation better than stuffing your physical servers to capacity, rather than adding the overhead of multiple OSes?
        Seems like the logical end-point will be a single-process OS run off a mega hypervisor, which is almost indistinguishable
        • The reasons for this vary depending on who is using them and why, but one possibility is someone wanting the mail server on a separate machine from the web server, yet working at a company small enough that it doesn't make sense to have a physical machine for each (in the sense of too much money/power consumption for a machine that will hardly do anything)... You virtualize the two machines and run them both on one actual server.
          This allows you to have different access levels and security restrictions fo

          • Thanks. While I agree it's a possible or even likely scenario, I'm pretty sure virtualisation in this case would be an unnecessarily wasteful solution. Don't most OSes allow you to apply security at the application/service level? I'm not saying you're wrong that people think like that, but I'd expect admins to know it's not the only way.
            Anyway, thanks for the reply.
            • You could manage security at the per-service level, but often managing it at a per-server level is slightly less hassle, so the VM option reduces admin effort a little in that area.

              Also, by keeping the tasks on separate (virtual) machines you reduce the chance of a configuration change for one app having unexpected effects on another. Shared libraries can be an issue too - some commercial app vendors will not support you if you don't have exactly the right library versions, and those exact versions might no
        • For us the reasons to go virtual (since 2006 iirc), in no particular order
          - reduce power usage and cooling
          - less hardware to manage
          - more efficient use of hardware
          - faster deployment of new servers
          - separation of applications (mainly a problem on Windows; applications simply don't co-exist in a nice way on the same server)

          We're not a major player by any standards, but we now have 10 hosts running a total of 380 VMs. Lots of room for more VMs in there still, but it's designed for redundancy, 5 of the hosts

        • Host clustering, for one - you make a cluster of host servers, and you can move the guests between them as necessary, in most cases with not a single packet lost - yes, that's live migration to another physical machine.

          You can even keep a live copy of a running VM in sync on a different physical host, so that even if a physical host crashes, the service can transparently flip to the copy with everything intact, including in-flight transactions. Pretty impressive, really.

          And, of course, capacity expansion is
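          (A minimal sketch of what triggering such a live migration can look like with the libvirt Python bindings; the host and guest names here are hypothetical, and the posters above don't say which hypervisor they're actually using:)

            import libvirt  # requires the libvirt-python bindings

            # Connect to the source and destination hosts (hypothetical names/URIs).
            src = libvirt.open("qemu+ssh://host-a/system")
            dst = libvirt.open("qemu+ssh://host-b/system")

            # Look up the running guest on the source host.
            dom = src.lookupByName("webserver")  # hypothetical guest name

            # VIR_MIGRATE_LIVE keeps the guest running while its memory pages
            # are copied across; only the final switchover pauses it briefly.
            dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

            src.close()
            dst.close()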

        • by drsmithy ( 35869 )

          Why is virtualisation so good?

          Fundamentally, because it's easier to manage a bunch of machines doing one thing each, than one machine doing a bunch of things.

        • It is simply easier and less fiddly to manage applications when they are all totally isolated from each other. I've found that it's pretty damn complicated (especially if your dumb ass sales guys haven't negotiated a proper maintenance window) to negotiate downtime across multiple customers or constituencies. Applications step on each other, have subtly different requirements and expectations, and generally expect to be the only thing running on a system. For instance you stick multiple applications in t
        • by vlm ( 69642 )

          So why is virtualisation better than stuffing your physical servers to capacity, rather than adding the overhead of multiple OSes?

          You're forgetting you can make virtualization hosts that are not identical.

          I have a box with a very elaborate disk array, another box that has a very elaborate high-speed network, and another that's just a slush server holding little stuff that takes up space. So... the web cache lives on the box with the high-speed network, and the database lives on the fast-ethernet-connected box with the elaborate disk array. Also, darn near without telling the end users, I can almost instantly and transparently move ima

    • by cgenman ( 325138 )

      Like RAM in the 90's, the limiting reagent for a lot of server rooms is the strength of your AC. I wonder if we'll see major leaps forward in computer cooling, or oil-bath server rooms, or server rooms perpetually doused in extra dense gas.

    • by caseih ( 160668 )

      Virtualization can be an immediate, cheap solution. Since you don't need much CPU on each server, just stock one box with RAM and put all the servers on it. We cut our server room from 20 or more machines down to just 8, each running 4-6 virtual machines. We back up VMs between hosts in case of hardware failure. In 4 years it has run amazingly well. Combining virtualization with low-power multi-core computers would seem to be a winning combination.

    • if yer servers are getting hot just go in there with a hose and spray em down a bit.

  • by serviscope_minor ( 664417 ) on Monday July 25, 2011 @06:44PM (#36877784) Journal

    512 Atoms in 10U doesn't compare that favourably to 480 Opteron cores in 10U (standard 1U, 4-socket 6100s). The Atoms draw (apparently) 2.5kW. That sounds a little low: that's about 5W each. That's plausible for just the chips themselves, but what about the RAM, etc.?

    By contrast, the Opteron boxes will have a 1kW PSU each, for a maximum power draw of less than 10kW, which is 4x as much.
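    (Back-of-the-envelope check in Python, using only the figures above; the 12 cores per socket is implied by 480 cores across ten 4-socket boxes:)

      # Quoted figures: 512 Atom cores at ~2.5kW, vs ten 1U 4-socket
      # Opteron 6100 boxes (480 cores) with a 1kW PSU per box.
      atom_cores, atom_watts = 512, 2500
      boxes, psu_watts = 10, 1000
      opteron_cores = boxes * 4 * 12            # 480 cores

      print(atom_watts / atom_cores)            # ~4.9 W per Atom core
      print(boxes * psu_watts / opteron_cores)  # ~20.8 W per Opteron core (PSU ceiling)
      print(boxes * psu_watts / atom_watts)     # 4.0 -- the "4x as much" above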

    So, is a 2.3GHz Opteron core 4x faster than whatever Atom cores they use? Quite probably. Though they might use dual-core Atoms, in which case there are 1024 cores, which swings it in favour of the Atoms again.

    Basically, the article is far too light on details.

    But as always, vast arrays of weak processors are likely to be popular in some applications and massively overhyped in others.

    The atom isn't an especially efficient CPU. It's low power for x86, but the high end processors have to be very efficient to fit within the thermal envelope.

    • The atom isn't an especially efficient CPU.

      Indeed. Subjectively, my dual-core ARM Xoom delivers far more compute power while eating less power than my Atom 450 netbook (Dell Mini 10). This gives me hope for the future of the desktop, which as of today usually entails large amounts of heat and noise. In my ideal scenario, Android gets a big chunk of the tablet market, so vendors come up with the novel concept of Android desktop machines. Soon they discover that users are wiping Android and putting on standard Linux in order to run real desktop apps. So th

      • I actually have more faith in Android graduating to WIMP than in Ubuntu getting things right. Android seems to be run as a business, with customers' needs, wants and likes paramount. Ubuntu seems to be run like some hacker's project, where the latest shiny tech wins the day, regardless of whether it's actually useful, user-friendly, feature-complete, and reliable. Cases in point: Grub2, Unity...

        My one worry is Google limiting Android in the desktop space to leave room for Chrome OS, even though Chrome OS seem

        • ...only if you run CyanogenMod. Have you seen the crap support from vendors?
        • by grumling ( 94709 )

          Android's biggest failure is the committee-style update process. Google is running on a ship-early, ship-often approach and running headlong into the ultra-conservative test, test, and test some more attitude of the handset makers (who dump their crapware on) and the phone service providers (who dump their crapware on). So Google's buggy code ends up on handsets and never goes away, even though they updated and fixed the bugs months ago.

          I realize the last thing the handset manufacturers want is to end up s

          • I think this is more about PC vs. network: nobody cares if a handset crashes, which is what Google, and the users, deal with. Even I don't care if my telephone crashes (which happens about once a month). Carriers are more afraid of one buggy device, or a whole series of buggy devices, bringing down / overtaxing their network. This justifies the carriers' validation process. Not the crapware!

            • by grumling ( 94709 )

              Yep, I'm amazed at how many times a day the Internet crashes because of buggy devices. You'd think someone would do something about it.

              That's their excuse, not their reason.

      • by grumling ( 94709 )

        Android has a very, very long way to go before it can be seriously considered for content creation. Not talking about applications, but just simple things like knowing that when I have an external keyboard connected, it shouldn't pop up the on-screen keyboard. Cursor navigation is also just terrible, although I'll know more tomorrow when I pick up a Bluetooth mouse to see if that helps.

        And just simple things like having a "home" button on a browser... Does someone have a patent on it or something?

        So we throw Ubuntu or

      • "Subjectively, my dual ARM Xoom delivers far more compute power while eating less power than my Atom 450 netbook (Dell mini 10)."

        Are you sure about that? I love ARM chips as much as the next guy (heavy Android user), but I get the feeling that most of the processing (be it rendering web pages or whatever) is offloaded to the GPU and other supplementary chips... last gen's ARM chips were barely fast enough to render DivX content in software.

        Are there even any tasks on tablets that are particularly CPU-intens

        • ARM chips are designed for power efficiency. They kick the crap out of Atoms in that sense. Atoms are just seriously underclocked, old processors.

    • But as always, vast arrays of weak processors are likely to be popular in some applications and massively overhyped in others.

      Tilera uses an architecture that eliminates the on-chip bus interconnect, a centralized intersection where information flows between processor cores or between cores and the memory and I/O. Instead, Tilera employs an on-chip mesh network to interconnect cores. Tilera says its architecture provides similar capabilities for its caching system, evenly distributing the cache system load for better scalability.

      Interconnect speed has been a longstanding problem for networking.
      Tilera is opening up the useful bandwidth on the CPU and - surprise! - they can do more work per watt.
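      (A toy illustration of why a mesh scales where a shared bus doesn't: on a bus every transfer contends with every other, while on an n-by-n mesh the average path only touches a few links. Generic numbers, not Tilera's:)

        # Average Manhattan distance between two random cores on an n x n mesh.
        # Per dimension, E|a - b| for uniform ints in [0, n) is (n*n - 1) / (3*n).
        def mean_mesh_hops(n):
            return 2 * (n * n - 1) / (3 * n)

        for n in (4, 6, 8):  # 16, 36, and 64 cores
            print(n * n, "cores:", round(mean_mesh_hops(n), 2), "avg hops")
        # Hops grow as O(sqrt(cores)), while aggregate link capacity grows with
        # core count -- unlike a single shared bus, which stays one lane wide.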

    • http://www.seamicro.com/node/164 [seamicro.com] Here is the SeaMicro page on the system. It's using the dual-core Atoms.
    • Re:512 Atoms in 10U (Score:5, Informative)

      by wagnerrp ( 1305589 ) on Monday July 25, 2011 @08:40PM (#36878870)

      They're using 256 dual-core Atoms, for 512 cores at 1.6GHz. Those 2.3GHz Opterons will do roughly twice the work per core as the Atoms; with eight cores per chip and four chips per box, a 1U box will replace roughly 32 Atoms, requiring 8U to achieve the raw power of the 10U SeaMicro box. Those 32 processors run 80W ACP each, so including memory, disk, and chipset, you're looking at 3-4kW under load, versus 2-2.5kW for the SeaMicro.

      So how much will this thing cost you? The CPUs are $500 apiece. $300 for a 1U case. $800 for a board. $60 for a 4GB stick of registered ECC DDR3, times eight per processor. $275 apiece for four 120GB SSDs. You end up around $6K per box. Now the SeaMicro uses Infiniband for its internal networking, for communication-intensive tasks. Let's do the same: $900 each for dual-port 40Gbps cards, and another $6K on a 36-port QLogic switch.

      That adds up to just over $61K, versus $150K for the SeaMicro box of roughly equivalent performance. So for nearly three times the cost, you get maybe a third lower power consumption. At worst you have double the power consumption: an extra 2kW, plus say one more for the AC. That's 30 YEARS before you make up the difference in initial cost. Therein lies the problem with SeaMicro's claims: they compare power consumption against hardware two and three generations old, the servers that are going to be replaced. If you actually compare them against new hardware, they're pretty mediocre.
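      (His arithmetic, spelled out in Python; the $0.10/kWh electricity price is my assumption, since he doesn't state one:)

        # Cost of the eight 1U quad-Opteron boxes described above.
        per_box = (4 * 500        # four CPUs at $500 apiece
                   + 300          # 1U case
                   + 800          # board
                   + 4 * 8 * 60   # eight 4GB DDR3 sticks per processor
                   + 4 * 275)     # four 120GB SSDs
        cluster = 8 * per_box + 8 * 900 + 6000   # + IB cards and 36-port switch
        print(per_box, cluster)                  # $6,120 per box, $62,160 total

        # Worst-case payback vs the $150K SeaMicro: 2kW extra draw + 1kW of AC.
        extra_kw, usd_per_kwh = 3.0, 0.10            # price is an assumption
        yearly = extra_kw * 24 * 365 * usd_per_kwh   # ~$2,628/year
        print((150000 - cluster) / yearly)           # ~33 years to break even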

      • Atom only has a very short time when it will be relevant (some argue it's no longer relevant, and some even say it never was, the chipset being the power hog, not the CPU).

        SeaMicro tries to abstract or virtualize the whole hardware and make it 'visible' to all the CPUs. Sorta kinda.

        Thing is, Atoms still suck as CPUs and the big box is still a power and noise pig.

        Nice try, but not really worth the price.

      • I wonder how the latest Sandy Bridge-based Xeons would compare to these... the E3-1220L has two cores and a 20W TDP; I'm trying to find one for my home server for the next 5 years or so.

          • You're looking at maybe 3x the total performance, and 4-5x the single-threaded performance of the Atom, at roughly twice the power consumption.
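            (If those ratios hold, the performance per watt works out like this:)

              # Ratios quoted above: Xeon E3-1220L vs the Atom.
              total_perf, power = 3.0, 2.0   # ~3x throughput at ~2x power
              single_thread = 4.5            # 4-5x single-threaded
              print(total_perf / power)      # 1.5x the work per watt overall
              print(single_thread / power)   # ~2.25x per watt single-threaded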
      • They're using 256 dual-core Atoms, for 512 cores at 1.6GHz.

        Well, that's not great from the Atom point of view.

        Those 2.3GHz Opterons will do roughly twice the work per core as the Atoms,

        That sounds a little low. The clock rate of the Opterons is higher, they are much wider (more superscalar), and they have out-of-order execution. From the old netbook benchmarks, the 1st-gen P3 900 (somewhat comparable to the Opteron core in terms of features, but older now) runs about as fast as the 1.6GHz Atom. I would expe

        Those 2.3GHz Opterons will do roughly twice the work per core as the Atoms,

          That sounds a little low. The clock rate of the Opterons is higher, they are much wider (more superscalar), and they have out-of-order execution. From the old netbook benchmarks, the 1st-gen P3 900 (somewhat comparable to the Opteron core in terms of features, but older now) runs about as fast as the 1.6GHz Atom. I would expect the Opteron cores to go 3-4x as fast, at least.

          Entirely possible. I intentionally chose a safe underestimate of the performance difference.

          The full 1/2 TB would cost more like EUR16000. Of course at EUR8500 per box, you get 2TB RAM into the 8U. That would be equivalent to having 4GB per atom, which sounds reasonable.

          The SeaMicro box I saw priced for $150K only had 1GB of memory per core, 512GB total. My estimate was for double that, 128GB per quad socket server.

          $900 each for dual-port 40Gbps cards

          Every year it astonishes me how cheap cool stuff has got.

          I'm not sure where I read that they used Infiniband internally, but that seems not to be the case. That's likely just an external interface. Internally, they use some custom toroidal design similar to older supercomputers, which is likely where the bulk of their cost goes,

  • If your data center has limited power, the savings go far beyond the price of the unit.

        • If your data center has limited power, you need to replace the data center or upgrade the utility service. Reducing power while ignoring all other costs is foolish and short-sighted. If you buy the traditional servers now, you'll only spend a third the cost of the SeaMicro box, at the expense of increased power consumption. Two years from now, you'll be able to replace all those traditional servers with more traditional servers, achieving the same performance for half the price, now consuming far less power th
          • You know upgrading your utility service costs money? Getting the capital together to replace your data center is more difficult than replacing your servers with something that uses less power. Am I missing something? If you don't have the cash, you don't have the cash.

            • If you don't have the cash to upgrade your utility, you don't have the cash to spend 3x the cost on servers either. It's not like an extra 15A@110V is that hard to come by.
    • "Basically, the article is far too light on details."

      So is your comment, since you only detail the power consumption. You fail to mention how many Libraries of Congress these systems can process. This is the all-important benchmark on Slashdot, and if you can't tell us this then I hope you don't get modded informative.

      I should be modded informative, since I will inevitably inspire someone to respond to this post with the correct answer to "How many Libraries of Congress can this system process?".

    • Have you tried the Intel i3 (i5, i7) CPUs? And their chipsets?

      I doubted it until I got one. Go touch the heatsink on the i3 (any modern chipset) northbridge. (Do they still call them NBs? Maybe not.) At any rate, it's cold to the touch even with the system doing things! Totally shocked me.

      I can turn the fan OFF on my i5 system. I can run 100% fanless for minutes on end, even half an hour or more if I'm doing light office/web work. Even just one gen before (the Core 2 Duos and Quads) were so much hott

      • by 0123456 ( 636235 )

        atom is a dead cpu.

        I recently replaced my Atom MythTV server with an i5, and while the i5 is about 5x faster and only uses about 2x the power under load, the i5 CPU and motherboard alone cost more than the complete Atom system.

        • Did the Atom provide sufficient power to run the MythTV server? If it didn't, then however cheap it may have been is irrelevant.
          • by 0123456 ( 636235 )

            Did the Atom provide sufficient power to run the MythTV server?

            It ran fine as an SD PVR for over two years, but it's just too slow for HD transcoding in a reasonable amount of time. It's still running as a Zoneminder server, Zabbix server and game server.

      • You can't really gauge the efficiency of Sandy Bridge (the current cycle of Intel i3/i5/i7 CPUs) by its "northbridge" heat. It's true that these chips don't draw much power, but the Sandy Bridge platform has the memory controller and integrated graphics right in the CPU; you can guess that those two components draw the most power. It's probably still a minor win for power efficiency, but it's not like there's been a huge efficiency breakthrough.

        • by 0123456 ( 636235 )

          It's probably still a minor win for power efficiency, but it's not like there's been a huge efficiency breakthrough.

          I disagree: one reason why I decided to replace my Atom with an i5 is that the Sandy Bridge systems are extremely power-efficient compared to PCs of a few years ago. I haven't measured the power consumption of mine yet, but online benchmarks show i5 systems with integrated graphics idling at half the power usage of my Atom system.

      • I recently got an i7 4-core MacBook Pro and wonder if I should have got an i5, because this one can hardly do anything without the fan coming on, and is often uncomfortably hot on my lap.
      • "have you tried the intel i3 (i5, i7) cpus? and their chipsets?

        I doubted it until I got one. go touch the heatsink on the i3 (any modern chipset) northbridge. (do they still call them NB's? maybe not.) at any rate, its cold to the touch even with the system doing things! totally shocked me.

        I can turn the fan OFF on my i5 system. I can run 100% fanless for minutes on end, even half an hour or more if I'm doing light office/web work. even just one gen

      • Have you tried the Intel i3 (i5, i7) CPUs? And their chipsets?

        I have an i7 desktop (first-gen 975). That thing can chuck out heat on full. I love the i7s because of the really high per-thread performance. That makes them fantastic for developing stuff on and running small quantities of batch jobs with a quick turnaround. I also love my AMD compute servers.

        I've never tried disconnecting the fan. It's connected to one of those sealed all-in-one water cooling units.

      • I doubted it until I got one. Go touch the heatsink on the i3 (any modern chipset) northbridge. (Do they still call them NBs? Maybe not.)

        There isn't really a northbridge in a LGA1156 or LGA1155 system. The main functions traditionally provided by a northbridge are the memory controller, the high speed IO (usually used for the graphics card though it doesn't technically have to be these days) and the integrated graphics (if present). With LGA1156 and LGA1155 these functions are integrated into the CPU.

        The chip you are feeling is probably the PCH, which is essentially the equivalent of a southbridge. Southbridges always ran much cooler than nor

  • Last I looked at Tilera's offerings, their core count to memory controller ratio was really high. It seemed to be really focused on purely streaming data applications, like packet inspection or video conversion.

    Anybody know whether the way Facebook is using them is actually memory-constrained, or whether it's just low-power enough not to matter when 80 quadjillion requests are eventually handled quickly enough, regardless of actual latency?

    Also, lack of ARM is disappointing.

    • ARM is not i686-compatible.

      That's the ONLY real advantage of Atom: it runs binary code that 'everyone already has'.

      ARM only works well if you own the source. Most data centers do not (not all of it, anyway).

      • by imroy ( 755 )

        ARM is not i686 compatible.

        Neither are Tilera's processors. Being x86-compatible is often unnecessary in the non-PC, non-Windows world. If you're running a web server farm with Linux boxes and server software written in Java, Perl/Python/PHP, or even C/C++, the processor architecture matters very little. And that's the way it should be.

        • Exactly. I guess it's news when a big business discovers the world outside x86, where some of us have been living for years.
  • I got one of those tiny plug-in servers that draws pretty much no power, acting like a NAS on a USB HDD. It does its job very well.
  • I've been googling around; their site is just a "Contact us" sort of sales deal, and no other hit seems to mention a price, as can sometimes be found in articles about headlining sales.

    Anybody in the know here?

    • That's something else I'd like to know.

      It's like, "We don't know anything about pricing yet. But trust us, we'll do it right."

      And, then, "We know everything you need to know about the CPU so you don't need to. Trust us. We'll do it right."

      Or is it, "If you have to ask about price, you needn't bother." followed by, "If you have to ask about the CPU's instruction set, register count, etc., you needn't bother."

      I know, the CPU is so passé now, but I'd still like to know. (Yeah, and money is also passé.)

      • Ah, one of the comments on the article linked to the original paper: http://gigaom2.files.wordpress.com/2011/07/facebook-tilera-whitepaper.pdf [wordpress.com]

        This board uses the older 32-bit model CPU: 64 registers, 3-way VLIW, 5-deep pipeline, in-order, 64KB of L2 per core, 24KB of L1, cache coherence with an automatic subscription model and all the L2 caches combining into an effective L3 cache, 2 user-space mesh interconnect lines available between cores, and a full MMU with up to 64GB physical memory (still 32-bit, so 4GB per

    • Last I checked several months ago, the 256 processor (512 core) version was going for $150K.
  • Please, Mozilla, go back to real CPUs. And to fixing bugs, instead of playing with fancy servers.

  • Just a bit too early...

  • ... is something that's not found st Slashdot.
    • by Jeng ( 926980 )

      Neither are coherent topics.

      Am I the only one who sees that the blurb at the top is about asking some guy at Wired questions, yet all the comments are about low-powered chips used in servers?

      • It appears the comment stream got merged with the article that discussed Mozilla's use of SeaMicro servers [slashdot.org]. Oddly enough, the Mozilla article now has no comments...
        • It appears the comment stream got merged with the article that discussed Mozilla's use of SeaMicro servers [slashdot.org]. Oddly enough, the Mozilla article now has no comments...

          This ^ . I was replying to the low-powered servers article, not anything about some technologist named Kevin. I was referring to the fact that the article synopsis misspelled the word "at", instead saying "st", hence the intentional typo in my post.

  • This sucks, I was planning on heating my home with the heat given off by servers.

    http://tech.slashdot.org/story/11/07/26/1324212/Microsoft-Suggests-Heating-Homes-With-Data-Furnaces [slashdot.org]
  • It looks like you've been involved in many projects. I've got about 10 different side projects (outside of work) going on at any given time, in several different realms. How often do you decide it's time to end a project so that you can focus on a better one? Have any projects that you devoted a lot of time to come to nothing, or have they all come to fruition in one way or another? What are your criteria for this?
  • by vlm ( 69642 ) on Tuesday July 26, 2011 @03:48PM (#36888172)

    What insights can you provide to the /. crowd about building the clock?
    Project management anecdotes about the clock project?

  • Will there be an actual clock completed (other than prototypes) before the 10,000 years are up?
  • and how do I become one?

  • What time is it... really?
  • Where did everything come from?
    I feel everything is too far away; is this a common perception?
    There seems to be a lot of useless stuff around; are you happy with the composition of everything?
    If we could change the form of everything, what should we change it to?

    Does this make me a troll?

  • One purpose of the Long Now Clock is to encourage long-term thinking. Aside from the Clock, though, what do you think people can do in their everyday lives to adopt or promote long-term thinking?
  • Will my bitcoins be worth more than they are now?
  • ... the irony of technologies of abundance in the hands of those thinking in terms of scarcity?
    http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]

  • I still cite Out of Control as the most readable introduction to the oft-confused subject of complexity, and am right now wading through What Technology Wants but finding it far more forced (sleep-inducing). While I clearly don't disagree with the idea of seeing technology as a partner with humanity [meme.com.au], your newer book reads like you have invested too long in a world constructed from your imaginings and cut back your level of interest in looking at what is actually going on, an interest which seemed to pervade

    • I can understand your frustration with What Technology Wants. Having come to similar conclusions as Mr. Kelly on my own, I find the way that he relates the idea rather slow and repetitive. However, the idea that he is asking you to take in and think about is so difficult to explain without sounding like it's taking place in a dorm room that I appreciate what he's trying to do. I find even the name Technium kind of silly, but he could have puzzled over what word to use for years. Better to go with a silly na

    Kelly is probably as close to a Renaissance man as it's possible to be in the 21st century, having more-than-passing interest and knowledge in a range of topics from genetic sequencing...

    Since when was genetic sequencing a hot topic in the Renaissance? I'd be more impressed if he was fluent in stained glass artistry, music composition, and Latin and Greek.

  • OK, I'll take TFA seriously and ask a serious question. I have a good idea of the answer, or parts of it anyway, but I'm interested in other viewpoints.

    Q: Why don't we have fusion power yet? What are the specific technical, political, economic and social obstacles to replacing dirty fossil fuel and potentially catastrophic nuclear fission power plants with nuclear fusion plants? I know this is kind of a "where's my flying car" question, but I feel that if our society really wanted affordable, practical

  • I know that you did a lot of traveling when you were younger (e.g. backpacking in Asia for years). How important was that for your status as a "Renaissance man"? Would you still recommend extensive travel to young people, or has globalization changed the opportunities?
  • What is your philosophy on software tools? Do you prefer to use a lot of small pieces, loosely assembled, using scripts to join things together and get things done, or do you like to find a software suite (such as Office) and work within that?

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...