Technology

Ray Kurzweil's Slippery Futurism

wjousts writes "Well-known futurist Ray Kurzweil has made many predictions about the future in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005), but how well have his predictions held up now that we live 'in the future'? IEEE Spectrum has a piece questioning Kurzweil's (self-proclaimed) accuracy. Quoting: 'Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books, and to have cofounded Singularity University, where executives and others are paying quite handsomely to learn how to plan for the not-too-distant day when those disappearing computers will make humans both obsolete and immortal.'"
This discussion has been archived. No new comments can be posted.

Ray Kurzweil's Slippery Futurism

Comments Filter:
  • by Laxori666 ( 748529 ) on Monday November 29, 2010 @06:29PM (#34380908) Homepage
    I don't agree with his predictions.

    A) It assumes that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

    B) He assumes that if we put enough cyber neurons together in a neural net, we will develop intelligence and consciousness. This may be the case, and it will be interesting to see, but I don't think you can take it for granted. He also spent only about two pages in his book on this from a philosophical perspective, basically: "Here is what three people thought about consciousness. Anyway, moving on..." It seems like it should be a central point.

    C) I think he also assumes that having such massive amounts of computing power will solve all our problems. Has he heard of exponential-time problems, or NP-completeness? Doubling computing power every 18 months equates to adding one city to a traveling salesman problem every 18 months.
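The arithmetic behind point C is worth making concrete. A toy sketch (assuming a brute-force search that checks every route, so the work grows as n!):

```python
import math

def max_cities(ops_budget):
    """Largest n such that brute-force TSP (~n! route checks) fits the budget."""
    n = 1
    while math.factorial(n + 1) <= ops_budget:
        n += 1
    return n

budget = math.factorial(15)   # a machine that can just barely check 15! routes
for _ in range(5):
    print(max_cities(budget))
    budget *= 2               # one 18-month doubling of computing power
# prints 15, 15, 15, 15, 16
```

If anything, "one city per doubling" is generous: in this sketch it takes four doublings (six years of Moore's law) before the 16th city fits the budget.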
  • Punditry Pays (Score:5, Insightful)

    by Infonaut ( 96956 ) <infonaut@gmail.com> on Monday November 29, 2010 @06:32PM (#34380958) Homepage Journal
    The point isn't to be accurate; it's to be engaging. We live in an age in which it is more important to entertain than to inform. Look at all the hack prognosticators in the business and technology press who make a living making predictions – most of them are wildly off the mark but nobody cares enough to go back and call them on their failures.
  • by mangu ( 126918 ) on Monday November 29, 2010 @06:36PM (#34381018)

    Well, Ray Kurzweil seems to me about as effective at predicting the future of technology as Oracle is effective at managing databases.

    This analogy is pretty good, but it's not exactly what some people might imagine.

  • by davester666 ( 731373 ) on Monday November 29, 2010 @06:40PM (#34381066) Journal

    "Claims made about the future were so vague that they can't be wrong."

  • by WrongSizeGlass ( 838941 ) on Monday November 29, 2010 @06:40PM (#34381074)

    Seems like a lucrative field. I bet I could do it! Let me think, ah, in the future... Nope. I got nothin'.

    I predict you'll be modded 'Funny', then 'Overrated' and finally 'Informative'.

  • Re:Oh yeah? (Score:5, Insightful)

    by 0123456 ( 636235 ) on Monday November 29, 2010 @06:42PM (#34381106)

    Why isn't there an equal skepticism about Space Nuttery like Moon colonies, space-based solar power and asteroid mining? They are equally delusional.

    No, they're not, and there was plenty of skepticism about such claims in the 70s, when O'Neill was proclaiming that we could be doing them all in a few years, because they were clearly technologically impossible with any reasonably justifiable amount of money. There's far less skepticism today because we can see that they could be viable in a few decades.

    Similarly, I haven't seen too much wrong with Kurzweil's claims, other than that he expects things to happen within the next few years, rather than the next few decades (or centuries if you're pessimistic).

    I believe Clarke once said something along these lines: near-term predictions are always optimistic and far-future predictions pessimistic, because humans expect linear progress when most things are exponential.
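That asymmetry is easy to see numerically. A sketch with illustrative numbers only: project a quantity growing 50% per period with the "intuitive" straight line through today's rate, versus actual compounding:

```python
rate = 0.5                                # growth observed this period
linear = lambda t: 1 + rate * t           # the straight-line intuition
exponential = lambda t: (1 + rate) ** t   # what compounding actually does

for t in (1, 5, 20):
    print(t, linear(t), round(exponential(t), 1))
```

At t=1 the two forecasts agree almost exactly; by t=20 the linear guess is off by a factor of about 300, which is the "pessimistic about the far future" half of the claim.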

  • by Pinball Wizard ( 161942 ) on Monday November 29, 2010 @06:45PM (#34381140) Homepage Journal

    A) It's not that big of an assumption. The exponential curve in computing power doesn't just go back to the advent of computers; it goes back as far as our ability to perform simple arithmetic. It's an assumption based on our long history of improving methods and fabricating machines to compute. Unless we have capped our ability to invent new methods of computing, it's a fairly safe assumption to make. Our ability to compute is probably not limited by the number of transistors we can pack on a silicon die.

    B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

    C) I don't believe Kurzweil has ever claimed NP-Hard problems would be solved by the exponential increase in computing power.

  • by v1 ( 525388 ) on Monday November 29, 2010 @06:46PM (#34381156) Homepage Journal

    On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology...

    Oh where have I heard that description before.... oh ya, here [wikipedia.org]

  • by Chapter80 ( 926879 ) on Monday November 29, 2010 @06:53PM (#34381222)

    For the 50 millionth time, Bill Gates didn't make any such claim about 637K, 640K or whatever.

    I'm with you. I hate when people exaggerate and mis-attribute claims. Like GWB said, "if I said it once, I said it a hundred zillion times... I hate exaggeration."

  • by Arancaytar ( 966377 ) <arancaytar.ilyaran@gmail.com> on Monday November 29, 2010 @06:57PM (#34381284) Homepage

    Ironically, they will probably be saying this even if they live on the Mars colony.

  • Re:Punditry Pays (Score:4, Insightful)

    by greenbird ( 859670 ) on Monday November 29, 2010 @07:01PM (#34381338)

    The point isn't to be accurate; it's to be engaging... nobody cares enough to go back and call them on their failures.

    And thus we have the modern press/news regime. No need to actually report correct information. Just report what is entertaining whether it's true or not and certainly don't waste any time trying to determine the truth of anything.

  • by mschuyler ( 197441 ) on Monday November 29, 2010 @07:09PM (#34381416) Homepage Journal

    I'm all for criticizing the excesses of Kurzweil, but I don't think the article is up to snuff; it reads like a personal attack on Kurzweil rather than a well-reasoned refutation of his predictions. The author seems to take the position that because Kurzweil wasn't exactly 100% accurate in all the facets of his predictions, he was wrong, and besides, somebody else already thought of it before Kurzweil did anyway. It's a specious hit piece that cherry-picks a couple of examples and doesn't measure up as a serious analysis of Kurzweil's record. Maybe it would be nice if someone actually did that, but this article is nowhere near it.

  • What Futurists Do (Score:5, Insightful)

    by Doc Ruby ( 173196 ) on Monday November 29, 2010 @07:10PM (#34381430) Homepage Journal

    Futurists don't "predict the future". They discuss the past and present, talk about its implications, and get people in the present to think about the implications of what they do. They talk about possible futures. Which of course changes what actually happens in the future. They typically talk about a future beyond the timeframe that's also in the future but in which their audience can actually do something. Effectively they're just leading a brainstorming session about the present.

    This practice is much like science fiction (at least, the vast majority, which is set in "the future" when it's written), which doesn't really talk about the future, but rather about the present. You can see from nearly all past science fiction that it was "wrong" about its future, now that we're living in it, though with some notable exceptions. In fact "futurists" are so little different from "science fiction writers" that they are really just two different names for the same practice for two different audiences. Futurism is also not necessarily delivered in writing (eg. lectures), and is usually consumed by business or government audiences. Those audiences pay for a product they don't want to consider "fiction", but it's only the style that makes it "nonfiction".

    This practice is valuable beyond entertainment. Because there is very little thinking by government, business, or even just anyone about the consequences of their work and developments beyond the next financial quarter. Just thinking about the future at all, especially in terms that aren't the driest and narrowest statistical projections, or beyond their own specific careers, is extremely rare among people. If we did it a lot more we'd be better at it. But we don't, so "inaccurate" is a lot more valuable than "totally lacking". Without futurism, or its even less accurate and narrower form in science fiction, the future would take us by surprise even more. And then we'd always suffer from "future shock", even more than we do now.

    If we don't learn from futurism that it's not reliable, but still valuable, then it's not the fault of futurists. It's our fault for having unreasonable expectations, and failing to see beyond them to actual value.

  • by bloosqr ( 33593 ) on Monday November 29, 2010 @07:24PM (#34381606) Homepage

    Our joke about Kurzweil was that he's someone who didn't take his "series expansion" to enough terms. What he does is look at emergent phenomena, notice the exponential growth curve (which occurs in a variety of phenomena, from biology to physics to even economics), and from that draw the conclusion that everything (or particular aspects of technology, really) will continue to grow exponentially ad infinitum, to a "singularity", etc. This is both intuitively and factually untrue, because of resource and energy constraints (however one wants to define them for a particular problem). The point is, you can look at the same phenomena Kurzweil claims to and notice that new phenomena and technologies only initially look "exponential"; then, for all the obvious reasons, they flatten out further down the time curve, so the curve in the end looks like a sigmoid (given whatever metric you choose). The hard part is figuring out how quickly you'll hit the new pseudo-steady state, but it's certainly absurd to assume it never happens, which is what his absurd conclusions are always based on.
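The exponential-versus-sigmoid point can be sketched numerically (made-up capacity and rate): a logistic curve is nearly indistinguishable from an exponential early on, then flattens as it approaches its resource cap:

```python
import math

def logistic(t, cap=1000.0, rate=0.9):
    """Logistic growth toward a hard cap; early on it looks exponential."""
    return cap / (1 + (cap - 1) * math.exp(-rate * t))

for t in range(0, 16, 3):
    growth = logistic(t + 1) / logistic(t)   # period-over-period growth factor
    print(f"t={t:2d}  value={logistic(t):7.1f}  growth x{growth:.2f}")
```

Early periods show growth near e^0.9 ≈ 2.5x per step, exactly what an extrapolated exponential would predict; by the last rows the growth factor has collapsed toward 1.0 as the curve hits its pseudo-steady state.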

  • by SheeEttin ( 899897 ) <sheeettin@nosPam.gmail.com> on Monday November 29, 2010 @07:38PM (#34381768) Homepage

    We always thought that we could turn off unfriendly robots, but we can't really turn off the internet

    Sure you can. Just take out a few key backbone sites, and there you go. That'll disable a good chunk of the Internet for long enough for you to clean up the rest.

    Or, just lobby (i.e. pay) your Congressman to pass a killswitch bill...

  • by lgw ( 121541 ) on Monday November 29, 2010 @08:01PM (#34381990) Journal

    Nope - if you have "commuter lanes" or some other restricted lanes on your local highways, you'll see it's not a stretch to have those be dedicated to self-driving cars before much longer. The technology is nearly here. The infrastructure (always the hard part) is already here.

  • by HiThere ( 15173 ) <charleshixsn@@@earthlink...net> on Monday November 29, 2010 @08:03PM (#34382008)

    I think you are misunderstanding both the nature and the purpose of his predictions.

    You didn't note that they are essentially unfalsifiable. You should have. If you had, you would have noticed that your first complaint was wrong. They are unfalsifiable for the same reason that the "predictions" of Toffler's "Future Shock" were unfalsifiable. They are a description of potentials, not of things that will happen, but of things that *may* happen.

    I'm not sure that he's wrong in general, but I'm quite convinced that he's not only wrong in detail, but that he expects to be wrong in details. He's describing trends. With trends you don't predict exactly when something will happen, but when to start looking for it, and when it will likely be successful when it appears. This is a sort of mechanistic interpretation of Charles Fort's "Steam Engine time". (A real phenomenon, with an uncertain causality. E.g., three people tried to patent the telephone in, I believe, the same month, but certainly within the same year.)

    On to point 2. I can't believe that he's as silly as you are claiming. I read those books, and I think I would have noticed. I suspect that you are misinterpreting something you heard or read, or that you read a secondary source who misunderstood things. (Possibly on purpose. Reporters process news to make it more interesting with an almost total disregard for truth.) OTOH, this could result from a simple grammatical misunderstanding. He does believe (and I agree) that a sufficient neural net would be equivalent to a brain. This, however, depends not only on quantity, but also upon organization. (And he certainly knows this better than I do, as he has produced inventions based on neural nets.)

    As for point 3.... No. He doesn't assert that. He doesn't believe that. And that's not even a distortion of what he says. It's too wrong for that.

    As for an accurate guide to the future...
    Best you consult a crystal ball. Kurzweil, and other futurists, describe possibilities. And they tend to project with a large fudge factor in their time spans. Even so, they are NEVER correct, except partially. If you expect otherwise, you are being unreasonable. It *is* best to think of them as a more fact-based and less dramatized version of science fiction, however. Psychohistorians they aren't.

  • No quack. (Score:3, Insightful)

    by poopdeville ( 841677 ) on Monday November 29, 2010 @08:16PM (#34382118)

    The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.

  • by LUH 3418 ( 1429407 ) <maximechevalierb@nOsPaM.gmail.com> on Monday November 29, 2010 @08:44PM (#34382466)
    It's not a question of cheating. Those algorithms are simply approximate. They can't be guaranteed to get the optimal solution, but only to get a solution that is within some factor as good as the optimal... Or sometimes give no guarantees at all (e.g.: genetic algorithms). Those are often the solutions used in practice for NP-complete problems, because they're fast and will often get you very very close to the optimal solution. So close that you don't really care it isn't guaranteed optimal. Methods such as genetic algorithms or simulated annealing work by sampling the space of possible solutions and performing random mutations on the better solutions that are found in an attempt to get even better solutions.
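The parent's point about sampling and mutation can be made concrete. A toy simulated-annealing TSP solver (parameter choices are illustrative; there is no optimality guarantee, just "usually close"):

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, steps=5000, t0=10.0, cooling=0.999, seed=0):
    """Simulated annealing: mutate the current tour with a random 2-opt
    reversal, always accept improvements, and accept worse tours with
    probability exp(-delta/T) so the search can escape local optima."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    temp = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour = cand
            cur = tour_length(tour, dist)
            if cur < best_len:
                best, best_len = tour[:], cur
        temp *= cooling
    return best, best_len
```

The cost is steps × O(n) per evaluation, independent of n!, which is exactly the trade being described: speed in exchange for giving up the guarantee.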
  • by Mitchell314 ( 1576581 ) on Monday November 29, 2010 @08:46PM (#34382502)
    50 Mb? Compressing that is a walk in the park. A cakewalk. A walk down easy street.

    But I hope you weren't planning on extracting and using it though. Lossy compression can lead to certain . . . artifacts. So I hope you don't mind a third arm growing out of the subject's knee.
  • by aynoknman ( 1071612 ) on Monday November 29, 2010 @09:11PM (#34382734)

    B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

    Asimov's Laws of Robotics deal primarily with social realities. E.g., "A robot may not injure a human being..." -- does "human being" include a Jew? A capitalist running dog? A fertilized human ovum? Terri Schiavo? The humanity of each of these has been called into question in one social context or another. Try making a formalized specification of what a human being is.

    Read the laws carefully and you'll see a significant number of other terms that are difficult to define. Asimov explores some of the inherent ambiguities to make his robot stories interesting.

    Hard to conceive of how one could have careful design to strictly enforce such laws.

    Such hidden hand-waving in a seemingly formal statement is Kurzweilian.

  • by human_err ( 934003 ) on Monday November 29, 2010 @09:24PM (#34382842)

    Although I agree it is a bit disingenuous to couch his predictions in scientific language, there is a positive side effect to his spiel. Who else can attract the resources to gather so many geniuses in a room? Scrolling through the list [singularityu.org] of advisors, I recognize such luminaries as Vint Cerf and Will Wright. Think of him as a story-teller, not a weatherman. The weatherman may help you plan for the immediate future. The story-teller or myth-maker primes the imagination to build a better future based on affirming deliberate values rather than history and habit. Inspiring, corralling, and funding the wills and insights of smart people in multiple fields is bound to produce something of value despite our failure to precisely anticipate the result.

    In other words, there's some good in the rah-rah. After all, inventions originate from "I want to believe."

  • by Antisyzygy ( 1495469 ) on Monday November 29, 2010 @09:59PM (#34383162)
    As a disclaimer, I have no knowledge of genetics; however, I do know a thing or two about data representation, because we've had to use it as part of our research in facial recognition. There are compression techniques that are quite extraordinary: examples include wavelets, codebooks (bag of words), PCA, etc. How much you can compress the genomic data depends on its statistics, i.e. distributions, patterns, etc., and on how much precision you are willing to lose. If you represent an image as simply color values for each pixel, it requires a crap-load of disk space. If you instead use something akin to JPEG 2000 (which uses wavelets), you can compress it and retain a reasonable amount of information. However, if genetic data is essentially white noise, there is minimal hope using humanity's current knowledge (or at least anything I am aware of).
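The white-noise caveat is easy to check with any general-purpose compressor. A sketch using zlib (the sizes are illustrative, not real genome figures):

```python
import os, zlib

patterned = b"ACGT" * 12500    # 50 kB with obvious repeating structure
noise = os.urandom(50000)      # 50 kB of incompressible "white noise"

for name, data in [("patterned", patterned), ("noise", noise)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The patterned input collapses to a tiny fraction of its size, while the random input actually comes out slightly larger than it went in: lossless compression only buys what the statistics of the data allow, which is the parent's point.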
  • Re:Oh yeah? (Score:3, Insightful)

    by Antisyzygy ( 1495469 ) on Monday November 29, 2010 @10:20PM (#34383300)
    You are a nut job. You fail to recognize that propulsion technology may render "rockets" obsolete (i.e. you assume that we only have chemical rockets for all eternity), and you overestimate how long our current economic system will last (i.e. you assume indefinitely). Either humanity will eventually colonize other places (at least in our own solar system) or we will go extinct. It's a natural progression. I bet people like you complained when one of their tribesmen built a slightly bigger ship more suitable for travel between islands. You would be the person saying "Nope. The ocean is endless; all that exists is the land behind us," or "There is no reason to go look for new lands over the ocean, because it's economically unfeasible to bring goods back from whatever lands may exist over the sea." The funny thing is, technology eventually developed to make overseas travel a matter of months (wind-powered ships), then a matter of hours or days (airplanes). The next progression was space flight, which brought us to the Moon and sent probes past Jupiter. Basically, you assume no new technology will ever be developed as far as space travel is concerned, and so far history has proven your stance wrong. The only way you could ever be right is if we are all brought back to the Stone Age or human beings become extinct.
  • by Eivind ( 15695 ) <eivindorama@gmail.com> on Tuesday November 30, 2010 @04:37AM (#34386034) Homepage

    To be fair, Kurzweil isn't that dumb. He's not suggesting that merely doing the same thing on a much faster computer suddenly, magically turns it into a different thing. In fact, the opposite is likely to be true: throwing more power at a problem tends to yield diminishing returns.

    But one of the things we use our tools for is to make better tools. One of the tasks where computers currently help out is in building better computers. And one of the tasks where software tools help is in making better software tools. The argument is that this is an exponential process. And some of that is accurate.

    There are, in fact, many problems which are solvable in much less time and/or much better because of better tools. Tossing up a reasonably good blog using Django and a modern LAMP stack on modern hardware does, in fact, yield quicker results than coding the same thing using the best available tools of 1990 (including the hardware of 1990!).

    But it's incremental improvement, and I do think Kurzweil overestimates the impact. Brooks, in The Mythical Man-Month, argues that there has been, and will be, no silver bullet. That is, improvements to software development, though real, will be incremental and limited, and we will not get new methodologies or tools that radically change the picture overnight.

  • by MrHanky ( 141717 ) on Tuesday November 30, 2010 @07:01AM (#34386626) Homepage Journal

    No, it really isn't. Biological evolution has never had any need for a Turing machine. The Turing machine, however, came into being only hundreds of thousands of years after the human brain invented symbols. Symbols are sometimes a great way to understand things, but most people understand that a symbol isn't identical to its object. To a Turing machine, however, such a difference doesn't exist, as it has only symbols and no object at all.

    And neither does it to you, evidently, boldly proclaiming that the object you're trying to model must be identical to the model, unless there be things you don't understand -- which you then boldly dismiss as religious mumbo-jumbo. Which is to say that you don't only confuse the map with the territory, but you test your model against a different and supposedly wrong model (religious mumbo-jumbo) instead of checking it against its object. So yes, indeed, your brain might be a Turing machine. But that's nothing to be proud of.

  • by lgw ( 121541 ) on Tuesday November 30, 2010 @11:52AM (#34389134) Journal

    We can't make much progress without a breakthrough in efficiency.

    First is your presumption that wattage needs are linearly related to computational capacity

    So, what does "efficiency" mean to you, then? I thought "useful work done per unit of energy" was understood by any engineer?

    Give it a decade without an efficiency breakthrough and we're talking "space age" SciFi computers that filled buildings (with attached atomic power station).

    Companies like Google are doing a lot of computation on your behalf all day long in data centers and drawing so much power that in some areas they operate their own energy supply.

    Did you maybe reply to the wrong post?

  • by mcgrew ( 92797 ) * on Tuesday November 30, 2010 @12:50PM (#34390150) Homepage Journal

    At present we can't even all agree on which animals (if any) are sentient, let alone create a sentient machine of the most rudimentary sort.

    A gnat's brain is tiny, yet we can't even understand it, let alone match it. People have been calling computers "thinking machines" since the days when a computer was a room-sized pocket calculator.

    How many beads do I have to put on my abacus before it becomes sentient?
