Technology

Ray Kurzweil's Slippery Futurism 308

wjousts writes "Well-known futurist Ray Kurzweil has made many predictions about the future in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005), but how well have his predictions held up now that we live 'in the future'? IEEE Spectrum has a piece questioning Kurzweil's (self-proclaimed) accuracy. Quoting: 'Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books, and to have cofounded Singularity University, where executives and others are paying quite handsomely to learn how to plan for the not-too-distant day when those disappearing computers will make humans both obsolete and immortal.'"
This discussion has been archived. No new comments can be posted.

    • "Claims made about the future were wrong"

      Actually, the accusation is that the claims aren't even wrong. [wikipedia.org]
      • Re: (Score:3, Insightful)

        by davester666 ( 731373 )

        "Claims made about the future were so vague that they can't be wrong."

        • by Thud457 ( 234763 ) on Monday November 29, 2010 @06:05PM (#34381374) Homepage Journal

          Greetings, my friend. We are all interested in the future, for that is where you and I are going to spend the rest of our lives. And remember my friend, future events such as these will affect you in the future. You are interested in the unknown... the mysterious. The unexplainable. That is why you are here. And now, for the first time, we are bringing to you, the full story of what happened on that fateful day. We are bringing you all the evidence, based only on the secret testimony, of the miserable souls, who survived this terrifying ordeal. The incidents, the places. My friend, we cannot keep this a secret any longer. Let us punish the guilty. Let us reward the innocent. My friend, can your heart stand the shocking facts of grave robbers from outer space?

          -- Criswell


          oh, wait, you said Kurzweil...

      • "Claims made about the future were wrong"

        Actually, I'm pretty sure he predicted he'd sell a bunch of books and his publisher believed him. Sometimes all it takes is one believer to make something happen.

      • by RDW ( 41497 )

        He's made some pretty dubious claims about the present, too, like the whole thing about the human genome being compressible to as little as 50 Mb, about an order of magnitude better than anyone has managed without cheating (e.g. by just compressing the diff to the reference sequence, or ignoring non-coding sequences). Publish the algorithm!
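
        For scale, the back-of-the-envelope numbers behind that claim (the 0.1% variant rate and the bytes-per-variant figure below are rough assumptions of mine, not RDW's or Kurzweil's):

            # Rough arithmetic on the 50 MB genome-compression claim (approximate figures).
            bases = 3.2e9                     # haploid human genome, ~3.2 billion base pairs
            raw_mb = bases * 2 / 8 / 1e6      # 2 bits per base, no further modelling
            print("naive 2-bit packing: %.0f MB" % raw_mb)            # ~800 MB
            print("implied ratio for 50 MB: %.0fx" % (raw_mb / 50))   # ~16x

            # The 'cheating' mentioned above: store only the differences vs. a reference.
            # Two genomes differ at roughly 0.1% of positions, so the diff is tiny, but
            # the reference itself is silently left out of the byte count.
            diff_mb = bases * 0.001 * 4 / 1e6   # assume ~4 bytes per recorded variant
            print("diff vs. reference: ~%.0f MB" % diff_mb)           # ~13 MB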

        • Re: (Score:3, Insightful)

          50 Mb? Compressing that is a walk in the park. A cakewalk. A walk down easy street.

          But I hope you weren't planning on extracting and using it though. Lossy compression can lead to certain . . . artifacts. So I hope you don't mind a third arm growing out of the subject's knee.
      • by Eudial ( 590661 )

        "Claims made about the future were wrong"

        Actually, the accusation is that the claims aren't even wrong. [wikipedia.org]

        Slashdot puts narrow constraints on the length of titles. What's a guy to do?

    • why do I get the feeling that there will be no news at 11 covering this?

      Another failed prediction?

    • by Crazyswedishguy ( 1020008 ) on Monday November 29, 2010 @06:28PM (#34381660)

      News at 11.

      How do you know? It's only 5:30. Is that you, Kurzweil?

    • Re: (Score:3, Informative)

      by makomk ( 752139 )

      Of course, when pointing out the flaws in someone else's claims about the future, it helps to get your claims about the present correct. For example, stacked chips may not be quite as common as he suggests, but they're still fairly ubiquitous. Nearly every microSD card uses a stacked-chip design, for example, as do many full-sized SD cards. So do the CPUs used in the iPhone, the iPad, and many other phones. We're only just getting started too... there are plausible rumours AMD are considering stacked chips

  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Monday November 29, 2010 @05:24PM (#34380846) Journal

    Seems like a lucrative field. I bet I could do it! Let me think, ah, in the future... Nope. I got nothin'.

  • I think the general trend in predictions about future technology is that optimistic predictions often wind up being wrong (which isn't to say that overly cautious predictions are any better - like Bill Gates's 637 kb of memory claim).

    I'm still waiting for my ticket to the moon from Pan Am to be a reality, 9 years after 2001, and 42 years after 1968.

    • Re: (Score:2, Informative)

      by vistapwns ( 1103935 )
      For the 50 millionth time, Bill Gates didn't make any such claim about 637K, 640K or whatever. The memory limit in MS-DOS was dictated by the CPU, the Intel 8086 that IBM chose for the IBM PC. Sorry to be off topic, but I get sick of people slandering this guy; because of their support of Linux and Apple, those same people would never say a bad word about IBM or Intel for doing exactly what they accuse Bill Gates of.
      • by Chapter80 ( 926879 ) on Monday November 29, 2010 @05:53PM (#34381222)

        For the 50 millionth time, Bill Gates didn't make any such claim about 637K, 640K or whatever.

        I'm with you. I hate when people exaggerate and mis-attribute claims. Like GWB said, "if I said it once, I said it a hundred zillion times... I hate exaggeration."

        • Re: (Score:2, Funny)

          by craash420 ( 884493 )

          Like GWB said, "if I said it once, I said it a Brazilian times... I hate exaggeration."

          There, that's better.

      • I can predict the future of the Windows Phone and of Steve Ballmer. Fail + Fail = New M$ CEO for January! I remember when the Zune was going to kill the iPod, and the Kin was going to do something I can't remember now, and Slate, and Vista... need we remind you further?

        • Re: (Score:3, Interesting)

          I can predict the future of the Windows Phone and of Steve Ballmer. Fail + Fail = New M$ CEO for January! I remember when the Zune was going to kill the iPod, and the Kin was going to do something I can't remember now, and Slate, and Vista... need we remind you further?

          You can't predict the future by remembering the past. History is just the shackles of the mind. What we need are some forward thinkers who are willing to make the same mistakes over and over again. I call them 'American Voters'. We think we know what we're doing and we act like we know what we're doing, but every two years we don't seem to get anywhere. Which is OK because the present is where it's at. What did the future ever do for us anyway?

      • by HiThere ( 15173 )

        FWIW:
        I've read the denial, and I've also read, several times, the counter-claims that he did say it.

        I'm not convinced.

        OTOH, at the time he made the claim it was basically true. He was being asked about the design of (IIRC) MSDOS, and people were saying that it would get in the way of expanding RAM. Then (at a time when the average RAM was around 16K) he said "640KB should be enough for anyone". He wasn't being unreasonable, or short-sighted (no matter how it looks now). He was being practical. And he was basic

    • by hitmark ( 640295 )

      Another is that we cannot predict anything that isn't already being developed or in use in some way today.

      Observe how the "futurists" of the 60s focused on the automobile and such, but basically didn't see the mobile phone or the equivalent of the internet.

      • Re: (Score:3, Interesting)

        by funwithBSD ( 245349 )

        I went to an SVUG meeting in the '90s where Douglas Engelbart was speaking.

        I got picked to ask him a question about what the next computer interface might be after the keyboard and mouse.

        He was taken aback and answered:

          I don't know.

        On the bright side, I won a copy of OS/2 for stumping the speaker!

      • by CrimsonAvenger ( 580665 ) on Monday November 29, 2010 @06:42PM (#34381806)

        Observe how the "futurists" of the 60s focused on the automobile and such, but basically didn't see the mobile phone or the equivalent of the internet.

        Of course, Bob Heinlein had his characters using mobile phones in the 50's and 60's. Between Planets opened with the main character receiving a phone call while riding a horse in the back end of nowhere. Space Cadet had the main character receiving a phone call while standing in line for processing into the Patrol, while another character mentioned leaving his phone in his luggage so his mother couldn't worry at him...

        Closest to the internet I can recall was Asimov's "The Last Question", which had characters connected (various input/output methods, from voice to direct neural feed) to world- (and later galaxy- and universe-) wide computer systems.

        • Re: (Score:3, Informative)

          by Nursie ( 632944 )

          Asimov... Generally he foresaw one big computer. There's even an intro he wrote for a short story compilation in which he talks about it, from the perspective of 20 years or so after writing.

          He says "Basically I didn't see miniaturisation coming, so I missed out on computers becoming small or ubiquitous". So he thought of computers occupying whole cities, planets or even systems. I *think* that's the situation in the story you mention too. One huge computer.

          Of course as networking and distributed computing

    • Where the hell did 637kb come from?

      The myth was always 640k, and it was never true.

      Hit up wikipedia on the 8086 processor and you'll see where the 640k limitation came from. Further reading would inform you that the reason the limitation lasted so long was because of Intel's backwards compatibility policies (a good thing, but poorly planned in that particular respect).
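
      For anyone who doesn't want to look it up, the arithmetic is short (a sketch; the 384 KiB reserved region is the standard IBM PC memory map):

          # Where 640K comes from: the 8086/8088 has a 20-bit physical address bus,
          # and the IBM PC reserves the top of that space for video RAM, the BIOS ROM
          # and adapter cards.
          addressable = 2 ** 20                  # 1,048,576 bytes = 1 MiB
          reserved = 384 * 1024                  # upper memory area
          conventional = addressable - reserved
          print(conventional // 1024, "KiB of conventional memory")   # 640 KiB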

  • by Anonymous Coward on Monday November 29, 2010 @05:28PM (#34380892)

    "continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books"

    Sarah Palin.

    Yours In Anchorage,
    Kilgore Trout.

  • by Laxori666 ( 748529 ) on Monday November 29, 2010 @05:29PM (#34380908) Homepage
    I don't agree with his predictions.

    A) It assumes that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

    B) He assumes that if we put enough cyber neurons together in a neural net, intelligence and consciousness will develop. This may be the case, and it will be interesting to see, but I don't think you can take it for granted. He also spent only about two pages in his book on this from a philosophical perspective, basically: "Here is what three people thought about consciousness. Anyway, moving on..." It seems like it should be a central point.

    C) I think he also assumes that having such massive amounts of computing power will solve all our problems. Has he heard of exponential-time problems, or NP-completeness? Doubling computing power every 18 months equates to adding one city to a traveling salesman problem every 18 months.
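
    A quick back-of-the-envelope check of that last point (the n^2 * 2^n cost below is the textbook Held-Karp bound for exact TSP, an assumption I'm adding, not something claimed above):

        # The 'one extra city per doubling' intuition, checked against Held-Karp,
        # the standard exact TSP algorithm, whose cost grows as roughly n^2 * 2^n.
        def held_karp_cost(n):
            return n * n * 2 ** n

        for n in (20, 30, 40):
            ratio = held_karp_cost(n + 1) / held_karp_cost(n)
            print("n=%d: one more city costs %.2fx the compute" % (n, ratio))
        # Each city roughly doubles the work, so one 18-month doubling of hardware
        # buys about one extra city; naive brute force (n! tours) buys even less.
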
    • by Pinball Wizard ( 161942 ) on Monday November 29, 2010 @05:45PM (#34381140) Homepage Journal

      A) It's not that big of an assumption. The exponential curve in computing power doesn't just go back to the advent of computers; it goes back as far as we have been able to perform simple arithmetic. It's an assumption based on our long history of improving methods and fabricating machines to compute. Unless we have capped our ability to invent new methods of computing, it's a fairly safe assumption to make. Our ability to compute is probably not limited by the number of transistors we can pack on a silicon die.

      B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

      C) I don't believe Kurzweil has ever claimed NP-Hard problems would be solved by the exponential increase in computing power.

      • B) If you don't think that machines can ever be "sentient" but you do believe that biological organisms can be, then you must explain what magic is happening in biology which cannot be replicated in other media.

        Also, you would need to explain at exactly which level of biological intelligence "sentience" emerges. I'll assume you would claim humans as sentient. Is that all humans? How about apes? Monkeys? All mammals? All vertebrates? Maybe if we can determine who is sentient and who isn't, we can study the diffe

      • Re: (Score:3, Insightful)

        by aynoknman ( 1071612 )

        B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

        Asimov's Laws of Robotics deal primarily with social realities. E.g., "A robot may not injure a human being . . ." -- does "human being" include a Jew? A capitalist running dog? A fertilized human ovum? Terri Schiavo? The humanity of each of these has been called into question in one social context or other. Try making a formalized specification of what a human being is.

        Read the laws carefully and you'll see a significant number of other terms that are difficult to define. Asimov explores some of the

    • by Rockoon ( 1252108 ) on Monday November 29, 2010 @05:59PM (#34381316)

      A) It assumes that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

      Intel and AMD are both doubling the width of their SIMD capabilities with AVX in the next year. This is simply a design decision, not a breakthrough. More cores is also a design decision, not a breakthrough.

      When the first vector processors hit super-computing, it became plainly obvious that computational capacity could always be doubled.

      Remember that capacity is not velocity, or in more geeky terms.. MIPS is not MHz.. bandwidth is not latency...

      There hasn't been a breakthrough in many years now, yet computational capacity continues to grow exponentially.
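
      The capacity-versus-velocity point in sketch form (the numbers are illustrative only, not real Intel or AMD specs):

          # Peak throughput scales with SIMD width and core count (design decisions);
          # the latency of a single operation does not.
          def peak_gflops(cores, lanes, ops_per_cycle, ghz):
              return cores * lanes * ops_per_cycle * ghz

          print("128-bit SIMD: %g GFLOP/s" % peak_gflops(cores=4, lanes=4, ops_per_cycle=2, ghz=3.0))
          print("256-bit SIMD: %g GFLOP/s" % peak_gflops(cores=4, lanes=8, ops_per_cycle=2, ghz=3.0))
          # Capacity doubles by widening the lanes; the latency of one add is unchanged.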

      • by lgw ( 121541 ) on Monday November 29, 2010 @06:57PM (#34381946) Journal

        When the first vector processors hit super-computing, it became plainly obvious that computational capacity could always be doubled.

        Always? We can't make much progress without a breakthrough in efficiency. My gaming PC needs a 1 kW power supply (and 11 fans). Double that and I'll trip my breaker. Double that again and it's past what's safe for home wiring. Double that again and you're past what's safe for normal commercial wiring, and you really need something special purpose (beyond 30 A @ 240V). Give it a decade without an efficiency breakthrough and we're talking "space age" SciFi computers that filled buildings (with attached atomic power station).

        And there's only so much that can be done on the efficiency front. Beyond a certain point, additional parallelism mandates additional latency, because you need physical volume for cooling and therefore separation of components, so you're really talking about adding more computers to a network, and not the power of individual computers.

        We already have a network of computers that exceeds the computing power of the human brain, IMO. What makes the human brain so amazing is what it can do with ~100 W of power. That kind of efficiency gain is not a given.
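
        The breaker arithmetic above, spelled out (rough US wiring figures, ignoring the usual derating margins):

            # How quickly a 1 kW gaming PC outgrows the wiring.
            watts = 1000.0
            circuits = [("15 A @ 120 V household circuit", 15 * 120),
                        ("30 A @ 240 V special-purpose circuit", 30 * 240)]
            for doublings in range(1, 4):
                watts *= 2
                over = [name for name, limit in circuits if watts > limit]
                print("%d doubling(s): %.0f W, exceeds: %s" % (doublings, watts, over or "nothing yet"))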

    • C) I think he also assumes that having such massive amounts of computing power will solve all our problems.

      No, I'm pretty sure that his main predictions only require that computing power increase enough to provide a cheap simulation of the human brain.

      • by mugnyte ( 203225 )

          When the simulation becomes indistinguishable from the real thing for any given medium, there is no higher bar left to test with.

        For example, here at Slashdot, a simple phrase generator might accumulate excellent karma.
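
        Such a phrase generator really is only a few lines; a toy Markov-chain sketch (the training corpus here is a made-up placeholder):

            import random
            from collections import defaultdict

            corpus = ("the singularity is near the future is near the future "
                      "is not what it used to be").split()

            chain = defaultdict(list)               # word -> observed successors
            for a, b in zip(corpus, corpus[1:]):
                chain[a].append(b)

            word = random.choice(corpus)
            phrase = [word]
            for _ in range(12):
                word = random.choice(chain[word] or corpus)   # fall back if no successor
                phrase.append(word)
            print(" ".join(phrase))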

        • No quack. (Score:3, Insightful)

          by poopdeville ( 841677 )

          The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.

    • by jlar ( 584848 )

      A) It assumes that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

      That is mainly a question of timing. The main point is that in the relatively near future (~50 years) we will have the computational power to make a computer whose capabilities exceed those of the human brain.

      B) He assumes that if we put enough cyber neurons together in a neural net, intelligence and consciousness will develop. This may be the case, and it will be interesting to see, but I don't think you can take it for granted.

      I believe that you can. If you simulate the processes of the brain the simulation will act as a brain.

      C) I think he also assumes that having such massive amounts of computing power will solve all our problems. Has he heard of exponential-time problems, or NP-completeness

      I don't believe he assumes that. But it would of course solve a lot of our problems. And create a lot of new problems.
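
      For what it's worth, a hedged sketch of the timing arithmetic; every number here is a loose, commonly cited ballpark of mine, not a figure from the parent post or from Kurzweil:

          import math

          brain_ops_per_sec = 1e16       # one often-quoted ballpark; estimates span 1e14..1e18
          desktop_2010_flops = 1e11      # ~100 GFLOPS, a rough guess for a 2010 desktop
          doubling_years = 1.5           # doubling every 18 months

          doublings = math.log2(brain_ops_per_sec / desktop_2010_flops)
          print("about %.0f doublings, i.e. roughly %.0f years at that rate"
                % (doublings, doublings * doubling_years))   # ~17 doublings, ~25 years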

    • Re: (Score:3, Insightful)

      by HiThere ( 15173 )

      I think you are misunderstanding both the nature and the purpose of his predictions.

      You didn't note that they are essentially unfalsifiable. You should have. If you had, you would have noticed that your first complaint was wrong. They are unfalsifiable for the same reason that the "predictions" of Toffler's "Future Shock" were unfalsifiable. They are a description of potentials, not of things that will happen, but of things that *may* happen.

      I'm not sure that he's wrong in general, but I'm quite convinc

  • John Rennie is just pissed that he can't command such nice speaking fees.

    • John Rennie is just pissed that he can't command such nice speaking fees.

      I was thinking the same thing after reading the article. Jealous much, Mr. Rennie?

      To those who didn't bother to RTFA, John Rennie was the editor-in-chief for Scientific American from 1994 to 2009. You know, the guy who took a formerly great science periodical and ran it into the ground by turning it into a magazine full of puff-piece op-eds masquerading as science articles.

      Most of Kurzweil's ventures have been a success. Rennie, on

  • Punditry Pays (Score:5, Insightful)

    by Infonaut ( 96956 ) <infonaut@gmail.com> on Monday November 29, 2010 @05:32PM (#34380958) Homepage Journal
    The point isn't to be accurate; it's to be engaging. We live in an age in which it is more important to entertain than to inform. Look at all the hack prognosticators in the business and technology press who make a living making predictions – most of them are wildly off the mark but nobody cares enough to go back and call them on their failures.
    • Re:Punditry Pays (Score:4, Insightful)

      by greenbird ( 859670 ) on Monday November 29, 2010 @06:01PM (#34381338)

      The point isn't to be accurate; it's to be engaging... nobody cares enough to go back and call them on their failures.

      And thus we have the modern press/news regime. No need to actually report correct information. Just report what is entertaining whether it's true or not and certainly don't waste any time trying to determine the truth of anything.

    • Re: (Score:3, Interesting)

      by hey! ( 33014 )

      True, but I'd go further. Part of true genius is not being afraid of being wrong. A very intelligent person isn't necessarily a genius, but take that person and have him lavish his time and effort on something others think is a crock, and if he succeeds he's a genius.

      So what happens when a recognized genius becomes, in effect, a *professional* genius? Even genius has its gradations. Not every genius can be a Mozart, an Einstein or a Ramanujan. Such individuals are in a different class. They needn't wo

  • yet so easy to naysay... about the future :)

  • by mangu ( 126918 ) on Monday November 29, 2010 @05:36PM (#34381018)

    Well, Ray Kurzweil seems to me about as effective at predicting the future of technology as Oracle is effective at managing databases.

    This analogy is pretty good, but it's not exactly what some people might imagine.

  • by SteveWoz ( 152247 ) on Monday November 29, 2010 @05:36PM (#34381026) Homepage

    I used to disdain all these vague futurists. In many cases, it's sure to happen in the far distant future, and after the fact a few act smart enough to have said it long before. And many times it doesn't happen close to the way that's predicted. I always tended toward the practical side of things, rather than the theoretical.

    But one thing after another after another that was obvious and predictable just by applying Moore's law, still surprised almost everyone when they became reality. Things like lots of movies on a tiny chip.

    I was a singularity denier, for one thing. But I have to reverse myself and admit that I'm wrong. Oddly, it was Ray, presenting to an audience in Vienna, who convinced me otherwise. The only thing about being a singularity futurist is that you've predicted what's already happened. Try living without today's technology and internet and see how far you get. It's already unclear to what extent the creators (ourselves) or that which we have created (technology) is the master. We always thought that we could turn off unfriendly robots, but we can't really turn off the internet, which is the largest robot yet (and the one that replaces most human brains for getting the best answers to things).

    Ray takes a lot of flak but he deserves respect, even when you think he's wrong.

    • Ray takes a lot of flak but he deserves respect, even when you think he's wrong.

      You seem to think that everyone deserves respect; you are too kind, stop it ;)

    • Re: (Score:2, Insightful)

      by SheeEttin ( 899897 )

      We always thought that we could turn off unfriendly robots, but we can't really turn off the internet

      Sure you can. Just take out a few key backbone sites, and there you go. That'll disable a good chunk of the Internet for long enough for you to clean up the rest.

      Or, just lobby (i.e. pay) your Congressman to pass a killswitch bill...

  • I'm sure he did, just as he predicted everything and everyone that did and didn't happen. He even predicted the master, Bruce Lee.

    At any rate, conning a bunch of execs into a pointless training is hardly worthy of note. Not even if you get them to paint their asses blue and run around naked in the forest. As a group, or one at a time, they aren't that bright and it isn't their money.

    People like Kurzweil are a service to the industry. All those self-styled experts blabbering infantile gibberish about

  • I can't imagine computers will make humans obsolete. There's one thing about us humans and that is that we are quite psychopathic when it comes to exploiting our environment and dominating every other living thing. And we don't clear up our waste properly, either. I think that when the time comes, and the regular PC is an uber conscious super intellectual being, the computers of this world will just up-sticks and bugger off to some other planet. Like Mars, where with a few solar panels and a bit of ingenu
    • I think that when we have uber-intelligent computers, they'll basically just be a part of us, rather than some separate entity in competition with us. We'll "evolve" (well, engineer ourselves) to include artificial parts to do what the meat doesn't do well, and the tech will "evolve" to rely on the meat for the stuff the metal can't yet handle.

      At some point in the future, it wouldn't surprise me if we did find a way to do away with the meat all together and that some meatless "humans" buggered off, but

  • But of Course (Score:5, Interesting)

    by eldavojohn ( 898314 ) * <eldavojohn@gm a i l . com> on Monday November 29, 2010 @05:40PM (#34381072) Journal
    We have discussed this many times. I debated writing out a lengthy post detailing the many problems with Kurzweil's predictions. Of course I (and Slashdot stories) have done this [slashdot.org] before [slashdot.org]. But you know, after reading this article, I have this sort of urge to read more of Kurzweil's writings in an attempt to develop an equivalent process for identifying something we could call "Technological Stock Spiel." Some of you Sagan nuts and skeptics might recognize the phrase "stock spiel" as something used to designate parlor tricks and underhanded wording to get people to believe that you're a psychic. It's also been called a cold reading strategy [freeonline...papers.com] and you've seen shows from Family Guy to South Park parody it.

    Basically I suspect that Kurzweil is adept at standing up in front of a group of people and employing this same sort of strategy, one that preys on people's understanding of technology instead of their emotions. But both of those things have in common the fact that people want to believe great things. If he's talking to computer scientists, he'll extrapolate on biology. If he's talking to biologists he'll extrapolate on computer science, and so on and so forth. And he probably knows exactly what to say so that more than enough people gobble it up. On the subjects I have studied extensively through college, this man is very capable of talking like he knows just enough, and of using vague analogies to get people going "Yup, yeah, uh huh, I see now, I want to believe!"

    As Walter Sobchak might say, "Forget it, Donny, you're out of your element!"

    That is, of course, unless he's talking to a group of futurists. Then he's just preaching to the overly optimistic choir.
  • by v1 ( 525388 ) on Monday November 29, 2010 @05:46PM (#34381156) Homepage Journal

    On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology...

    Oh where have I heard that description before.... oh ya, here [wikipedia.org]

  • by painandgreed ( 692585 ) on Monday November 29, 2010 @05:49PM (#34381176)
    People in 2110 will be looking at copies of the Scientific American from 2010 that have Ray Kurzweil in them talking about a Singularity and saying they want it. They'll also be wanting their flying cars, AI, and fusion power which the singularity was supposed to give them.
  • Self-driving cars on the highway are on the way, if the pun is excused. There is quite a lot of experimentation and development; there is an EU program, etc. Sure, getting them on the roads (and integrating their systems with highways, etc.) will certainly take at least another decade.

    The point is, the subject is not a joke, as the article insinuated.

    That said, I'd not trust Kurzweil's claims on e.g. economics or cancer research. I might give some credibility to experts in those areas.

    • by 0123456 ( 636235 )

      Self-driving cars on the highway are on the way, if the pun is excused. There is quite a lot of experimentation and development; there is an EU program, etc. Sure, getting them on the roads (and integrating their systems with highways, etc.) will certainly take at least another decade.

      I predict that self-driving cars will be in widespread use on public roads about a year after flying cars are available in your local Ford dealer.

      • Re: (Score:3, Insightful)

        by lgw ( 121541 )

        Nope - if you have "commuter lanes" or some other restricted lanes on your local highways, you'll see it's not a stretch to have those be dedicated to self-driving cars before much longer. The technology is nearly here. The infrastructure (always the hard part) is already here.

  • Another topic that's an excuse to hate on Kurzweil. I'm really looking forward to a bunch of depressing, bitter pessimists babbling about how the future is impossible and if men were meant to fly God would have given them wings. So the man's a little nutty, is that really why so many hate him? I think it's jealousy.
    • by glwtta ( 532858 )
      So, here's my problem: apparently I shouldn't "hate on" Ray Kurzweil, "hating on" is a bad thing. But I do hate Ray Kurzweil, not personally mind you, I'm sure he's an excellent individual, but in the same way that I hate any useless person in the public eye who makes their living peddling bullshit.

      I suppose I could be jealous, though I'm not exactly sure what I would be jealous of. I assume he's relatively well-off, but there are plenty of rich people I have no problem with; is it his ability to set a
    • Is that really why so many hate him?

      No, we dislike his nuttery because it moves attention from the achievable to the non-achievable. In addition, he makes what he espouses sound inevitable to many powerful people who control project funding, giving them cover to defund projects that might actually benefit mankind, because if the singularity is around the corner, why should they fund anything... In short, Ray is a crackpot who does more harm than good, sort of like a fundamentalist preac

  • by mschuyler ( 197441 ) on Monday November 29, 2010 @06:09PM (#34381416) Homepage Journal

    I'm all for criticizing the excesses of Kurzweil, but I don't think the article is up to snuff; it reads like a personal attack on Kurzweil rather than a well-reasoned refutation of his predictions. The author seems to take the position that Kurzweil wasn't exactly 100% accurate in all the facets of his predictions, therefore he was wrong, and besides, somebody else already thought of it anyway before Kurzweil did. It's kind of a specious hit piece that cherry-picks a couple of examples and doesn't really measure up as a serious analysis of Kurzweil's record. Maybe it would be nice if someone actually did that, but this article is nowhere near it.

  • What Futurists Do (Score:5, Insightful)

    by Doc Ruby ( 173196 ) on Monday November 29, 2010 @06:10PM (#34381430) Homepage Journal

    Futurists don't "predict the future". They discuss the past and present, talk about its implications, and get people in the present to think about the implications of what they do. They talk about possible futures. Which of course changes what actually happens in the future. They typically talk about a future beyond the timeframe that's also in the future but in which their audience can actually do something. Effectively they're just leading a brainstorming session about the present.

    This practice is much like science fiction (at least, the vast majority, which is set in "the future" when it's written), which doesn't really talk about the future, but rather about the present. You can see from nearly all past science fiction that it was "wrong" about its future, now that we're living in it, though with some notable exceptions. In fact "futurists" are so little different from "science fiction writers" that they are really just two different names for the same practice for two different audiences. Futurism is also not necessarily delivered in writing (eg. lectures), and is usually consumed by business or government audiences. Those audiences pay for a product they don't want to consider "fiction", but it's only the style that makes it "nonfiction".

    This practice is valuable beyond entertainment, because there is very little thinking by government, business, or even just anyone about the consequences of their work and developments beyond the next financial quarter. Just thinking about the future at all, especially in terms that aren't the driest and narrowest statistical projections, or beyond their own specific careers, is extremely rare among people. If we did it a lot more we'd be better at it. But we don't, so "inaccurate" is a lot more valuable than "totally lacking". Without futurism, or its even less accurate and narrower form in science fiction, the future would take us by surprise even more. And then we'd always suffer from "future shock", even more than we do now.

    If we don't learn from futurism that it's not reliable, but still valuable, then it's not the fault of futurists. It's our fault for having unreasonable expectations, and failing to see beyond them to actual value.

    • by glwtta ( 532858 )
      That was the most long-winded way of saying "public masturbation" that I've ever seen.

      Sorry, but I have no respect for anyone describing themselves as a "futurist"; or as someone who's out to "get people to think" for that matter - people do that on their own, when you yourself present something thoughtful.

      And the only reason anyone mentions Kurzweil's lack of (meaningful) accuracy is his constant self-congratulation on how accurate he is - no one cares otherwise.
      • Where are these people who think about the future on their own?

        Even just the next few years, when their adjustable rate mortgage jumps to over 10%. Or when they have a half dozen drinks knowing they're driving themselves home in a couple of hours. Or when they change lanes without looking. Or when developing a product, apart from (possibly) the immediate revenue in the next quarter or two. Thinking about the future is very rare.

        Really, where do you live and work, where they think about the future in more than the

  • by bloosqr ( 33593 ) on Monday November 29, 2010 @06:24PM (#34381606) Homepage

    Our joke about Kurzweil was that he was someone who didn't take his "series expansion" to enough terms. What he does is look at emergent phenomena, notice the exponential growth curve (which occurs in a variety of phenomena from biology to physics to even economics), and from that draw the conclusion that everything (or particular aspects of technology, really) will continue to grow exponentially ad infinitum, to a "singularity" and so on. This is both intuitively and factually untrue because of resource and energy constraints (however one wants to define them for a particular problem). The point is that you can look at the same phenomena Kurzweil does and notice that new phenomena, technologies, etc. only look "exponential" initially, and then, for all the obvious reasons, flatten out further down the time curve, so in the end your curve really looks like a sigmoid function (given whatever metric you choose). The hard part is figuring out how quickly you'll hit the new pseudo-steady state, but it's certainly absurd to assume it never happens, which is what the absurd conclusions he draws are always based on.
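
    The point is easy to see numerically; a small sketch with arbitrary parameters (a logistic curve with ceiling K, started at the same value as a pure exponential):

        import math

        K, r = 1000.0, 0.5            # arbitrary ceiling and growth rate
        for t in range(0, 21, 4):
            exponential = math.exp(r * t)
            logistic = K / (1 + (K - 1) * math.exp(-r * t))   # logistic starting at 1
            print("t=%2d  exp=%10.1f  logistic=%7.1f" % (t, exponential, logistic))
        # The two are nearly identical early on; only near the resource ceiling K do
        # they diverge, which is why early data alone can't distinguish them.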

  • "Your manuscript is both good and original; but the parts that are good are not original, and the parts that are original are not good." Samuel Johnson, over 200 years ago.
  • Guy's extreme fear of death fails to grant him ability to predict future.

  • Assessing Kurzweil is a good yardstick for whether a person is capable of deep thinking. He's one of the slipperiest grease poles around. Yet sadly, he's usually miles ahead of the criticisms put forward.

    This article is not much of an exception. Kurzweil defines "common" as a few percent, the lower knee of the adoption S-curve. If you habitually think in exponential terms, one percent is common. What is one percent when the cost of genetic sequencing has decreased by five orders of magnitude in one decade?
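
    Spelled out, that rate dwarfs Moore's-law-style doubling every 18 months (a quick sketch re-expressing the figure above as an annual rate):

        sequencing_per_year = 10 ** (5 / 10.0)   # 1e5 improvement spread over 10 years
        moore_per_year = 2 ** (12 / 18.0)        # doubling every 18 months
        print("sequencing: %.2fx per year, Moore: %.2fx per year"
              % (sequencing_per_year, moore_per_year))   # ~3.16x vs ~1.59x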

  • Oh come off it, we all know it's just the Fermi-estimations of a man thinking out loud. It's not science, it is, as the summary says, punditry. Give over "exposing" it, we all know it's very, very far from rigorous or even (gasp) godlike. It exposes itself, we all know it's rubbish.

    But the ideas are a good enough conversation starter, and it's a possibly important idea to be talking about. So who wants to accept that Kurzweil isn't science and discuss the idea of the technological singularity instead?
  • Is there a difference?
  • The people in charge won't let that happen. It would change everything, and that would harm their profits. See how the potential of the internet, of the free interchange of information pushing knowledge and mankind forward, got badly crippled by copyrights, patents, lawsuits and so on. Getting to true AI is worse than just risking lives; it puts business at risk, especially if another one gets it, so all the ideas that could push forward in that direction are patented, taken, and not being able to be use
