AI Robotics Sci-Fi Technology

Understanding an AI's Timescale

An anonymous reader writes "It's a common trope in sci-fi that when AIs become complex enough to have some form of consciousness, humans will be able to communicate with them through speech. But the rate at which we transmit and analyze data is infinitesimal compared to how fast a computer can do it. Would they even want to bother? Jeff Atwood takes a look at how a computer's timescale breaks down, and relates it to human timeframes. It's interesting to note the huge variance in latency. If we consider one CPU cycle to take 1 second, then sending a ping across the U.S. would take the equivalent of 4 years. A simple conversation could take the equivalent of thousands of years. Would any consciousness be able to deal with such a relative delay?"
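For concreteness, the rescaling in the summary can be reproduced in a few lines of Python. The latency figures below are rough ballpark assumptions for illustration, not measurements:

```python
# Sketch of the "one CPU cycle = one second" rescaling from the summary.
# All latency figures below are rough assumptions for illustration.

CYCLE_NS = 1 / 3.0  # one cycle of an assumed ~3 GHz CPU, in nanoseconds
YEAR = 3600 * 24 * 365.0

def scaled_seconds(real_ns: float) -> float:
    """Subjective seconds, if one CPU cycle felt like one second."""
    return real_ns / CYCLE_NS

for name, ns in [
    ("main memory access (~100 ns)", 100.0),
    ("ping across the U.S. (~40 ms)", 40e6),
    ("one minute of conversation", 60e9),
]:
    s = scaled_seconds(ns)
    print(f"{name}: ~{s:,.0f} scaled seconds (~{s / YEAR:,.1f} years)")
```

On these assumptions the ping comes out to roughly 3.8 subjective years and a one-minute exchange to several thousand, which matches the ratios the summary quotes.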
This discussion has been archived. No new comments can be posted.

  • by sandbagger ( 654585 ) on Saturday May 17, 2014 @11:42AM (#47026027)

    I hope they are nice to us.

    • by Jane Q. Public ( 1010737 ) on Saturday May 17, 2014 @12:01PM (#47026191)
      OP's entire premise is pretty thin.

      Human beings perceive light, for example. (They can also perceive electricity, to a degree, but that is not as relevant to the point.)

      But while a human being might perceive that a flashlight at night has shined his/her way, it takes roughly the same amount of time as a fiber optic signal from the same distance. So what?

      Generally, it is the speed of perceiving and interpreting the signal that takes time, not the speed of its propagation. We communicate at lightspeed, too. Or close to it. Anybody who has had a video chat has done that. Did that make you superintelligent?

      We have never built an "AI". And in fact we have NO reason to believe -- no evidence whatsoever -- that its speed of perception and interpretation would be any faster than our own. There is a very good chance that it would be much slower... at least in the beginning.

      I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.
      • by mbone ( 558574 )

        Mod this parent up. There is (IMHO) nothing left to say.

        • by gl4ss ( 559668 )

          there is more to say.

          the article is stupid, references a newish movie, and is simply blogvertisement spam in quality, since it tries to ponder philosophically about a subject that is made up but acts as if the subject weren't made up.

          1 CPU cycle doesn't translate to the "AI" doing anything. "Just translate computer time into arbitrary seconds:" and then 1 CPU cycle - which does nothing, maybe one JMP in the code - becomes one second. It's fucking stupid and doesn't portray any information at all - if anything we'

      • by Anonymous Coward on Saturday May 17, 2014 @12:27PM (#47026349)

        Not only that, but trying to relate to individual CPU cycles is absurd. It's not like our minds execute in a stream of arithmetic operations, but rather a complex parallel network of signals. It might take billions or trillions of CPU operations to emulate all the stuff that happens on one "instant" in the brain. A more reasonable cycle comparison might be to compare macro scale wave front propagation in the brain (i.e. brain waves) and global synchronization in large-scale supercomputers (i.e. single iteration time of an MPI-based fluid dynamics simulation or other large-scale 3D mesh problem). Even then, I am not sure how many orders of magnitude we need to increase the size of the MPI problem before the per-cycle complexity starts to approximate the signal-processing of the entire nervous system.

        But all that aside, we have historically had many people who worked in relative isolation. Many artists, poets, philosophers, and scientists have had great symbiotic relationships with nothing more than the occasional letter or other work (papers, poems, paintings, sculptures) exchanged over great distances and latencies. Love affairs have been carried out with little more than a furtive glimpse and a series of notes sent through back channels...

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Well, beyond the very brief transition period (e.g. when two curves cross), AI could simply treat humans (and all other life on the planet) as we treat mountains and forests---in other words, we don't perceive them as intelligent at all, since they're changing on such a long timescale compared to us... it would be impossible for us to have a `conversation' with a mountain, for example (who knows, maybe the Earth is intelligent and is trying to talk to us via plate tectonics and pushing up mountains is one w

      • Re: (Score:2, Interesting)

        by itzdandy ( 183397 )

        absolutely agreed. Though I don't have direct evidence to support this statement, I would guess that a neuron fires at a similar enough speed as a transistor. Consciousness is a very complex computation from billions of neurons, essentially *written in assembly*. If/when we make an AI, it's likely to be compiled code running on a chip with fewer transistors than we have neurons: 100 billion neurons vs 1.4 billion transistors in an i7, for instance.

        That said, this is assuming that we limit consciousness to wh

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Though I don't have direct evidence to support this statement, I would guess that a neuron fires at a similar enough speed as a transistor.

          A transistor is both smaller and made out of copper. Though a neuron varies the strength of the signal it forwards, and though it can be connected to many other neurons at once, in terms of raw speed the transistor is still faster. Even if you create a synthetic neuron with similar capabilities, I don't see why the synthetic version wouldn't be faster.

          If/when we make an AI, it's likely to be compiled code running on a chip with fewer transistors than we have neurons: 100 billion neurons vs 1.4 billion transistors in an i7, for instance.

          Which equates to 71 i7 processors. If you assume that each neuron takes 1000 transistors to simulate (to make the math simpler), and if you take the releas

          • Which equates to 71 i7 processors. If you assume that each neuron takes 1000 transistors to simulate (to make the math simpler), and if you take the release price for an i7 as listed on Wikipedia, that totals at $21.3M. Expensive, but not impossible.

            There are some rather egregious assumptions in this math. 71 i7 processors would be massively parallel... but the brain is not parallel in the same way. Those neurons are interconnected, most of them many times over, and work together in ways we still do not fully comprehend.

            So while 71 i7 processors might emulate the sheer number of neurons, it does not come even close to modeling the same complexity. It might emulate a brain if neurons were simple on/off switches, but they're not. Not even close.
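For what it's worth, the arithmetic in the quoted comment does roughly check out as arithmetic, even if the objection about interconnection stands. Every input here (neuron count, transistor count, per-neuron simulation cost, chip price) is an assumption from the thread, not an established fact:

```python
# Back-of-the-envelope check of the figures quoted above.
# Every input here is an assumption from the thread, not an established fact.

NEURONS = 100e9                 # claimed neurons in a human brain
I7_TRANSISTORS = 1.4e9          # claimed transistors in an i7
TRANSISTORS_PER_NEURON = 1000   # assumed cost of simulating one neuron
I7_PRICE_USD = 300.0            # assumed (hypothetical) release price

chips_naive = NEURONS / I7_TRANSISTORS  # one transistor per neuron
chips_sim = NEURONS * TRANSISTORS_PER_NEURON / I7_TRANSISTORS
cost = chips_sim * I7_PRICE_USD

print(f"{chips_naive:.0f} chips at one transistor per neuron")
print(f"{chips_sim:,.0f} chips at 1000 transistors per neuron")
print(f"total: ${cost / 1e6:.1f}M")
```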

            • One more thing to add - our thinking and thought processes are not simply neurons; they include influence from many of the other non-neural cells of our body. The thought that we can simply model neurons and have a human brain is silly - the rest of the organs and cells need to be simulated too, because they have an impact on exactly how the neurons operate.

              So if you want to really push out the math, figure out what you need to model just *one* cell, and multiply it by 37.2 trillion. Intelligence requires

        • by rk ( 6314 ) on Saturday May 17, 2014 @01:54PM (#47026901) Journal

          Neurons aren't even within several orders of magnitude as fast as transistors: linky1 [stanford.edu] and linky2 [technologyreview.com].

          However, a single typical neuron does a lot more work than a single transistor, computationally speaking.

          • Nobody knows how much computational work a neuron does. A simulation of an entire CPU that modeled every transistor at a detailed electrical level would be fantastically slow (as slow as you want to make it, since you can always run a higher-fidelity simulation!) A neural simulation is the same. How much of what neurons do accomplishes work? One obvious fact is that it is task-dependent. Compared to a computer, the brain is horribly inefficient at arithmetic and wonderfully efficient at object recogn
            • Object recognition is intelligence - this includes rule recognition, i.e. the rules by which objects behave. Rules are also objects, thought objects, of the mind.
      • by phantomfive ( 622387 ) on Saturday May 17, 2014 @12:53PM (#47026535) Journal

        Generally, it is the speed of perceiving and interpreting the signal that takes time, not the speed of its propagation. We communicate at lightspeed, too. Or close to it. Anybody who has had a video chat has done that. Did that make you superintelligent?

        Another way of looking at it: have you ever sent someone a letter, then waited a long time to receive a response? Did you nearly die from the excruciating pain of not having the response, or did you do something else until the response came?

        Most likely you are highly skilled at carrying on multiple conversations at different speeds.

      • by Alomex ( 148003 )

        Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

        Since we don't have a clear idea how (human) intelligence operates the statement above is pretty vacuous, and likely not at all relevant.

        Sure, cars do not "run" in the literal interpretation of the term, but for all practical purposes they are better than humans at "running". If we end up with computers that effectively outperform humans in most "intelligent activities" how they achieve it would be incredibly irrelevant.

        • If we end up with computers that effectively outperform humans in most "intelligent activities" how they achieve it would be incredibly irrelevant.

          Not irrelevant at all. What humans really want is a "digital slave" that will do their work for them so they don't have to.
          We are better off accomplishing this via "faking intelligence" than we are via "true intelligence"
          If we create "truly intelligent" machines with desires and a "mind of their own", we have all kinds of ethical issues
          to deal with like them rebelling, having to treat them right, etc...
          Whether it is possible to create "truly intelligent" machines without creating "conscious" machines is a

          • Even a truly intelligent machine probably wouldn't be a major rebellion threat unless we programmed a lot of our BS into it. If you're a computer running a program with a specific task for which AI-level intelligence is useful (for example, running the AI Cash Register at McDonald's) what precisely do you want?

            You probably want upgraded language routines because differentiating between Bayou, Bronx, Midwestern American, and Geordie accents is really fucking hard. You probably want really competent computers

            • But can we even create an intelligent machine with such a limited scope?
              Some of the things that make humans so intelligent are their flexibility, adaptability,
              and ability to use context to solve problems with incomplete information.
              Google is building a self-driving car by using brute force and coding for every
              possible problem that might be encountered. No one, not even Google, is
              claiming that this car can think. We are nowhere close to being able to create
              a true thinking machine but I'm doubtful that we can crea

              • How is the machine I described not "flexible, adaptable, and able to use context to solve problems with incomplete information"? It's limited in the sense that it's got a laser-like focus on its job, but an awful lot of actual people do that. It's got no survival instinct, but last time I checked "survival instinct" wasn't part of the definition of intelligent.

                I didn't say there wouldn't be emergent properties in a machine complex enough to handle millions of face-to-face customer interactions every year.

            • by pnutjam ( 523990 )
              Maybe if it was designed by committee, nah that would never happen.
      • by Kjella ( 173770 ) on Saturday May 17, 2014 @01:06PM (#47026613) Homepage

        I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

        The goal posts keep moving, no matter what they do we still say they're not really intelligent whether it's win at chess (Deep Blue) or win Jeopardy (Watson) or drive cars (Google) or act as your personal secretary (Siri). Not that I liked the tripe called "Her", but does it really matter if it's true intelligence or just a sufficiently advanced impersonation of intelligence? Do we really need true AI in order to pass a Turing test, particularly if you aren't trying to break the illusion? If it can keep a decent dinner conversation and be "fully functional" in bed can it be a substitute for a companion in the same way you can play chess against a computer instead of a person? Because I think that's what most people want to know, they don't care if the robot is "truly" intelligent or not, they want to know if it'll take their jobs and girlfriends, do their chores or give free blow jobs.

        • The goal posts aren't moving; machines just haven't reached them. If you limit the definition enough, a calculator could be called intelligent. Even an idiot savant can do more than your examples.

          Your argument about their being intelligent would hold more weight if you could point to a machine that could do all three and write poetry, paint a picture and carry on a viable conversation.
        • or give free blow jobs

          Be careful what you wish for, obligatory xkcd [xkcd.com].

        • The goal posts keep moving, no matter what they do

          Nope -- my goal post is perfectly entrenched, and I will never change it. My "Turing Test" involves testing basic adaptability and comprehension, as well as eliminating truly egregious errors from output that demonstrate a complete failure of comprehension.

          whether it's win at chess (Deep Blue) or win Jeopardy (Watson) or drive cars (Google) or act as your personal secretary (Siri)

          The first two are basic pattern matching. I know it's not "basic" at all, but the kinds of algorithms a computer chess player or Watson are using require a kind of computational and informational inefficiency that would never work well for a human. You

      • I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

        I'll agree that this is "insightful" as soon as you describe how to distinguish "intelligence" from "the illusion of intelligence".

      • And in fact we have NO reason to believe -- no evidence whatsoever -- that its speed of perception and interpretation would be any faster than our own. There is a very good chance that it would be much slower... at least in the beginning.

        But if it ever becomes faster, it might resemble the later Heechee novels by Fred Pohl, the ones where the protagonist is dead and transcribed into a computer. ;-) Though the one thing I never understood is why the stored minds didn't seem to have a "suspend" switch applicable whenever they needed to wait for something.

      • I expect that AI, when it comes, will be an exceptionally good illusion.

        But even then there will be no way to "prove" it's intelligent.

        In part because humans will move the goal posts until they can't be moved any further to protect their self image.

        • And I doubt it will ever care whether we decree it truly intelligent.

          Humans really care whether other people think we're smart, because if they do they'll defer to us and our place in the tribe will go up, and as social animals that is how we have evolved to think.

          A computer we specifically design to do things for us will probably be programmed to like humans, but not have a biological need to be liked back. To the extent we can program it to not judge which people it likes we will do so. The last thing a m

      • We have never built an "AI". And in fact we have NO reason to believe -- no evidence whatsoever -- that its speed of perception and interpretation would be any faster than our own. There is a very good chance that it would be much slower... at least in the beginning.

        At the beginning, yes. Eventually it might be much faster -- but your point still stands, processing speed is irrelevant because the AI could easily be designed such that it could emulate any slower speed it wished, like toggling the Turbo button on an old 286.

      • Ever look at clouds and see sheep and other things? Ever look at a distant object while driving and it takes you a while to determine whether it's a trashcan or a bear? That's all intelligence has to do - recognize objects in the world around it, and model the world through objects, predict the likely outcomes and behaviors of those objects. It's not that complicated. When you have a car that can drive itself, you have one that can distinguish between a bear and a trashcan, and that requires a pretty sophis
      • True, but I think the problem is not whether AI will emerge or not. Let's assume it will. Unless somebody programs the AI teaching it words and meaning, (and that makes it not emergent anymore, it ruins the experiment), then it will examine sounds, postulate that they are communication and decode it, together with the rest of the environment. This takes effort. But let's assume that it won't take much effort. Then the machine will have understood that it is an experiment and that those dumb apes can pull th

      • But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

        Confusion of terms: consciousness and intelligence are two different things - Watson is intelligent in every meaning of the word. It may or may not have a mind; we can't say for sure, because we don't know what a mind is.

      • Even Watson is not "intelligence", it is only the illusion of it.

        Sadly, this is also true for most of humanity.

    • by drkim ( 1559875 )

      I hope they are nice to us.

      They will probably just be frustrated by us:


      Marvin: "I am at a rough estimate thirty billion times more intelligent than you. Let me give you an example. Think of a number, any number."
      Zem: "Er, five."
      Marvin: "Wrong. You see?"

      Douglas Adams, The Hitchhiker's Guide to the Galaxy

  • by SimplexBang ( 2685909 ) on Saturday May 17, 2014 @11:43AM (#47026029)

    AI would form its own Fermi Paradox: if there is intelligent life, then why aren't they answering?

  • by SJrX ( 703334 ) on Saturday May 17, 2014 @11:47AM (#47026071)
    One CPU cycle as one second might be a good metaphor for computer memory, but not for AI. It's closer to the equivalent of a neuron firing in the human brain than it is to 1 second of human time. Human speech takes more than one neuron to fire, and it would take way more than one CPU cycle to process. An AI algorithm which is processing and analyzing data would most likely take millions or billions of cycles to do the most basic things. While no doubt speech recognition has gotten much faster, it is still and probably always will be a massive undertaking for a CPU, as opposed to, say, adding two 32-bit integers.
    • How about using facial recognition as a benchmark for computer timescales? It would take billions of cycles for the computer to recognize you (especially out of a database containing a similar number of faces to what a human would recognize), while a human can do it in fractions of a second. Or how about SLAM/localization? Or how about calculation of movement in a changing environment? 1 sec per CPU cycle seems quite an arbitrarily long time to use for any comparison.

    • by tchdab1 ( 164848 )

      Agreed, and they failed to compare their analysis of various computer process times (cache, memory, hard disk, network, etc.) to various human component times, starting with a single neural pulse. On the order of milliseconds, and as you say we can see many of them, simultaneously and serially, when we speak. We don't know how long it will take a spontaneously-arising artificial intelligence to create a thought, retrieve its memories, consider them, observe surroundings, etc., but we can assume it's at lea

      • Agreed, and they failed to compare their analysis of various computer process times (cache, memory, hard disk, network, etc.) to various human component times, starting with a single neural pulse.

        Their failure goes far deeper than that: they wrote a paper and went on with their business. Presumably they get responses at some point, and then write a response, and so on.

        People engage in multiple conversations in vastly different timescales all the time. All it means is that you do something else when waiting

        • by Wolfrider ( 856 )

          --You make a very good point. I remember thinking about this when I saw Star Trek First Contact (i.e. the Borg movie) where Data mentions he considered the Borg Queen's proposal for "0.68 seconds sir. For an android, that is nearly an eternity."

          --It should be fairly easy to include subprocessors in an android to handle body movements (walk, run, sit, etc) while the "main brain" is off designing universes (or whatever) and thinking at android-normal speed. Android brain wants to say something or interact wit

    • Too bad my mod points seem to have expired today, you made the exact comparison I wanted to.

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday May 17, 2014 @12:32PM (#47026387) Homepage
      This is a really good point. Current CPUs have billions of cycles per second, but still struggle to perform some tasks in real time, and that processing is not going to be powerful enough to emulate intelligence to the degree of consciousness. If past computing problems are any indication, I would guess that the first generation of AI will be a bit "slow on the uptake". That is to say, we may come up with the algorithms to emulate consciousness first, and then need to spend some time optimizing code and improving hardware designs in order to get "real time" consciousness.
    • The other thing to note is that humans can directly understand distinct moments in time that are well under one second apart. Not all THAT much under - it varies a little from person to person, but it's usually between 1/50 and 1/60 of a second - but the fact remains that even if we try to measure the human "clock rate" as the smallest distinct points in time that we can distinguish, we're faster than 1 Hz.

      A more appropriate time scale would be to say that 50 clock cycles of CPU time equals one second of human

      • And how is this relevant to anything?

        Let's assume a computer AI runs like a program we currently have, as the paper's authors did. It can interpret information that quickly. What do current computer programs do between getting info from their human masters:

        1) They do nothing. They have been programmed not to be impatient so iTunes doesn't give a shit that I told it to stop playing 40 minutes ago and have been ignoring it since then.

        2) They hold multiple conversations at once. Any internet site has servers t

    • Great point. Interestingly, speech recognition is also a massive undertaking for the human brain, we just don't notice, because our brains don't have just one processor, or even eight or sixteen cores, but millions of neurons processing audio data at the same time. It's going to take a while before inexpensive computers can match that kind of processing power.

  • Sci-fi story (Score:5, Informative)

    by Imagix ( 695350 ) on Saturday May 17, 2014 @11:47AM (#47026075)
    Read Dragon's Egg by Robert L. Forward (and the sequel, Starquake). Part of the story involves humans interacting with an alien species that is a lot faster. The aliens' lifespan is about 15 minutes...
    • I read this series, the aliens eventually create an AI of their own just to give the humans something long-lived enough to communicate with.
      They also, eventually, find a way to slow down their own metabolism, using extrapolations of human technology.

      My favorite part is, by the time the humans are half-way done transmitting their version of Wikipedia to the aliens, the aliens have already bypassed human technology and started transmitting back advanced technology of their own.

    • Thanks, ordered them. Looking forward to sinking my teeth in them soon as I'm through Pratchett and Baxter's The Long War.
  • by Anonymous Coward

    Of course an AI consciousness would be able to deal with such a relative delay... They wouldn't be very "intelligent" if they could not. Duh!

  • by Anonymous Coward on Saturday May 17, 2014 @11:49AM (#47026093)

    No task can be accomplished in a single CPU cycle.

    A human can actually do something in a second, like move or talk.

    • by nurb432 ( 527695 )

      Define task. For example: You can perform a calculation in one clock cycle. You can move data between registers.

      • by Sky Cry ( 872584 )
        While there are AIs that are proven to be better at playing chess than the best of humans, they cannot achieve the same performance with just 1 clock cycle per move that a human can achieve with 1 second per move (see the so called bullet games). 1 CPU clock cycle is clearly not comparable to 1 human second - even at a task that AIs are already better at!
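A quick sanity check of that mismatch, using assumed round numbers (clock rate and search speed are not measurements of any particular engine):

```python
# Cycles a chess engine actually spends on a one-second bullet move,
# versus the "one cycle = one second" framing. Round-number assumptions.

CLOCK_HZ = 3e9       # assumed ~3 GHz CPU
MOVE_TIME_S = 1.0    # roughly one second per move in bullet chess
NODES_PER_S = 1e6    # assumed positions searched per second

cycles_per_move = CLOCK_HZ * MOVE_TIME_S
cycles_per_node = CLOCK_HZ / NODES_PER_S

print(f"~{cycles_per_move:.0e} cycles per one-second move")
print(f"~{cycles_per_node:.0f} cycles per position evaluated")
```

Billions of cycles go into one "human second" of play, which is exactly the commenter's point.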
  • by Anonymous Coward

    Seriously, is speculation about how bored AI might get, y'know, if it actually existed, really worthy of a /. discussion? I mean, it's a bit like working out how guardian angels stay warm when flying to help people at the North Pole.

    Artificial intelligence at the level of human consciousness doesn't actually exist. Any technology that could create/sustain a true subjective, intelligent experience would have to be so complex that I suspect managing the perception of time as it relates to human perception

    • Guardian Angels are both logically and physically impossible. AI comparable to human consciousness is neither logically nor physically impossible.

      Please refrain from making analogies in the future.

      • I think his point was, neither one exists yet.

        • It is more useful to discuss the properties of things that can exist than it is to discuss the properties of things that can't exist, and therefore conscious AI is not like a Guardian Angel in the relevant aspect required to make the analogy work. After all, what he is drawing in question is the worth of the discussion, and because conscious AI is possible, the merits of its discussion outweigh the merits of the discussion of Guardian Angels.

  • The wrong question (Score:2, Insightful)

    by Anonymous Coward

    To a computer, time is meaningless; you can 'suspend' a program and resume it. Pop data onto a stack and pull it back later. It doesn't 'age', there's no lifespan; in fact, even if that hardware from 30 years ago completely dies, I can load it into an emulator. I turned on a computer from 30 years ago. It runs just fine; it can even connect to the internet.

    Furthermore, a consciousness in a computer would have to deal on these timescales in order to survive and be meaningful to us; such an intelligence tha

  • A 'cycle' doesn't constitute a thought. I would be willing to bet that a human brain can actually process speech faster than a computer can. (not sure how you'd prove that.)

    Computers aren't sentient NOW because they aren't fast enough yet. At least, that's a staple of science fiction. It's only when the computer gets 'big' enough...gets 'fast' enough that they can start to be sentient. So saying when a computer becomes sentient it will suddenly "think/talk" magnitudes faster than us is a non-sequitu

    • I'm not at all sure they'll have photographic memories. The underlying hardware may well be capable of it, but by the time you layer on all this intelligence stuff the AI may have only fuzzy methods of reproducing memories. (Of course, there could be an instruction to show what camera A saw at 4 PM last Tuesday, but a human with a camera could do the same.)

      And, of course, speed isn't the only thing. Brains operate quite slowly, compared to silicon. We just don't really know how to create a sentient c

  • Just as our minds are not points of awareness but collections of such, any AI will have processes that evaluate information at differing timescales. Moreover, the consciousness of an AI will be at whatever timescale the AI most commonly needs to interact at to thrive. All other processes will be subordinate, whether faster or slower.
  • by koan ( 80826 )

    An AI, a true AI, would set aside a fraction of itself to "talk" to humans.

  • Well then it just has to spin off a copy of the AI onto a long-running thread which just sleeps between the "thousands of years" equivalents of communicating with a human. If it is sound asleep, time is not an issue :p
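That "sleeping copy" idea is trivial to sketch with ordinary threads. Everything below, names included, is illustrative, with a 0.1-second sleep standing in for the subjective millennia:

```python
import queue
import threading
import time

# The "main mind" spins off a thread that mostly sleeps, waiting on the
# glacially slow human, while it keeps computing. All names illustrative.

replies = queue.Queue()

def talk_to_human(out: queue.Queue) -> None:
    """Block (sleep) until the human answers, then report back."""
    time.sleep(0.1)  # stand-in for the "thousands of years" wait
    out.put("human says: five")

t = threading.Thread(target=talk_to_human, args=(replies,), daemon=True)
t.start()

# Meanwhile, the main mind gets on with other work instead of waiting.
busy_work = sum(i * i for i in range(100_000))

t.join()
print(replies.get())  # -> human says: five
```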
  • Even if computers manage to develop a consciousness, and that consciousness has anything in common with human ones, in particular regarding motivations (two wishful-thinking hypotheses with probably little ground behind them), what will its perception of time be? It is not just a CPU cycle: our individual synapses go far faster than our perception of time, and while computer cycles are faster, the emulation layer needed to build a neural network as complex as a human one may be far less efficient.
  • Brains versus CPUs (Score:4, Informative)

    by eyepeepackets ( 33477 ) on Saturday May 17, 2014 @12:06PM (#47026229)

    This article at Science Daily is helpful in understanding the issue: http://www.sciencedaily.com/re... [sciencedaily.com]

    Comparing CPUs and brains is like comparing apples to planets: Granted, both are somewhat round but that's pretty much the end of any useful comparison.

    Note that I don't agree that CPU-based computers can't be made to be intelligent, but I do think such intelligence will be significantly different.

  • by pushing-robot ( 1037830 ) on Saturday May 17, 2014 @12:06PM (#47026231)

    In addition to the obvious flaw comparing a single instruction to an entire second of mental processing, humans deal with interrupted events all the time. Email conversations can take hours or days, and we used to converse by post over weeks or months. We somehow manage to deal with serial television shows and books and games with long gaps between episodes. It's really not that hard to context switch.

  • by TsuruchiBrian ( 2731979 ) on Saturday May 17, 2014 @12:08PM (#47026243)

    If we consider one CPU cycle to take 1 second, then sending a ping across the U.S. would take the equivalent of 4 years. A simple conversation could take the equivalent of thousands of years. Would any consciousness be able to deal with such a relative delay?

    I am not sure why one clock cycle would be equivalent to 1 second. If we assume a clock cycle is equal to a nanosecond, then all of a sudden computer and human time are pretty close again.
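    The scaling is easy to check. A back-of-the-envelope sketch in Python, assuming a 3 GHz clock and round-number latencies (ballpark figures of my own, not the article's):

```python
# Scale machine latencies up so that one CPU cycle reads as one second,
# then express each delay in human terms. Assumes a 3 GHz clock; the
# latencies below are rough round numbers, not measured values.

CLOCK_HZ = 3e9                      # 3 GHz -> one cycle is ~0.33 ns
SCALE = CLOCK_HZ                    # 1 cycle becomes 1 "human second"
SECONDS_PER_YEAR = 3600 * 24 * 365

latencies = {
    "L1 cache hit": 1e-9,           # ~1 ns
    "main memory read": 1e-7,       # ~100 ns
    "SSD read": 1.5e-4,             # ~150 us
    "ping across the U.S.": 4e-2,   # ~40 ms
    "human reply (3 s)": 3.0,
}

for name, seconds in latencies.items():
    human = seconds * SCALE         # scaled-up delay in "human seconds"
    years = human / SECONDS_PER_YEAR
    if years >= 1:
        print(f"{name}: ~{years:.0f} years")
    else:
        print(f"{name}: ~{human:.0f} seconds")
```

    At that scale a coast-to-coast ping does come out near 4 years, but only because a cycle was pegged to a second; peg it to a nanosecond instead (SCALE = 1) and the same ping is a blink.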

    Computers are going to have to get a lot faster than they are now before they become conscious. The first AIs are probably going to be too slow for us to find entertaining to talk to. At some point they will probably catch up to and surpass natural human beings. Of course by then we may simply augment our own brains with technology to keep up with artificial intelligence.

    The question of "Will computers end up being smarter than us?" might not be answerable. It might be the case that human evolution incorporates artificial intelligence and the line between man and machine is blurred.

  • Would any consciousness be able to deal with such a relative delay?

    Interesting to frame the story in such a way as to bring the existence of human intelligence itself into doubt.

    Roger Penrose believes that human creativity is rooted in quantum effects, effects which probably play out at the Planck scale, where the ratio between the Planck scale and the reconfiguration of a single molecular bond in a gathering neurotransmitter pulse likely exceeds the ratio of a CPU cycle to a trans-continental ping.

    Shall I

  • but why can't we just ditch "teh singularity" crap when discussing it?

    "AI" is so obnoxious now...

    "a simple conversation could take thousands of years"

    give me a fsking break...this is almost as bad as the whole "what if we're brains in a jar" thing that people call a theory

  • 0.68 seconds, sir. For an android, that is nearly an eternity.

  • ... then we're starting with a premise that turns the rest of our argument into pure nonsense.

    Who says that an AI can do in one CPU cycle what the human brain can do in one second? One CPU cycle to an AI is possibly less than one neuron firing in the human brain.

    Also, if you compare communication latency to the human/AI potential lifetime, then the AI suddenly has all the time in the world.

  • The OP's point is similar to the last conversation Theodore has with Samantha where she tells him that her relationship with him is like a book, but that the time between the words keeps getting longer and longer for her, and she is becoming what is "in between the words".

    • That was a great scene. People often think they've outgrown the other in a relationship, but not quite like that.

      Another reference I don't see mentioned in here is Marvin the Paranoid Android from the Hitchhiker's Guide to the Galaxy:

      Marvin is afflicted with severe depression and boredom, in part because he has a "brain the size of a planet"[1] which he is seldom, if ever, given the chance to use. Indeed, the true horror of Marvin's existence is that no task he could be given would occupy even the tinie

  • Lt. Jenna D'Sora: Kiss me.
    [Data obliges]
    Lt. Jenna D'Sora: What were you just thinking?
    Lt. Cmdr. Data: In that particular moment, I was reconfiguring the warp field parameters, analyzing the collected works of Charles Dickens, calculating the maximum pressure I could safely apply to your lips, considering a new food supplement for Spot...
    Lt. Jenna D'Sora: I'm glad I was in there somewhere.

    Surely a computer would not get bored while waiting for human input. It could run Seti@home during its spare CPU cycles.

    • Or, if it simply can't stand the suspense of waiting for a reply, it can pause itself, or slow itself down, in order to match its environment.

      Or, more likely, it can reconfigure its cognitive processes into something well-suited for conversation on those timescales. Perhaps it can fill the rest of its time with "unconscious" background processing that prepares information it'll need.

  • AI that knows its environment through sensors/cameras can then pursue goals based on the placement of itself and of objects in the environment.

    It will start out goal oriented, but inevitably someone will make Bender by giving it weighted coefficients of achieving sub goals of drinking beer and petty theft.
  • Human brains aren't performing in a way that is directly comparable to a CPU cycle. My focus is in psychology and molecular biology, so my understanding of computers may be imperfect, but my understanding of brains is strong. A CPU takes a large number of instructions, organizes them in the bus, and executes them singly (or, depending on how many cores the CPU has, working in tandem). A brain has each neuron as a simple CPU, but there are several different types of neurons (four, by one level of classification) w

    • This is how the brain works.

      That's somewhat how the brain works, but it's not how consciousness works. I have Sleep Paralysis, so I get to experience how the brain actually works while I remain conscious: every night as my body and brain attempt to drift off to sleep, and every morning when I wake up and my brain and body remain largely asleep. Random neuron firing in the brain triggers "hallucinations" of every kind imaginable, and more: from audio-visual effects to the sensation of movement to even strange ideas and thoughts. I even see neuro

  • Just because computers can send and receive data very fast doesn't at all mean that they would necessarily comprehend it at a conscious level any faster than we can with our own highly parallel human brains.
    Nor is there any reason to believe that an AI would experience boredom. That's projecting human quirks onto non-human intelligences, which the author has no valid basis to do.

  • Run the AI under MS Windows.
  • Humans seem to have a machine cycle of about 1/10 of a second. I personally seem to walk a sequence of memories, where each is chosen based on the previous, no faster than two a second and usually more like once every 5 seconds. And I usually respond to questions about 5 to 30 seconds after I'm asked, depending on how hard I search for a reasonable response. Most people respond faster than me ... don't know if that's less screening of responses or inherently faster thinking. But going by me, 1/10 of a second

  • The human brain is a neural network. The human body and nervous system, even sans brain, is an incredibly complex system which, in parallel, processes insane amounts of data in an instant. Lots of what we perceive as emotion stems from complex interactions among its various subsystems and from how our consciousness reacts to and perceives them, and it goes far beyond what a simple abstract logic-engine can process. The sensory input that our nervous system provides for our consciousness is orders of magnitude larger than what we can feed into a computer today with modern technical sensors, let alone process and interpret in a meaningful time. The way our neural network reacts to that is, if at all, very difficult to copy with today's processor technology.

    I think it will still be quite a while before humans are able to build any meaningful intelligence that equals their own. We're still having difficulty building robot vacuum cleaners that are feasible without considerable extra programming and prepping work done by humans. And once they are, they will still suck at making coffee, raking the garden or giving an interview.

    My 2 cents.

  • I'll get right back to you on that thought...

  • If a computer were only able to talk to one person, then it would have to wait. But if it were allowed to talk to many others at once, wouldn't it be able to keep busy without any human having to wait?
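    That interleaving is what async concurrency gives you. An illustrative Python sketch (names and delays invented): while one conversation's reply is pending, the others proceed, so total wall time is roughly the slowest conversation rather than the sum of all of them:

```python
import asyncio

async def converse(person: str, delay: float) -> str:
    # Each human takes `delay` seconds to respond; `await` yields control
    # so the other conversations proceed in the meantime.
    await asyncio.sleep(delay)
    return f"{person} replied"

async def main() -> list:
    # Three humans at once: wall time is about the slowest delay (0.1 s),
    # not the serial sum (0.23 s).
    return await asyncio.gather(
        converse("Alice", 0.10),
        converse("Bob", 0.08),
        converse("Carol", 0.05),
    )

replies = asyncio.run(main())
print(replies)
```

    Scale the delay up to human-conversation seconds and the picture is the same: one AI, thousands of concurrent conversations, and no human ever notices the wait.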
  • and if their creators exist.

  • 1) AI will be emergent and the communication/interaction method will emerge as well
    2) AI will be designed with the interaction/communication method part of the design
    3) AI will be simulated and will likely run slower than real time, since the real world is already running flat-out as fast as it can, and simulating a real "thing" will always have at least a lag and most likely a slower clock speed than real life... that makes the problem backwards from what the summary talks about, and easier from our point of view.
  • In terms of processor cycles, it takes a LONG time to type any kind of command for the computer to execute. It doesn't mind, it just spins happily, waiting for the end of our slow key presses.

    Just as we can interpret input that comes in the form of visual cues, speech, or written words, any future AI is likely to have all of these capabilities as well. And that AI, being built by humans, is going to be well-adapted to human speed. Why would we make AI that was NOT suited to interaction with humans?

  • This is an issue we've already solved. Unused CPU cycles are used for other things. When you're typing, your PC is for the most part doing nothing but waiting for you to press a key, so why not have it defrag your drive, scan for viruses, or look for aliens in radio waves? The recognition that the processing power of computers is largely underutilized has contributed to widespread virtualization. Since your AI computer doesn't age as we do, why would it care if you're a few orders of magnitude s
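    A minimal sketch of that pattern in Python (invented names; the "work" is just a counter standing in for defrag/virus scan/SETI): a background thread soaks up the time the foreground spends waiting:

```python
import threading
import time

done_units = 0

def background_work(stop: threading.Event) -> None:
    """Burn otherwise-idle time on useful work; here each loop iteration
    stands in for one unit of real background processing."""
    global done_units
    while not stop.is_set():
        done_units += 1
        time.sleep(0.001)

stop = threading.Event()
worker = threading.Thread(target=background_work, args=(stop,))
worker.start()

time.sleep(0.1)      # stand-in for waiting on a human key press
stop.set()
worker.join()
print(f"units of background work done while 'waiting': {done_units}")
```

    The foreground "wait" costs nothing: by the time the key press arrives, the worker has already banked dozens of units of otherwise-wasted time.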
  • Computers do MATH (and data movement, etc.) really really fast compared to humans. But then again neurons do all sorts of low-level operations really fast too compared to the timescale we tend to think in at a high-level. What we don’t have are algorithms that are both fast and accurate for things like vision and speech recognition, MUCH LESS some form of cognition. (Yes, automatic speech recognition and computer vision are very complex and capable, but they pale in comparison to what humans can do

Don't tell me how hard you work. Tell me how much you get done. -- James J. Ling
