First Demonstration of Artificial Intelligence On a Quantum Computer

KentuckyFC writes: Machine learning algorithms use a training dataset to learn how to recognize features in images and use this 'knowledge' to spot the same features in new images. The computational complexity of this task is such that the time required to solve it increases in polynomial time with the number of images in the training set and the complexity of the "learned" feature. So it's no surprise that quantum computers ought to be able to rapidly speed up this process. Indeed, a group of theoretical physicists last year designed a quantum algorithm that solves this problem in logarithmic time rather than polynomial, a significant improvement.

Now, a Chinese team has successfully implemented this artificial intelligence algorithm on a working quantum computer, for the first time. The information processor is a standard nuclear magnetic resonance quantum computer capable of handling 4 qubits. The team trained it to recognize the difference between the characters '6' and '9' and then asked it to classify a set of handwritten 6s and 9s accordingly, which it did successfully. The team says this is the first time that this kind of artificial intelligence has ever been demonstrated on a quantum computer and opens the way to the more rapid processing of other big data sets — provided, of course, that physicists can build more powerful quantum computers.
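The task described in the summary has a simple classical analogue: a nearest-centroid classifier over low-dimensional feature vectors. The sketch below is illustrative only; the feature values and training points are invented, and the quantum algorithm gets its speedup by estimating the same kind of distances on amplitude-encoded states, not by running a loop like this.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of 2-D feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

def classify(x, centroids):
    """Assign x to the label whose cluster centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Invented 2-D training features, one cluster per character.
training = {
    "6": [(0.98, 0.15), (0.95, 0.20), (0.97, 0.12)],
    "9": [(0.35, 0.93), (0.30, 0.95), (0.38, 0.90)],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

print(classify((0.90, 0.18), centroids))  # -> 6
print(classify((0.33, 0.91), centroids))  # -> 9
```

Classically, each classification touches every training vector, hence the polynomial cost the summary mentions; the quantum version estimates the distances in time that grows only logarithmically.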
This discussion has been archived. No new comments can be posted.

  • Captchas! (Score:5, Funny)

    by Lanforod ( 1344011 ) on Wednesday October 15, 2014 @10:28AM (#48150205)
    Crap. Now what are we going to do instead of using a captcha?!
    • by zlives ( 2009072 )

      i think we are ok until they teach it to recognize hand written "I vs l"

      • by Empiric ( 675968 )
        And we're quite okay on the AI claim until they teach it to critique in detail the respective statements "l think therefore l am" versus "I think therefore I am"...
        • by zlives ( 2009072 )

          oh I thought they were claiming that the recognition was done by Al, not AI.

        • Re:Captchas! (Score:4, Insightful)

          by Jane Q. Public ( 1010737 ) on Wednesday October 15, 2014 @01:43PM (#48152597)
          The phrase "artificial intelligence" does seem to get thrown around just a bit too freely these days. I don't see anything "artificial intelligence" about this at all. It's just an image recognition algorithm.
          • The truism about Artificial Intelligence is that once a cutting edge problem in AI gets solved, the masses just redefine it as "computer science algorithm". Image recognition was once the leading edge of AI. It's still AI, just not leading edge anymore (unless you're doing something completely novel, like doing it on a quantum computer). Intelligence *is* pattern recognition, of which image recognition is one type.

            • It's still AI, just not leading edge anymore (unless you're doing something completely novel, like doing it on a quantum computer).

              Indeed. The headline here shouldn't be AI; it should be that the algorithm ran in logarithmic time instead of polynomial time [wikipedia.org], as hypothesized.

            • It's still AI, just not leading edge anymore (unless you're doing something completely novel, like doing it on a quantum computer). Intelligence *is* pattern recognition, of which image recognition is one type.

              No, you have it backward. The truth is that NONE of it is "AI".

              We have no idea how to do AI, and labeling things "AI" when they just aren't waters down the whole concept.

    • Feed it crapchas [crapcha.com] until it catches on!
  • by gweihir ( 88907 ) on Wednesday October 15, 2014 @10:29AM (#48150217)

    And also no working quantum computer, except for very small toys. Pattern recognition is not AI.

    • If it has a name, it is not AI.

      AI is forever at the horizon, but that is also what makes AI research great.

      • Re: (Score:3, Informative)

        Pattern recognition, decision theory, game theory, and partitioning are AI subjects. AI isn't just the mysterious general-purpose thinking machine always on the horizon.

        Some pattern recognition uses neural networks for training.

        • by Anonymous Coward

          Pattern recognition is weak AI.

          Strong AI does not exist yet.

      • Let's not forget that AI stands for Artificial Intelligence.

        The key word is Artificial.

    • by QilessQi ( 2044624 ) on Wednesday October 15, 2014 @10:48AM (#48150461)

      You're right that the wording is overblown, but AI is a big field, and pattern recognition is a big part of it -- vision, voice recognition, decision making, and other facets of human intelligence all rely on automated categorization of inputs to some degree.

      Getting a tiny piece of the puzzle to work in a test tube is a necessary first step to bigger and better things. No one is going to put together a working brain in one shot (if ever).

    • Kind of what I was thinking. I had an ex who was doing machine learning 20 years ago.

      Training neural nets and the like to recognize patterns was seen as a step to machine learning, and a way to apply it to specific problems.

      But identifying the difference between a '6' and a '9'? I agree that this is 'AI' as much as me heating something in the microwave makes me a chef.

      This isn't 'AI' as far as I'm concerned. It's neat, it's cool. But it aint AI.

      • by captjc ( 453680 )

        Of course it is AI, not cutting-edge AI, but it is still AI. Just because it is now a mature, solved problem doesn't make it any less valid.

        It is like saying that someone doing the old "calculate the landing position of a cannonball fired at X velocity at Y angle" problem isn't doing physics because modern physics now involves super-tiny particles and/or traveling at speeds near the speed of light.

        • I'm more inclined to think it is more akin to calculating trajectories than it is AI.

          There's no 'intelligence', there's fancy pattern recognition.

          I have no idea of the formal definition of AI, but to me without some form of abstract decision making and actually applying it to something, it's just clever automation.

          Vending machines have been able to identify what kind of coin you put in for decades. That doesn't make them 'intelligent'.

          Is it a more sophisticated form of input that a keyboard? Sure. But, t

          • by Immerman ( 2627577 ) on Wednesday October 15, 2014 @12:15PM (#48151633)

            >I have no idea of the formal definition of AI

            You seem to be thinking of "Strong AI" - which is an actual thinking machine and the potentially immensely dangerous holy grail of AI research. All the various components - pattern recognition, decision-tree analysis, etc. - constitute Weak AI - basically everything that we can do on "autopilot" without conscious intervention.

            And incidentally there's a growing body of evidence that our own brains may be composed of a large number of complexly interacting "weak intelligence" modules. For example, there's a small area that appears to be dedicated just to face recognition - damage it and cognition is apparently unaffected, but you can no longer recognize faces. Stimulate it and strangers' faces seem to shift and look like someone you almost know.

      • by Half-pint HAL ( 718102 ) on Wednesday October 15, 2014 @01:50PM (#48152689)

        But identifying the difference between a '6' and a '9'? I agree that this is 'AI' as much as me heating something in the microwave makes me a chef.

        This isn't 'AI' as far as I'm concerned. It's neat, it's cool. But it aint AI.

        You're forgetting one important factor: they did it in a quantum computer. Do you know how difficult those things are to build? Do you appreciate that this makes them expensive? And can you see how this would mean that all the quantum computers in existence are very very small in terms of component numbers compared to computers that work within the bounds of Newtonian physics?

        The machine they used has 4 quantum bits. 4 quantum bits! That really is very little computing power. And with that they did a non-negligible task.

        But the important thing isn't that this was a breakthrough in AI research, it was that quantum computing reduced the task from polynomial time to logarithmic. I think the summary calling this "a significant improvement" is a bit of an understatement.

    • Posting uninformed opinions on slashdot is not NI.

      Okay, that's a bit hostile, but the point is that AI as a field refers to creating software solutions to problems without knowing the full details of what those problems entail. Solutions that can be easily applied to a completely different kind of problem without re-engineering.

      That doesn't make it amazing, or as good as a human (or even an insect), but the field isn't "Artificial Personhood" for a reason.

      • by CaptainDork ( 3678879 ) on Wednesday October 15, 2014 @11:03AM (#48150713)

        And it's a fucking misnomer.

        Artificial implies not real and intelligence implies thinking.

        Artificial intelligence is so unattainable that the original definition has evolved to something meaningless.

        True artificial intelligence is when a computer becomes depressed because it lost its connection to Facebook.

        • The way y'all keep equating humanness to intelligence, even as a joke, is a really stupid thing.

          Our tests, we humans use on each other to determine intelligence like IQ or GI tests? They aren't testing our humanity, our empathy, our emotionality, our drive, our neuroticism. They're testing, get this, our pattern recognition.

          The exact thing the OP was whining about being called AI.

            • Ah the good old "argument ad dictionarium", going exclusively to definition #2 to prove that your accusations of being narrow-minded are totes unreasonable.

              • Listen, asshole: I started this digital shit back when Moby Dick was a minnow and I've watched the wilting of the definition of AI over the years.

                I didn't write the definition I cited, right?

                • Listen, asshole: I started this digital shit back when Moby Dick was a minnow and I've watched the wilting of the definition of AI over the years.

                  I didn't write the definition I cited, right?

                  True, but you also didn't highlight the first definition, the definition the dictionary compilers thought was more important: "a branch of computer science dealing with the simulation of intelligent behavior in computers."

                  Visual pattern recognition is intelligent behaviour. Unless your definition of intelligence is predicated exclusively on higher-order reasoning and free will.

                  • The goal of AI was to be humanoid. Recall that I was there. The bar has been lowered, according to TFS to recognize the difference between a 6 and a 9.

                    • Visual processing is a subproblem of human cognition. Your complaint is akin to moaning to someone studying human anatomy trying to work out how the left ventricle works, on the grounds that "the left ventricle is a human being".

                      Remember that "Artificial intelligence" is the name of the research field, and it doesn't imply that an individual research outcome is "intelligent".

                      Now, if you were there at the start and you are disappointed in the progress in the field, you clearly had been reading too much science fiction.

                    • I apologize for reading too much science fiction and the nonfiction computer science, as well.

            • by gweihir ( 88907 )

              Yes, and that is the meaning I use when I say or hear "AI". All other things are just misdirection, intended to get lots of funding from people that also use this definition and are unaware of the fundamental dishonesty of many people that are in the AI field. Scientists that use "AI" for all the weaker stuff around are just liars.

              I am extremely tired of that whole field. Personally, I would sack everybody in there that has ever used this misdirection and make sure they never work as scientists again.

              • I was about to ask you what your preferred replacement for the term AI would be, but then I got to this point:

                I am extremely tired of that whole field.

                I don't think you really care enough to have thought of a better term, do you?

                • by gweihir ( 88907 )

                  Oh ye of simple mind. "AI" is just fine. Its misuse is not. And I should have said "I am extremely tired of a certain type of person working in that field".

                  • Oh ye of simple mind.

                    Oh ye of simpler.

                    "AI" is just fine. Its misuse is not. And I should have said "I am extremely tired of a certain type of person working in that field".

                    Are you a computer? Do I need to be extra carefully specific in order to avoid compile time errors? What is your preferred replacement for the term AI for referring to the wide field which you think using AI for is an abuse of the term?

              • Mod +1

            • What Webster writes is not really relevant (unless they marked it everywhere as 'layman usage').

              Relevant, however, is how the scientific field, more precisely the subdiscipline called 'AI' within computer science, defines the term itself.

              So no: 99% of AI has nothing to do with simulating 'human behaviour'; that is only the case in SF stories ;D

              • No, that is only the case of people in the business changing the marketing hype to adjust for their failure to meet the original definition.

                • Oh, and you are old enough to know the original definition?

                  Hint: there is none ... the thema is a congloromat of hundreds of disciplines that merged together under the umbrella 'AI'.

                  If you had studied computer sciense you would know that :)

                  • congloromat

                    If you had studied computer sciense you would know that

                    Can you distinguish between a 6 and a 9 and stuff?

                    • Yes, I can. But as my iPad randomly refuses to underline misspelled words in red, I don't see typos.
                      But good that you have those eagle eyes to 'spot' them as well as the neuron wiring to 'pattern match' them.

                      As I plan to translate a book into English, fancy correcting the spelling errors?

                    • I won't correct your spelling errors, but here's a site that would benefit you [missmanners.com].

        • >Artificial implies not real and intelligence implies thinking.

          No. Let's consult the dictionary, shall we:

          Artificial: made or produced by human beings rather than occurring naturally, typically as a copy of something natural.

          Comes from the same root as "artifact" - a made thing.
          So AI literally means a human-made intelligence, as opposed to a naturally grown one.

          Now intelligence is a much more slippery term that has never been well-defined, and no it doesn't necessarily imply thinking in any sort of conscious sense. Let's see what wikipedia has to say:

          Intelligence has been defined in many different ways such as in terms of one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be more generally described as the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating referable understanding models of any size, density, or complexity

          I'd say that large umbrella covers an awful lot of the va

          • In the early days, AI was a simple concept [wikipedia.org].

            • Yes. And then early chat-bots showed that fooling humans was actually really simple and said a lot more about us than the software. Then they moved on to developing software that could tackle various specific tasks that we believed required thinking to solve. The result being that we now have a whole lot of "thinking machines" that can perform useful, domain-specific analysis and decision making far faster and more reliably than a human.

              Personally I think that's a good outcome - creating a true, self-awa

        • And it's a fucking misnomer.

          Artificial implies not real and intelligence implies thinking.

          Artificial intelligence is so unattainable that the original definition has evolved to something meaningless.

          True artificial intelligence is when a computer becomes depressed because it lost its connection to Facebook.

          Since when does artificial imply "not real"? Artificial implies "man made".

        • Artificial implies not real

          No it does not. All artificial means in this case is that it was not 'built and designed' by natural processes.

      • by gweihir ( 88907 )

        Really? Do I have to call it "strong AI" or "true AI", because that AI field is just full of f***** liars that use a name that gets them lots of funding but boils down to intentional misdirection?

        • Does your brain do pattern matching? Yes, it does. Therefore they are artificially modelling a process involved in human (and animal) intelligence. Would you prefer that they called it "synthetic psychology"? "Computational neuroscience"? "Electronic subconscious studies"?
          • by gweihir ( 88907 )

            Does the brain consume oxygen? Yes, it does. So an acetylene-torch is an AI device?

            Really, you are a failure at natural intelligence.

            • Does the brain consume oxygen? Yes, it does.

              Mine certainly does, but I think yours was deprived of it at some point.

              So an acetylene-torch is an AI device?

              Building strawmen with oxy-acetylene blowtorches is a fire risk. Does the oxy-acetylene blowtorch attempt to model the operation of a human brain and provide a mechanism to examine, prove and/or disprove theories about the operation of the brain? No.

      • This new definition of AI is several steps down from what Minski, McCarthy and company were aiming for. While this work is the direct descendent of theirs, and is often significant and sometimes impressive in its own right, there is an odor of self-congratulatory aggrandizement about the current usage.

        • This new definition of AI is several steps down from what Minski, McCarthy and company were aiming for. While this work is the direct descendent of theirs, and is often significant and sometimes impressive in its own right, there is an odor of self-congratulatory aggrandizement about the current usage.

          In which case, there must have been an "odor of self-congratulatory aggrandizement" about Minsky (I haven't studied AI in general since 1998, but at least I know how to spell his name) because the guys working on it now are a lot nearer to what he was aiming for when he started out.

          Now remind me (as I said, I haven't read the name Marvin Minsky since 98) was he a strong or a weak AI advocate?

          • This new definition of AI is several steps down from what Minski, McCarthy and company were aiming for. While this work is the direct descendent of theirs, and is often significant and sometimes impressive in its own right, there is an odor of self-congratulatory aggrandizement about the current usage.

            In which case, there must have been an "odor of self-congratulatory aggrandizement" about Minsky (I haven't studied AI in general since 1998, but at least I know how to spell his name) because the guys working on it now are a lot nearer to what he was aiming for when he started out.

            Only if he claims to have achieved those original goals.

  • by QilessQi ( 2044624 ) on Wednesday October 15, 2014 @10:29AM (#48150221)

    I read this:

    Their quantum computing machine consists of a small vat of the organic liquid carbon-13-iodotrifluoroethylene, a molecule consisting of two carbon atoms attached to three fluorine atoms and one iodine atom. Crucially, one of the carbon atoms is a carbon-13 isotope.

    And immediately thought of this:

    The principle of generating small amounts of finite improbability by simply hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea) were of course well understood ...

    God, I love how weird the future is.

    • by chinton ( 151403 )
      Next up, Genuine People Personalities.
      • we already have those - look at all the 'bots posting on here for starters...

        • In Soviet Russia, you imagine a Beowulf cluster of f*** beta, you Microsoft shill/Apple fanboi/Linux neckbeard.

          Simplest variation of the Turing test on the planet: imitating slashdot posters.

  • by Grantbridge ( 1377621 ) on Wednesday October 15, 2014 @10:51AM (#48150509)

    What they actually did if you read the paper is:

    1) Encode a 6 or 9 image into 2 numbers, based on the number of excess pixels in the left vs right, and top vs bottom quadrants. From the article: After this preprocessing, the two printed images with standard font can be represented by x1 = (0.9872, 0.1595) for character "6" and x2 = (0.3544, 0.9351) for character "9"

    2) Use a training algorithm to find the appropriate pulse sequence that gives an "up" result in the molecule's C13 NMR spectrum for a 6, and a "down" result for a 9.

    3) Run the NMR experiment, feeding in pulses based on the parameters produced from the pixel encoding in 1), and read out "up" for a 6 and "down" for a 9.

    It's certainly neat experimental NMR work, but I don't really see how it's quantum computing. But then maybe that's the NMR spectroscopist in me talking....
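Step 1 of the parent's outline can be sketched classically. The half-image features and tiny bitmaps below are illustrative guesses at the flavour of the preprocessing, not the paper's exact formula:

```python
def encode(image):
    """Reduce a binary image (rows of 0/1) to a unit-length 2-D feature vector:
    the fraction of ink in the left half and in the top half of the image.
    (An illustrative stand-in for the paper's pixel-excess preprocessing.)"""
    h, w = len(image), len(image[0])
    total = sum(sum(row) for row in image) or 1
    left = sum(sum(row[: w // 2]) for row in image) / total
    top = sum(sum(row) for row in image[: h // 2]) / total
    norm = (left ** 2 + top ** 2) ** 0.5 or 1
    return (left / norm, top / norm)

# Crude 4x4 bitmaps: a "6" is heavy in its lower-left loop,
# a "9" is heavy in its upper-right loop.
six  = [[0, 1, 1, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 0],
        [1, 0, 1, 0]]
nine = [[0, 1, 1, 1],
        [0, 1, 0, 1],
        [0, 1, 1, 1],
        [0, 0, 0, 1]]

print(encode(six))   # first component dominates, like the paper's x1
print(encode(nine))  # second component dominates, like the paper's x2
```

The normalized 2-vector is what gets loaded into the qubit register; steps 2 and 3 then decide the label from a single spectral readout.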

    • (I can't believe I didn't notice I misspelt Quantum in that subject field.)

      • by jfengel ( 409917 )

        That's OK, you also misspelled "exaggerating", so I didn't notice ;-)

        Mostly, yeah, it's pretty exaggerated as AI. It's potentially an interesting piece of work, but I always get skeptical when the PR departments feel they have to exaggerate.

      • by Arkh89 ( 2870391 )

        (I can't believe I didn't notice I misspelt Quantum in that subject field.)

        If you want to, we have this neat quantum computer which is capable of making the difference between 'n' and 'h'...It might be pricey but it is totally worth it...

    • Does this mean that AI is only 20 years away?
  • When will you become self aware?
  • It's smart and not smart at the same time.

  • Oh... I was interested in this, and then saw it was Medium.com
    Meaning it would be overhyped nonsense and have nothing to do with the title of the story.
    Sadly, I was interested enough to read through it and prove my "Medium.com sux" theory is still correct.

  • Ho.

    Ly.

    Shit.

    15 or 20 years ago, I was saying that because quantum computers perform multiple calculations on similar inputs simultaneously, they'll be perfect for the sorts of pattern recognition tasks needed for (artificial) intelligence. And now these smart people have figured out how to do it for the first time, albeit with a minuscule 4-qubit quantum computer.

    But since quantum computing capabilities scale according to 2^n, where n is the number of qubits, a 24 qubit computer (i.e. 6 times the size of wha
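The 2^n scaling mentioned above is easy to make concrete; note that 2^n counts the basis states an n-qubit register can hold in superposition, which is not by itself a 2^n speedup for every task:

```python
# State-space sizes: the 4-qubit machine from the article versus a
# hypothetical 24-qubit machine.
for n in (4, 24):
    print(f"{n} qubits -> {2 ** n:,} basis states")

print(f"growth factor: {2 ** 24 // 2 ** 4:,}")  # 1,048,576
```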

    • Part of the definition of a singularity is that you don't notice anything as you pass the boundary, but are unable to communicate with the other side.
    • I am beginning to sense the coming Kurzweil Singularity...

      I switched to Dragon Naturally Speaking years ago....

  • The /. summary says "The computational complexity of this task is such that the time required to solve it increases in polynomial time with the number of images in the training set and the complexity of the "learned" feature." The exponential advances of Moore's Law will trivially solve any polynomial-time problem. If this problem were exponential in nature, not polynomial, then quantum computing might be our only hope. But polynomial-time problems are not the sweet spot for quantum computing.
    • by ahto ( 108308 )

      The /. summary has copied the expression from the medium.com report. But if you read the paragraph this comes from, the description makes it clear they really mean the growth is exponential; they just use the wrong term, and the /. submitter did not correct this either.
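The distinction the parent draws can be illustrated with quick arithmetic (a sketch with simplified constants: one hardware doubling every two years, versus problem cost growing as n**3 or 2**n; the helper function is ours, for illustration):

```python
import math

def years_to_cover(cost, base=1.0):
    """Years of Moore's-law doubling (one doubling per two years) needed
    for hardware to grow by a factor of cost / base."""
    return 2 * math.log2(cost / base)

# Polynomial cost n**3 stays within reach of steady hardware doubling:
for n in (10, 100, 1000):
    print(f"n={n}: {years_to_cover(n ** 3):.0f} years to cover n**3")

# Exponential cost 2**n does not: every +1 in n costs two more years, forever.
for n in (10, 100, 1000):
    print(f"n={n}: {years_to_cover(2 ** n):.0f} years to cover 2**n")
```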

  • Clearly this demonstration was designed by someone who was searching for the deeper meaning of the work of Jimi Hendrix:

    http://www.youtube.com/watch?v=eyGWbpNzH2o [youtube.com]
