AI Google Technology

Baidu's Supercomputer Beats Google At Image Recognition

catchblue22 writes: Using the ImageNet object classification benchmark, Baidu’s Minwa supercomputer scanned more than 1 million images, taught itself to sort them into about 1,000 categories, and achieved an image identification error rate of just 4.58 percent, beating humans, Microsoft, and Google. Google's system scored a 95.2% and Microsoft's, a 95.06%, Baidu said. “Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”
  • Great power (Score:4, Insightful)

    by Anonymous Coward on Thursday May 14, 2015 @08:05PM (#49694509)

    I'm not sure an improvement of 0.5 percent on image cataloging is really impressive enough to merit not one but two "greats"...

    • Unless they ran multiple initial image sets as well as multiple test sets, there's absolutely no way to know whether their win was statistically significant.

    • Going from 99.5 to 100.0 percent is extremely impressive, while going from 50.0 to 50.5 is probably just noise.

    • The article says "error rate of just 4.58 percent ... Google's system scored a 95.2% and Microsoft's, a 95.06%". That means Google's and Microsoft's error rates are absolutely terrible and they really should just toss a coin!
      • The article says "error rate of just 4.58 percent ... Google's system scored a 95.2% and Microsoft's, a 95.06%". That means Google's and Microsoft's error rates are absolutely terrible and they really should just toss a coin!

        Err, no... that was just terrible reporting. Google's attempt had an error rate of 4.82 and Microsoft's, 4.94.
        I guess Baidu reported it this way to make their "win" sound more sensational.
        A 4.58 error rate vs. an error rate of 4.82 or 4.94 doesn't sound that phenomenal, I guess.

        To quote:
        "The system trained on Baidu’s new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March. One month before that, Microsoft had reported achieving 4.94 percent, becom

        • Exactly what I thought too:

          Meh, they are all basically in the same ballpark (including humans). No breakthrough achievement. Wake me when someone achieves 10x better than humans and the competition.

  • by Malenx ( 1453851 ) on Thursday May 14, 2015 @08:07PM (#49694523)

    This is actually News for Nerds.

    I'm curious how much difference in computational power was thrown at training these by Google, Microsoft, and Baidu, though it's going to be great to watch how these continue to evolve.

    • Re: (Score:1, Insightful)

      I'm curious how much difference in computational power was thrown at training these

      Training a NN requires a lot of cycles (usually GPU farms), but there is a limit to how much is useful. If you just keep cycling over the same data, you end up overtraining your network, so that it basically memorizes the training set but fails to generalize to other data sets. Rather than just throwing more computational power at the problem, it is usually more productive to use more data and to improve your algorithms and configurations. Using more data wasn't an option in this case, since the benchmark fixes the training set.
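
      To make the overtraining point concrete, here is a toy early-stopping loop; nothing Baidu-specific, and the training step and validation curve are fake stand-ins:

        import random

        def train_epoch():                          # stand-in for a real training pass
            pass

        def val_loss(epoch):                        # fake curve: improves, then degrades
            return (epoch - 10) ** 2 / 100 + random.random() * 0.01

        best, bad, patience = float("inf"), 0, 3
        for epoch in range(100):
            train_epoch()
            loss = val_loss(epoch)
            if loss < best:
                best, bad = loss, 0
            else:
                bad += 1
                if bad >= patience:                 # validation stopped improving, so
                    break                           # further cycles would just memorize

        print("stopped around epoch", epoch)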

      • Re: (Score:2, Funny)

        I went to see Ex Machina yesterday. I want my own Ava (but with the "stab" feature disabled).

        Spoiler alert!

      • Most deep learning algorithms used for image classification tasks use a data augmentation step, wherein they alter the training images through scaling, translation, etc. According to the paper published here: http://arxiv.org/abs/1501.02876 [arxiv.org], they do additional transformations on the training images to make the learned model even more robust, so the risk of burning CPU cycles on the same data again and again is reduced. A rough sketch of such a pipeline follows.
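
        As an illustration of that augmentation step, here is a minimal sketch assuming torchvision; the transform choices are generic stand-ins, not the paper's actual pipeline:

          from PIL import Image
          from torchvision import transforms

          aug = transforms.Compose([
              transforms.RandomResizedCrop(224),      # random scale + crop
              transforms.RandomHorizontalFlip(),      # mirror half the time
              transforms.ColorJitter(0.4, 0.4, 0.4),  # jitter brightness/contrast/saturation
              transforms.ToTensor(),
          ])

          img = Image.new("RGB", (256, 256))          # stand-in for a training image
          x = aug(img)                                # a fresh random variant on every call
          print(x.shape)                              # torch.Size([3, 224, 224])

        Each training pass then sees slightly different images, which is how augmentation counters the memorization problem described above.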
    • Pretty soon Penny's shoe app [youtube.com] will become a reality.
    • the fact that the head guy at Baidu now came from Google. Basically, he took Google's technology and then was funded by China's government (which is behind Baidu's funding on this).

      Hopefully, someday soon, the west will realize that hiring Chinese means simply giving your technology over to the Chinese gov.
  • Only $7.42 including shipping! (AC adapter not included.) Estimated 21-47 days delivery time to USA.

  • Just Guess (Score:5, Funny)

    by Greyfox ( 87712 ) on Thursday May 14, 2015 @08:07PM (#49694531) Homepage Journal
    After all the news stories from the past couple of years, it seems like you could just guess "Yeah, that's a penis" and be correct about half the time. It seems like most people, if you give them a camera, are going to take a picture of a penis with it, and subsequently post that picture to the internet somewhere.
    • picture of a penis

      Or cats!

    • by PPH ( 736903 )

      This will make searching for other porn featuring the subject of 4chan threads more efficient. Even a quarter of a percent improvement in performance is going to save companies millions of dollars as their employees browse /b/.

    • Which reminds me... I sure got tired of all the "Anthony Weiner" jokes a while back, but just think of how much fun we'll be missing: we won't have Letterman to kick him around anymore [sniffle].

  • Bad summary (Score:5, Insightful)

    by Anonymous Coward on Thursday May 14, 2015 @08:08PM (#49694533)

    The summary is written to imply that Google/MS have error rates in the 90s, while the competition has only about 5% error. The values got inverted: Google/MS also have error rates around 5%, but are behind by fractions of a point.

  • by ZG-Rules ( 661531 ) on Thursday May 14, 2015 @08:11PM (#49694551) Homepage

    As a pedant, I need to point out that the improvement is 0.24 percentage points:

    "The system trained on Baidu’s new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March. One month before that, Microsoft had reported achieving 4.94 percent, becoming the first to better average human performance of 5.1 percent."

    Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!

    • by rudy_wayne ( 414635 ) on Thursday May 14, 2015 @08:44PM (#49694739)

      Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!

      Because this is Slashdot and it is required that all stories be written as poorly as possible.

      Baidu's new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March.

      If Google is only wrong 4.82% of the time then why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?

      • by xvan ( 2935999 )
        Because Google can't read your mind... yet... so it needs to guess multiple contexts.
        • by rtb61 ( 674572 )

          Well, I would suppose that means image recognition is just a matter of the opinion of the viewer of the image: not so much artificial intelligence as bias in pattern recognition. Currently the best goal for artificial intelligence is accurate, in-context translation: first the written word, and then the spoken word.

      • Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!

        Because this is Slashdot and it is required that all stories be written as poorly as possible.

        Baidu's new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March.

        If Google is only wrong 4.82% of the time then why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?

        Try using Bing image search. I won't say the results are always better but there is much less noise.

      • by gl4ss ( 559668 )

        different image set.
        the real-life image set is harder.

        also, the test has... well... it has fixed image sets for training and then for the test. would the Chinese CHEAT? surely not! or perhaps they just threw more CPU and memory at the problem until they could beat the previous result by a fraction (you can mess with the first dataset if you want to).

        who the fuck cares anyway, since half the pictures on the internet won't be Baidu-reachable anyway.

      • Get a more mainstream porn fetish, you pervert

      • why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?

        Never search for hardcore porn with safe-search turned on; that just doesn't work. Yes, I know. Sometimes I also wish that Google came with an instruction manual, but sadly, it doesn't.

        Another area Google fails miserably at, especially compared to Baidu, is searching for copyrighted materials. With Google, it's DMCA this and it's DMCA that, along with a number of annoying paywalls. With Baidu, as long as you're not searching for photos of Tiananmen Square, you're golden.

      • Perhaps CXIX / CXXV (a ratio of Roman numerals) instead of some more common representation like 95.2%?

        Because this is Slashdot and it is required that all stories be written as poorly as possible.

    • Being even more pedantic, I will point out that the improvement is a lot more. What is important here is the error rate.

      Simply speaking, they went from 4.82 to 4.58, so the relative improvement is (4.82 - 4.58) / 4.82 = 0.0497 ~= 5%.

      Another way to see it: on a full set of 1 million images, Google would make 48,200 errors while Baidu would make only 45,800.
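
      The same arithmetic, spelled out in Python (the figures are the error rates quoted in TFA):

        baidu, google = 4.58, 4.82
        print(google - baidu)             # 0.24 percentage points, absolute
        print((google - baidu) / google)  # ~0.0498, i.e. ~5% fewer errors, relative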

       

    • 0.24 is about 5% of 4.82

  • by Idarubicin ( 579475 ) on Thursday May 14, 2015 @08:22PM (#49694603) Journal

    Okay, so we have a benchmark where the bog-standard human being scores 94.9%.

    Then in February (that's three months ago), Microsoft reports hitting 95.06%; the first score to edge the humans.

    Then in March, Google notches 95.18%.

    Now it's May, and Baidu puts up a 95.42%.

    Meh. Swinging dicks with big iron are twiddling with their algorithms to squeeze out incremental, marginal improvements on an arbitrary task.

    “Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project. ... “We have great power in our hands—much greater than our competitors.”

    I presume that next month it will be IBM boasting about "leading the race" and being "much greater than their competitors". The month after that it will be Microsoft's turn again. Google will be back on top in August or so...unless, of course, some other benchmark starts getting some press.

    • The real question, of course, is whether Google, Microsoft, and Apple will soon have to face a serious international competitor. It's true that Baidu's incremental image recognition changes might not be a game changer. But if there's any substance to these claims about speech recognition [forbes.com], Baidu might be on track to produce an actual competitive advantage in ways highly relevant to consumers.
    • by Anonymous Coward

      Baidu doesn't have as many women or as much racial diversity, which is why they're going to fall behind pretty quickly.

    • Nah; next will be Wolfram [imageidentify.com], based on crowdsourcing.

    • Okay, so we have a benchmark where the bog-standard human being scores 94.9%.

      Yes, and now the algorithms are better. More importantly, the 'standard human' only does that when it is paying attention, which it can't do for more than 15 minutes or so. The computer does it day in, day out, forever. And it will get better over time.

      Then in February (that's three months ago), Microsoft reports hitting 95.06%; the first score to edge the humans. Then in March, Google notches 95.18%. Now it's May, and Baidu puts up a 95.42%. Meh. Swinging dicks with big iron are twiddling with their algorithms to squeeze out incremental, marginal improvements on an arbitrary task.

        You denigrate their work, but that's the way science works: incrementally, almost all the time. In any field you will see tweaking, slight improvements, variations, and a couple of new ideas. And then one of the researchers will hit on the next big idea.

  • Build a "Watson style" chatterbot that can win on Jeopardy and despite this miraculous achievement, have the company go under because management are dicks to the level of being able to fuck up a free lunch with an error rate of just 4.3%.

  • by koan ( 80826 )

    Baidu said "Your Kung Fu no good in my village"

    • by dave420 ( 699308 )
      Hahaha! Stereotypes are funny! Isn't that right, Mr. obese geographically-challenged backwards American? :)
      • by koan ( 80826 )

        Yep, I come from a time when all my friends (mixed-race as we were) had no issues with stereotypes; we routinely teased each other. I was the "cheese eater", and it's true... I love cheese.

        In fact, a Latino friend once told me, "White folks have nothing on Mexicans when it comes to racism." Apparently racism is a real problem south of the border (and everywhere else), especially for the indigenous population.
        And... everywhere I have lived in the world, light skin was favored; in arranged marriages light

  • by Anonymous Coward

    I only took basic AI in university, but...

    The power of the computers is not the important thing here: whether it takes 3 weeks or 1 day to train the neural network does not change the ACCURACY. Running the NN to identify a picture also takes only a fraction of the training time.

    Maybe it's more about the TRAINING SET here rather than CPU power. It seems extraordinary. 1,000,000 images sorted into 1,000 categories must have been done by humans, right? Humans sitting there like dog, dog, dog, airplane, dog, house, dog, OHPLEAS

  • by Chalnoth ( 1334923 ) on Thursday May 14, 2015 @09:03PM (#49694843)

    The computer has 72 processors and 144 GPUs. That's tiny. Seriously tiny. Sure, GPUs are powerful, especially for image processing. But the larger computers these days are running tens to hundreds of thousands of processors in parallel.

    For example, assuming each shelf has 2 processors and 4 GPUs, and they can fit 12 shelves into a single rack, that's a total of 3 racks. Compare that to this image [google.com] of one of Google's datacenters, where you can see dozens of racks, each containing 14 shelves by my count. And that's just one row. These are gigantic warehouses, with row upon row of racks.
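
    A quick sanity check of that shelf math, using the poster's assumed 2-CPU/4-GPU shelves and 12-shelf racks:

      cpus, gpus = 72, 144
      shelves = max(cpus // 2, gpus // 4)  # 36 shelves either way
      print(shelves / 12)                  # 3.0 racks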

    The level of processing power claimed here is closer to that of a university processing cluster. The larger scientific clusters can be ten or a hundred times larger, and it's not clear just how big private datacenters are.

    So overall I'm very, very skeptical. There's a very good chance that they fudged the data somehow to make theirs appear better. But if it is better, well, there's no reason why Google and Microsoft couldn't easily outcompete them in short order.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It may not be processing power alone, but perhaps a better learning algorithm.

      When it comes to solving problems, elegance can sometimes beat out brute force. :D

      • Sure, but that's why Microsoft and Google will rapidly catch up if the numbers are real. Both employ lots of extremely talented and creative people exactly for solving problems like this, and the methods they use have been published.

        Anyway, if they did really manage to produce some better algorithms, that's impressive and important work. But bragging about such a tiny computer seems seriously out of place.

    • by Anonymous Coward

      You said (emphasis mine):

      But the larger computers these days are running tens to hundreds of thousands of processors in parallel.

      I don't know what GPU they're using, but if they're Nvidia GeForce GTX Titan Z (700 series; released in March 2014), then that could be 144 * 2880 * 2 = 829,440 shader cores, obviously in parallel (running at 705MHz to 876MHz).

      That card can do 8.1 TFLOPs (single-precision), or 2.7 TFLOPs (double-precision). That means 144 of them could do over 1.1 PFLOPs (single-precision). That's nothing to sneeze at.

      p.s. I got my figures from here [wikipedia.org].
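
      Checking that arithmetic with the quoted Titan Z figures (the card model itself is the poster's guess):

        cards = 144
        cores = cards * 2880 * 2       # two GK110 dies per Titan Z: 829,440 shader cores
        tflops = cards * 8.1           # 1166.4 TFLOPs single precision
        print(cores, tflops / 1000)    # 829440 ~1.17 PFLOPs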

      • by Anonymous Coward

        The high end of the TOP 500 super computers use tens of thousands of GPUs (at least among those that use GPUs at all); for instance the Titan at ORNL has 18,688 nVidia Tesla K20's for a total of (roughly) 46 million CUDA cores.

        One generally does not count individual CUDA cores, however (nor the equivalent for AMD GPUs).

      • a) Each CPU in these clusters typically has anywhere from 4-8 cores, and may support two or more times as many threads.

        b) It's far, far more difficult to make full use of GPU hardware than CPU hardware. The best application for stressing GPU hardware is 3D graphics rendering, and even there if you run through the numbers, you find that it's rare that they really push half of their theoretical processing limit. General processing is significantly less efficient on GPU hardware, in particular because it's d

    • More info on the specs of the "supercomputer" that TFA only glossed over:

      The result is the custom-built supercomputer, which we call Minwa. It is comprised of 36 server nodes, each with 2 six-core Intel Xeon E5-2620 processors. Each server contains 4 Nvidia Tesla K40m GPUs and one FDR InfiniBand (56Gb/s), which is a high-performance low-latency interconnection and supports RDMA. The peak single precision floating point performance of each GPU is 4.29TFlops and each GPU has 12GB of memory. Thanks to the GPUDirect RDMA, the InfiniBand network interface can access the remote GPU memory without involvement from the CPU. All the server nodes are connected to the InfiniBand switch. Figure 1 shows the system architecture. The system runs Linux with CUDA 6.0 and MPI MVAPICH2, which also enables GPUDirect RDMA.

      In total, Minwa has 6.9TB host memory, 1.7TB device memory, and about 0.6PFlops theoretical single precision peak performance.

      It's not that powerful overall, but it seems well thought out for what it is doing. I do see the point about possibly fudging data somehow; they do provide a lot of information about what they supposedly did here [arxiv.org].

      I don't know how verifiable this is; it's not like they have released source code or binaries for the software, as far as I can tell.
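
      The quoted totals are easy to cross-check from the per-node figures:

        nodes = 36
        gpus = nodes * 4            # 144 Tesla K40m GPUs
        print(gpus * 4.29 / 1000)   # ~0.62 PFLOPs peak single precision
        print(gpus * 12 / 1000)     # ~1.73 TB device memory
        print(6.9e3 / nodes)        # ~192 GB host memory per node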

  • You get a small improvement on an old benchmark --- so promote the heck out of it before somebody beats you again, in turn.

    There's nothing really wrong with this announcement -- It's just not a big breakthrough of any real sort.

  • by gront ( 594175 ) on Thursday May 14, 2015 @11:52PM (#49695543)
    http://image-net.org/challenge... [image-net.org]

    That has the 2014 competition, including test images and validation images.

    Browsing the images and the 200 or so categories ("artichoke", "strainer", "bowl", "person", "wine bottle"...), the challenge is a bit strange: a drawing of a person isn't in the "person" category, but a bottle of Boyle's cream soda is a "wine bottle".

    And why is "artichoke" something we need to identify in photographs?
    • by gl4ss ( 559668 )

      it's not strange at all.
      it's just artificial ;)

      I mean, the categories aren't exact or well done or even fitting. it's still a manageable recognition contest; it just means the results aren't useful for comparison if you want to use them to serve searches from human queries...

      I mean, humans already score lower than Google's, MS's, or Baidu's engine does, which makes honing these partial percentages pretty stupid.

      they should just devise a better contest quite frankly, with combination categories or lists of "whats in the picture in relation to each other", like "wine in a glass" vs. "wine glass and a wine bottle"

      • they should just devise a better contest quite frankly, with combination categories or lists of "whats in the picture in relation to each other", like "wine in a glass" vs. "wine glass and a wine bottle"

        Yes, they should 'just' create a better contest. The issue is that creating a contest, identifying objects, labeling, testing, error-correcting, etc. is a slow, expensive, and unglamorous process. The ILSVRC is only a couple of years old, and already it is showing its age; I really don't think they expected it to be solved this soon.

        So, what's next in terms of contests? Probably a multi-object challenge, where a picture can have many objects; alternatively, the task would be to label no

  • Do an image search for "man in a purple hat holding a watermelon". Google's results are the most intelligent, followed by Bing, with Baidu a long way back in third.
  • First they beat us at math. Then at strategy games. Now they beat us at one of the few things we still did better, visually distinguishing apples from oranges.

  • All of them should have their systems train on several seconds of video to identify actions: drinking, running, etc. Next, train on sequential actions to identify cause and effect. Next, train on movie/TV/news/YouTube/CCTV audio-to-text, dialog, and scripts to identify the relationships between words, actions, causes, and effects. Finally, they should build a chatbot that maps input words to actions, actions to causes, causes to effects, etc., until they get back to output words. It wouldn't pass a Turing test, but it woul
  • Do you want Skynet? Because that’s how you get Skynet.
