Baidu's Supercomputer Beats Google At Image Recognition 115
catchblue22 writes: Using the ImageNet object classification benchmark, Baidu’s Minwa supercomputer scanned more than 1 million images and taught itself to sort them into about 1,000 categories and achieved an image identification error rate of just 4.58 percent, beating humans, Microsoft and Google. Google's system scored a 95.2% and Microsoft's, a 95.06%, Baidu said. “Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”
Great power (Score:4, Insightful)
I'm not sure an improvement of .5 percent on image cataloging is really that impressive to get not one but two greats...
Re: Great power (Score:2, Informative)
That's not cheating, that's actually how everybody does it.
Trick? (Score:1)
And you get +4 for claiming an algorithm improvement is a 'trick'?
Re: (Score:2)
Unless they ran multiple initial image sets as well as multiple 'test' sets, there's absolutely no way to know whether their win was statistically significant.
Relative improvement (Score:2)
Going from 99.5 to 100.0 percent is extremely impressive, while going from 50.0 to 50.5 is probably just noise.
Re: (Score:1)
Re: (Score:3)
The article says "error rate of just 4.58 percent ... Google's system scored a 95.2% and Microsoft's, a 95.06%". That means Google's and Microsoft's error rates are absolutely terrible and they really should just toss a coin!
Err, no... that was just terrible reporting. Google's attempt had an error rate of 4.82 and Microsoft's, 4.94.
I guess Baidu reported it this way to make their "win" sound more sensational.
4.58 error rate vs an error rate of 4.82 or 4.94 doesn't sound that phenomenal, I guess.
To quote:
"The system trained on Baidu’s new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March. One month before that, Microsoft had reported achieving 4.94 percent, becom
Same ballpark (Score:2)
Exactly what I thought too:
Meh, they are all basically in the same ballpark (including humans). No breakthrough achievement. Wake me when someone achieves 10x better than humans and the competition.
Note to Slashdot Editors (Score:5, Interesting)
This is actually News for Nerds.
I'm curious how much difference in computational power was thrown at training these by Google, Microsoft, and Baidu, though it's going to be great to watch how these continue to evolve.
Re: (Score:1, Insightful)
I'm curious how much difference in computational power was thrown at training these
Training a NN requires a lot of cycles (usually GPU farms) but there is a limit to how much is useful. If you just continue to cycle over the same data, you end up over training your network, so that it basically just memorizes the training set, but fails to generalize to other data sets. Rather than just throwing more computational power at the problem, it is usually more productive to use more data, and improve your algorithms and configurations. Using more data wasn't an option in this case, since it
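To make the over-training point concrete, here's a toy sketch in Python (the error numbers are made up, not from any of these systems): training error keeps falling with more passes over the same data, but validation error bottoms out, and that minimum is where you'd stop.

```python
# Toy illustration of over-training (hypothetical numbers).
# Training error keeps improving with more epochs, but validation
# error bottoms out and then rises as the net starts memorizing
# the training set instead of generalizing.
train_err = [0.30, 0.20, 0.12, 0.07, 0.04, 0.02, 0.01]
val_err   = [0.32, 0.24, 0.18, 0.15, 0.16, 0.19, 0.23]

# Early stopping: keep the model from the epoch with the lowest
# validation error, not the lowest training error.
best_epoch = min(range(len(val_err)), key=lambda e: val_err[e])
# best_epoch is 3 here; beyond that, extra cycles only hurt.
```

Throwing more hardware at the loop moves `best_epoch` forward in wall-clock time, but it doesn't move the validation minimum itself; only more data or better algorithms do that.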
Re: (Score:2, Funny)
I went to see Ex Machina yesterday. I want my own Ava (but with the "stab" feature disabled).
Spoiler alert!
Re: Note to Slashdot Editors (Score:1)
Re: (Score:3)
it was less about having more CPU power, than .... (Score:3)
Hopefully, someday soon, the west will realize that hiring Chinese means simply giving your technology over to the Chinese gov.
get one now! (Score:2)
Only $7.42 including shipping! (AC adapter not included.) Estimated 21-47 days delivery time to USA.
Just Guess (Score:5, Funny)
Re: (Score:2)
picture of a penis
Or cats!
Re: (Score:1)
But is it a piebald cat, and are the tabby areas grey or orange?
Re: (Score:2)
This will make searching for other porn featuring the subject of 4chan threads more efficient. Even a quarter of a percent improvement in performance is going to save companies millions of dollars as their employees browse /b/.
Re: (Score:2)
Which reminds me... I sure got tired of all the "Anthony Weiner" jokes a while back, but just think of how much fun we'll be missing: we won't have Letterman to kick him around anymore [sniffle].
Bad summary (Score:5, Insightful)
The summary is written to imply that Google/MS have error rates in the 90's, while the competition only has about 5% error. The values got inverted - Google/MS also have error rates around 5%, but are behind by fractions.
Your maths is off... (Score:5, Insightful)
As a pedant, I need to point out that the improvement is 0.24%
Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!
Re:Your maths is off... (Score:5, Insightful)
Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!
Because this is Slashdot and it is required that all stories be written as poorly as possible.
Baidu's new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March.
If Google is only wrong 4.82% of the time then why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?
Re: (Score:2)
Re: (Score:2)
Well, I would suppose that means image recognition is partly a matter of the viewer's opinion of the image: not so much artificial intelligence as bias in pattern recognition. Currently the best goal for artificial intelligence is accurate, in-context translation: first the written word, and then the spoken word.
Re:Your maths is off... (Score:5, Informative)
This was the first thing I thought after reading the summary, too. I had to dig into a paper about the 2014 ImageNet challenge, but here is the likely answer:
My second question was, if humans failed to label the images correctly, how did they get a correct label in the first place?
The methodology they used just to label the images is impressively sophisticated. Briefly, they crowdsourced through Amazon Mechanical Turk. A first person would draw bounding boxes around individual items in each image, then additional people would classify the items in each box. Only when a majority of labelers agreed on a label did they consider the label correct.
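A rough sketch of that majority-vote step in Python (the labels and the 50% threshold here are illustrative assumptions, not ImageNet's exact rules):

```python
from collections import Counter

def consensus_label(votes, threshold=0.5):
    """Return a label only if a strict majority of labelers agree on it.

    `votes` is the list of labels submitted by the Mechanical Turk
    workers for one bounding box; `threshold` is an assumed cutoff.
    """
    label, count = Counter(votes).most_common(1)[0]
    return label if count > threshold * len(votes) else None

consensus_label(["dog", "dog", "wolf", "dog", "dog"])  # "dog": 4 of 5 agree
consensus_label(["dog", "wolf", "cat", "dog"])         # None: no strict majority
```

The point of requiring agreement is that any single labeler's error rate gets averaged away, which is how a ground truth can end up more reliable than an individual human on the same benchmark.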
Re: (Score:2)
Re: (Score:2)
There is a TED talk about it,
https://www.ted.com/talks/fei_... [ted.com]
Quite interesting.
Re: (Score:2)
Also why are the numbers reversed to quote success rates for Google and Microsoft in the summary on Slashdot - it would have been much clearer if the actual numbers in the article (which were all error rates) were quoted!
Because this is Slashdot and it is required that all stories be written as poorly as possible.
Baidu's new computer was wrong only 4.58 percent of the time. The previous best was 4.82 percent, reported by Google in March.
If Google is only wrong 4.82% of the time then why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?
Try using Bing image search. I won't say the results are always better but there is much less noise.
Re: (Score:2)
different image set.
the real life image set is harder.
also the test has... well... it has fixed image sets for training and then as the test. would the chinese CHEAT? surely not! or perhaps they just threw more cpu and memory at the problem until they could beat the previous best by a fraction (you can mess with the first dataset if you want to).
who the fuck cares anyway, since half the pictures on the internet won't be Baidu-reachable anyway.
Re: (Score:3)
Get a more mainstream porn fetish, you pervert
Re: (Score:2)
why is it whenever I search for an image I get thousands of pictures that have absolutely nothing to do with what I am searching for?
Never search for hard core porn with safe-search turned on, that just doesn't work. Yes, I know. Sometimes, I also wish that Google came with an instructional manual, but sadly, it doesn't.
Another area Google fails miserably at, especially compared to Baidu, is searching for copyrighted materials. With Google, it's DMCA this and it's DMCA that, along with a number of annoying paywalls. With Baidu, as long as you're not searching for photos of Tiananmen Square, you're golden.
"poorly as possible..." Re:Your maths is off... (Score:2)
Because this is Slashdot and it is required that all stories be written as poorly as possible.
Re: (Score:2)
Being even more pedantic, I will point out that the improvement is a lot more. What is important here is the error rate.
Simply speaking, they went from 4.82 to 4.58, so the improvement is (4.82 - 4.58)/4.82 = 0.0498 ≈ 5%
Another way to see it: Google made 48,200 errors on the full set of 1 million images while Baidu made only 45,800.
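Spelled out with the numbers above (the 1-million-image set size is the round figure from the summary):

```python
google_err, baidu_err = 4.82, 4.58          # error rates, in percent
abs_gain = google_err - baidu_err           # 0.24 percentage points
rel_gain = abs_gain / google_err            # ~0.0498, i.e. a ~5% relative reduction

# On 1,000,000 images (percent of a million = rate * 10,000):
google_errors = round(google_err * 10_000)  # 48,200 misclassified
baidu_errors = round(baidu_err * 10_000)    # 45,800 misclassified
fewer_errors = google_errors - baidu_errors # 2,400 fewer mistakes
```

Whether you call that 0.24% or 5% depends entirely on whether you divide by the whole set or by the errors, which is the disagreement in this subthread.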
The improvement is about 5% (Score:2)
0.24 is about 5% of 4.82
Your monthly algorithm tweak brought to you by... (Score:5, Insightful)
Okay, so we have a benchmark where the bog-standard human being scores 94.9%.
Then in February (that's three months ago), Microsoft reports hitting 95.06%; the first score to edge the humans.
Then in March, Google notches 95.18%.
Now it's May, and Baidu puts up a 95.42%.
Meh. Swinging dicks with big iron are twiddling with their algorithms to squeeze out incremental, marginal improvements on an arbitrary task.
“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project. ... “We have great power in our hands—much greater than our competitors.”
I presume that next month it will be IBM boasting about "leading the race" and being "much greater than their competitors". The month after that it will be Microsoft's turn again. Google will be back on top in August or so...unless, of course, some other benchmark starts getting some press.
Re: (Score:2)
Re: (Score:1)
You do realize WHO is actually working at Baidu right? A Mr. Andrew Ng.
CPU humor (Score:1)
Marginal improvements are worthwhile if the cost of failure is relatively large. c.f. branch prediction in computer architecture....
Your Mom's relatively large.
When she gets on the bus the ALU just gives up.
Her Branch prediction is that if she climbs branches they'll break off.
And I hear she likes JZ and XOR.
Re: (Score:1)
Baidu doesn't have as many women or as much racial diversity, which is why they're going to fall behind pretty quickly.
Re: (Score:2)
China has state policies on human rights and freedom of religion, as well.
Re: (Score:1)
Nah; next will be Wolfram [imageidentify.com], based on crowdsourcing.
Re: (Score:3)
Okay, so we have a benchmark where the bog-standard human being scores 94.9%.
Yes, and now the algorithms are better. More importantly, the 'standard human' only does that when it is paying attention, which it can't do for more than 15 minutes or so. The computer does it day in, day out, forever. And it will get better over time.
Then in February (that's three months ago), Microsoft reports hitting 95.06%; the first score to edge the humans. Then in March, Google notches 95.18%. Now it's May, and Baidu puts up a 95.42%. Meh. Swinging dicks with big iron are twiddling with their algorithms to squeeze out incremental, marginal improvements on an arbitrary task.
You denigrate their work, but that's the way science works: incrementally almost all the time. In any field, you will see tweaking, slight improvements, variations, and a couple of new ideas. And then one of the researchers will hit on the next big idea
The real award goes to Baidu when they can: (Score:1)
Build a "Watson style" chatterbot that can win on Jeopardy and, despite this miraculous achievement, have the company go under because management are dicks to the level of being able to fuck up a free lunch with an error rate of just 4.3%.
Coolio (Score:2)
Baidu said "Your Kung Fu no good in my village"
Re: (Score:1)
Re: (Score:1)
Yep, I come from a time when all my friends (mixed race as we were) had no issues with stereotypes, we routinely teased each other, I was the "cheese eater" and it's true... I love cheese.
In fact a Latino friend once told me "White folks have nothing on Mexicans when it comes to racism" apparently racism is a real problem down south of the border (and every where else), especially for the indigenous population.
And... every where I have lived in the World, light skin was favored, in arranged marriages light
Correct me if Im wrong (Score:2, Interesting)
I only took basic AI in university but...
The power of the computers is not the important thing here: whether it takes 3 weeks or 1 day to train the neural network does not change the ACCURACY. Running the NN to identify a picture also takes only a fraction of the training time.
Maybe it's more about the TRAINING SET here rather than CPU power. It seems extraordinary: 1,000,000 images sorted into 1000 categories must have been done by humans, right? Humans sitting there like dog, dog, dog, airplane, dog, house, dog OHPLEAS
Huh? This is not a very powerful computer. (Score:5, Interesting)
The computer has 72 processors and 144 GPU's. That's tiny. Seriously tiny. Sure, GPU's are powerful, especially for image processing. But the larger computers these days are running tens to hundreds of thousands of processors in parallel.
For example, assuming each shelf has 2 processors and 4 GPU's, and they can fit 12 shelves into a single rack, that's a total of 2 racks. Compare that to this image [google.com] of one of Google's datacenters, where you can see dozens of racks, each containing 14 shelves by my count. And that's just one row. These are gigantic warehouses, with row upon row of racks.
The level of processing power claimed here is closer to the level of a university processing cluster. The larger scientific clusters can be ten or a hundred times larger, and it's not clear just how big private datacenters are.
So overall I'm very, very skeptical. There's a very good chance that they fudged the data somehow to make theirs appear better. But if it is better, well, there's no reason why Google and Microsoft couldn't easily outcompete them in short order.
Re: (Score:2, Insightful)
It may not be processing power alone, but perhaps a better learning algorithm.
When it comes to solving problems, elegance can sometimes beat out brute force. :D
Re: (Score:3)
Sure, but that's why Microsoft and Google will rapidly catch up if the numbers are real. Both employ lots of extremely talented and creative people exactly for solving problems like this, and the methods they use have been published.
Anyway, if they did really manage to produce some better algorithms, that's impressive and important work. But bragging about such a tiny computer seems seriously out of place.
Re: (Score:1)
You said (emphasis mine):
But the larger computers these days are running tens to hundreds of thousands of processors in parallel.
I don't know what GPUs they're using, but if they're Nvidia GeForce GTX Titan Z cards (700 series; released in March 2014), then that could be 144 * 2880 * 2 = 829,440 shader cores, obviously in parallel (running at 705MHz to 876MHz).
That card can do 8.1 TFLOPs (single-precision), or 2.7 TFLOPs (double-precision). That means 144 of them could do over 1.1 PFLOPs (single-precision). That's nothing to sneeze at.
p.s. I got my figures from here [wikipedia.org].
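For anyone who wants to check the arithmetic (using those Wikipedia Titan Z figures; whether Minwa actually uses this card is my assumption, not something the article states):

```python
cards = 144
cores_per_gpu, gpus_per_card = 2880, 2     # GTX Titan Z is a dual-GPU card
sp_tflops_per_card = 8.1                   # single-precision, per Wikipedia

total_cores = cards * cores_per_gpu * gpus_per_card   # 829,440 shader cores
total_pflops = cards * sp_tflops_per_card / 1000      # ~1.17 PFLOPS peak
```

Peak shader-core counts say little about sustained deep-learning throughput, but they do show why "only 144 GPUs" is not as small as counting sockets suggests.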
Re: (Score:1)
The high end of the TOP 500 super computers use tens of thousands of GPUs (at least among those that use GPUs at all); for instance the Titan at ORNL has 18,688 nVidia Tesla K20's for a total of (roughly) 46 million CUDA cores.
One generally does not count individual CUDA cores, however (nor the equivalent for AMD GPUs).
Re: (Score:3)
a) Each CPU in these clusters typically has anywhere from 4-8 cores, and may support two or more times as many threads.
b) It's far, far more difficult to make full use of GPU hardware than CPU hardware. The best application for stressing GPU hardware is 3D graphics rendering, and even there if you run through the numbers, you find that it's rare that they really push half of their theoretical processing limit. General processing is significantly less efficient on GPU hardware, in particular because it's d
Re: (Score:1)
More info on the specs of the "supercomputer" that TFA only glossed over:
The result is the custom-built supercomputer, which we call Minwa. It is comprised of 36 server nodes, each with 2 six-core Intel Xeon E5-2620 processors. Each server contains 4 Nvidia Tesla K40m GPUs and one FDR InfiniBand (56Gb/s) which is a high-performance low-latency interconnection and supports RDMA. The peak single precision floating point performance of each GPU is 4.29TFlops and each GPU has 12GB of memory. Thanks to the GPUDirect RDMA, the InfiniBand network interface can access the remote GPU memory without involvement from the CPU. All the server nodes are connected to the InfiniBand switch. Figure 1 shows the system architecture. The system runs Linux with CUDA 6.0 and MPI MVAPICH2, which also enables GPUDirect RDMA.
In total, Minwa has 6.9TB host memory, 1.7TB device memory, and about 0.6PFlops theoretical single precision peak performance.
It's not that powerful overall, but it seems well thought out for what it is doing. I do see the point about fudging data somehow; they do provide a lot of information about what they supposedly did here [arxiv.org]
I don't know how verifiable this is; it's not like they have released source code or binaries for the software, as far as I can tell.
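Sanity-checking the quoted totals against the per-node figures (all numbers taken straight from the quote above):

```python
nodes, gpus_per_node = 36, 4
gpu_tflops, gpu_mem_gb = 4.29, 12               # Tesla K40m, per the quoted paper

total_gpus = nodes * gpus_per_node              # 144 GPUs across the cluster
peak_pflops = total_gpus * gpu_tflops / 1000    # ~0.62, matching "about 0.6PFlops"
device_mem_tb = total_gpus * gpu_mem_gb / 1000  # ~1.73 TB, matching "1.7TB"
```

So the headline numbers at least are internally consistent; they just describe a machine two orders of magnitude smaller than a top-end national lab cluster.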
PR, not science? (Score:1)
There's nothing really wrong with this announcement -- It's just not a big breakthrough of any real sort.
More info on the ImageNet Competition (Score:3)
Has the 2014 competition, including test images and validation images.
Browsing the images, and the 200 or so categories ("artichoke", "strainer", "bowl", "person", "wine bottle"...), the challenge is a bit strange: a drawing of a person doesn't count for the "person" category, but a bottle of Boyle's cream soda is a "wine bottle".
And why is "artichoke" something we need to identify in photographs?
Re: (Score:2)
it's not strange at all. ;)
it's just artificial
I mean, the categories aren't exact or well done or even fitting. It's still a manageable recognition contest; it just means that the results aren't useful for comparison if you want to use them for searches from human queries...
I mean, humans already score lower than Google's, MS's, or Baidu's engines do, which makes honing these partial percentages pretty stupid.
they should just devise a better contest quite frankly, with combination categories or lists of "what
Re: (Score:2)
they should just devise a better contest quite frankly, with combination categories or lists of "whats in the picture in relation to each other", like "wine in a glass" vs. "wine glass and a wine bottle"
Yes, they should 'just' create a better contest. The issue is that creating a contest (identifying objects, labeling, testing, error-correcting, etc.) is a slow, expensive, and unglamorous process. The ILSVRC is only a couple of years old, and already it is showing its age; I really don't think they expected it to be solved this soon.
So, what's next in terms of contests? Probably a multi-object challenge, where a picture can have many objects; alternately the task would be to label no
I have a better test that proves Google is No.1 (Score:1)
Once again, computers beat humans (Score:1)
First they beat us at math. Then at strategy games. Now they beat us at one of the few things we still did better, visually distinguishing apples from oranges.
now lets consider the next dimension ... time... (Score:1)
Skynet (Score:2)
Do you want Skynet? Because that’s how you get Skynet.