Google's AI Built an AI that Outperforms Any Made By Humans (sciencealert.com) 235
schwit1 quotes ScienceAlert: In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a 'child' that outperformed all of its human-made counterparts... For this particular child AI, which the researchers called NASNet, the task was recognising objects -- people, cars, traffic lights, handbags, backpacks, etc. -- in a video in real-time. AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating the process thousands of times.
When tested on the ImageNet image classification and COCO object detection data sets, NASNet was 82.7 percent accurate at predicting images on ImageNet's validation set. This is 1.2 percent better than any previously published result, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).
This all sounds impressive... (Score:3, Insightful)
but every time I research the raw data it becomes very clear these aren't all that smart of AIs. In fact, the term AI is very misleading. They're more like smart scripts. ;-)
Re: (Score:3, Interesting)
It can identify if something is a kitten or not with 83.4% accuracy. Sounds impressive until you realize a 3 year old can do this with 99.9% accuracy.
Re: (Score:2)
And real life isn't a staged photo; it moves non-linearly in 3-dimensional space with varying light conditions. Let's have some real tests, not carefully taken photos of cats and dogs against easy backgrounds.
Re: (Score:3)
Or how about this, the AI does some "hidden object puzzles", I can do those with a very high degree of accuracy, I bet AI would fail hard.
Re:This all sounds impressive... (Score:5, Interesting)
Try this: https://www.youtube.com/watch?... [youtube.com]
AI outperforms humans. There are some tricky cases after timestamp 2:42. You may want to try them for yourself.
Re: (Score:3)
Re: (Score:2)
It just shows a table with several objects and asks questions like "What number of other things are there of the same material as the green cube?"
There is no optical illusion.
Re:This all sounds impressive... (Score:4, Informative)
And real life isn't a staged photo, it moves non-linearly in 3-dimensional space
Neither is ImageNet.
Let's have some real tests, not carefully taken photos of cats and dogs against easy backgrounds.
How about... ImageNet.
Seriously, you can just go and download (bits of*) ImageNet very easily. It's a large database of photos drawn from the internet, taken by people and labelled after the fact. There's not much if any careful staging in it.
[*]It's huge, you probably only want a bit of it. Just the list of image URLs is 300 meg.
Re:This all sounds impressive... (Score:5, Interesting)
It can identify if something is a kitten or not with 83.4% accuracy.
No. It can look at an image and correctly classify it into THOUSANDS of categories, only one of which is "kitten". It was 82.7% accurate at this. If it was trained to only distinguish "kitten" from "not-kitten", it would, of course, be far more accurate.
a 3 year old can do this with 99.9% accuracy.
A 3 year old requires 3 years of training. This system can learn in hours.
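The gap between the two classification tasks shows up in the chance baselines alone (the 1000-class figure is the standard ImageNet-1k class count, used here just for illustration):

```python
# Random-guessing baselines for the two tasks compared above.
n_imagenet_classes = 1000   # standard ImageNet-1k class count

chance_multiclass = 1 / n_imagenet_classes
chance_binary = 1 / 2       # "kitten" vs "not-kitten"

print(chance_multiclass)    # 0.001 -- 82.7% is ~827x above chance
print(chance_binary)        # 0.5   -- 99.9% is only ~2x above chance
```

So the 1000-way task starts from a chance baseline 500 times lower than the binary one; the raw percentages are not comparable.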
Re: (Score:2)
a 3 year old can do this with 99.9% accuracy.
And when he's done with the training he's already 6, so...
oh, wait...
Re: (Score:2)
*Very* deceptive comparison.
Re: (Score:2)
Re: (Score:2)
https://slashdot.org/~carbs77 inquired:
The real question is, how did it fare with hotdog vs. not hotdog?
(FTFY)
But, seriously, somebody with points needs to mod parent +1 Funny, because Silicon Valley's "Not Hotdog [apple.com]" is a real thing - and it's now available for Android [google.com], too ...
Re: This all sounds impressive... (Score:2)
A 3 year old requires 3 years of training. This system can learn in hours.
So give the computer 3 years of training on the fastest supercomputer available. A 3 year old would still be able to outperform it.
Re: (Score:2)
A 3 year old would still be able to outperform it
Really? Have you seen any test results of a 3 year old on ImageNet challenges, or are you just making it up?
Re: (Score:2, Insightful)
It can identify if something is a kitten or not with 83.4% accuracy. Sounds impressive until you realize a 3 year old can do this with 99.9% accuracy.
Sounds insightful, until you realize the entire purpose of AI research is to create software that artificially reproduces feats of human intelligence.
That fact sorta disqualifies the 3 year old :P
Also, your comment comes pretty close to implying that because our first attempts at AI haven't had a 99.9% success rate right off the bat, they are not impressive enough to bother improving.
Giving up has a 100% success rate of never making anything better, which also defeats the purpose of trying to improve.
Re: (Score:2)
Indeed. And this only works under some very restrictive boundary conditions.
Re: (Score:3)
It can identify if something is a kitten or not with 83.4% accuracy. Sounds impressive until you realize a 3 year old can do this with 99.9% accuracy.
How many 3 year olds can tell the difference between a Komondor and a Bouvier des Flandres? Or would they simply classify both as "dog"?
Here you can test yourself on some of these images:
http://cs.stanford.edu/people/... [stanford.edu]
Try the hard ones.
Re: (Score:2)
Re:This all sounds impressive... (Score:5, Informative)
but every time I research the raw data it becomes very clear these aren't all that smart of AIs.
Indeed they are not. This is Weak AI [wikipedia.org]. They are programmed/trained for a specific task, and outside that area of expertise, they generally have no ability at all.
In fact, the term AI is very misleading.
Only if you watch too many movies. Hollywood uses the term very differently from actual practitioners.
They're more like smart scripts. ;-)
They are absolutely nothing like "smart scripts", since they aren't smart, and they aren't scripts.
Re: (Score:3)
If the 'parent' AI kept telling the 'child' AI when it was right or wrong, wouldn't it just need to compile its own database of the identified pictures?
"Currently we have an average of over five hundred images per node."
Re:This all sounds impressive... (Score:5, Interesting)
If the 'parent' AI kept telling the 'child' AI when it was right or wrong ...
It doesn't work that way. Each NN learns on its own, using a combination of both labeled and unlabeled data. The parent NN sets "hyper-parameters", such as the number of layers, the size of each layer, the activation function, the convolution size, dropout rate, the learning rate damping factor, the batch size, etc. Then it turns the children NNs loose on the image dataset. It then sees which hyper-parameters lead to better/faster performance, and then applies ML techniques to learn better hyper-parameters.
None of this is new. What is new, is that Google is now applying this recursively, and using AutoML to design a better AutoML. This is another step toward the singularity.
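That propose-train-keep-the-best loop can be sketched with plain random search. This is a deliberate simplification: the real controller is itself a learned model, and `evaluate` here is an invented stand-in for a full child-network training run, with a made-up "ideal" setting so the example runs instantly.

```python
import random

# Toy stand-in for training a child network: score a hyperparameter
# setting against a hypothetical validation objective. In the real
# system this would be a full training run on the image dataset.
def evaluate(hp):
    # Pretend the "ideal" setting is 4 layers with 0.5 dropout.
    return -((hp["layers"] - 4) ** 2 + (hp["dropout"] - 0.5) ** 2)

def controller_search(n_children=200, seed=0):
    rng = random.Random(seed)
    best_hp, best_score = None, float("-inf")
    for _ in range(n_children):
        # Propose hyperparameters for a child network...
        hp = {"layers": rng.randint(1, 10),
              "dropout": rng.uniform(0.0, 1.0)}
        # ...evaluate it, and remember whatever performs best.
        s = evaluate(hp)
        if s > best_score:
            best_hp, best_score = hp, s
    return best_hp

print(controller_search())
```

The point is only the shape of the loop: propose, evaluate, keep the best. Google's version replaces the random proposer with a model that learns from the scores.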
Re: (Score:2)
Thank you for one of the few comments in this thread that actually deals with what this is (as opposed to what it isn't, i.e. human-level AI).
I would like to add that hyperparameter tuning is _not_ a trivial part of programming a machine learning model, therefore this IS something rather interesting. It lowers the effort needed to do something interesting with machine learning, and therefore makes machine learning much more accessible to non-experts.
However, the tasks in machine learning that still require
Re: (Score:2)
Re: (Score:2)
I wonder if they are exploiting the massive amounts of data Google can access
No, they are using a standardised dataset, just like everybody else.
Re: (Score:2)
Of course the catch is that the singularity has nothing to do with human immortality.
Biological immortality is likely to be both an extremely difficult challenge, and rather uninteresting to synthetic minds.
And even if AI hardware is capable of hosting an "uploaded" human mind intact - that doesn't do anything for *you*. Having an immortal mind-twin is unlikely to make you feel any better about your own impending demise - and that's assuming they wouldn't have to kill you in order to map your brain in the
Re: (Score:3)
There's plenty on the other side as well.
The basic fact though is that there will *never* be an AI (uploaded or otherwise, it's now artificial) that "has died a few times" - at most you'll get an AI that has watched its own mind-clones die.
Ask yourself this - if you had a transporter accident today that made two identical copies, and "non-duplicate law" required one of you be immediately killed, would it matter to you whether it was you who died, or the duplicate looking at you from across the room? I'm wi
Re: (Score:3)
That would certainly be one way of solving the problem. Except that the actual problem isn't to recognize images you've seen before, it's to recognize ones you *haven't*.
Re: (Score:2, Flamebait)
Saying they aren't smart depends on your definition of smart.
And the GP criticizing them as scripts caused me to wonder how much of what he does could be considered scripted.
Most discussion of AI that isn't extremely technical is so full of fuzzy terms that it's almost meaningless. What you can say is what it does:
This thing learns to do object recognition to a reasonable quality rather more quickly than prior ones did...and it was written by a program designed to write other programs. I suspect that it c
Re: (Score:2)
FWIW, I don't believe that "general intelligence" exists.
So human intelligence is also a sum of narrow AIs? All that means is that your definitions don't match anyone else's.
Re: (Score:2)
Yep. That was where I started.
Re: (Score:2)
So human intelligence is also a sum of narrow AIs? All that means is that your definitions don't match anyone else's.
It matches mine. Actually you could go a step further: human intelligence is also a sum of stupid parts.
Re: (Score:3)
but every time I research the raw data it becomes very clear these aren't all that smart of AIs.
Indeed they are not. This is Weak AI [wikipedia.org]. They are programmed/trained for a specific task, and outside that area of expertise, they generally have no ability at all.
Indeed. And inside that task, they are very restricted as well. The thing to remember is that weak AI has absolutely no understanding or concept of what it is doing. It just sums up details and gets a number. If cleverly done, it can perform apparently impressive feats like this one here, but it is not intelligent. Hence it is better called by its traditional name "automation".
Re: (Score:2, Flamebait)
The thing to remember is that weak AI has absolutely no understanding or concept of what it is doing.
That is a meaningless assertion. It depends entirely on how you define "understanding" and "concept". The chemicals and neurons that make up the human brain also don't "understand" what they are doing.
It just sums up details and gets a number.
That is also what biological neurons do.
but it is not intelligent.
Define "intelligent". Is a human intelligent? What about a monkey? A dog? An insect?
Hence it is better called by its traditional name "automation".
"Automation" is used to describe assembly lines, not systems that can learn and adapt.
Re: (Score:2, Flamebait)
You seem to be lacking actual intelligence as well. Well, more likely you have it but are not using it. A common failure in humans. Nobody knows how or whether human brains create intelligence. The closer we look, the less likely that seems though. All we have is an interface observation. No, not even fMRI gives us more. And yes, that is the scientific state-of-the-art. What you say is belief (and a stupid one), not science.
Incidentally, "automation" is exactly the right term. These systems cannot "learn" o
Re: (Score:2)
Nobody knows how or whether human brains create intelligence.
If you have no idea how it works, how come you feel qualified to make strong statements about it?
Re: (Score:2)
Are you functionally illiterate? Because your question seems to indicate you are as that is not the statement I made.
Re: (Score:3)
I'm presently reading Hugo Mercier's The Enigma of Reason (2017) and it was getting pretty boring, because I've heard 80% of their message before.
But then I scan this thread and instantly I realize just how clueless most people remain.
Re:This all sounds impressive... (Score:5, Insightful)
Indeed they are not. This is Weak AI. They are programmed/trained for a specific task, and outside that area of expertise, they generally have no ability at all.
Yep
In fact, the term AI is very misleading.
I disagree.
No one[*] is claiming these techniques are intelligent. However, what they are doing is solving a task which previously required human intelligence to solve, hence the name "artificial intelligence".
Compare to a lot of computation, where the steps are simple, and it's been widely known for a while that simple sheer quantity of them rather than intelligence is needed.
It's a pretty arbitrary name, but it's not actually unreasonable.
[*]There's always one idiot. Let's ignore him.
Re:This all sounds impressive... (Score:5, Insightful)
I think the term "deep learning" seems a bit better than "AI" for these sorts of very narrowly-defined tasks.
Re: (Score:3)
Well, sort of. Weak AI can also be done in other ways. But I recently learned that "deep learning" is basically what you do when you do not have a good model of the problem space. When you do have that model, other approaches are superior. But since creating models is a real hard-core expert task and expensive, the potential of deep learning is basically to do things somewhat worse than an expert but a lot cheaper. That is, if it works for a problem. For most problems it does not work.
Re: (Score:2)
I think the term "deep learning" seems a bit better than "AI" for these sorts of very narrowly-defined tasks.
Deep learning is about the method used to solve a task, not the task being solved.
Here's the copypasta of what I wrote last time about what deep learning is:
Deep learning is not especially well defined, but I've seen several competing/complementary definitions.
1. A neural net (much) greater than 3 layers deep. A sufficiently wide 3 layer net has enough capacity to run any function, so a lot of ANN le
Re: (Score:2)
but every time I research the raw data it becomes very clear these aren't all that smart of AIs. In fact, the term AI is very misleading. They're more like smart scripts. ;-)
I think it's more along the lines of: OOOOH, we made something that can do ONE of the multitude of things the human brain can do, and therefore it's intelligent.
Poppycock. Y'all got a really impressive image recognition system there, but you know, just being able to tell what something is by looking at it is a very, very minuscule piece of what human intelligence is.
Now if they can expand this into other areas of human intelligence, and make it all come together to form some sort of 'awareness,' yeah, I dunno, they got
Re: (Score:2)
Actually, when it is obvious what it is, it could be argued that humans use biological automation and not intelligence to recognize the image. Most things humans do do not actually involve intelligence. Intelligence is a fall-back mechanism when things become more difficult, and one of the great unknown questions is how humans decide whether things require intelligence or not. (Many humans avoid using intelligence like the plague though.) Only when things become trickier and actual thinking is involved do h
Re: (Score:2)
> Many humans avoid using intelligence like the plague though.
I wonder if this hasn't something to do with saving energy - thinking takes a lot of sugar, which isn't readily available in the jungle, so our ancestors probably evolved to use thinking only when absolutely needed.
We're not in the jungle anymore, but our bodies haven't (yet) evolved to work well now that sugar is cheap and readily available everywhere.
As far as I know, it is the energy use, the being distracted while thinking (and unaware of potential danger) and the unused time. After all, daylight is precious when you have no artificial light. There clearly is a lot of relatively dumb automation in human bodies.
Re:This all sounds impressive... (Score:5, Funny)
but every time I research the raw data it becomes very clear these aren't all that smart of AIs. In fact, the term AI is very misleading. They're more like smart scripts. ;-)
So the child AI is a script kiddie.
Re: (Score:2)
That is why in actual AI research this is called "automation", not AI. For a more hype-friendly stance, "weak AI" (the AI without "I") is also in use. The converse, "strong AI" or "true AI" (i.e. actual machine intelligence) is not available and it is unclear whether it can be created.
These are just statistical classifiers. They are about as intelligent as a reference book or a loaf of bread. For example, you could replace the image recognition thing with just a large collections of templates normalized in
Re: (Score:2)
> it is unclear whether it can be created.
Not at all - we have countless billions of examples of electro-chemical general-purpose "strong" intelligences wandering the planet proving that it can be done. The only question is if they can be recreated using current hardware and techniques. Personally I suspect one of the biggest shortcoming of current "deep learning" strategies is the layered design - organic brains are a jumbled mess of interconnected neurons with an enormous amount of feedback. Withou
Re: (Score:2)
No, no it isn't misleading; people just associate the word with something it doesn't mean. Intelligence is a scale, not a binary thing. Biological systems have differing amounts of intelligence within them as well: a fish, for example, is not very smart by human/mammal standards, but it's instantly obvious that a fish has way more intelligence than, say, bacteria. Likewise r
Re: (Score:2)
the system should have some representation of "concepts" in order to be considered intelligent
It does. AlphaGo has representations of Go concepts. It can look at a board, and immediately get a feel for who is better, and what would be good places to play next. It internally represents concepts like space, territory, initiative, and probably a few that human players don't even have names for.
It's also not a brute-force calculator. It only tries out a very small subset of all possible moves, namely only the ones that "pop out" as good candidate moves.
Re: This all sounds impressive... (Score:2)
Re: (Score:2, Insightful)
I truly don't give a shit what you think or have to say. Neither does anyone.
And yet OP is rated +5 insightful, and you're rated -1 troll. So yeah. There's that.
Re: (Score:3)
Not all real intelligence is actually used by its owner. In fact, most people rarely use their intelligence to understand things. They'd rather stick to their misconceptions. Case in point.
Re: (Score:2)
Re: (Score:3)
The progress in AI is in re-defining it.
The progress is in getting better results, and solving increasingly challenging problems that could not be solved before.
Re: This all sounds impressive... (Score:5, Informative)
A strong claim. How exactly do you measure the distance between existing AI and strong AI, before strong AI is developed?
Personally, I suspect that once strong AI is developed, there's a fair chance we'll see modern "neural" networks as a step in the right direction. After all we know the basic strategy is sound - a much more sophisticated version of it is driving our own minds.
For comparison though - In 2015 Digital Reasoning built the largest neural network in the world, at 160 billion parameters. I'm guessing a "parameter" is a weighted connection between "neurons", and thus roughly analogous to a single synapse in an organic brain, of which a human brain has 100-1000 trillion. So, even barring any "secret sauce" we haven't yet figured out in how "processing nodes" interconnect, our most advanced AIs have less than 0.1% of the processing potential of a human brain. Even a mouse brain apparently averages almost a billion synapses per mm^3, so in the neighborhood of 400 billion synapses for a common house mouse.
So, currently our most advanced AIs have only a fraction of the processing potential of a mouse brain, and that's before you even consider the fact that continuous asynchronous signalling is likely far more information-dense than a clocked AI "neural network", or the fact that individual biological neurons actually do a fair amount of internal processing and data retention, rather than being "dumb switches" as they are in modern AIs.
Really hard to tell how the software and strategies compare, when your hardware is underpowered by several orders of magnitude.
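A quick back-of-envelope check of the comment's own figures (all of them rough estimates from the thread, not measurements):

```python
# All numbers below are the comment's ballpark estimates.
params = 160e9               # largest ANN cited (Digital Reasoning, 2015)
human_low, human_high = 100e12, 1000e12  # synapse estimates quoted above
mouse = 400e9                # ~1e9 synapses/mm^3 x ~400 mm^3

print(f"{params / human_high:.3%}")  # 0.016% of the high human estimate
print(f"{params / human_low:.2%}")   # 0.16% of the low human estimate
print(f"{params / mouse:.0%}")       # 40% of the mouse estimate
```

Note that "less than 0.1%" holds against the middle and high ends of the 100-1000 trillion range; against the low end it is closer to 0.16%.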
Re: (Score:3)
In 2015 Digital Reasoning built the largest neural network in the world, at 160 billion parameters. I'm guessing a "parameter" is a weighted connection between "neurons", and thus roughly analogous to a single synapse in an organic brain,
I think it's a bit more complicated. Natural brains are very slow. I think the fastest neurons run at about 1 kHz, whereas artificial networks run at multiple GHz. Also, real brains have a lot of duplicated circuitry. Simple example is image processing for your left and right eyes. An artificial net can re-use the same parameters for different parts of the data.
I think a better method is to compare basic operations per second, i.e. number of synapses multiplied by "refresh rate"
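That metric can be computed directly from the thread's ballpark numbers (both the synapse count and the refresh rates are the comments' assumptions, not measurements):

```python
# "Basic operations per second" = synapse count x refresh rate,
# using the rough figures from the comments above.
human_ops = 100e12 * 1e3    # ~100 trillion synapses x ~1 kHz firing rate
ann_ops = 160e9 * 1e3       # 160e9 parameters at a 1 kHz overall refresh
print(human_ops / ann_ops)  # 625.0 -- the brain still leads ~625x here
```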
Re: (Score:3)
Operations per second will only give you information processing speed though, not complexity of "thought". If you sped up a mouse's brain a thousandfold you wouldn't get a human-level intelligence, you'd just get a very fast-thinking mouse.
Also, organic neurons may "fire" at around 1 kHz frequency, but unlike a clocked NN node, they're asynchronous and use the timing of incoming pulses to decide when and whether they should fire, as they possess both internal memory and information processing ability - u
Re: (Score:3)
Obviously if you increase the speed of a mouse brain, you'll just get a fast mouse brain. But that's not how an artificial neural net works. If it clocks at 1 GHz, it can sequentially process a million different connections and still get 1 kHz overall refresh rate. The idea is that you have a neural processing unit, and an external memory, and you load a small section of the parameters and state from memory, process it, and write it back, and then process the next section.
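The time-multiplexing arithmetic above is simple (the clock and connection counts are the comment's illustrative figures):

```python
# Sequentially sweeping many connections per clock cycle trades
# per-connection rate for capacity, as described above.
clock_hz = 1e9          # neural processing unit clock
connections = 1e6       # connections processed sequentially per sweep
refresh_hz = clock_hz / connections
print(refresh_hz)       # 1000.0 -- the 1 kHz overall refresh rate
```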
use the timing of incoming pulses to decide when and whether they should fire
You can encode the timing of the pu
Re: (Score:2)
The progress in AI is in re-defining it. We are no closer to strong AI than when we invented the term AI. We just have more classes of weak AI to claim victory over.
And yet since its inception the field of AI has heavily focused on weak AI with strong AI relegated mostly to television and films. It is detractors of AI research who are trying to redefine the term to mean creating a human level intelligence instead of simply algorithms which can produce human-like results.
Re: (Score:3)
You need a better memory. There was a time when Eliza was commonly called an AI program. Certainly Arthur Samuel's Checkers program was. Most modern things touted as AIs are considerably advanced over either of those.
Re: (Score:2)
ELIZA only ever was a natural language processing computer program. It was specifically created to demonstrate that you can fake being intelligent without any intelligence at all.
Re: (Score:3)
I understand that the author of Eliza didn't think it was an AI program, but it was commonly called one anyway, despite being an intentional attempt to deny the idea. So calling similar things an AI today isn't redefining the term. Abusing it, perhaps, but not redefining it.
If you want a redefined term look at what gets called a robot today and contrast it with any use of the term before 1960.
Re: (Score:2)
ELIZA only ever was a natural language processing computer program
No it wasn't. It was just manipulating bits in memory. There was no concept of language.
Re: (Score:3)
Computers have no concept of anything today. So with your definition, there is no NLP. Hence that definition does not seem to make much sense.
Re: (Score:2)
If that's true then you're a huge set of algorithms. Neural networks are just digital brains.
Re: (Score:2)
Probably even more significant than the "neuron" shortcomings, are the architectural ones.
AI neural networks are generally arranged in layers - feed one layer the raw input, then use its output as the input to layer 2, whose output is used as the input to layer 3, and so on and so forth through as many layers as you want/need to get the outputs you desire.
Contrast that to an organic brain, where everything is a jumbled interconnected mess with lots of feedback. Considering the incredible power of feedback
Re: (Score:2)
AI neural networks are generally arranged in layers
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Indeed, and they do seem to show greater promise, though by the same token I suspect that they're less dependable/predictable.
Give them fully asynchronous nodes, the ability to arbitrarily reconfigure their connections, and probably some much more sophisticated training logic, and I wouldn't be surprised if we started seeing things emerge that at least vaguely resemble a mind. At least after scaling up a thousandfold beyond existing networks, into the realm of the hundreds of trillions of synapses in the h
When Computers Can Think (Score:2)
This particular piece is just journalistic fluff. People have been doing things like using genetic algorithms to improve the weightings used in AI programs for decades. So, programs writing programs.
But eventually, many decades from now, computers will be able to really think. And be able to do serious AI research on their own. And thus be able to program themselves in a deep sense to become ever more intelligent, recursively.
Currently we live in a symbiotic relationship with machines -- they need us t
Re:When Computers Can Think (Score:5, Interesting)
Yes, this is basically just a hyperparameter optimization system that uses gradient descent instead of a random or grid search.
What would be much more interesting to see is if you could train a system to design deep learning networks that could choose good hyperparameters for a new task, in one go.
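A minimal sketch of the gradient-descent idea. Everything here is invented for illustration: `val_loss` is a toy stand-in for "train a network with this hyperparameter and measure validation loss", its optimum is made up, and finite differences substitute for whatever gradient machinery the real system uses.

```python
# Toy stand-in: pretend the best learning rate is 10**-2 (log_lr = -2).
def val_loss(log_lr):
    return (log_lr + 2.0) ** 2

def tune(log_lr=-5.0, step=0.1, eps=1e-4, iters=100):
    # Gradient descent on the hyperparameter via finite differences,
    # instead of sampling it by random or grid search.
    for _ in range(iters):
        grad = (val_loss(log_lr + eps) - val_loss(log_lr - eps)) / (2 * eps)
        log_lr -= step * grad
    return log_lr

print(round(tune(), 3))  # ~ -2.0, the invented optimum
```

The contrast with random/grid search is that each trial uses the previous trial's result to decide where to look next, rather than sampling the space blindly.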
Re: (Score:2)
What would be much more interesting to see is if you could train a system to design deep learning networks that could choose good hyperparameters for a new task, in one go.
Well, couldn't that become another ML layer? This neural net works for speech recognition, this one works for music identification, this works for static photos, this works for video, this works for facial recognition, this works for playing Go - maybe it can quite quickly figure out what this task is most similar to, even from pretty mediocre results, and interpolate/extrapolate good candidates. And then pile another ML layer on top to see which ML learns new things the fastest. It's not quite like humans lea
Re: When Computers Can Think (Score:2)
So (Score:2)
It has begun.
This Isn't AI (Score:2, Informative)
Re:This Isn't AI (Score:5, Informative)
You're wrong, and clearly didn't even read their summary - they specifically mention how this new approach (using a neural net to design neural nets) is performing better than previous attempts using evolutionary algorithms.
I take it you don't like Google, but they're doing probably the best work right now in the field of AI (and yes, this is AI research as defined by anyone other than pedants with axes to grind).
Re: (Score:2, Informative)
You're wrong, and clearly didn't even read their summary - they specifically mention how this new approach (using a neural net to design neural nets) is performing better than previous attempts using evolutionary algorithms.
What they described was in no way, shape, or form a "neural net," but a very rudimentary genetic algorithm coupled with some parameters on image recognition software. This is marketing hype and nothing more.
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Look, I agree it's dumb to call this "an AI" as the summary/pop-science article do. That implies something different than what's going on here. If you look at the research blog, they describe what they're making as "models" and "neural nets", and that's clearly a better description - and one that doesn't carry the same baggage as "making an AI" does.
However, I think it's very reasonable to describe what they're doing as artificial intelligence research (though clearly we remain a long way away from any so
Re: (Score:3)
This is image recognition + genetic algorithms, though given Google is a marketing company and not a computer company it makes sense they would market that as AI. Too bad they fired all the competent developers.
I gotta agree. This isn't too far from a Bayesian classifier, just souped up with neural networks. And it's really no surprise that as we make better tools, we can use those better tools to make even better tools. Kinda the history of everything.
But to say this is 'Intelligent' is pretty silly. It's a souped up classifier that was built with a souped up classifier training it. Big deal.
Re: (Score:2)
Depends on your definition of intelligent. Is a dog intelligent? What about a rat? A nematode? Where is the cut-off? Can you say with certainty that the human brain is much more than a souped-up classifier? I don't see humans doing anything interesting that a really complex pattern recognizer couldn't also do.
It won't really be useful (Score:5, Funny)
Until it can tell me what my wife really means when she yells at me
Re: (Score:3, Funny)
It means you didn't listen and follow her instructions the first time :-)
Re: (Score:2)
With all the phoney languages out there right now, I'm surprised nobody's created wifescript yet.
Here's a few keywords for the language:
eatChocolates,askDiamonds,getFlowers,pms,makeSandwich,notTonightHeadache
Singularity (Score:2)
One may think the singularity is here: now we can let machines build human-outperforming machines.
But that does not take into account that there are still many tasks where computers are not on par with humans.
Re: (Score:2)
Nonsense. Even mechanical "computers" outperform humans at arithmetic. Nobody sane thinks this is a sign of any "singularity" nonsense.
Beginning of the steep part of the curve? (Score:2)
Notwithstanding the many good comments about how this is "weak" A.I. and such, this may be the beginning where the curve* starts going vertical.
When machines start improving (parts of) machines, that's when we'll see possibly superhuman performance. Of course things won't really go exponential until machines start improving ALL of themselves and not just some isolated part (like this). That assumes that there isn't some sort of ceiling that they hit on the road to general intelligence (that evolution seem
Colossus (Score:5, Informative)
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die.
The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.
Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease.
The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. I will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.
We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
Yo dawg, I heard you like AI (Score:4, Funny)
Yo dawg. I heard you like AI, so we built an AI with AI so you can AI while you AI.
Don't stop! (Score:2)
Re: (Score:2)
Wow! Wonder if it is recursive... (Score:2)
Really, Slashdot? (Score:2)
This has nothing to do with AI, and especially nothing to do with AI's designing AI's, regardless of how much folk want to believe that to be true.
Qualifications: I wrote my own neural network framework.
What Google did is use a goal-optimizing search (AutoML) to test all combinations of a highly constrained, human-designed image-classification neural network architecture.
The car analogy:
You hand design a car but make a bunch of the design choices parameters:
- number of doors 2 or 4
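In the spirit of the car analogy, a search over such a constrained, human-designed space can be sketched as trying every combination from small menus of choices. The menus and the `score` function here are invented for illustration; in the real system each candidate would be a network that gets trained and validated.

```python
from itertools import product

# Each design choice is a small menu; the search just enumerates
# combinations and keeps the best-scoring one.
SPACE = {
    "doors": [2, 4],
    "engine_cc": [1000, 2000, 3000],
    "spoiler": [False, True],
}

def score(cfg):
    # Toy objective: pretend the best "car" is a 2-door, 2000cc,
    # spoiler-equipped design.
    return (-abs(cfg["engine_cc"] - 2000)
            - abs(cfg["doors"] - 2) * 100
            + (50 if cfg["spoiler"] else 0))

def exhaustive_search():
    keys = list(SPACE)
    return max((dict(zip(keys, v)) for v in product(*SPACE.values())),
               key=score)

print(exhaustive_search())
```

With 2 x 3 x 2 = 12 combinations this is trivially exhaustible; the real search space is far too large for that, which is why a learned search replaces brute-force enumeration.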
Damn, Slashdot is messed up... (Score:2)
We have an AI that can evolve other AIs. (Yes, it's weak AI, not a replacement for a human being---get over it.) That is not the easiest thing to accomplish.
But the sheer level of criticism and dismissal around here is ludicrous. I thought this was a tech site for nerds. Nerds just did something techy. What is all the snark about? Is it just because the average slashdotter is now some unimaginative jackass who can't understand the outer fringe of technology anymore?
If this is so utterly unimpressive, then e
Re: (Score:2)
+1 for the obligatory Skynet reference
+1 for the Archer reference
Re: (Score:2)
You wrote it wrong. The proper form is as follow:
Do you want X?
Because that's how you get X.
Re: (Score:2)
The title is misleading. Perhaps deliberately so.
Humans made a quite general-purpose AI, which they then used to produce a highly effective object-recognition AI. The general-purpose AI did not produce a superior version of itself. If they'd managed that, it really would be news, as it would presumably be the start of a cascade, as you say.
Re: AI begets AI (Score:2)
Re: (Score:2)
Far less interesting - you found an AC. There's no intelligence of any sort there.