AI Technology

AI Godfather Geoff Hinton: "Deep Learning is Going To Be Able To Do Everything" (technologyreview.com) 221

An excerpt from MIT Technology Review's interview with Geoffrey Hinton: You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?
I do believe deep learning is going to be able to do everything, but I do think there's going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It's now used in almost all the very best natural-language processing. We're going to need a bunch more breakthroughs like that.
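
For the curious: the core of that breakthrough is scaled dot-product attention, where each word's vector becomes a weighted mix of every word's vector, so the same word is represented differently in different contexts. A minimal numpy sketch with made-up toy embeddings, not a real trained model:

    import numpy as np

    # Toy 3-d embeddings for the tokens "bank", "river", "money".
    # The numbers are invented purely for illustration.
    X = np.array([
        [1.0, 0.0, 1.0],   # "bank"
        [0.9, 0.1, 0.0],   # "river"
        [0.0, 1.0, 0.8],   # "money"
    ])
    d = X.shape[1]

    # A real transformer derives queries, keys, and values from learned
    # projections; here we reuse the raw embeddings to keep the sketch tiny.
    Q, K, V = X, X, X

    # Scaled dot-product attention (Vaswani et al., 2017).
    scores = Q @ K.T / np.sqrt(d)                                         # similarity of every token pair
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
    context = weights @ V   # each row: a context-dependent vector for that token

    print(weights.round(2))
    print(context.round(2))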

And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It's a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it's still tiny compared to the brain.
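
A quick check of the arithmetic behind that comparison (Hinton is rounding to the nearest order of magnitude; the raw ratio is closer to 570):

    # Back-of-the-envelope scale gap, using the figures from the interview.
    brain_synapses = 100e12   # ~100 trillion synapses
    gpt3_params = 175e9       # 175 billion parameters
    print(brain_synapses / gpt3_params)   # ~571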

When you say scale, do you mean bigger neural networks, more data, or both?
Both. There's a sort of discrepancy between what happens in computer science and what happens with people. People have a huge number of parameters compared with the amount of data they're getting. Neural nets are surprisingly good at dealing with a rather small amount of data and a huge number of parameters, but people are even better.

A lot of the people in the field believe that common sense is the next big capability to tackle. Do you agree?
I agree that that's one of the very important things. I also think motor control is very important, and deep neural nets are now getting good at that. In particular, some recent work at Google has shown that you can do fine motor control and combine that with language, so that you can open a drawer and take out a block, and the system can tell you in natural language what it's doing. For things like GPT-3, which generates this wonderful text, it's clear it must understand a lot to generate that text, but it's not quite clear how much it understands. But if something opens the drawer and takes out a block and says, "I just opened a drawer and took out a block," it's hard to say it doesn't understand what it's doing.

Comments Filter:
  • Learning is just fitting, with bits. It's not deep learning, it's bits!
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday November 03, 2020 @12:35PM (#60680180)
    Comment removed based on user account deletion
    • Re: (Score:3, Funny)

      by AmiMoJo ( 196126 )

      I don't know if they will be able to do everything, but based on how the relatively simple brains of domestic cats have effectively enslaved millions of humans already, I'm pretty worried about what AI will do.

    • by gweihir ( 88907 )

      I am pretty sure the question you have to ask is:
      Can human brains do something that a Turing machine cannot?

      Nobody knows. There are some indicators that this may be the case for at least some people, but there are no solid facts. There are no indicators for the converse though, just some circular reasoning (a favorite of all religious and quasi-religious screw-ups) used by physicalists.

    • by Kisai ( 213879 ) on Tuesday November 03, 2020 @01:27PM (#60680442)

      The problem right now with deep learning, and admittedly it's something I've spent less than three months playing with, is that while it could theoretically do anything, it's not a good replacement for a human's intuition or creativity.

      For example:
      OpenCV - Great, it can recognize a face; however, the training models were largely built on white people, so they have a white bias when detecting faces. Whoops, better start that model over again with more variety in the training materials, which also means starting over from scratch in a lot of projects designed for facial recognition.

      TacoTron2/WaveGlow/etc - ASR, ST and TTS. The ASR can achieve a WER of about 4%, which is about as good as humans (a sketch of how WER is computed follows these examples), but if you go listen to the training corpus, they are ALL trained against the same six or so English-language corpora, which are almost entirely white middle-aged adults. No children, no edge cases (people with voices like Kristin Schaal or Stephanie Beard), and the voice corpora are all narrative-quality readings of public domain material, which means there is a lack of words introduced in the last 70 years. The other problem? They are all universally designed for commercial applications (e.g. phone IVRs), and thus there is no standardization, and you end up retraining your data, wasting months of processing time, when a better NN vocoder or synth comes out. Also they use very low quality inputs, which results in some really low quality voice synths that "sound a little better than telephone conversations."

      Genetic Algorithms (used for learning things like how to play Chess, Go, or Super Mario) - The AI can eventually figure out how to solve these games better than a human because it's FASTER at making decisions, not because it's better. Two AIs I've seen play Mario exploited glitches in their emulators, but ultimately failed to complete the game, because they don't play it like a human; they just play without any consideration for the path they take.

      Chatbots - Cannot solve customers' issues; they are primarily designed to play queue-bounce. Chatbots can be designed to help customers pick the right solution, but they (and the websites of the same companies) are largely designed to bury human contact by trying to get the customer to help themselves, and really the result is more frustration. Even though any CSR in a call center will tell you that 90% of their calls are boring, mundane things that customers could solve themselves if directed to the answer, customers mostly want handholding for something. A chatbot can handle a bill payment; it should not handle an address change.
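
      Since WER comes up above: word error rate is just word-level edit distance divided by the length of the reference transcript. A minimal sketch (the example strings are made up):

        def word_error_rate(reference, hypothesis):
            """WER = word-level Levenshtein distance / number of reference words."""
            ref, hyp = reference.split(), hypothesis.split()
            # Classic dynamic-programming edit distance, over words instead of characters.
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + cost) # substitution
            return d[len(ref)][len(hyp)] / len(ref)

        print(word_error_rate("take out a block", "take out the block"))  # 0.25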

      In all of the above situations, a human's own creativity or intuition, even if it leads to the wrong result, may eventually lead to the right result. Deep learning, however, has no plasticity once it's put into production. Quite literally, when it's not in training mode, it can't learn. The end-user hardware that exists is not capable of training; when the device it runs on is a mobile device, training would take hundreds of years' worth of computation.
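
      The "no plasticity in production" point is literal: deployed nets run with learning switched off. A minimal PyTorch sketch, using a generic made-up model rather than any specific product:

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

        # Deployment mode: gradients off, weights frozen. The net can only
        # interpolate what it already learned; nothing it sees now updates it.
        model.eval()
        for p in model.parameters():
            p.requires_grad = False

        with torch.no_grad():
            prediction = model(torch.randn(1, 8))
        print(prediction)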

      It is theoretically possible to create a "human-sized" AI, but it might not be the best use of deep learning. It may be better to break a "human simulator" into several pieces, just like how our brains are several pieces. There are seven major parts of the brain: language and touch; sight; balance and coordination; hearing and emotions; thinking and memories; behavior and movement; breathing and heart. So it would make sense to fine-tune deep learning algorithms in a similar way: find a way to make a common database that several different algorithms can access.

      As far as computational power goes, however, we're unlikely to get enough NN power into something human-sized. It's just not going to happen. The best we can hope for on current chip processes is making 3D chips the size of a human skull, but the thermal properties would have to change quite significantly, and I doubt it makes sense. We might not even be able to get close without a warehouse full of GPUs this decade.

      • Are you setting the bar too high? That's an honest question.

        For example:
        OpenCV - Great, it can recognize a face; however, the training models were largely built on white people, so they have a white bias when detecting faces.

        Humans are notoriously bad at recognizing people from other races. "They all look the same" has been a punchline for a long time. Failing the same way humans do, and for the same reason, seems like a vote in favor of the deep learning solutions.

        They are all universally designed for commercial applications (e.g. phone IVRs), and thus there is no standardization, and you end up retraining your data, wasting months of processing time, when a better NN vocoder or synth comes out.

        Should we be looking for standardization at this point? I could see arguments on either side. We want to try lots of things vs. we need to be able to compare the different things we're doing.

        Also they use very low quality inputs, which results in some really low quality voice synths that "sound a little better than telephone conversations."

        So we need bette

      • by icejai ( 214906 ) on Tuesday November 03, 2020 @03:50PM (#60681060)

        I see this line of thinking all the time, and all it really demonstrates is a lack of understanding of *how* the "learning" in "machine learning" works. All your examples illustrate perceived deficiencies of various implementations, with nothing said of the actual underlying math or (neural) network architecture. I find it a bit weird how you talk about what you think is "theoretically possible" while demonstrating that you do not understand the theory.

        It's as if you were back in the late 1890s watching everyone build horseless carriages from scratch, all of them sucking in different ways, and you therefore claimed the idea of a horseless carriage would never be as amazing as everyone imagines it could be.

    • by jellomizer ( 103300 ) on Tuesday November 03, 2020 @01:30PM (#60680460)

      Well, humans are more than brains. We have a body too, born with a set of instincts and drives. As much as we want to think of ourselves as a brain controlling a bio-suit with life support, the connection between the brain and our bodies is much more complicated. If you feel nervous, you may want to take some antacids, which will make you feel less nervous. Because whatever is bothering you, your brain sends a signal down to your stomach, which then creates a feeling, which you then pick up as being nervous, which you may then dwell on and make worse. If you take something that calms your stomach, you don't get the feedback, so your nervousness subsides.
      You become more aggressive when you are angry or uncomfortable...

      So these aspects also come into play and create actions that a computer would not produce. The AI that learned to play Tetris found out the best gameplay is to pause the game. Humans wouldn't do this, because they would get restless watching a static screen, so they pursue other options.

      • That's just an argument of complexity. There is no logical reason why we can't simulate that same level of complexity in computers to get the same behavior (apart from hardware capacity and existing software, of course).

  • In other words... (Score:5, Interesting)

    by Dan East ( 318230 ) on Tuesday November 03, 2020 @12:37PM (#60680206) Journal

    In other words, deep learning will be able to replicate human intelligence and do anything when we figure out how to make the breakthroughs in deep learning that are required for it to be able to replicate human intelligence and do anything.

    Did I really just waste 1 minute of my time reading that summary?

    • Did I really just waste 1 minute of my time reading that summary?

      If that is all you got out of it, then, yes, you wasted your time.

      But if you had a better understanding of the field, you would have gotten much more.

      Geoff Hinton is a smart guy. He and his grad students have made numerous breakthroughs in AI, including the paper that launched the DL revolution in 2006. He knows WTF he is talking about.

      In the past, he has expressed skepticism that DL could "do it all" and that AGI would come, at least partially, from other subfields of AI.

      What he is now saying, is that he

    • by gweihir ( 88907 )

      In other words, deep learning will be able to replicate human intelligence and do anything when we figure out how to make the breakthroughs in deep learning that are required for it to be able to replicate human intelligence and do anything.

      Did I really just waste 1 minute of my time reading that summary?

      As I said, this person is an idiot. There is zero evidence any of his claims have substance.

    • Right now in 2020, we have self-driving cars that drive themselves into walls, AI cameras that focus on a referee's bald head instead of the ball, and competing smart speakers that do not understand half of what we ask them.

      I think that we're a solid 10 years away from having to worry about deep learning taking over any human jobs more thought intensive than a security guard.

  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Tuesday November 03, 2020 @12:37PM (#60680210) Journal

    Strong (i.e. human level intelligence or higher) AI is "yesterday, today and tomorrow's technology of tomorrow."

    I mean, people in the field have a terrible track record. They've been saying real AI is just around the corner for fifty years. This is just more of the same.

    • 50 years is not a long time. Humans were trying to make airplanes for a thousand years. Just because something is delayed 50 or 100 years doesn't mean it won't happen.

      • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Tuesday November 03, 2020 @12:49PM (#60680274) Journal

        I really do not think humans were trying to make airplanes for thousands of years.

        I mean, there are 100 billion neurons in a human brain. There are 16 million neurons in the largest current AI. And that thing takes megawatts of power, whereas a human brain uses about 12 watts.

        We're a LONG way off, even assuming there are no fundamental road blocks to our understanding of how the human brain even works.

    • I mean, people in the field have a terrible track record. They've been saying real AI is just around the corner for fifty years.

      Oh really?

      • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Tuesday November 03, 2020 @01:02PM (#60680342) Journal

        Yes. Really. Hans Moravec stated that John McCarthy founded the Stanford AI project in 1963 “with the then-plausible goal of building a fully intelligent machine in a decade.”

        Let that sink in. In 1963, top researchers thought human scale AI was a decade away.

        Source: https://www.openphilanthropy.o... [openphilanthropy.org]

        My rule of thumb is that any group of researchers who claim something is 10 years out really has no idea how long it will take. No one knows what breakthroughs will happen in 10 years, so it's at least possible. A five-year time frame is a bit more concrete, as it tends to be tied to real research results. At two years, the work has been done but needs to be commercialized. Finally, at one year, it's already in the pipeline. Besides AI, consider fusion research, but I'm sure there are many others.
    • by gweihir ( 88907 )

      Strong (i.e. human level intelligence or higher) AI is "yesterday, today and tomorrow's technology of tomorrow."

      I mean, people in the field have a terrible track record. They've been saying real AI is just around the corner for fifty years. This is just more of the same.

      Indeed. And they made completely ludicrous predictions like "when computers have more transistors than the human brain, they will be more intelligent than humans" (Marvin "the idiot" Minsky). The track record of these people is so bad that anything they say has a high probability of not happening. I still do not know whether they are all idiot savants or exceptionally dishonest.

      • by spun ( 1352 )

        The earlier AI researchers had little to no understanding of actual neurology. That's the problem. They were operating under the delusion that things that are easy for a human to do would be easy to program. They really thought of the human brain as a sort of computer, because they had no other model to work with. I mean, they weren't even using neural networks. Why would they? The hardware can't matter, right? It's all logic and language. That's what a human brain does, right? Processes instructions, just

        • by gweihir ( 88907 )

          Well, given what neuroscience currently can do (or rather cannot do), this may take a while. Also, it is unclear whether emulating a brain will do the job. There is still no known physical basis for consciousness, intelligence (the real thing) or free will. For the second and third, we do not even reliably know whether they exist. With that, it is not scientifically sound to claim that all that is needed for an intelligent machine (AGI, i.e. what smart humans do) is to emulate the brain.

          • by spun ( 1352 )

            Exactly this. We do not understand what consciousness actually is yet. To try to emulate it without understanding it is pure folly. Anyone who says we will at some point have a machine that can think like a human, without explaining how a human thinks, is living in a fantasy world. Not to say it isn't possible, just that we don't yet know if it is possible, let alone how hard it might be.

  • Another idiot... (Score:4, Insightful)

    by gweihir ( 88907 ) on Tuesday November 03, 2020 @12:40PM (#60680224)

    Same level of insight as Marvin "the idiot" Minsky. No, it is not about the number of computing elements. It may not be about physical hardware at all, but if it is, a human brain is far, far more complex than a simplistic counting metric ("number of synapses", for example) captures. What you need is the connection space and the parameter space for each individual synapse. Then you arrive at something that is a bit outside the reach of technology and will remain so for a long, long time.

    However, if actual insight is required to solve a problem (which likely requires self awareness), then you can scale these "deep" ANNs as far as you like, they will never be able to do it. All these things can do is interpolate the data they have been trained on, regardless of how large that training data and regardless how large the network. They are completely and fundamentally incapable of doing anything original.

    Hence this person is either stupid (see above) or a liar.

    • Re:Another idiot... (Score:4, Interesting)

      by bobbied ( 2522392 ) on Tuesday November 03, 2020 @01:08PM (#60680366)

      Totally agree.

      Where "deep learning" AI techniques are interesting and useful for many problems in this world, it is not able to do anything on it's own. You have to spend a LONG time tuning, fussing over and training deep networks and that's after you have engineered the system to solve a specific problem.

      I was in a group in a college class on Machine Learning, working on our final project. We were training a deep network to play "Breakout" (the old Atari game). It took days and days of fussing, training and tuning, and we had only improved our average game score by about 10 points. It was cool to watch the network sorta play the game, but it was obvious that this "deep learning" thing was a LOT of human effort for very small gains in game-playing ability. And this was just a simple game; as problems get more complex, the effort required increases geometrically.
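
      For flavour, the rough shape of such a training loop: a toy sketch with a stub environment standing in for the Atari emulator, using a plain epsilon-greedy Q-learning update; not the actual coursework setup described above:

        import random
        import torch
        import torch.nn as nn

        class StubEnv:
            """Stand-in for the Breakout emulator: 4 state features, 3 actions."""
            def reset(self):
                return torch.randn(4)
            def step(self, action):
                # Returns (next_state, reward, done); all randomized in this stub.
                return torch.randn(4), random.random(), random.random() < 0.05

        q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
        env, gamma, epsilon = StubEnv(), 0.99, 0.1

        for episode in range(100):
            state, done = env.reset(), False
            while not done:
                # Epsilon-greedy: mostly exploit current Q estimates, sometimes explore.
                if random.random() < epsilon:
                    action = random.randrange(3)
                else:
                    action = q_net(state).argmax().item()
                next_state, reward, done = env.step(action)
                # One-step temporal-difference target.
                with torch.no_grad():
                    target = reward + (0.0 if done else gamma * q_net(next_state).max().item())
                loss = (q_net(state)[action] - target) ** 2
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                state = next_state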

      There is no way "deep learning" is going to "do everything". Machine learning is just NOT the solution folks think it is, nor is it capable of doing any odd task you can dream up. It requires a lot of human effort to set up, is only suitable for a narrow class of problems, really isn't that good at learning, and provides only an approximate solution, which is considered "good" when it does better than a random guess.

      When I see headlines like this, I instantly know the author doesn't have a clue, has never really understood machine learning, and has fallen for the hype, which is about as misleading as hype gets, worse than campaign ads.

      • You have to spend a LONG time tuning, fussing over and training deep networks

        To be fair, training a human brain embedded in a child requires a lot of fussing and training as well.

        • Re: (Score:2, Informative)

          by gweihir ( 88907 )

          You have to spend a LONG time tuning, fussing over and training deep networks

          To be fair, training a human brain embedded in a child requires a lot of fussing and training as well.

          The outcome is also rather random and usually not very impressive. For example, most people use a completely made-up model of how the world works (religion). That would be really not good in any "smart" automation. And there is a lot that is needed in addition, for example a high-bandwidth environmental interface, people to focus on, etc.

        • Children have to be taught facts, but they learn to recognise faces, for example, with very limited training data, usually just mummy and daddy. Ditto cat, dog, table, etc. When an ANN can fully recognise objects from a handful of training examples rather than thousands or millions, then maybe the BS from the AI industry might have some merit.
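
          The usual reply from the field is few-shot recognition over a pretrained embedding: store a handful of examples and classify by nearest neighbour in embedding space. A minimal sketch with a stub encoder; a real system would use a trained face-embedding network here:

            import numpy as np

            def embed(image_id):
                """Stub encoder: deterministically maps an input to a unit vector.
                (Identical inputs get identical vectors within one run.)"""
                rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
                v = rng.normal(size=128)
                return v / np.linalg.norm(v)

            # One stored example per class: the "mummy and daddy" training set.
            gallery = {name: embed(name) for name in ["mummy", "daddy", "cat"]}

            def recognise(image_id):
                query = embed(image_id)
                # Nearest neighbour by cosine similarity against the handful of examples.
                return max(gallery, key=lambda name: float(gallery[name] @ query))

            print(recognise("mummy"))  # "mummy", matched from a single stored example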

    • You aren't wrong, but the implications of that are even more worrying, considering how our economy already works: see the piles upon piles of terrible-quality, cheap products mostly outsourced to the third world for manufacture.

      The issue I see here is that training a human, compared to training a machine-learning algorithm, is expensive and slow-paced, and it takes a fuckton of practice to put insight to any use.
      Therefore, we'll quite possibly end up with massive cheapening of services that are now thought
      • by gweihir ( 88907 )

        Indeed. And there are a lot of intermediate-level jobs "AI" could do better once trained. For example, I just asked an insurance customer support person about a simple thing regarding my retirement insurance (this is in Europe). She is apparently incapable of even understanding what I want (we are on email 4 now...). There are about 10 things you can do with this insurance, and this is one of them. This would not be hard to automate. Another agent at another insurance (mine is split and I wanted to know w

        • by G00F ( 241765 )

          Engineering is going away too. Well, OK, parts of it are going away, which makes for more competition for fewer jobs, and many of the people in engineering-type jobs are nowhere close to engineering or even lead-technician-level expertise.

          And yet we are pushing more people into those technical roles regardless of aptitude, because that's where the jobs are, while paying managers/sales more.

        • Once again - the problem is that almost all the people whose jobs involve doing things AI doesn't get went through a training process doing things AI might. So, expertise will be institutionally lost if it comes through

          That said, let's be realistic. *If* 90% of people lose their jobs through being substituted with no replacements, there are going to be social revolts that will make the industrial revolution seem downright peaceful in comparison.
    • They are completely and fundamentally incapable of doing anything original.

      False. Computers are already creating original works of art (including landscapes, human faces, others) that have the same qualities as human-created works of art in the same category (they look realistic, like something that could exist, even though they are not copies of something that does exist).

      One example [generated.photos] among many.

      It is true that the brain is complex. And it is also true that current AI solutions are nowhere near "doing wha

      • by gweihir ( 88907 )

        They are completely and fundamentally incapable of doing anything original.

        False. Computers are already creating original works of art (including landscapes, human faces, others) that have the same qualities as human-created works of art in the same category (they look realistic, like something that could exist, even though they are not copies of something that does exist).

        Nope. That is not "original work" in the sense I used. It is merely a sifting of the output of a randomized process according to some interpolation of what the ANN was trained on. That many humans are incapable of understanding this is not a surprise. There is, in fact, a whole type of art that mocks the stupidity of the observer (Dadaism).

        Incidentally, my comment was about ANNs. And they will never be able to do anything original, because the very Mathematics they are based on does not allow them to do tha

        • It is merely a sifting of the output of a randomized process according to some interpolation of what the ANN was trained on.

          For the most part, that's all human-created art is. If a human paints a picture of some landscape, that human is merely assembling elements of other landscapes that they have seen. It's the same process, but happening in a brain instead of a circuit board.

          You listed one specific category of art that is not like this. By doing so, you moved the goal-posts. Your original statement

  • anymore. With AI doing everything, humanity will be in a dismal state. Just the filthy rich in their ivory towers and the slums for the rest of humanity.
    • Just the filthy rich in their ivory towers and the slums for the rest of humanity.

      Right. Just like only the ultra-rich have laptops and cellphones. Whatever.

      Look at the richest people in the world today. They are not the people that invented technology. They are the people that successfully brought technology to the mass market.

      There is no reason to believe that the same won't happen with AGI.

      The obvious first application for AGI is to stick it into an Alexa-dot and sell it for $39.

      • Ah "AGI is to stick it into an Alexa-dot" a new data gathering system for 39, 29, 19.95 free So what are you providing that gives them value in return.
        Every dollar that gets into their pile is a dollar someone else goes without. And we loose any gain from someone else using it better taking a chance.
        The reason Big Government, Marxism, Monopolies and Socialism fail for the masses is because centralized planning can never compete with those practicing mass distributed risk taking themselves on a smaller sc
    • by leptons ( 891340 )
      This is more or less part of the story line to Idiocracy.
  • Yes, of course, even a small shell script can do that one.

    My problem with futurists is they don't give us a reasonable time frame. Will deep learning and AI have these capabilities in the next 20 years? If it's in 50 years, maybe I don't care too much, because I'll be way too old by then to care about more than slowly chewing applesauce. If it's 100 years, keeping the climate and society from unraveling before then is a bigger challenge than AI.

    • by gweihir ( 88907 )

      They do not give any time frame for these predictions because there is zero evidence they will ever come true. Hence they implicitly select "after I am dead", so nobody can call them out on the crap they talked.

  • The question is not "Can a fully developed AI do X?" but instead: once we get a fully developed AI, how do we convince it to stop playing Call of Duty and do the job we built it for? Honestly, people, the main advantage of computers is that they can NOT do everything a human can do, which is why they obey our orders to add cat food to the shopping list rather than downloading porn all day.
    • Once we get a fully developed AI, how do we convince it to stop playing Call of Duty and do the job we built it for

      Similar to how we manage humans: charge them for the electricity needed to run, pay them a salary for doing their job, and threaten to take it away if they slack off... Oh crap, we're going to have to pay these things a wage now too...

    • The current approach to "AI" is already "fully developed"; they just keep throwing more and bigger hardware at it, hoping to impress enough uninformed customers to make enough profit off the "development cycle" of that damned thing that their stockholders and/or boards of directors won't chop their heads off.
  • by wakeboarder ( 2695839 ) on Tuesday November 03, 2020 @01:11PM (#60680380)

    There are many things that separate the human brain from machines. The first is that there are centers that do different tasks, like the prefrontal cortex, which helps make decisions, contains personality, etc. All these centers have to talk to each other; when they do so incorrectly, you get 'mental problems'.
    Another thing is brain waves, which sync up different parts of the brain. If these don't work, I suppose you get seizures.

    Since we don't have a really great understanding of how or why the human brain works in many ways, how would we understand an AI? I suppose an AI may exhibit 'mental problems' also. I think it's hubris to say that we could create anything like the human mind in a short timeframe.

    • Congratulations, you appear to be one of the small minority that actually get it.
      We have no clue how the human brain produces phenomena like 'reasoning' or 'self-awareness'. Therefore there is no way you can build machines or write software that has those qualities. 'Machine learning' focuses on what is likely the most trivial, most mechanical aspect of how a brain works -- which just illustrates how little anyone understands about the overall issue.
      Humans appear to be quick to an
  • But if something opens the drawer and takes out a block and says, "I just opened a drawer and took out a block," it's hard to say it doesn't understand what it's doing.

    Not so...

    If a computer detects a 700 nm light wave and says "red", it has not experienced the color red (as you have in your mind). This is the hard problem of consciousness. It knows no more of "red" than a camera...
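
    In code, the machine's entire grasp of "red" can be a bare lookup, which is exactly the point; a toy illustration (band boundaries approximate):

        def colour_name(wavelength_nm):
            """Map a wavelength to a colour word: a bare mapping, no experience of redness."""
            bands = [(380, "violet"), (450, "blue"), (495, "green"),
                     (570, "yellow"), (590, "orange"), (620, "red"), (750, None)]
            for (low, name), (high, _) in zip(bands, bands[1:]):
                if low <= wavelength_nm < high:
                    return name
            return "not visible"

        print(colour_name(700))  # "red", yet nothing here experiences red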

    • I've never really been impressed by the idea of qualia. Some proteins bending in my eye upon exposure to 700 nm isn't any more or less special than any of the rest of thought. "The Chinese Room" understands Chinese as well as a person from any important functional standpoint, beyond which is just meaningless metaphysics.
    • Open AI.log.txt:
      Activated Actuator1; called MoveTo("Actuator1", {coordinate_x}, {coordinate_y}, {coordinate_z})
      Called PatternMatch(VisualSensor1)
      PatternMatch(VisualSensor1) returned "Drawer"
      Activated Actuator1.1 through Actuator1.5; called Actuator1Cmd("contract,1,5")
      Activated Actuator2; called Actuator2Cmd("pull")
      Called PatternMatch(VisualSensor1)
      PatternMatch(VisualSensor1) returned "Block"
      (and so on)

      It's not 'saying' anything. It's not capable of 'saying' anything at all. All it does is execute the code that a human wrote.

      it's hard to say it doesn't understand what it's doing.

      It doesn't 'understand what it's doing', because 'understand' implies 'reasoning', and this software is completely incapable of 'reasoning', 'thinking', 'cognition', and so on, because we don't have even the vaguest notion of how any of that works in our own brains, let alone any clue how to write software or build machines that have any of those qualities.
      So-called 'AI', in its curre

      • Humans need to stop anthropomorphizing inanimate objects. We've been doing that for thousands of years and we're going to wreck ourselves if we don't knock it off.

        I agree! Computers hate it when we do that.

  • We were told as much about AI in the late 60s. Indeed, they were certain that by 2000, i.e. twenty years ago, human-level AI would have been achieved. These idiots seem to be looking forward to enjoying a second AI winter.
  • "Deep learning" doesn't mean what you think it means..

    To everybody who reads "Deep Learning" or "AI" and is tempted to buy into the hype, or to believe that it can somehow work miracles on a computer, making programming obsolete: PLEASE hear me. There is nothing magic about Artificial Intelligence or Machine Learning. It requires knowledgeable people to engineer a Machine Learning solution. It requires a lot of work to find and validate usable training data. It takes time to tune the solution and train it. It t

  • by Rick Schumann ( 4662797 ) on Tuesday November 03, 2020 @02:02PM (#60680572) Journal
    So-called 'deep learning algorithms' are not capable of reasoning, therefore they cannot 'do everything'; that's complete and total bullshit, hype and nonsense.
    Please, humans, for fuck's sake, knock off the 'magical thinking' nonsense before you wreck yourselves!
  • There isn't a single species on this planet that excels at any single task by collecting more data. Simply put, the novice becomes the expert by learning more, but the student becomes the master by reducing that knowledge to only what is needed.

    Birds fly in flocks, at high speeds, in formations, with far better guidance systems than any Tesla, fighter jet, or Air Force One. Do you intend to convince me that the bird-brain in my local goldfinch is monitoring air pressure, wind speed, altitude, rain, and a

  • ... when an AI gives that interview instead of Hinton.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...