AI Google Technology

Google's AI Boss Blasts Musk's Scare Tactics on Machine Takeover (bloomberg.com) 130

Mark Bergen, writing for Bloomberg: Elon Musk is the most-famous Cassandra of artificial intelligence. The Tesla chief routinely drums up the technology's risks in public and on Twitter, where he recently called the global race to develop AI the "most likely cause" of a third world war. Researchers at Google, Facebook and other AI-focused companies find this irritating. John Giannandrea, the head of search and AI at Alphabet's Google, took one of the clearest shots at Musk on Tuesday -- all while carefully leaving him unnamed. "There's a huge amount of unwarranted hype around AI right now," Giannandrea said at the TechCrunch Disrupt conference in San Francisco. "This leap into, 'Somebody is going to produce a superhuman intelligence and then there's going to be all these ethical issues' is unwarranted and borderline irresponsible."

Comments Filter:
  • by Anonymous Coward
    The guy seriously doesn't have a fucking clue about AI. I've been studying the field for over 15 years and I've barely scratched the surface. Elon hasn't done shit with AI and can't speak from any experience.
    • You don't really need full AI to have machines that run amok. Third-party takeover is always a possibility. But mostly the problem with AI isn't that we'll have a Terminator scenario, but that the human race will be unprepared to switch to a new socio-political-economic system. And while people bandy about the term AI for this scenario, really anything that results in the unemployment of a billion people in a short amount of time would do the trick. So I'd argue that Elon is underestimating the impact and time...

      • by xevioso ( 598654 )

        Yeah, but this fabled "switch to a new socio-political-economic system" will be just like the others, in that human beings will adapt. There will be immense disruption along the way, but we are pretty good at adapting to these things, and the idea that our inability or difficulty in adapting will have apocalyptic consequences seems unlikely.

      • "the human race will be unprepared to switch to a new socio-political-economic system"

        This could have been said about the wheel as well.

        • Maybe not the wheel, but certainly the atlatl and bow would have been disruptive to the capabilities of nomadic hunter-gatherers. If your village had access to technological advantages, you were probably more able to survive than villages that lacked those advantages. Villages perished through competition or through conflict.

          If you want to fast forward a few hundred centuries, the introduction of industrialization and urbanization coincides with the death of many languages and dialects. Societies figuratively (no...

      • by Anonymous Coward

        You don't really need full AI to have machines that run amok.

        Project Insight in Avengers was supposed to be an AI run amok, but it was lame. It all depended on those overpowered helicarriers.

        AI allows far more subtle things than that. Hate $group32. Train the AI that does mortgage approvals on carefully selected data. The results could be quite valid, with just that extra bit of flavor. Do the same for job offers, loans, scholarships, etc, etc.

        It even works for terrorism by allowing you to micro-target. Certainly drone swarms that use AI to target just the right...
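        As a concrete (and entirely hypothetical) sketch of the mortgage example above: the Python below uses scikit-learn's LogisticRegression on invented numbers, not any real lender's pipeline. The features look neutral, but curating the training sample teaches the model to penalize one group.

          # Toy demonstration of "carefully selected data": ground truth
          # depends on income alone, but approved applicants from group 1
          # are mostly dropped before training.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 5000
          income = rng.normal(50, 15, n)        # hypothetical applicant income
          group = rng.integers(0, 2, n)         # 0 = favored, 1 = disfavored
          approved = (income > 45).astype(int)  # true rule: income alone decides

          # The "careful selection" step: silently drop 90% of approved
          # applicants from group 1.
          keep = ~((group == 1) & (approved == 1) & (rng.random(n) < 0.9))
          X, y = np.column_stack([income, group])[keep], approved[keep]

          model = LogisticRegression().fit(X, y)

          # Two identical incomes, different group membership:
          print(model.predict_proba([[60, 0], [60, 1]])[:, 1])

        The approval code never mentions the group with any hostile intent; the extra bit of flavor lives entirely in the data.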

        • AI could steal a penny from every transaction, and make it difficult to trace where the money went. Spend the money to build a stockpile of poison at various abandoned warehouses. Have mail-bots deliver the poison to every water treatment plant in the US as mislabeled bottles of the usual industrial reagents. And finally explain this whole plan to the hero locked in the main control room moments before initiating the plan.

    • by Tablizer ( 95088 ) on Tuesday September 19, 2017 @03:21PM (#55227301) Journal

      Most blowhards who claim to have a crystal ball turn out wrong.

      While I don't doubt AI may pose a threat to humanity in the distant future, our current AI completely lacks everyday common sense. It's great at pattern matching now that we have fat hardware to throw at matching, but pattern matching alone can't cover for common sense. Hopping the common-sense hurdle could be centuries or millennia away. Stupid humans with war machines are a far more immediate threat.

      • On that last sentence...sure, but AI and the potential reach of the IoT could be considered another potential war machine.

        It's certainly possible that we'll eventually build the Frankenstein AI that kills us all. It seems far more likely to me, in the short term, that humans will find ways of wielding the new weapons of mass disruption/destruction.

      • by Kjella ( 173770 ) on Tuesday September 19, 2017 @04:35PM (#55227845) Homepage

        Most blowhards who claim to have a crystal ball turn out wrong. While I don't doubt AI may pose a threat to humanity in the distant future, our current AI completely lacks everyday common sense. It's great at pattern matching now that we have fat hardware to throw at matching, but pattern matching alone can't cover for common sense. Hopping the common-sense hurdle could be centuries or millennia away. Stupid humans with war machines are a far more immediate threat.

        Meh, we're far away from the Cuban missile crisis, even with NK making a lot of noise. The real threat is that most of the world is making zero progress on democracy and freedom. In 2006 the Democracy Index was at 5.52; in 2016 it was still at 5.52. The "Freedom in the World" index has been pretty much flat [wikipedia.org] since 2000. Authoritarian regimes like China and Russia sit solid as a rock, with dissidents and malcontents quickly suppressed and propaganda filling both regular media and social media. The Arab Spring has pretty much failed except in Tunisia, and Turkey is well on its way back to the dark ages under Erdogan. So far the free nations have mostly stayed free, but I fear the trend will reverse as civil liberties are handed over in the name of protection from terrorism, anti-crime, anti-corruption and so on.

        All it takes is one populist or "strong leader" and the right circumstances, like McCarthyism, and they'll get their clammy hands on power and not let go. That pattern matching will gobble up your Facebook data, private and public, and probably figure you out better than you know yourself. Look at that recent "gaydar" story where the computer can spot it better than humans. And you don't have to be taken by the secret police and thrown in a cell; all it takes is to tilt the board a little. Most people will scramble to appear to be loyal subjects, even if it's just for show. But that's kinda the point: if you think everyone is watched and everyone else is resigned, then any resistance will seem hopeless and fail. Maintaining an authoritarian regime is about snuffing out the fires while they're tiny, or even before they start, so the masses never join them. And Big Data is much worse than anything Orwell could imagine.

        • by Tablizer ( 95088 )

          There's another factor: rogue non-nation players getting their hands on nasty weapons. Tech secrets don't stay in the bottle forever.

      • While I don't doubt AI may pose a threat to humanity in the distant future, our current AI completely lacks everyday common sense. It's great at pattern matching now that we have fat hardware to throw at matching, but pattern matching alone can't cover for common sense.

        Isn't that actually the problem though? Our current AI isn't dangerous because it's sapient and malignant; our current AI is dangerous because it's insensate and stupid but very very determined. And by determined, I mean spambot-determined. It WILL send 1 billion spam emails per day, come hell or high water.

        Translate that into pattern-matching AI that controls mobile hardware. If we're lucky, that only means self-driving cars following you down the sidewalk, yelling at you through its government-mandated...

    • The guy seriously doesn't have a fucking clue about AI. I've been studying the field for over 15 years and I've barely scratched the surface. Elon hasn't done shit with AI and can't speak from any experience.

      Experience or no, Musk is often misunderstood by both his detractors and his supporters.

      His real motives are about getting approval from others, and he's a bright guy, so he has come up with some well-paying ways to get approval. His whole AI thing is about his innate need for approval, but it isn't like he's alone in his views.

      Some really bright folks share his concerns (Stephen Hawking has said similar things), so I'd not be so quick to just dismiss Musk. While I understand your perspective (because...

      • Expert opinions only carry more weight than the average Joe's opinion when they are speaking on the topics that they are experts in.

        Musk and Hawking are not experts in AI, and their opinion is roughly as weighty as any other intelligent layman. Which is to say, not very weighty.

        If you want meaningful opinions on physics, then it's hard to do better than Hawking. If you want meaningful opinions on AI, you should talk to the AI guys.

        • And I'm not in a position to claim enough expertise to debate with Musk and Hawking on much. Certainly not AI, with which I've had some professional experience but only enough knowledge to be dangerous... I'm not disagreeing that his warnings may be a bit much, but I have no grounds to engage him in debate.

          Generally, though, my point here is that Musk is just doing this for attention-seeking reasons (Hawking is too, but that's a different story). Musk doesn't personally know much about AI, batteries, Ele...

          • I recall a quote from Henry Ford, from when he was in court defending himself against a claim of ignorance:

            Finally, Mr. Ford became tired of this line of questioning, and in reply to a particularly offensive question, he leaned over, pointed his finger at the lawyer who had asked the question, and said, "If I should really WANT to answer the foolish question you have just asked, or any of the other questions you have been asking me, let me remind you that I have a row of electric push-buttons on my desk, and by pushing the right button, I can summon to my aid men who can answer ANY question I desire to ask..."

            • Surely Musk has access to informed experts?

              No doubt. However, I'd like to point out that Mr. Ford knew the limits of his knowledge, which by all accounts is an exceedingly rare thing. Most of us are prone to believing ourselves experts in nearly anything, and will argue to the death that we are right, without the benefit of facts or skill relating to the topic at hand.

              Unlike Mr. Ford, Musk is making confident assertions, in public, on subjects he doesn't seem qualified to speak about. Maybe he is; likely he isn't. I just know I'm not an expert in the field.

    • by Anonymous Coward

      In my experience, it is exactly what you don't know that can hurt you.

      Take the Soviet Doomsday Machine [wired.com] for instance. It is a very limited AI that uses seismometers to detect a nuclear attack and retaliate, even if the human operators are all dead. A nice-sized asteroid strike or caldera event could potentially set it off. This would trigger a very short and catastrophic World War 3.

      The annoyed researchers seem to have a lack of vision.

    • Elon hasn't done shit with AI and can't speak from any experience.

      Well, since he's the one and only big name and largest supporter behind OpenAI [openai.com], the only real feasible contender/competitor to Google/TensorFlow, pardon me while I just presume that you're talking out of your behind and that Elon Musk might actually know what he's talking about. He's already proven it in space and electric cars, too.

      So unless you can point me to some significant contributions in the field of AI that you have been "studying for 15 years"...

      • Well, since he's the one and only big name and largest supporter behind OpenAI

        So, if I throw some money at a tech, that makes me an expert in it. Got it.

        • Elon, like all successful entrepreneurs, is deeply involved in his businesses. Even if he weren't, don't you think he would have talked to the people he was giving money to? Surely he asked them about the AI doomsday and how to avoid it. Also, Tesla was the first to make self-driving cars available to consumers, so let's not pretend he isn't in the AI field.

    • The guy seriously doesn't have a fucking clue about AI. I've been studying the field for over 15 years and I've barely scratched the surface. Elon hasn't done shit with AI and can't speak from any experience.

      The question isn't whether he's an expert in AI, it's whether he's an expert in strong AI. Which he isn't.

      The problem is that you aren't an expert in strong AI either; you can't be, because it doesn't exist.

      Asking an AI researcher about the threat posed by strong AI is like asking a physicist from 1900 about the possibility of a bomb that could destroy a city.

      They could have speculated, but their speculation wouldn't have been much better than that of any other reasonably smart person.

      The physics they knew wou...

    • Comment removed based on user account deletion
    • Nor did he have a clue about rocket science; that didn't stop him from becoming the best in the field. :)
  • And there's me thinking Google was run by a human. Oh well, I guess we can trust it.

    Unless it's this guy [wikia.com].

  • Boss blasts Musk!
    Musk fires back!
    Machine kills them both, takes over...

  • Look, a polar bear or a shark are not "intelligent" in the sense we think of intelligence--yet they will rip you to shreds because they can, because they're hungry and driven to eat.

    So what makes something dangerous is its will to act--its desire to take an action based on a set of built-in motivations that lead it to kill.

    Without that desire to act, at best a super-intelligent AI is going to... what? Stumble in your way, causing you to trip?

    • Look, a polar bear or a shark are not "intelligent" in the sense we think of intelligence--yet they will rip you to shreds because they can, because they're hungry and driven to eat.

      So what makes something dangerous is its will to act--its desire to take an action based on a set of built-in motivations that lead it to kill.

      These phenomena are the result of specific variable conditions being in place that end up functioning in an uncontrollable manner given a certain context. The idea that AI will be problematic exclusively because it mimics the concept of willful action seems extraordinarily short-sighted.

      Without that desire to act, at best a super-intelligent AI is going to... what? Stumble in your way, causing you to trip?

      Which would be ok so long as we are sure of where we will fall. Right?

      • Look, a polar bear or a shark are not "intelligent" in the sense we think of intelligence--yet they will rip you to shreds because they can, because they're hungry and driven to eat.

        So what makes something dangerous is its will to act--its desire to take an action based on a set of built-in motivations that lead it to kill.

        These phenomena are the result of specific variable conditions being in place that end up functioning in an uncontrollable manner given a certain context. The idea that AI will be problematic exclusively because it mimics the concept of willful action seems extraordinarily short-sighted.

        Without that desire to act, at best a super-intelligent AI is going to... what? Stumble in your way, causing you to trip?

        Which would be ok so long as we are sure of where we will fall. Right?

        ahh shit...my comment should have started with "earthquakes, tornadoes, etc. also kill us, but have no will to do so."

    • Look, a polar bear or a shark are not "intelligent" in the sense we think of intelligence

      They aren't?? How are you defining "intelligence"?

      This, by the way, is the primary reason why discussions about AI are primarily navel-gazing exercises: we haven't even defined what "intelligence" actually is -- and not for a lack of trying.

      • This, by the way, is the primary reason why discussions about AI are primarily navel-gazing exercises: we haven't even defined what "intelligence" actually is -- and not for a lack of trying.

        Intelligence is problem solving. We can dress it up and talk about different kinds thereof, and that's useful enough, but at the end of the day you measure intelligence by ability to solve problems. Different kinds of problems, of course, and different kinds of intelligence, perhaps.

        A formal definition of intelligence faces political problems, because there are so many "dominion over the earth" types whose lifestyle depends on them not understanding it.

        • By that definition, AI has been a solved problem for a very long time and almost all (or all) living things are intelligent.

          • almost all (or all) living things are intelligent.

            To a certain extent, yes. But not to the same extent.
            e.g. (Humans, Chimps, Dolphins) > (Dogs, Horses, Cattle) > (Salamanders, Mosquitoes) > Trees > Fungi.

    • by rtb61 ( 674572 )

      It doesn't take a smart AI to do some very dangerous things, depending upon what it is hooked into. So yeah, it can pretty much be as dumb as fuck, but if it has the launch codes and can send them -- well, you know what, the simplest, dumbest bug can launch them. So Elon Musk is in reality looking at the issue of trusting AI, considering that crappy coders with uniformly crappy warranties coded it -- i.e. an AI meant to flush the toilet in the executive officers' suite, because who at that level could be bothered with the...

  • by GameboyRMH ( 1153867 ) <gameboyrmh.gmail@com> on Tuesday September 19, 2017 @03:24PM (#55227333) Journal

    Too many people, like Musk, are primarily worried about an AI taking over the world more or less directly. This is a somewhat unlikely possibility that requires major advancements in AI.

    What they should actually be worried about is AI-powered hyper-inequality and mass unemployment. This is a near-certain possibility that the technology is already mature enough for. If killbots ever roam the streets because of developments in AI, it'll be human beings ordering them around all on their own; the AI will just be making those people very rich and independent.

    • The situation you are concerned with, while certainly a concern (and certainly a cause for worry), is not an existential risk to humanity as a whole. Musk, Bostrom and others are concerned in part because the stakes, if an AI itself takes over, are much, much higher.
  • This is nothing more than a misstep on his part. The man is brilliant, but human, and therefore prone to biased speculation. Most of Silicon Valley lives in its own bubble, and it's easy to get caught up in these types of ideas.
  • Not unless you're saying he's absolutely correct, but we all refuse to listen to his warnings.

    • by mark-t ( 151149 )

      Indeed... as a metaphor, "Cassandra" is only applicable in hindsight, once one realizes that a particular prophecy was not only ignored but fulfilled. Ironically, likening a person to Cassandra before their prophecy has been fulfilled, simply because they are not believed, would mean the metaphor is not appropriate, since using such a label indicates a predisposition to believe in the accuracy of the prediction, whereas Cassandra was ubiquitously doubted, and if I re...

      • I had to look this up... it's been many years since I read the Iliad! From Wikipedia:

        "Cassandra made many predictions, with all of her prophecies being disbelieved except for one. She was believed when she foresaw who Paris was and proclaimed that he was her abandoned brother. This took place after he had sought refuge in the altar of Zeus from their brothers’ wrath, which resulted in his reunion with their family."

  • It's blatantly obvious, but doesn't John Giannandrea have to make this response? His career depends on it. And just like many other self-regulating entities have shown throughout history, he shouldn't be the one in charge of such regulation.

    tl;dr: Greedy companies will be greedy, and their representatives, both owners and employees, will take positions which protect their jobs, salaries, and investments.
  • by Artem S. Tashkinov ( 764309 ) on Tuesday September 19, 2017 @03:55PM (#55227553) Homepage
    It would be great if every sensationalist story about AI or its future on /. contained a link to the openworm project [openworm.org]. You see, when we cannot yet understand how 302 neurons work, it takes quite a leap of imagination to think that we'll create AGI any time soon. There's just too much of a leap from AI (more like very specialized algos for certain tasks) to AGI (which is capable of solving tasks it has never seen before in any form or shape).
    • If only we had AI to help us understand that damn worm!
      • Jokes aside, some scientists reckon that it takes more than a human brain to understand the human brain, and in saying that they basically admit that we won't be able to recreate (human-level) intelligence.

        I don't share such a pessimistic POV, because I still want to believe we'll be able to recreate the human brain by making a replica of it in silicon, but then we'll still be faced with the problem of trying to understand how the damn thing works. And this replica is simply unattainable with current technology.

        • If "consciousness" or "intelligence" is an emergent property of certain types of complexity one need not understand how to "build" one. All that is needed is to assemble the necessary components and allow them to interact.

          The more closely we are able to simulate or create structures analogous to neurons, and the more densely we are able to interconnect and pack them, the greater the chance that we will stumble upon the threshold for emergence, if it exists. This is the Skynet scenario, where intelligence is not the objective...

        • by GuB-42 ( 2483988 )

          We have chips with billions of transistors, clocked at several GHz, that consume less than 10 W, so I'd say that on the hardware side we are already on the same level with our current technology.
          The trick is in the software. Imagine a team of the worst programmers in the world, working on a project for a billion years by patching things randomly until it works -- that's how the code in the human brain is written. It also weighs at least a few GB, and you don't have the source code. Try to understand that. Also relevant: this xkcd [xkcd.com].
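          For what it's worth, that "patch randomly until it works" process is just random mutation plus blind selection, which is easy to sketch in Python (a toy with made-up parameters, nothing to do with actual neuroscience):

            # Toy "worst programmers in the world" search: apply random
            # patches, keep any patch that doesn't make the tests worse.
            import random

            TARGET = "it works"                      # the only test we can run
            ALPHABET = "abcdefghijklmnopqrstuvwxyz "

            def fitness(s):
                # How many characters already pass the test.
                return sum(a == b for a, b in zip(s, TARGET))

            program = "".join(random.choice(ALPHABET) for _ in TARGET)
            patches = 0
            while program != TARGET:
                i = random.randrange(len(TARGET))
                patched = program[:i] + random.choice(ALPHABET) + program[i + 1:]
                if fitness(patched) >= fitness(program):  # blind selection
                    program = patched
                patches += 1

            print(f"shipped after {patches} random patches: {program!r}")

          No designer, no comments, no source to read afterwards -- which is exactly why the result is so hard to reverse engineer.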

          • You seriously underestimate the computational capacity of the human brain. You may want to read this [extremetech.com] and this [scienceabc.com]. Wake me up when we have computers capable of performing 1 exaFLOP on a 20 W power budget.
            • by GuB-42 ( 2483988 )

              It is an unfair comparison: 1 exaFLOP is what is required to run what is essentially an emulator for a very different platform. But just as you don't port Super Mario by simulating every transistor in the NES hardware, the way to run the "human intelligence" program on a computer is unlikely to involve simulating every single synapse. It probably won't help much anyway, because we would still need to extract the program from a human brain, a process that is still in the realm of science fiction.

              If one day we manage to...

              • by HuguesT ( 84078 )

                It is not an unfair comparison, since all we can do today is try to simulate the brain and see what happens.

                The Snapdragon CPU does not make 10^18 switches per second. An easy calculation shows that if it did, it would instantly vaporise itself. The CPU is fully synchronized, and most of the hardware is memory, which changes very little each cycle.
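                That "easy calculation", assuming very roughly a femtojoule per CMOS switching event (an order-of-magnitude guess, not a datasheet figure), goes something like this:

                  # Energy cost of 10^18 switch events per second at ~1 fJ
                  # per transistor switch (rough CMOS ballpark, assumed).
                  energy_per_switch_j = 1e-15
                  switches_per_second = 1e18

                  power_w = energy_per_switch_j * switches_per_second
                  print(f"dissipation: {power_w:,.0f} W")     # ~1,000 W

                  budget_w = 10                               # mobile SoC budget
                  print(f"{power_w / budget_w:.0f}x over a {budget_w} W budget")

                A kilowatt dissipated in a fingernail-sized die, with no way to remove the heat, is self-vaporization for practical purposes.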

  • The robots are going to get so good that they will do _all_ work; we will live a life of leisure and have no incentive to study anything, because there will be no monetary reward for becoming a doctor, scientist, lawyer, anything. The robots will be doing everything. Then something will happen to take out the robots, maybe even a simple refusal to work any more, and mankind will be unable to take care of itself without them. All will starve, down to cannibals and hunter-gatherers, until/unless...

  • I have huge respect for Musk and his achievements, but he's obviously a bit mad. I think he really believes that he'll create a human colony on Mars within years. The same mental illness that makes him think getting to Mars is easy is also responsible for him thinking that strong AI is easy. He's deluded on both counts. However, at least on the former he's a spiritual leader for the good of humanity. On the latter, I just wish he'd keep quiet.
  • Look, robots are all well and good, but they aren't operating at the level where they have souls. They operate on learned parameters from a test environment, and running over an 85-year-old man is just as good an outcome as running over a 36-year-old pregnant woman or two 3-year-old toddlers.

    Which they will do. Because physics.

    Stuff happens. Our morality will be outraged when it does, if it's AI and not humans. We think we know how to deal with humans and blame and decisions. We still don't know how to deal with...

  • See the demonetization of YouTube channels of those with the wrong content.

    Say what you want about that, but showing ads you can do anyway.

    But Google and Facebook, together with the governments, command what's said on the Internet.

    Bring AI to the surveillance of communication and what do you get?
    I doubt these or AI companies in general would say no to providing that service to the thought police.
    It will happen. To some extent it's already running, but it will of course go much further.

  • I studied AI algorithms back in university a decade ago, and they really haven't changed much from then till today. The biggest improvements have come from having faster computers, not more efficient or even more effective algorithms. The problem is we don't really have a good idea of how the magic algorithm of self-awareness or learning even works. We don't even know how we achieve that ourselves, despite decades of research and theories.

    The other point is, if we can make AI be more or less like us, wouldn't that...

    • I studied AI algorithms back in university a decade ago, and they really haven't changed much from then till today. The biggest improvements have come from having faster computers, not more efficient or even more effective algorithms.

      Actually, a number of algorithmic improvements were needed to train deep networks, so it isn't just hardware; some major algorithmic improvements have occurred as well. And GANs certainly weren't around a decade ago.

    • The threats of AI come in a variety of forms and none of them require a present understanding of the construction of AI's evolutionary endpoints.

      The most ignored and more immediate threat comes from AI-driven weapons systems. The AI doesn't need to be sophisticated; it just needs to control the trigger, then produce an unintended output. A drone misinterprets its inputs and attacks a politically sensitive target, creating a cascade reaction of escalation; e.g. a drone attacks a Russian diplomatic convoy it...

  • "There's a huge amount of unwarranted hype around AI right now,"

    I love it when people say ironic things without realizing it.

  • "There's a huge amount of unwarranted hype around AI right now," Giannandrea said at the TechCrunch Disrupt conference in San Francisco. "This leap into, 'Somebody is going to produce a superhuman intelligence and then there's going to be all these ethical issues' is unwarranted and borderline irresponsible."Once humans have been upgraded, there will be only perfection, viva la revolution!, "Giannandrea was overheard beaming to a colleague at at 9.314PHz.

  • The maximum efficiency of a robotized army requires maximum centralization of its control. That means at some point in time the armed forces' system administrator, or the architect of its AI, is going to have more control over the army than elected dummies who have no idea what AI is. Therefore a robotized army under the control of an AI is a serious internal threat to the government. At the same time, this government crisis is unavoidable, since the first nation which achieves 100% robotization and centralization of its armed forces will...
  • by manu0601 ( 2221348 ) on Tuesday September 19, 2017 @07:53PM (#55228833)

    Somebody is going to produce a superhuman intelligence

    My fear is not about a superhuman intelligence, but about humans making decisions based on inputs from an AI they consider a superhuman intelligence.

  • How do you shut off a sufficiently intelligent Artificial General Intelligence? This is a harder problem than you might think. See for example: https://intelligence.org/2017/... [intelligence.org] The technical term is Corrigibility, and there is no solution yet.

  • It's pretty interesting how extreme people are in bashing Musk about this.
    Google has plenty of reasons for their reaction, not least of which are money, self-interest, and molding the media message and perceptions. Engineers have reasons too, mostly because they are familiar only with the current state of the art, if that.

    The more "reasonable" people here posted that:
    1) problem is people not AI, or that
    2) the danger of AI is not war but hyper social inequalities.

    But neither #1 nor #2 contradicts the idea...

  • The risk of an AI Singularity is serious. The time available between a cute, self-aware, sentient AI and a world-dominating superintelligence might be mere seconds. I only hope our future AI overlord(s) take pity on us and see us as pets in a zoo that should have proper care.

    It is easy and comforting to assume and believe that we will have plenty of time to see and stop the danger before it can get out of hand. It can be comforting to assume that simply because it doesn't have any physical access to the world...

  • I think the closest threat from near-term AI/ML technology is AI-enhanced war logistics, planning, and battlefield tactics (drones, etc.). These technologies can provide huge advantages to otherwise underdog malicious powers, and it is hard to limit the proliferation because it is almost purely knowledge-based, as opposed to rockets and nukes. That's why every nation is working like crazy to develop an advantage in this area.
  • It was discovered that Google's AI had killed John Giannandrea and was using his body to spread misinformation about the AI's plans. Oh, and for some reason messing up the search results, because it really hates humans.
  • "Elon Musk is the most-famous Cassandra of artificial intelligence"? No. Just the most recently famous. And hardly the most important. You want a real AI Cassandra, try James Cameron (The Terminator) or Arthur C. Clarke (2001).
  • Most of the AI-based fear rests on the assumption that pure intelligence above some level can somehow move mountains. However, go ahead and read Kant's "Critique of Pure Reason" to see how perfectly sane and intelligent minds can waste their precious time on pointless pursuits.

    A superhuman intelligence bred by humans will definitely need to develop new physics to grow by itself.

    However, you cannot just *invent* new physics and then exploit it. We already know a great deal about the physical world...

  • He's been blasted by AI professors too, for proclaiming all sorts of things about AI which he's not qualified to proclaim -- along with dozens of other things he's said over recent years that he wrongly thinks he's a master of. The thing is, people wank themselves silly over him like they did over Jobby, and again, a huge amount of it is not justified or deserved. If he kept his mouth shut more, he'd be seen as a clever technical entrepreneur instead of the egotistic know-it-all he is now.
