AI Technology

IBM Claims Big Breakthrough in Deep Learning (fortune.com) 81

The race to make computers smarter and more human-like continued this week with IBM claiming it has developed technology that dramatically cuts the time it takes to crunch massive amounts of data and then come up with useful insights. From a report: Deep learning, the technique used by IBM, is a subset of artificial intelligence (AI) that mimics how the human brain works. IBM's stated goal is to reduce the time it takes for deep learning systems to digest data from days to hours. The improvements could help radiologists get faster, more accurate reads of anomalies and masses on medical images, according to Hillery Hunter, an IBM Fellow and director of systems acceleration and memory at IBM Research. Until now, deep learning has largely run on a single server because of the complexity of moving huge amounts of data between different computers. The problem is keeping data synchronized between lots of different servers and processors. In its announcement early Tuesday, IBM says it has come up with software that can divvy those tasks among 64 servers running up to 256 processors total, and still reap huge benefits in speed. The company is making that technology available to customers using IBM Power System servers and to other techies who want to test it.
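
The core idea behind this kind of distributed deep learning is data parallelism: each server trains on its own shard of the data, and the workers periodically average their gradients so every replica of the model stays in sync. Below is a minimal single-process sketch of that synchronization step in plain Python/NumPy; it illustrates the general technique only, not IBM's actual distributed deep learning software.

    # Toy simulation of synchronous data-parallel training: each "worker"
    # computes a gradient on its own data shard, the gradients are averaged
    # (an all-reduce), and every worker applies the same update.
    import numpy as np

    rng = np.random.default_rng(0)

    # A small linear-regression problem standing in for a real model.
    X = rng.normal(size=(4096, 8))
    w_true = rng.normal(size=8)
    y = X @ w_true + 0.01 * rng.normal(size=4096)

    n_workers = 4
    shards = np.array_split(np.arange(len(X)), n_workers)  # one shard per "server"
    w = np.zeros(8)                                         # replicated weights

    for step in range(200):
        # Each worker computes a local gradient on its own shard.
        local_grads = [X[i].T @ (X[i] @ w - y[i]) / len(i) for i in shards]

        # "All-reduce": average the per-worker gradients so all replicas agree.
        grad = np.mean(local_grads, axis=0)
        w -= 0.1 * grad  # identical update applied by every worker

    print("distance from true weights:", np.linalg.norm(w - w_true))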
  • >a subset of artificial intelligence (AI) that mimics how the human brain works.

    We already have humans for that. How about doing things humans cannot do?

    • Re: (Score:3, Insightful)

      by Baron_Yam ( 643147 )

      >We already have humans for that

      There are limits to overclocking humans, and they have an unbelievably high percentage of downtime combined with low overall reliability.

      >How about doing things humans cannot do?

      This is doing things humans do, but more quickly and accurately, with a lower TCO.

      • by lucm ( 889690 )

        This is doing things humans do, but more quickly and accurately, with a lower TCO.

        No it doesn't. Over and over there are stories about failed projects with Watson. For instance, a few months ago there was this failed $60 million project in a cancer treatment & research department of a big university in Texas. Watson was supposed to make cancer diagnoses faster than humans, and it didn't happen because it just couldn't ingest the data.

        That's the biggest problem with all forms of AI and machine learning: feeding it data. You can't just point it at a server and say: "look there, it's somewhere

        • In my mind, AI will be AI when you can tell it what you need and it will seek out the data it needs to learn - without intervention or guidance.

    • How about doing things humans cannot do?

      That's great, and you can work on that if you like. It doesn't diminish the value in having something that can perform the work of a human, but can be scaled up and down without traumatizing a workforce. Something that can run 24/7. Something that can be tinkered with internally without a visit from the police.

    • We already have humans for that. How about doing things humans cannot do?

      There are a lot of humans that aren't very intelligent. Furthermore just because a human can do a task doesn't mean it cannot be done better/faster/cheaper with some mechanical assistance. Humans can do remarkable things but we have our limits. This includes both physical and knowledge tasks.

      • by umghhh ( 965931 )
        Most of the time, breaking our limits, or even pushing as close to them as possible, is not needed at all. That is to say, it is not needed for any reason other than other humans forcing us to do it.
    • Remember when computers used to be human? No, here's a few books to read.

      "Computing in the Middle Ages: A View From the Trenches 1955-1983" [amzn.to] by Severo Ornstein

      "The Supermen: The Story of Seymour Cray and the Technical Wizards Behind the Supercomputer" [amzn.to] by Charles J. Murray

      • by ls671 ( 1122017 )

        Creimer my friend, here is some counseling:

        You wrote:

        Remember when computers used to be human? No, here's a few books to read.

        how about:
        Remember when computers used to be human? No? Here are a few books to read:

        I didn't even restructure the phrase any better...

        Never mind your beloved spam links that followed.

    • by gweihir ( 88907 )

      They cannot even do what humans can do (this is weak AI, with no actual intelligence present), so how would they do things humans cannot do? However, it seems they can do the very limited things that are possible now a lot cheaper. That is actually good news.

      • by ranton ( 36917 )

        They cannot even do what humans can do (this is weak AI, no actual intelligence present), how would they do things humans cannot do?

        Computers have been capable of things humans cannot do since they were just basic adding machines. Why would we even be using computers if you were correct? Do you also think a pickup truck is incapable of doing things humans cannot do just because it doesn't use strong AI to do it?

        • by gweihir ( 88907 )

          You seem to be unaware how computers work...
          With enough time, pencils and paper, a human can do what a computer can do.

          • by ranton ( 36917 )

            With enough time, pencils and paper, a human can do what a computer can do.

            And given enough time, my toddler can do anything I can do. Doesn't mean I want to wait 20 years for that to happen. Doing things with speed and consistency is exactly what computers have always been able to do that humans cannot. By your logic I can do everything Usain Bolt can do; guess I should sign up for the Olympics.

    • How about doing things humans cannot do?

      If a room full of servers can do the task better than a room full of humans, then that's exactly what they have accomplished.

      Doing the same task faster, more consistently, and with fewer errors is often more valuable than being able to perform a previously-impossible task. As a side benefit, automating a task may make new applications possible.

      A human could perform any of the calculations that modern computers do, if they had the time and desire. However, the automated methods transform or create entire ind

  • by JoeyRox ( 2711699 ) on Tuesday August 08, 2017 @12:13PM (#54965749)
    Expensive, polished and flashy commercials. They should develop a server farm for rendering bullshit.
    • by Tablizer ( 95088 )

      The penalty for over-spinning AI is small while the upside is an increase in stock price. I suppose if you keep hyping your products, smart investors will see a pattern and ignore the hype, but there are still plenty of lazy and clueless investors on whom it works.

    • by hord ( 5016115 )

      You mean every z-series mainframe produced? All of them ship fully loaded with every CPU and RAM bay filled but you have to pay IBM to turn them on. Per year.

      • by lucm ( 889690 )

        You mean every z-series mainframe produced? All of them ship fully loaded with every CPU and RAM bay filled but you have to pay IBM to turn them on. Per year.

        And yet they manage to have shrinking revenue year after year. Maybe they should do an internship at Apple.

  • "keeping data synchronized between lots of different servers and processors"
    In a parallel initiative, the cryptocurrency Ethereum has begun work on its Casper platform, meant to transition away from a 'proof of work' incentive towards 'proof of stake'. Same problem set, but the profits will go into your pocket, not a corporation's.

  • I wonder how this compares to Google's approach to speeding up ML, the Tensor Processing Unit [google.com], and whether the ideas can be combined for even faster learning.
    • by Anonymous Coward

      Google has open sourced TensorFlow and it does parallel processing. Copying an open source solution is hardly a breakthrough. But I was never expecting anything new from IBM. Google, on the other hand, will most likely give us something shiny in a couple of years.

      • by lucm ( 889690 )

        Google, on the other hand, will most likely give us something shiny in a couple of years.

        No, they fire competent people to appease SJW twitter mobs. Leadership and backbone have left the building.

    • by hord ( 5016115 ) <jhord@carbon.cc> on Tuesday August 08, 2017 @01:57PM (#54967023)

      Anything labeled "tensor" just means that it does matrix multiplications according to the principles of linear algebra. The reason why 3D game GPUs are popular for this is because gaming has used this technique since at least the Quake engine was invented. Basically everything that happens in a game rendering engine is just setting up a matrix and then multiplying a bunch of them together in a particular order.

      What you see with things like the TPU is more dedicated hardware that is optimized for matrix and linear operations. These would be more optimized for AI workloads and have busses and memory pipelines designed for these problem sets rather than things like textures or shaders in a GPU. Mathematically speaking they are all doing the same thing, though.

      So, essentially any piece of software that relies on these techniques can benefit from dedicated hardware acceleration and many products offer several backends that will support various hardware accelerator platforms. The recent article on a USB-based one comes to mind. Google's TensorFlow can run on a native CPU, a GPU, or dedicated CUDA-based hardware cards, also. Part of the advantage of using TensorFlow is having an abstraction over this hardware, actually. IBM's effort here mirrors other's in an attempt to distribute huge data workloads across many machines efficiently. There is a big problem in AI right now with idle cores waiting for data to load and sync.
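
      As a rough illustration of that abstraction (assuming a TensorFlow 2.x install; which accelerator is present is simply whatever the machine happens to have), the same matrix multiplication can be dispatched to whichever backend is available:

        # Sketch only: one matrix multiplication behind TensorFlow's device
        # abstraction. Assumes TensorFlow 2.x; falls back to the CPU when no
        # GPU is visible.
        import tensorflow as tf

        # The multiply below is the same primitive a game engine's transform
        # pipeline or a TPU's matrix unit is built to accelerate.
        a = tf.random.normal([1024, 1024])
        b = tf.random.normal([1024, 1024])

        device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
        with tf.device(device):
            c = tf.matmul(a, b)

        print('ran matmul on', device, 'with result shape', c.shape)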

  • "IBM (IBM, +0.44%) says it has come up with software that can divvy those tasks among 64 servers running up to 256 processors total, and still reap huge benefits in speed." Everything in this description is stuff you could do with open source code like TensorFlow 6 months to a year ago. More details are needed to call this a "breakthrough". Have they published a paper?
    • by lucm ( 889690 )

      Releasing anything is a breakthrough at IBM. If one day the broken version of wget on AIX supports the --method option there will probably be a documentary about it on Netflix.

  • But beyond the end-of-the-world jokes, this stuff is still massively cool. There are a lot of really great things that can be accomplished by better AI to enrich everyone. Definitely another step towards all the neat sci-fi things we've seen.
    • by umghhh ( 965931 )
      Except it hardly ever works like this. Good things enrich everyone only after the elites have enriched themselves, and that process is stopped and reversed only during major upheavals. Why should this be any different with AI (whether IBM is doing AI with this is another issue altogether)?
      • Maybe AI will finally govern finances and decide the 1% is too greedy and fix the problems? :P The 1% is definitely an issue, but I wouldn't want to stop research on AI etc. just because they'll benefit from it like they do from most things.
  • by RandCraw ( 1047302 ) on Tuesday August 08, 2017 @01:11PM (#54966429)

    IBM's "innovation" is to insert synchronizations in a render farm that enable the gathering of intermediate GPU results across a distributed batch run.

    Of course, Google made the same "revolutionary discovery" several years ago when Dean and crew first developed DistBelief. Later they abstracted it into TensorFlow's compute graphs. When was this, 2010?

    Yeah. Another Big Blue Breakthrough.

    • According to the article, the main development is that they use NVLink.

      I.e.: they use some kind of SLI bus for the interconnect between all the graphics cards in their cluster.

      Instead of using OpenMPI over InfiniBand like tons of scientific clusters have been doing for the past decade.

      Yes, they're going to shave a tiny bit of latency (from what I gather the interconnect is handled and accessed directly by the graphics cards themselves) (as opposed to InfiniBand, which would need to go through the bus on the motherboar
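
      For comparison, the OpenMPI-style approach mentioned above usually boils down to an all-reduce over the interconnect, roughly like the hypothetical sketch below (assumes mpi4py plus an MPI runtime such as OpenMPI; launched with something like mpirun -n 4 python allreduce_demo.py):

        # Hypothetical sketch of the classic cluster approach: average per-node
        # gradients with an MPI all-reduce. Whether the bytes travel over
        # InfiniBand, Ethernet, or NVLink is the MPI runtime's problem.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Stand-in for a gradient computed on this node's shard of the data.
        local_grad = np.full(8, float(rank))

        # Sum across all ranks, then divide by the number of ranks to average.
        avg_grad = np.empty_like(local_grad)
        comm.Allreduce(local_grad, avg_grad, op=MPI.SUM)
        avg_grad /= comm.Get_size()

        print(f"rank {rank}: averaged gradient[0] = {avg_grad[0]}")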

  • by Anonymous Coward

    Since when has a neural network been an N-to-M network? It doesn't have the problem they claim to solve, so I'm not clear what exactly they've invented.

    Was this by any chance invented by the marketing department?

  • Why can't this be done in a couple shaders on a GPU?
    • by hord ( 5016115 ) <jhord@carbon.cc> on Tuesday August 08, 2017 @02:11PM (#54967149)

      The data workloads start at multiple TBs and sometimes can't even be hosted on a single machine. This is about distributing workloads to multiple machines that may then have dedicated hardware accelerators attached.

      • by e r ( 2847683 )

        The data workloads start at multiple TBs and sometimes can't even be hosted on a single machine.

        I moonlight as an admin for a contractor that does MRI and fMRI analysis. He does have multiple terabytes of data on the drives, but he's not churning through all of that for every subject. Still, I'm sure that several gigabytes are touched during an analysis run.
        Surely something like id's megatexture or some other kind of streaming could be implemented?
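
        Streaming in that sense mostly means touching the data in fixed-size windows instead of loading it all at once; a minimal sketch (the file name, dtype, and chunk size are hypothetical placeholders) could look like:

          # Sketch of "streaming" a dataset that is too big for RAM: memory-map
          # the file and walk it in fixed-size chunks. Path, dtype, and chunk
          # size below are hypothetical placeholders.
          import numpy as np

          CHUNK_ROWS = 4096

          def stream_chunks(path, n_features, dtype=np.float32):
              """Yield successive row blocks of a large on-disk array."""
              data = np.memmap(path, dtype=dtype, mode='r').reshape(-1, n_features)
              for start in range(0, data.shape[0], CHUNK_ROWS):
                  yield np.asarray(data[start:start + CHUNK_ROWS])  # copy chunk into RAM

          # Example: a running mean without ever holding the full volume in memory.
          total, count = 0.0, 0
          for chunk in stream_chunks('scan_volume.raw', n_features=64):
              total += chunk.sum()
              count += chunk.size
          print('mean value:', total / count)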

  • by javabandit ( 464204 ) on Tuesday August 08, 2017 @01:55PM (#54966981)

    Seriously. Does IBM actually make products anymore? "Deep Learning"?? Really? IBM, can you tell me where I can buy a Deep Learning? How about a Watson? How about a Cognitive Computing System? Can I buy a Big Data, please? From a technology standpoint, IBM has completely jumped the shark with all of this platform-y, non-productized, framework-y bullshit that requires millions in services hours to implement one-off solutions.

    IBM used to make real contributions from their research division into actual software products. Postfix, anyone? RISC technology with AIX and the RS-6k was revolutionary. Their virtualization innovations became the foundation of the AS/400. But no. They jettisoned all of that.

    They only do two things now: 1) Research for marketing releases to keep their stock price stable, and 2) Add cash-cow products to their portfolio through acquisition, call them "cognitive"/"big data"/"deep xxxxx", and offshore dev and tech support to a country which charges the lowest wages in the world.

    They really have just gone down the tubes. It is no wonder that they have had declining revenues for so many quarters that I've lost count.

    • by hord ( 5016115 )

      They completely blew the PC empire. They completely blew the O/S empire. At least twice. Amazon AWS and Microsoft Azure are now actually doing what IBM dreamed about (à la carte, pay-per-cycle compute) and are winning. Amazon, Google, and Microsoft are all way ahead of IBM in AI, machine learning, neural nets, natural speech recognition, and vision learning, and are already in the medical as well as chemical and biological spaces. IBM just exposed all of Sweden's data by failing to encrypt

    • by quax ( 19371 )

      Clearly, you are not following their quantum computing research.

      https://www.research.ibm.com/i... [ibm.com]

  • Wow, IBM attempted to throw more hardware at the problem, and actually succeeded.

    Downside: requires more hardware.

    It probably does decrease average training latency at scale (not counting the shovel time invested in sunk cost), but I'd still hesitate to call this an advance.
