Supercomputing

The Double Life of Memory Exposed With Automata Processor

An anonymous reader writes "As Nicole Hemsoth over at HPCwire reports: 'In a nutshell, the Automata processor is a programmable silicon device that lends itself to handling high-speed search and analysis across massive, complex, unstructured data. As an alternate processing engine for targeted areas, it taps into the inner parallelism inherent to memory to provide a robust and absolutely remarkable, if early benchmarks are to be believed, option for certain types of processing.'" Basically, the chip is designed solely to process nondeterministic finite automata (NFAs) and can explore all valid paths of an NFA in parallel, hiding the whole O(n^2) complexity thing. Micron has a stash of technical documents, including a paper covering the design and development of the chip. Imagine how fast you can process regexes now.
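
In software terms, "exploring all valid paths of an NFA in parallel" just means advancing the whole set of live NFA states on every input symbol instead of backtracking through alternatives one at a time. A minimal Python sketch of that idea, with a toy NFA and purely illustrative names:

    # Toy model of what the Automata Processor does in hardware: advance every
    # currently-viable NFA state on each input symbol, so no path is ever backtracked.
    # nfa maps (state, symbol) -> set of next states.
    def run_nfa(nfa, start, accepting, text):
        active = {start}
        for ch in text:
            step = set()
            for s in active:
                step |= nfa.get((s, ch), set())
            active = step
            if not active:                 # every path died: no match possible
                return False
        return bool(active & accepting)    # accept if any surviving path accepts

    # NFA for (a|b)*ab over the whole input: state 0 loops on a/b, 0 -a-> 1, 1 -b-> 2 (accept).
    nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
    print(run_nfa(nfa, 0, {2}, "abab"))    # True
    print(run_nfa(nfa, 0, {2}, "abba"))    # False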

  • by supersat ( 639745 ) on Sunday November 24, 2013 @06:24AM (#45506177)
    Well, strictly speaking, you can already explore all paths of an NFA simultaneously. You simply convert it to a DFA, with each DFA state representing the subset of states the NFA could be in at that point. It can require exponentially more states, but the running time is equivalent.
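
    A minimal sketch of that subset construction in Python (the NFA representation and names here are just illustrative):

        def nfa_to_dfa(nfa, start, accepting, alphabet):
            # Each DFA state is a frozenset of NFA states (the subset construction).
            start_set = frozenset({start})
            trans, dfa_accepting = {}, set()
            todo, seen = [start_set], {start_set}
            while todo:
                subset = todo.pop()
                if subset & accepting:
                    dfa_accepting.add(subset)
                for ch in alphabet:
                    # The DFA successor is the union of all NFA successors.
                    nxt = frozenset().union(*(nfa.get((s, ch), set()) for s in subset))
                    trans[(subset, ch)] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return start_set, trans, dfa_accepting

    In the worst case there are 2^n reachable subsets, which is exactly the exponential state blow-up, but each input character then costs only a single transition.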
  • by Anonymous Coward on Sunday November 24, 2013 @07:14AM (#45506273)

    In the real world (as opposed to the Big O world), the running time is waaayyyyyy faster.

    We wrote a program to translate ~10,000 non-trivial Perl regular expressions to DFAs (specifically, into Ragel syntax). Actually, only about a quarter could be completely translated, because of various Perl extensions. The rest had to be programmatically analyzed for sub-expressions which could be turned into filters--only if the filters matched did the PCRE actually run.
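
    The filter trick looks roughly like this (a hedged Python sketch with a made-up pattern; the real pipeline used Ragel-generated DFAs for the filters and PCRE for the full match, but the plain re module stands in for both here):

        import re

        # Hypothetical example: an expensive full expression plus a cheap prefilter
        # derived from it (a literal that every possible match must contain).
        # The full regex only runs on lines that survive the filter.
        full_pattern = re.compile(r'ERROR\s+\d{4}-\d{2}-\d{2}.*timeout=(\d+)')
        prefilter = 'timeout='

        def scan(lines):
            for line in lines:
                if prefilter in line:                 # cheap substring check first
                    m = full_pattern.search(line)     # expensive engine only on survivors
                    if m:
                        yield m.group(1)

        print(list(scan(["ERROR 2013-11-24 connect timeout=30",
                         "INFO  2013-11-24 all good"])))   # ['30']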

    We achieved a 20x (yes, 20x!) improvement in throughput compared to Perl or PCRE, and very nearly that compared to RE2 (RE2 doesn't scale well with respect to memory when you try to union more than a few complex expressions together). However, as you pointed out, the state machine was enormous: over 1GB compiled, and over 2GB in source code. It was so big that neither GCC nor Clang could compile it within an afternoon, because their array and structure handling code has some algorithmic deficiencies. So we had to come up with some tricks there, too.

    You would think the DFA approach would run afoul of memory bandwidth and cache thrashing at that scale. And it probably does, although there appears to be a significant degree of locality involved for most inputs. But regardless, it's clearly better than running NFAs.
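
    For what it's worth, the hot loop of a table-driven DFA matcher is a single indexed load per input byte, which is why everything hinges on how much of the transition table stays hot in cache (sketch only, with a hypothetical flat table layout):

        def dfa_match(table, accepting, start, data):
            # table is a flat list: table[state * 256 + byte] -> next state.
            # accepting is a set of accepting state numbers; data is a bytes object.
            state = start
            for b in data:                  # one table lookup per input byte
                state = table[state * 256 + b]
            return state in accepting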

    Tangent:
    I was once on a conference call with Intel engineers who were hawking some fancy substring matching code. I think they implemented Aho–Corasick using the new Xeon vector operations. Their goal was obviously to sell more expensive hardware by selling optimized software at a nominal cost and tying you to their platform. Their numbers were nice, but Ragel was still performance-competitive, scaled better (in terms of number of patterns), is FOSS, and has an ingenious design that makes it trivial to integrate into your application without having to deal with an API that breaks your threading or I/O model.
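
    For the curious, the core of Aho–Corasick multi-pattern matching fits in a few dozen lines; a minimal, unoptimized Python sketch (no vectorization, obviously):

        from collections import deque

        def build(patterns):
            # Trie transitions, failure links, and per-state sets of matched patterns.
            goto, fail, out = [{}], [0], [set()]
            for pat in patterns:
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                out[state].add(pat)
            # Breadth-first pass to fill in failure links.
            queue = deque(goto[0].values())
            while queue:
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]
            return goto, fail, out

        def search(text, goto, fail, out):
            state = 0
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in out[state]:
                    yield i - len(pat) + 1, pat

        goto, fail, out = build(["he", "she", "his", "hers"])
        print(sorted(search("ushers", goto, fail, out)))
        # [(1, 'she'), (2, 'he'), (2, 'hers')]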

    I'm sure the Automata processor would do well if it can be productized properly. But there's plenty of room for improvement in the way people do regular expressions now.

  • by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Sunday November 24, 2013 @08:25AM (#45506463)

    There is some [sigcomm.org] existing work [github.io] on the same basic idea of massively parallelizing regex matching by doing all the NFA branches in parallel, but using a GPU. Now a GPU is not necessarily perfectly suited to this problem, but it does have the advantage of being a mass-market consumer product, and the resulting economies of scale let the average GPU pack far more processing units and RAM than this Automata processor does. It would be interesting to see whether an NFA-specialized processor gets enough of a speedup to overcome the manufacturing advantage of just throwing a beefy GPU at the problem.
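
    A tiny illustration of the bit-parallel flavor of this idea is the classic Shift-And algorithm, where each NFA state is one bit and a single shift-and-mask step advances all of them at once; GPU regex engines generalize the same trick to much larger NFAs. A Python sketch for a plain literal pattern:

        def shift_and(pattern, text):
            # Bit i of `state` means "the first i+1 pattern characters match a
            # suffix of the text read so far"; one shift/AND advances every
            # partial match simultaneously.
            masks = {}
            for i, ch in enumerate(pattern):
                masks[ch] = masks.get(ch, 0) | (1 << i)
            accept = 1 << (len(pattern) - 1)
            state = 0
            for pos, ch in enumerate(text):
                state = ((state << 1) | 1) & masks.get(ch, 0)
                if state & accept:
                    yield pos - len(pattern) + 1      # match start index

        print(list(shift_and("aba", "ababa")))        # [0, 2]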

  • by Theovon ( 109752 ) on Sunday November 24, 2013 @09:53AM (#45506685)

    I skimmed over the article, but I couldn’t tell: Is this another example of someone choosing an O(n^2) algorithm over an equivalent O(n log n) algorithm and then patting themselves on the back because they sped up the O(n^2) algorithm through parallelism, even though it’s still slower than the O(n log n) algorithm on a single processor?

    I can’t tell because they go on about NFAs, which would be exponential in time compared to DFAs. That’s a totally different thing.

  • by Anonymous Coward on Sunday November 24, 2013 @10:58AM (#45506871)

    Hi, I'm a biochemist and I had a very nice conversation with these guys at SC13.

    If it actually does appear on the market as a normal memory module, it is likely to find many uses that cannot be imagined now!

    In the paper presentation, the phrase "non-von Neumann architecture" is a hint at the kinds of applications that may benefit.

    And while we are on the subject, the D-Wave quantum computing talk was fascinating!
