
Google Open Sources Its Data Interchange Format 332

Posted by kdawson
from the it's-fast-that's-why dept.
A number of readers have noted Google's open sourcing of their internal data interchange format, called Protocol Buffers (here's the code and the doc). Google's elevator statement for Protocol Buffers is "a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more." It's the way data is formatted to move around inside of Google. Betanews spotlights some of Protocol Buffers' contrasts with XML and IDL, with which it is most comparable. Google's blogger claims, "And, yes, it is very fast — at least an order of magnitude faster than XML."
  • by Anonymous Coward on Tuesday July 08, 2008 @04:10PM (#24105175)

    So is, well, just about anything.

    • by dedazo (737510) on Tuesday July 08, 2008 @04:34PM (#24105539) Journal

      Looks like Google just invented the IIOP [wikipedia.org] wire protocol, which is also platform agnostic and an open standard.

      I guess the main difference here is that their "compiler" can generate the actual language-domain classes off of the descriptor files, which is a definite advantage over "classic" IDL.

      "Google Protocol Buffers" is cooler than the OMG terminology, but this kind of thing has been around for 20 years.

      • by kriston (7886) on Tuesday July 08, 2008 @05:22PM (#24106289) Homepage Journal

        Oh, I'm a little ashamed that I recognize this message as CORBA flamebait.

      • by jd (1658) <.imipak. .at. .yahoo.com.> on Tuesday July 08, 2008 @06:12PM (#24107089) Homepage Journal
        Technically, you are correct - platform-agnostic data transfer has been possible since Sun's earliest RPC implementations. However, this seems to be considerably lighter-weight (although so is Mount Everest) and because order is specified, it's going to be much simpler to pluck specific data out of a data stream. You don't need to have an order-agnostic structure and then an ordering layer in each language-specific library.

        There have been all kinds of attempts to produce this sort of stuff. RPC, DCE, CORBA, DCOM, etc., are programmatic interfaces and handle function calls, synchronization, etc. OPeNDAP is probably the closest to Google's architecture in that it is ONLY data. It's more sophisticated, as it handles much more complex data types than mere structures, but it has its own overhead issues. It isn't designed to scale to terabyte databases, although it DOES scale extremely well and is definitely the preferred method of delivering high-volume structured scientific data - at least when compared to the RPC family of methods, or indeed the XML family. I wouldn't use it for the kind of volume of data Google handles, though; you'd kill the servers.

        • by vrmlguy (120854) <samwyse@nOsPam.gmail.com> on Tuesday July 08, 2008 @07:23PM (#24108123) Homepage Journal

          Technically, you are correct - platform-agnostic data transfer has been possible since Sun's earliest RPC implementations. However, this seems to be considerably lighter-weight (although so is Mount Everest) and because order is specified, it's going to be much simpler to pluck specific data out of a data stream. You don't need to have an order-agnostic structure and then an ordering layer in each language-specific library.

          Actually, XDR (used for Sun's RPC) is very lightweight, arguably lighter than PB. (Yes, I foresee a Java implementation called PB&J.) XDR is potentially more compact, since it doesn't encode field identifiers, but it's also big-endian, which made it less attractive as little-endian computer architectures took over the world. Also, while XDR demands a fixed ordering of fields, field order in PB *isn't* specified; the field identifiers allow you to order the fields any way you like.

          Overall, I like it. It's obvious that the developers were familiar with the flaws of older protocols, and found ways to fix most of them. The only obvious thing I see missing is a canonical way to encode the .proto file as a Protocol Buffer, to make a stream self-describing.
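The point above about field identifiers can be seen in a toy sketch of the tag/varint wire format (the field numbers and values here are made up; the tag = (field_number << 3) | wire_type scheme follows Google's published encoding docs):

```python
def varint(n):
    """Encode a non-negative integer as a protocol-buffer varint."""
    out = b""
    while True:
        byte = n & 0x7F
        n >>= 7
        out += bytes([byte | (0x80 if n else 0)])
        if not n:
            return out

def field(num, value):
    """Encode one tagged field: tag = (field_number << 3) | wire_type."""
    if isinstance(value, int):
        return varint(num << 3 | 0) + varint(value)           # wire type 0: varint
    return varint(num << 3 | 2) + varint(len(value)) + value  # wire type 2: length-delimited

def read_varint(buf, i):
    """Decode one varint starting at index i; return (value, next_index)."""
    shift = n = 0
    while True:
        n |= (buf[i] & 0x7F) << shift
        i, shift = i + 1, shift + 7
        if not buf[i - 1] & 0x80:
            return n, i

def decode(buf):
    """Read tagged fields into {field_number: value}, in whatever order they appear."""
    fields, i = {}, 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        num, wt = tag >> 3, tag & 7
        if wt == 0:
            fields[num], i = read_varint(buf, i)
        else:  # wire type 2
            length, i = read_varint(buf, i)
            fields[num], i = buf[i:i + length], i + length
    return fields

# The same two fields serialized in opposite orders decode identically:
a = field(1, 150) + field(2, b"Jane")
b = field(2, b"Jane") + field(1, 150)
assert decode(a) == decode(b) == {1: 150, 2: b"Jane"}
```

Because every value carries its own field number, a reader can accept the fields in any order — exactly the flexibility XDR's fixed layout lacks.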

          • by vrmlguy (120854) <samwyse@nOsPam.gmail.com> on Tuesday July 08, 2008 @11:25PM (#24111095) Homepage Journal

            The only obvious thing I see missing is a canonical way to encode the .proto file as a Protocol Buffer, to make a stream self-describing.

            A-ha! I found it! [google.com] "Thus, the classes in this file allow protocol type definitions to be communicated efficiently between processes."

            Why do you need this? Well, you may not. "Most users will not care about descriptors, because they will write code specific to certain protocol types and will simply use the classes generated by the protocol compiler directly. Advanced users who want to operate on arbitrary types (not known at compile time) may want to read descriptors in order to learn about the contents of a message."

    • by alexgieg (948359) <alexgieg@gmail.com> on Tuesday July 08, 2008 @04:45PM (#24105683) Homepage

      An order of magnitude over XML? So is, well, just about anything.

      Well, let's also not forget that the meaning of the expression "an order of magnitude" depends strongly on the numeric base you're using.

    • But the Slashdot ad above the message says XML combined with Java is fast, and the slow part is the database server. Could I be mistaken?

  • by TheRealMindChild (743925) on Tuesday July 08, 2008 @04:13PM (#24105217) Homepage Journal
    Google's blogger claims, "And, yes, it is very fast -- at least an order of magnitude faster than XML."

    That is just because they aren't using enough XML!
    • http://www.w3.org/XML/EXI/ [w3.org]

      The development of the Efficient XML Interchange (EXI) format was guided by five design principles, namely, the format had to be general, minimal, efficient, flexible, and interoperable. The format satisfies these prerequisites, achieving generality, flexibility, and performance while at the same time keeping complexity in check.

      Many of the concepts employed by the EXI format are applicable to the encoding of arbitrary languages that can be described by a grammar. Even though EXI utilizes schema information to improve compactness and processing efficiency, it does not depend on accurate, complete or current schemas to work.

  • I bet ... (Score:5, Funny)

    by Anonymous Coward on Tuesday July 08, 2008 @04:15PM (#24105239)

    ... it requires piping data through google's servers for data mining and ad injection purposes.

  • by Yvan256 (722131)

    Is that like PHP's serialize?

    • by psergiu (67614)

      More like the Oracle SQLLoader ...
      Or the VMS Fixed Record Length/Indexed or VFC files ...

      I think Google might just receive a visit from the patent fairy ...

    • by Foofoobar (318279)
      No. This is more along the lines of a hashmap or a multidimensional array. With serialize in PHP, you still have to unserialize, which takes time to parse. With a multidimensional array, it's already in a usable state; no additional parsing is required. And you can add or remove variables whenever you want without having to reparse.
    • Re: (Score:3, Informative)

      by merreborn (853723)

      1) It has a binary format, far more compact (and faster to unserialize) than PHP's text-based serialized format.
      2) It handles multiple versions of the same objects (e.g., your server can interact with both PhoneNumber 2.0 and PhoneNumber 3.0 objects relatively trivially)
      3) It generates code for converting each format into objects in the three supported languages.

      So, no, not really.
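The versioning point (2) falls out of the wire format: a decoder can compute the length of any field it doesn't recognize and skip it. A minimal sketch, handling only the varint and length-delimited wire types (the field numbers and sample message are hypothetical):

```python
def read_varint(buf, i):
    """Decode one varint starting at index i; return (value, next_index)."""
    shift = n = 0
    while True:
        n |= (buf[i] & 0x7F) << shift
        i, shift = i + 1, shift + 7
        if not buf[i - 1] & 0x80:
            return n, i

def decode_known(buf, known):
    """Keep only field numbers listed in `known`; skip everything else."""
    out, i = {}, 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        num, wire_type = tag >> 3, tag & 7
        if wire_type == 0:                    # varint
            val, i = read_varint(buf, i)
        elif wire_type == 2:                  # length-delimited
            length, i = read_varint(buf, i)
            val, i = buf[i:i + length], i + length
        else:
            raise ValueError("wire type %d not handled in this sketch" % wire_type)
        if num in known:                      # unknown fields are simply dropped
            out[num] = val
    return out

# A hypothetical "PhoneNumber 3.0" message adds field 3; a "2.0" decoder
# that only knows fields 1 and 2 skips it without erroring:
msg = b"\x0a\x08555-0100" + b"\x10\x01" + b"\x1a\x04cell"
assert decode_known(msg, known={1, 2}) == {1: b"555-0100", 2: 1}
```

Older and newer readers can therefore share one stream, each picking out the fields it understands.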

  • No PERL API ??!!?? (Score:4, Insightful)

    by Proudrooster (580120) on Tuesday July 08, 2008 @04:18PM (#24105275) Homepage

    C++
    Python
    Java

    what about PERL ? :]

  • Just think of the kind of power it took to make millions of employees standardize on the same format for their data interchange. Humans just gravitate to power-wielding forces. Wonder what format they require for their surprise blog posts.

  • SunRPC is old and awkward. Always want something better.
  • ...and we'll be happy.

  • by Anonymous Coward on Tuesday July 08, 2008 @04:28PM (#24105431)

    It looks like Google has taken some of the good elements of CORBA and IIOP into its own interchange format.
    While CORBA certainly is bloated in a lot of ways, the IIOP wire protocol it uses is vastly faster and more efficient than any XML out there, and yes, it is just as "open" (publicly documented and freely available for use in any open source application) as any XML schema out there. J2EE uses IIOP as well, and it is technically possible to interoperate (although the problem with CORBA is that different implementations never really interoperated as they were supposed to).
        As a side note, I'd rather write IDL code than an XML schema any day of the week too, but that's another rant.

  • by Anonymous Coward on Tuesday July 08, 2008 @04:29PM (#24105439)

    Both are really from the same design sheet, but Thrift has been open-sourced for over a year, and has many more language bindings. It's been in use in several open source projects (thrudb comes to mind), and has far more articles and documentation.

    http://developers.facebook.com/thrift/

  • Fast (Score:5, Interesting)

    by JamesP (688957) on Tuesday July 08, 2008 @04:30PM (#24105457)

    "And, yes, it is very fast -- at least an order of magnitude faster than XML."

    Just wait for the XML zealots to come crashing in, refusing to believe that XML is not the fastest, best solution to all the world's problems (including cancer), insisting that the people at Google are amateurs and id10ts, and shouting WHY DO YOU HATE XML and that kind of stuff.

    Or, as Joel Spolsky once said: http://www.joelonsoftware.com/articles/fog0000000296.html [joelonsoftware.com]

    No, there is nothing wrong with XML per se, except for the fans...

    • Ok, I'll bite... (Score:5, Interesting)

      by Dutch Gun (899105) on Tuesday July 08, 2008 @05:03PM (#24105961)

      Obviously, those at Google felt XML didn't work well for them. They have the resources to invent a protocol and libraries to support it. And, they are big enough to be their own ecosystem, which means as long as everyone at Google is using their formats, interop is no biggie. Good for them, I don't begrudge that decision.

      I'm actually a game developer, not a web developer, so I'll speak to XML's use as a file format in general. Here are a few points regarding our use of XML:

      * We only use it as a source format for our tools. XML is far too inefficient and verbose to use in the final game - all our XML data is packed into our own proprietary binary data format.
      * We also only use it as a meta-data format, not a primary container type. For instance, we store gameplay scripts, audio script, and cinematic meta-data in XML format. We're not foolish enough to store images, sounds, or maps in a highly-verbose, text-based format. XML's value to us is in how well it can glue large pieces of our game together.
      * All our latest tools are written in C# and using the .NET platform (Windows is our development platform, of course). It's astoundingly easy to serialize data structures to XML using .NET libraries - just a few lines of code.
      * Because it's a text-based format and human readable, if a file breaks in any way, we can just do a diff in source control to see what changed, and why it's breaking.

      I'll concede that I've heard of some pretty awful uses of XML. But those who dismiss XML as a valuable tool in the toolchest are just as foolish as those who believe it's the end-all and be-all of programming (I'm not saying that's true of you, just pointing out foolishness on both sides). Like any tool, it's most valuable when used in its optimal role, not when shoehorned into projects as a solution to everything.

  • Smart move (Score:5, Insightful)

    by ruin20 (1242396) on Tuesday July 08, 2008 @04:32PM (#24105491)
    Since they're Google, people will clamor over this (as we're doing here), and the result will be that at least a handful of folks will learn and use it. Google's key to success has always been finding fresh talent and removing barriers to their contributing and advancement. From what I've seen, they've done two things: A) helped train potential employees in how their tech and thought process works, and B) provided themselves a filter by which to gauge a potential employee's ability to understand their system.

    And as a bonus, they help undermine opponents who use competing technologies by helping train the workforce away from their practices. Overall I think it's a very intelligent and well-executed strategic move.

  • by jandrese (485) <kensama@vt.edu> on Tuesday July 08, 2008 @04:37PM (#24105571) Homepage Journal
    The point of this isn't so much that it's faster than XML (so is everything else), it's that Google took everything that a real person needs in an IDL and cut out everything else. Most IDLs have a serious case of second-system effect, where features are added that nobody uses but that seriously complicate the API. Even XML suffers from that (have you ever seen the kind of data structure you need to store a DOM, or what that does to library APIs for manipulating XML?).

    I'd use it because 95% of the time all I need is something simple like this, and the other 5% of the time I should go back and rethink my design anyway.

    That said, there is still a case for XML, especially the self-documenting and human-readable nature of the document, but there are a lot of cases where it is used today where it only adds unnecessary complexity and actually makes your code more difficult to maintain instead of simpler.
  • by Alex Belits (437) * on Tuesday July 08, 2008 @04:42PM (#24105649) Homepage

    I always told people that -- it's optimized for:

    1. Easy parsing by parsers written by people who slept through their compiler classes.

    2. Verification in situations when it's impossible to devise a meaningful reaction to a failure (other than either "everything failed, turn off the computers and go home" and "assume the data to be valid anyway because ALL of it will have the same formatting error because the same program generates it")

    3. Dealing with data that arrives in neatly packaged "documents" and "requests", as opposed to being constantly produced and consumed.

    4. Either communicating between programs that have the same knowledge of message semantics, or preparation of pretty human-readable documents.

    None of the above even remotely applies to anything practical except UI/display formats -- this is why XHTML and ODF (and, because of that, to some extent XSL) are usable, SOAP is a load of crap, and for the rest of its purposes XML is used as a glorified CSV with angle brackets. XML is widespread because a monumentally stupid standard is still better than no standard.

    So here is your example of how superior ANY format not based on this stupid idea can be.

    • by r3g3x (1147243)

      XML is crappy format

      That statement underlines most people's myopic vision of the XML family of technologies. XML is not a format; it is a family of technologies based around a common grammar.

      XML is not a bucket.
      It is not a passive container for data.
      It is a transformable semantic graph.

      The heart and soul of XML is XSLT [w3.org]; it serves as a common 'glue' that allows transformation between the various standardized 'languages': XML [w3.org], XHTML [w3.org], XSLT [w3.org], XSL-FO [w3.org], SVG [w3.org], RDF [w3.org], RSS [harvard.edu], etc...

      Example: the same XML document (let's say it r

    • by mmurphy000 (556983) on Tuesday July 08, 2008 @07:31PM (#24108223)

      Y'know, I usually give low-UID Slashdotters a modicum of respect, but this diatribe is off-the-charts nonsense.

      1. Easy parsing by parsers written by people who slept through their compiler classes.

      And your evidence of this assertion is...what exactly? Not to mention the minor detail that XML and compilers are orthogonal: you can use XML (or many other data interchange formats) with non-compiled languages, and most compilers know nothing about XML (or many other data interchange formats).

      2. Verification in situations when it's impossible to devise a meaningful reaction to a failure (other than either "everything failed, turn off the computers and go home" and "assume the data to be valid anyway because ALL of it will have the same formatting error because the same program generates it")

      And your evidence of this assertion is...what exactly? XML-consuming programs that are aware of the data structure can have as detailed a "reaction to a failure" as a JSON-consuming program, or a YAML-consuming program, or a Protocol Buffer-consuming program. XML-consuming programs that are not aware of the data structure can, if the XML supplies it, validate against a DTD or schema, things which are not possible in some other data interchange formats (e.g., JSON, YAML).

      3. Dealing with data that arrives in neatly packaged "documents" and "requests", as opposed to being constantly produced and consumed.

      All data comes in neatly packaged buckets of varying types. We call them "bytes" and "packets" and "structures" and "records" and "frames" and "rows" and the like. The only way I can interpret your claim in a way that makes sense is to translate it as "XML sucks for streaming audio and video", which is undoubtedly true, and I don't think anyone uses it in that arena.

      4. Either communicating between programs that have the same knowledge of message semantics, or preparation of pretty human-readable documents.

      On the contrary, this is one of XML's primary strengths — handling cases where programs lack the "same knowledge of message semantics".

      With most data interchange formats, from CSV to JSON to Protocol Buffers, either you know everything about the data structure you're receiving, or you're screwed. In other words, there is no discoverability and no standardized means of being able to only deal with a portion of the data. This is particularly true for binary formats, like Protocol Buffers — either you know exactly what structure you received so you can parse it, or you're SOL, since it's just a bunch of bytes.

      With XML namespaces, it is entirely possible for Program X to publish data that Program Y has no intrinsic knowledge of in its entirety, but might know in part. If Program Y knows how to handle documents containing Dublin Core elements, for example, it can work with just those elements and ignore the rest of the document.
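That partial-understanding scenario is straightforward to demonstrate with Python's standard-library ElementTree (the document and the example.com namespace are made up; the Dublin Core URI is the real one):

```python
import xml.etree.ElementTree as ET

# A document mixing Dublin Core metadata with elements from a namespace
# (example.com) this program has never seen.
doc = """
<record xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:x="http://example.com/unknown-app">
  <dc:title>Protocol Buffers</dc:title>
  <dc:creator>Google</dc:creator>
  <x:internal-blob>opaque data we cannot interpret</x:internal-blob>
</record>
"""

DC = "{http://purl.org/dc/elements/1.1/}"   # ElementTree's Clark notation
root = ET.fromstring(doc)

# Work with just the Dublin Core elements and ignore the rest:
known = {el.tag[len(DC):]: el.text for el in root if el.tag.startswith(DC)}
assert known == {"title": "Protocol Buffers", "creator": "Google"}
```

The unknown `x:internal-blob` element is carried along harmlessly; the consumer extracts only the vocabulary it shares with the producer.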

      You're welcome to have any opinion of XML you like. Heck, I even agree that XML tends to be used in places where it's overkill or too verbose. But if you want to convince others that your opinion is the correct one, you'll need to do a better job than this.

      • Re: (Score:3, Informative)

        by pikine (771084)

        Not to mention the minor detail that XML and compilers are orthogonal: you can use XML (or many other data interchange formats) with non-compiled languages, and most compilers know nothing about XML (or many other data interchange formats).

        If you had taken a compiler class, you'd have learned about "compiler compilers," which are parser generators. He's just talking about the concept of parsing in general, and saying that XML is for people who don't understand how to write parsers.

        I don't agree with everything he says, b

  • JSON (Score:5, Interesting)

    by hey (83763) on Tuesday July 08, 2008 @04:49PM (#24105729) Journal

    Looks kinda like JSON to me.

    • I was kind of wondering the same thing; JSON was created to fill the same need. JSON is more like XML in that it's meant to be human-parsable though, which counts for a lot in web use, I think.

    • Re:JSON (Score:5, Informative)

      by Temporal (96070) on Tuesday July 08, 2008 @05:20PM (#24106247) Journal

      Structurally Protocol Buffers are similar to JSON, yes. In fact, you could use the classes generated by the Protocol Buffer compiler together with some code that encodes and decodes them in JSON. This is something some Google projects do internally since it's useful for communicating with AJAX apps. Writing a custom encoding that operates on arbitrary protocol buffer classes is actually pretty easy since all protocol message objects have a reflection interface (even in C++).

      The advantage of using the protocol buffer format instead of JSON is that it's smaller and faster, but you sacrifice human-readability.

    • Re:JSON (Score:4, Informative)

      by pavon (30274) on Tuesday July 08, 2008 @05:57PM (#24106865)

      The major difference between this and something like JSON or YAML or even XML is that those formats all include the format information (variable names, nesting, etc) along with the data. This does not.

      message Person {
          required int32 id = 1;
          required string name = 2;
          optional string email = 3;
      }

      What you are looking at above is the Protocol Format (.proto file) for a single message, which is analogous to an XML schema. No data is stored in that file - the numbers you see are unique IDs for the different fields, and they are used in the low-level representation of the data (not all fields have to be included in every instance of a message).

      The actual data is serialized using a compact binary format, not ASCII like JSON/YAML/XML, which makes it much more efficient both to transfer over a network and to parse.
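To make the size difference concrete, here is a sketch that hand-encodes that Person per the published varint/tag scheme and compares it against the equivalent JSON (the sample values are made up):

```python
import json

def varint(n):
    """Encode a non-negative integer as a protocol-buffer varint."""
    out = b""
    while True:
        byte = n & 0x7F
        n >>= 7
        out += bytes([byte | (0x80 if n else 0)])
        if not n:
            return out

def int_field(num, n):
    """Wire type 0: tag varint followed by a value varint."""
    return varint(num << 3 | 0) + varint(n)

def str_field(num, s):
    """Wire type 2: tag varint, byte length, then the raw bytes."""
    data = s.encode()
    return varint(num << 3 | 2) + varint(len(data)) + data

# Person { required int32 id = 1; required string name = 2;
#          optional string email = 3; } with made-up values:
pb = int_field(1, 1234) + str_field(2, "Jane Doe") + str_field(3, "jdoe@example.com")
js = json.dumps({"id": 1234, "name": "Jane Doe", "email": "jdoe@example.com"})

# The binary form carries only one-byte tags where JSON repeats every
# field name, plus braces, quotes, and separators:
assert len(pb) < len(js)
```

The gap grows with repeated messages, since field names never appear on the wire at all.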

    • Re:JSON (Score:5, Interesting)

      by 0xABADC0DA (867955) on Tuesday July 08, 2008 @06:35PM (#24107449)

      Modify JSON so unquoted attributes are 'type labels' and define the type of an attribute by giving a label or a default value. For instance:

      phoneType: { MOBILE: 0, HOME: 1, WORK: 2 }

      phoneNumber: { "number": "", "type": phoneType }

      person: {
        "name": "",
        "id": 0,
        "email": "",
        "phone": [ phoneNumber ],
      }

      ... now you have pretty much exactly the same message definition as protocol buffers, but in pure JSON. It could also use some convention like "@WORK" for labels/classes so that a normal JSON parser can parse the message definitions. You can write a code generator to make access classes for messages just by walking the json and looking at the types. I don't see that 'required' and 'optional' keywords help much... imo defaults are generally better (even if they are nil). But this could easily be expressed in a json message definition.

      It's easy to make a binary JSON format that is fast and also small, so there is little advantage to protocol buffers there. It's also easy and ridiculously fast to compress JSON text using, say, character-based LZO (Oberhumer).

      Maybe somebody can explain, but it doesn't seem like protocol buffers really have much advantage over JSON. It sounds like it's effectively just a binary format for JSON-like data (name-value pairs, they say) along with a code generator to access it. The code generator is nice, but this is like a day's work, max. Maybe I'm not understanding Google's problems, but I'll stick with JSON since it actually is a cross-platform, language-neutral data format... and you can always optimize it if actually needed.
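The compression claim is easy to check with the stdlib's zlib standing in for LZO (which isn't in the Python standard library); this is only a rough sketch over synthetic records:

```python
import json
import zlib

# Synthetic, repetitive records: the favorable case for text compression.
records = [{"name": "user%d" % i,
            "id": i,
            "email": "user%d@example.com" % i,
            "phone": [{"number": "555-01%02d" % (i % 100), "type": 1}]}
           for i in range(1000)]

raw = json.dumps(records).encode()
packed = zlib.compress(raw, 1)   # level 1: the fast setting, in the spirit of LZO

# The repeated field names compress away almost entirely:
assert len(packed) < len(raw)
```

Compression recovers much of the size gap, though it adds CPU cost on both ends, which is part of the trade-off protocol buffers avoid.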

  • by ugen (93902) on Tuesday July 08, 2008 @04:51PM (#24105755)

    How is this either implementationally or conceptually different from BER/DER encoding (commonly used and available all over the place)?

    Looks to me like it is exactly the same thing, reimplemented. I am sure bearing the mark of Google is nice and all, but they are definitely reinventing the wheel here.

    • Have you ever met anyone who worked with ASN.1 and didn't run screaming for the hills?
    • by Animats (122034) on Tuesday July 08, 2008 @05:51PM (#24106741) Homepage

      ASN.1, from 1985, really is very similar. Here's a message defined in ASN.1 form:

      Order ::= SEQUENCE {
          header Order-header,
          items  SEQUENCE OF Order-line }

      Order-header ::= SEQUENCE {
          number  Order-number,
          date    Date,
          client  Client,
          payment Payment-method }

      Order-number ::= NumericString (SIZE (12))
      Date ::= NumericString (SIZE (8)) -- MMDDYYYY

      Client ::= SEQUENCE {
          name     PrintableString (SIZE (1..20)),
          street   PrintableString (SIZE (1..50)) OPTIONAL,
          postcode NumericString (SIZE (5)),
          town     PrintableString (SIZE (1..30)),
          country  PrintableString (SIZE (1..20)) DEFAULT default-country }

      default-country PrintableString ::= "France"

      Payment-method ::= CHOICE {
          check       NumericString (SIZE (15)),
          credit-card Credit-card,
          cash        NULL }

      Credit-card ::= SEQUENCE {
          type        Card-type,
          number      NumericString (SIZE (20)),
          expiry-date NumericString (SIZE (6)) -- MMYYYY -- }

      Card-type ::= ENUMERATED { cb(0), visa(1), eurocard(2),
          diners(3), american-express(4) }

      Note that this has almost exactly the same feature set as Google's representation. There are named, typed fields which can be optional or repeated. It just looks more like Pascal, while Google's syntax looks more like C.
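For comparison, a rough Protocol Buffers rendering of the same Order (a sketch only: the names and field numbers are invented; .proto has no SIZE constraints or CHOICE type, so optional fields stand in for the union, and Order-line is left undefined as in the ASN.1 original):

```proto
message Order {
  required OrderHeader header = 1;
  repeated OrderLine items = 2;
}

message OrderHeader {
  required string number = 1;    // no NumericString/SIZE equivalent
  required string date = 2;      // MMDDYYYY
  required Client client = 3;
  required Payment payment = 4;
}

message Client {
  required string name = 1;
  optional string street = 2;
  required string postcode = 3;
  required string town = 4;
  optional string country = 5 [default = "France"];
}

message Payment {                // no CHOICE; optionals stand in for the union
  optional string check = 1;
  optional CreditCard credit_card = 2;
  optional bool cash = 3;
}

message CreditCard {
  required CardType type = 1;
  required string number = 2;
  required string expiry_date = 3;  // MMYYYY
}

enum CardType {
  CB = 0; VISA = 1; EUROCARD = 2; DINERS = 3; AMERICAN_EXPRESS = 4;
}
```

Side by side, the structural features line up almost one for one; the main losses are the size constraints and the true tagged union.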

  • I guess that XDR wasn't good enough, then, or ASN.1 (which supports multiple abstract encodings to boot).

    XML, as an interchange format?

    I suppose one could load source code into memory, and compile it every time, too. Even Java compiles to bytecode.

    Bloated formats are fine for human interpretation (I rather like one kind of structure for my config files), or occasional parsing (which is why most of the stuff in /etc is human-readable, for small data sets (I do remember when "the internet" was one big /etc/ho

  • .. from things like YAML and JSON?

    • by Temporal (96070)

      YAML and JSON are text-based formats intended for human readability. Protocol Buffers are binary, and therefore smaller and faster, but not human-readable.

      Also, the protocol buffer compiler provides friendly data access objects. You could actually use these with JSON or YAML, by just writing a new encoder and decoder (which is easy to do).

  • by IGnatius T Foobar (4328) on Tuesday July 08, 2008 @05:15PM (#24106169) Homepage Journal
    I have my own data format that is an alternative to XML as well. It works by normalizing the data into records which all contain the same number of fields, and placing an agreed-upon delimiter between each field. The end of the record is indicated by a newline.

    I think this "delimited" format has a lot of potential.
  • by kriston (7886) on Tuesday July 08, 2008 @05:20PM (#24106253) Homepage Journal

    Thankfully an alternative to XML.
    If you didn't think XML was among the least efficient transport formats, then you weren't really paying attention. Battery-conscious mobile devices do not really enjoy parsing an XML DTD and then the XML file itself.
    It reminds me a little bit of AOL's SNAC message types.

    We get something good for the industry from Google, after a rash of bad press, and it's actually NOT a beta.

  • by menace3society (768451) on Tuesday July 08, 2008 @05:35PM (#24106481)

    The similarity between these things and NeXT's Property Lists (now called "Old-School Property Lists," since Apple/NeXT standardized on XML) is incredible. Some things are changed, like having a specification instead of just assuming that the recipient will parse it and figure it out, but the likeness is there. I wonder if any of the proto people at Google had experience with plists, or if it's just a case of convergent design.

    Everything old-school is new-school again, I guess.

  • by somethingwicked (260651) on Wednesday July 09, 2008 @08:24AM (#24115019)

    Google elevator statement for Protocol Buffers is "a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more."

    Christ, I hope I'm never in an elevator with someone who would consider THAT an elevator statement.
