Protobuf-for-node - Protocol Buffers for Node.JS

Protobuf for Node adds idiomatic protocol buffer handling to Node.js. It is two things in one: first, you can marshal protocol messages to and from Node byte buffers and send them over the wire in pure JS. Second, you can use protocol messages as a native interface from JS to C++ add-ons in the same process. It hides the details of the V8 and eio thread-pool APIs and makes native extensions to Node much easier to implement.

How to use

To read and write protocol buffers, Protobuf for Node needs the parsed representation of your protocol buffer schema (which is in protobuf format itself). The protobuf compiler emits this format:

    $ vi feeds.proto
    package feeds;

    message Feed {
      optional string title = 1;

      message Entry {
        optional string title = 1;
      }
      repeated Entry entry = 2;
    }
    :wq
    $ $PROTO/bin/protoc --descriptor_set_out=feeds.desc --include_imports feeds.proto

Using this file (read into a Buffer) you can create a Schema object:

    var fs = require('fs');
    var Schema = require('protobuf_for_node').Schema;
    var schema = new Schema(fs.readFileSync('feeds.desc'));

A Schema object maps fully qualified protocol buffer names to type objects that know how to marshal JS objects to and from Buffers:

    var Feed = schema['feeds.Feed'];
    var aFeed = Feed.parse(aBuffer);
    var serialized = Feed.serialize(aFeed);

When marshalling, the serializer accepts any JavaScript object but only picks up the properties defined in the schema:

    var serialized = Feed.serialize({ title: 'Title', ignored: 42 });
    var aFeed = Feed.parse(serialized);
    => { title: 'Title' }

Note that property names are the camel-cased version of the field names in the protocol buffer description, as in the Java version (http://code.google.com/apis/protocolbuffers/docs/reference/java-generated.html#fields). For example, "optional string title_string = 1" translates into a "titleString" JS property.

Note also that the type objects are capitalized like constructor functions. That is because they are constructor functions: when parsing, objects are constructed using the respective constructor for the message type. This means you can attach methods:

    Feed.prototype.numEntries = function() { return this.entry.length; };
    var aFeed = Feed.parse(Feed.serialize({ entry: [{}, {}] }));
    aFeed.numEntries()
    => 2
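Putting the pieces above together, here is a minimal end-to-end sketch. It assumes the feeds.desc descriptor generated earlier sits next to the script and uses only the Schema/parse/serialize surface described above; the feed.bin file name is merely an illustration of shipping the serialized Buffer somewhere.

    var fs = require('fs');
    var Schema = require('protobuf_for_node').Schema;

    var schema = new Schema(fs.readFileSync('feeds.desc'));
    var Feed = schema['feeds.Feed'];

    // Attach behavior to the generated constructor, as described above.
    Feed.prototype.numEntries = function() { return this.entry.length; };

    // Properties that are not in the schema ('ignored') are dropped.
    var bytes = Feed.serialize({ title: 'Title', ignored: 42, entry: [{}, {}] });

    // The serialized Buffer can be persisted and parsed back later.
    fs.writeFileSync('feed.bin', bytes);
    var feed = Feed.parse(fs.readFileSync('feed.bin'));

    console.log(feed.title);        // => 'Title'
    console.log(feed.ignored);      // => undefined
    console.log(feed.numEntries()); // => 2

Nothing here is specific to files: the same Buffer could just as well be written to a socket or an HTTP response.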
Native Interface

Protocol buffers aren't only great for data interchange between processes - you can also use them to move data between JS and C++ code within the same process. Protobuf for Node makes it simple to implement a native add-on without touching the V8 API at all: you implement a protobuf service instead. It takes three easy steps:

1. Define the add-on service interface:

    // An example service to query pwd(3).
    package pwd;

    // Empty (= no arg) request message.
    message EntriesRequest {}

    message Entry {
      optional string name = 1;
      optional int32 uid = 2;
      optional int32 gid = 3;
      optional string home = 4;
      optional string shell = 5;
    }

    message EntriesResponse {
      repeated Entry entry = 1;
    }

    service Pwd {
      rpc GetEntries(EntriesRequest) returns (EntriesResponse);
    }

... and generate the C++ code for it:

    $PROTO/bin/protoc --cpp_out=. service.proto

2. Implement and export the service in an add-on:

    extern "C" void init(v8::Handle<v8::Object> target) {
      // Look Ma - no V8 API required!
      // Simple synchronous implementation.
      protobuf_for_node::ExportService(
          target, "pwd",
          new (class : public pwd::Pwd {
            virtual void GetEntries(google::protobuf::RpcController*,
                                    const pwd::EntriesRequest* request,
                                    pwd::EntriesResponse* response,
                                    google::protobuf::Closure* done) {
              struct passwd* pwd;
              while ((pwd = getpwent())) {
                pwd::Entry* e = response->add_entry();
                e->set_name(pwd->pw_name);
                e->set_uid(pwd->pw_uid);
                e->set_gid(pwd->pw_gid);
                e->set_home(pwd->pw_dir);
                e->set_shell(pwd->pw_shell);
              }
              setpwent();  // rewind so a later call re-reads the database
              done->Run();
            }
          }));
    }

3. Use it from JS:

    // prints the user database
    var puts = require('sys').puts;  // node's sys module of this era
    puts(JSON.stringify(require('pwd').pwd.GetEntries({}), null, 2));

If your service is CPU intensive, you should call it with an extra callback argument: the invocation is then automatically placed on the eio thread pool and does not block Node:

    // prints the user database without blocking the event loop
    require('pwd').pwd.GetEntries({}, function(response) {
      puts(JSON.stringify(response, null, 2));
    });

Your service is free to finish and call the "done" closure asynchronously. Note, however, that marshalling between JS and the request and response protos necessarily happens on the JS thread, so passing large amounts of data will block just as much with asynchronous invocation.

Speed

The Protobuf for Node add-on relies on the protobuf C++ runtime, but it does not require any generation, compilation or linkage of generated C++ code. It works reflectively and can therefore deal with arbitrary schemas. This also means it is not as fast as generated code would be. Simple measurements show that unmarshalling is between 20% and 50% faster than V8's native JSON support. Calling C++ services is faster, since it avoids marshalling to bytes and instead transfers data directly between the JS objects and the protocol message objects.
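To get a feel for those numbers on your own machine, a rough micro-benchmark along the following lines can be used. It is only a sketch: the feeds.desc descriptor, the message shape and the iteration counts are illustrative, and the actual ratio depends on the schema, the message size and the Node/V8 version.

    var fs = require('fs');
    var Schema = require('protobuf_for_node').Schema;
    var Feed = new Schema(fs.readFileSync('feeds.desc'))['feeds.Feed'];

    // Build a sample feed with a few hundred entries.
    var sample = { title: 'Title', entry: [] };
    for (var i = 0; i < 500; i++) sample.entry.push({ title: 'Entry ' + i });

    var bytes = Feed.serialize(sample);  // protobuf wire format (Buffer)
    var json = JSON.stringify(sample);   // JSON text, for comparison

    function time(label, fn) {
      var start = Date.now();
      for (var i = 0; i < 1000; i++) fn();
      console.log(label + ': ' + (Date.now() - start) + ' ms');
    }

    time('Feed.parse', function() { Feed.parse(bytes); });
    time('JSON.parse', function() { JSON.parse(json); });

Serialization can be compared the same way by timing Feed.serialize(sample) against JSON.stringify(sample).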



http://code.google.com/p/protobuf-for-node
