Micro-daemons with ØMQ and node.js


My recent attempt to implement an IT logging architecture was frustrating, so I decided to experiment with some ideas of my own. I want an architecture that is flexible and efficient. It should support the features I want, such as JSON output. And it should allow me to connect the applications in a way that is optimal for my IT infrastructure, providing the ability to scale out or change the way log messages flow.

A Logging Experiment

The pipe-and-filter architecture pattern is perfect for this problem. Most logging applications implicitly follow this pattern. The idea I wanted to test was: could I decompose the current monolithic applications into very small “micro-daemons” and then wire them together in a pipe-and-filter architecture? This idea was inspired by two powerful tools: node.js and ØMQ. I decided to test it by building a toy logging architecture.

My toy logging architecture was simple and performed three functions:

  1. Bridge syslog and ØMQ
  2. Translate syslog data into JSON
  3. Store the JSON in elasticsearch

logstash and rsyslog can perform the same functions, but only as part of a single, large application. My intent was to implement these functions as distinct, network-accessible daemons.

Results

It took me a few hours to write these micro-daemons. I hadn’t programmed in node.js or with ØMQ before, but they were not hard to learn. The documentation for both is excellent and plentiful. I first ran them on the same VM and then moved them to different hosts to test location independence.

It worked great. They start instantly and perform well (no dropped messages). The components have been running stably for a week. The only negative is memory consumption: each takes about nine times the amount of memory that rsyslogd does. It is still a small amount (~10 MB, if I have measured correctly) in absolute terms, though. A worthwhile trade in my view.

Components like this can be developed quickly and assembled in many, many ways. Using this design pattern, a single application does not have to provide all the features you need. Need to store messages in mongodb? Find an adapter that converts ØMQ to mongodb. Need XML instead of JSON? Find (or write!) a translator that converts JSON to XML. I am supremely tired of the endless reinvention of the wheel that takes place in the software world. Building reusable micro components is one way to stop it. They don’t even have to be written in the same language as long as they use ØMQ.

I realize this test is more suggestive than conclusive. Still, I am impressed by how much capability I could assemble so quickly. You can see how I tried to make these micro-daemons more production-friendly in the details below. To further harden them I would add encrypted sockets, run them as a non-root user, and make them SELinux-friendly.

The Pipe and Filter Pattern

Pipe and filter is my favorite architectural style because it is powerful and intuitively easy to understand. It is the same pattern used by Unix shells and the plumbing in your house (hence the name). It is built on a few core concepts: the messaging channel, a message, and processors.

The diagram below shows an example. A message (any piece of data) enters the channel on the left. The channel is symbolized by the connecting arrows. As the message flows through the channel it is handled in different ways by the processors (they are the “filter” in pipe and filter). The processor icons used in the diagram are from the excellent book Enterprise Integration Patterns, which I highly recommend. Any number of processors can be added to a channel and any number of channels can be used.

[Diagram: Pipe & Filter]

ØMQ and Messaging Channels

The messaging channel is the conduit through which messages flow. In this context, a channel is formed when two components can communicate using compatible network protocols. ØMQ is well suited as the channel mechanism. It makes it possible to create message channels without a server. This is critical for a lightweight and distributed architecture.

It also supports a large number of languages, which is key because it allows you to build processors using the best language for the task. It makes connections between processors written in different languages a snap.

ØMQ has another essential feature: it can be used for communication between apps on the same host (IPC) or for network communication, and it does this without the need for re-coding or re-design. This is a major benefit. It means you can run a set of processors on a single host when demand is small and then scale out when demand becomes high simply by changing the hostname.
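
As a concrete illustration, here is a minimal sketch using the node.js ØMQ bindings (the zmq npm module used throughout this post). The socket code is identical for local and networked channels; only the endpoint string changes. The socket path and port below are hypothetical:

// Minimal sketch: the same push socket, two possible transports.
var zmq = require('zmq');
var channel = zmq.socket('push');

// Same-host channel over IPC (hypothetical socket path):
channel.bindSync('ipc:///tmp/log-channel.ipc');

// The same daemon becomes network-accessible by swapping the endpoint:
// channel.bindSync('tcp://*:3000');

channel.send('a log message');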

Messages

Messages are data encapsulated in a known schema. That schema could be a line of text from syslog or a structured format using JSON or XML. System logs are just a kind of message; hence there is no reason to have a dedicated infrastructure just for them. Inventing a new protocol and format for a single purpose (like syslog) is bad bad bad. Of course, when syslog was invented they didn’t have the same tools we have now, but their obsolete approach still survives. Have a problem? Build a new protocol, a new data format, and a new monolithic application to implement them all. As American Hustle showed, there is a lot to like about the 70s; its coding style, however, isn’t one of them.
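
To make the distinction concrete, here is a hypothetical log event shown both ways: as an unstructured syslog line and as a JSON message with the same fields the translator script below produces:

// Hypothetical example: one log event, two schemas.
var rawSyslogLine = '<13>Feb  5 17:32:18 web01 sshd[4242]: Accepted password for admin';

var jsonMessage = {
  syslog_priority: '13',
  timestamp: 'Feb  5 17:32:18',
  host: 'web01',
  syslog_program: 'sshd',
  syslog_pid: '4242',
  message: 'Accepted password for admin'
};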

node.js and Processors

Processors are pieces of logic that change how a message flows through a channel or change the message itself. Processors, if designed correctly, enable loose coupling and flexibility in the architecture. If they are small and single purpose, you can easily move them around and rewire them. They make it easier to change your architecture. Need more performance? Add more message processors. Need to change the format of your message data? Insert a message translation processor.

This only works if the processors are single purpose, stand-alone, and network accessible. If they are all bundled in the same monolithic application then you no longer have that flexibility. If you want a different design you have to deploy the monolith everywhere you need the processor. I don’t think that makes sense now that we have the ability to quickly spin up hundreds or even thousands of VMs and lightweight containers (i.e., Docker or Solaris Zones). Those require lightweight and distributed solutions rather than the full heavyweight stack required by the monoliths.

I thought node.js would be a good candidate for the processors because it is high-performance yet also a scripting language. Small processors in this model are micro-daemons. They are small server applications that listen on a ØMQ channel, perform some task, and then forward the message. As such, they need a language that makes writing servers easy. node.js was made for this task.
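
In case it helps to see the shape of such a micro-daemon, here is a minimal sketch: pull from one ØMQ channel, do one job, push the result downstream. The ports and the pass-through transform are hypothetical placeholders:

// Generic processor skeleton: receive, transform, forward.
var zmq = require('zmq');

var receiveChannel = zmq.socket('pull');
var sendChannel = zmq.socket('push');

receiveChannel.connect('tcp://127.0.0.1:4000');  // upstream processor (hypothetical port)
sendChannel.bindSync('tcp://*:4001');            // downstream processors connect here

function transform(msg) {
  // Single-purpose logic goes here; this sketch just passes the message through.
  return msg;
}

receiveChannel.on('message', function (msg) {
  sendChannel.send(transform(msg.toString()));
});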

Scripting languages are easier to learn. No compile-and-link cycle, no complex syntax. Simple applications can usually be deployed as a single file. From a DevOps perspective, scripting languages allow an administrator to design and program their own infrastructure, shifting this responsibility away from the developer.

Deploying and managing dozens of small, single-file scripted processors across hundreds of servers would be a big problem if you had to do it manually. Fortunately you don’t. Automated CM tools (Ansible, Chef, etc.) make it easy to deploy these scripts at scale, putting hundreds of these files on thousands of servers.

The automated CM tool can also configure your pipe-and-filter architecture. For example, you could move a processor script from one host to ten and have all the host names referenced by ØMQ updated via templates. This is another good feature of scripting languages: their text format makes them easy to manipulate via templates. Scripts combine the application and its configuration into one file.
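
For example, the head of a processor script kept as a CM template might look like this (a hypothetical Jinja2-style placeholder, as Ansible would render it per host); re-rendering the template is all it takes to rewire the channel:

// Hypothetical template excerpt: the CM tool fills in the placeholder per host.
var sendToHost = "{{ translator_host }}";  // rendered to e.g. "translator01.example.com"
var sendToPort = 3001;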

The Experimental Code

The three processor components of my toy logging system are shown in the diagram below:

[Diagram: Logging]

The code for the syslog bridge is:

// Global config options
var udpSyslogPort = 514;
var sendToHost = "127.0.0.1";
var sendToPort = 3000;

// UDP server
var udp = require("dgram");
var syslogUDPServer = udp.createSocket("udp4");

// ZeroMQ channel
var zmq = require('zmq')
var zmqChannel = zmq.socket('push');
zmqChannel.bindSync('tcp://' + sendToHost + ':' + sendToPort);

// Main receive loop. Forward all syslog messages to a zeroMQ channel.
syslogUDPServer.on("message", function (msg, rinfo) {
  console.log("Inbound Syslog:" + msg.toString() );
    zmqChannel.send(  msg.toString() );
});

    // Start syslog server
syslogUDPServer.on("listening", function () {
   var address = syslogUDPServer.address();
   console.log("Syslog server listening on " + address.address + " at " + address.port);
});

syslogUDPServer.bind( udpSyslogPort );

The code for the syslog to JSON translator is:

var zmq = require('zmq');
var sendChannel = zmq.socket('push');
var receiveChannel = zmq.socket('pull');
var syslogRegex = /^<(\S+)>(\S+\s+\S+\s+\d+:\d+:\d+) (\S+) ([^:\[]+)\[?(\d*)\]?:\s+(.*)$/;

receiveChannel.connect('tcp://127.0.0.1:3000');
sendChannel.bindSync('tcp://127.0.0.1:3001');

receiveChannel.on('message', function(msg){
  console.log( "Received: " + msg.toString() );
  var result = syslogRegex.exec( msg.toString() );
  if (result != null) {
    var log = {
      syslog_priority: result[1],
      timestamp: result[2],
      host: result[3],
      syslog_program: result[4],
      syslog_pid: result[5],
      message: result[6]
    };
    sendChannel.send(JSON.stringify(log));
  }
});

And one more. Here is the code for the elasticsearch store.

var zmq = require('zmq');
var receiveChannel = zmq.socket('pull');
receiveChannel.connect('tcp://127.0.0.1:3001');

var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  host: 'elastic.sharknet.us:9200',
  log: 'trace'
});

receiveChannel.on('message', function(msg){
  client.create({
    index: 'logs',
    type: 'mylog',
    body: msg.toString()
  });
});

This code is amazingly small for what it does. It is also not very good. I am a complete neophyte when it comes to node.js, ØMQ, and elasticsearch. I’m sure experienced developers could do much better.

You can run these scripts on different hosts by changing the IP addresses. Remember to open the right ports in your firewall. You can convert these processor scripts into daemons managed by the service command using the node-linux library. With it you can start and stop the processor scripts using the familiar service [daemon] start and service [daemon] stop commands.
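
A rough sketch of how node-linux is typically used to wrap one of these scripts follows; the service name, description, and script path are hypothetical, so check the library’s documentation for the exact options:

// Sketch: wrap a processor script as a system service with node-linux.
var Service = require('node-linux').Service;

var svc = new Service({
  name: 'syslog-json-translator',            // hypothetical service name
  description: 'Translates syslog messages into JSON.',
  script: '/opt/processors/translator.js'    // hypothetical path to the processor script
});

// Once installed, start the daemon.
svc.on('install', function () {
  svc.start();
});

svc.install();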

On CentOS you will need to install the following packages from the EPEL repository:

  • nodejs
  • npm
  • zeromq3
  • zeromq3-devel

Then you need the following node packages from npm:

  • elasticsearch
  • zmq
  • node-linux

Final Thoughts

If OSSEC supported external inputs I would have included a processor for it as well. OSSEC is like an expert system for logs and is a useful component of a logging architecture. OSSEC follows the opposite design approach from what I have done here: it bundles everything into a single, tightly-coupled application. When such an application doesn’t do what you want there isn’t much you can do. With a loosely-coupled pipe-and-filter system like the one I experimented with, if it doesn’t do what you want, no problem: just add a processor.


