As much as I like the features of OSSEC, I am struggling to make it part of my logging architecture. Modern logging tools such as logstash and graylog2 are modular: you can combine them with other components in a variety of ways. For example, an rsyslog client can feed a logstash server, which can then store the logs in an elasticsearch database. And that is just one of many, many possible architectures.
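To make that modularity concrete, here is a minimal sketch of the rsyslog-to-logstash-to-elasticsearch pipeline. The port number and the elasticsearch address are illustrative assumptions, not values from any particular deployment, and plugin option names have varied across logstash versions; this assumes the stock syslog input and elasticsearch output plugins.

```
# logstash.conf sketch: receive syslog from rsyslog clients,
# index the events into elasticsearch
input {
  syslog {
    # rsyslog clients forward here, e.g. with "*.* @@logstash-host:5514"
    port => 5514
  }
}
output {
  elasticsearch {
    # assumes an elasticsearch node on the same host
    hosts => ["localhost:9200"]
  }
}
```

The point is that each piece is swappable: the same rsyslog clients could instead feed graylog2, and logstash could write to a different store, without touching the rest of the pipeline.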
OSSEC has a role in the logging architecture because of its powerful rules engine and predefined set of rules. It can scan incoming log events and send an alert when it finds something troubling. OSSEC can even take a few limited remediation steps, such as blocking an IP address. This automated log scanning is a nice complement to an elasticsearch log database. A centralized logging architecture can produce gigabytes or even terabytes of data; the ability to query a database of these events is good, but having an automated scanner watching for patterns is even better.
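For a flavor of what the rules engine does, here is a hedged sketch of a custom rule in OSSEC's local_rules.xml. The rule id 100001 is arbitrary, and the reference to stock sshd rule 5716 (which I understand to match a failed authentication) is an assumption; check your own ruleset before relying on it.

```
<!-- local_rules.xml sketch: escalate when stock rule 5716 (assumed here
     to be "sshd authentication failed") fires repeatedly from one source -->
<group name="local,syslog,">
  <rule id="100001" level="10" frequency="8" timeframe="120">
    <if_matched_sid>5716</if_matched_sid>
    <same_source_ip />
    <description>Multiple failed SSH logins from the same source IP.</description>
  </rule>
</group>
```

A rule at a high enough level can also trigger one of OSSEC's active responses, such as the stock firewall-drop script, which is how the IP blocking mentioned above is wired up.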
The problem is that OSSEC is not modular. It has a tightly coupled design. There is no way (that I have seen) to use its logging rules engine independently of other OSSEC features like file integrity monitoring. You cannot send logs to OSSEC directly using syslog or AMQP, for example. Logs can only reach the rules engine through the OSSEC agent on the client. The agent can read files directly, but that is not very useful in a highly networked infrastructure.
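For contrast with the logstash pipeline above, the intake path I am describing looks roughly like this: a localfile entry in the agent's ossec.conf telling it which file to tail. The log path is an illustrative assumption.

```
<!-- agent ossec.conf sketch: the agent tails local files and ships the
     events to the OSSEC server; note there is no network-facing log
     input to point rsyslog or an AMQP broker at -->
<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <!-- assumed path; any file the agent can read works the same way -->
    <location>/var/log/auth.log</location>
  </localfile>
</ossec_config>
```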
Tight coupling is a common design problem. In my experience it tends to arise when the software's designers think of their application as an isolated, stand-alone system rather than as a component in a larger architecture. Making software modular (the core principle behind service-oriented architecture) lets users adapt it more easily to their local environment. Perhaps the OSSEC developers will soon rethink their design.