Suricata Flow Logging

Pretty much from the start of the project, Suricata has been able to track flows. In Suricata, the term ‘flow’ means the bidirectional flow of packets with the same 5 tuple, or 7 tuple when VLAN tags are counted as well.
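
To make that tuple concrete, here is a rough sketch in C of what such a lookup key holds. This is illustrative only, not Suricata’s actual definition:

#include <stdint.h>

/* Sketch of a flow lookup key; not Suricata's real struct. */
struct FlowKey {
    uint32_t src_ip, dst_ip;     /* IPv4 for brevity; IPv6 is wider */
    uint16_t src_port, dst_port;
    uint8_t  proto;              /* TCP, UDP, ICMP, ... */
    uint16_t vlan_id[2];         /* the two extra fields of the '7 tuple' */
};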

Such a flow is created when the first packet comes in and is stored in the flow hash. Each new packet does a hash look-up and attaches the flow to the packet. Through the packet’s flow reference we can access all that is stored in the flow: TCP session, flowbits, app layer state data, protocol info, etc.

When a flow hasn’t seen any packets in a while, a separate thread times it out. This ‘Flow Manager’ thread constantly walks the hash table and looks for flows that are timed out. How long a flow is considered ‘active’ depends on the protocol, its state and the configuration settings.
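
For reference, these timeouts live in the flow-timeouts section of suricata.yaml, per protocol and per state. A trimmed example, with values similar to the shipped defaults:

flow-timeouts:
  default:
    new: 30          # seconds without packets before an unestablished flow times out
    established: 300
    closed: 0
  tcp:
    new: 60
    established: 3600
    closed: 120
  udp:
    new: 30
    established: 300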

In Suricata 2.1, flows can optionally be logged when they time out. This logging is available through a new API, with an implementation for the ‘Eve’ JSON output already developed. In fact, there are two implementations:

  1. flow — logs bidirectional records
  2. netflow — logs unidirectional records
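
Enabling one or both is done in the eve-log output section of suricata.yaml. A trimmed sketch, showing only the relevant keys (the surrounding options depend on your version):

outputs:
  - eve-log:
      enabled: yes
      filename: eve.json
      types:
        - flow       # bidirectional records
        - netflow    # unidirectional records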

As the flow logging had to be done at flow timeout, the Flow Manager had to drive it. Suricata 2.0 and earlier had a single Flow Manager thread. This was hard coded, and in some cases it was clearly a bottleneck. It wasn’t uncommon to see this thread using more CPU than the packet workers.

So adding more tasks to the Flow Manager, especially something as expensive as output, was likely going to make things worse. To address this, two things were done:

  1. multiple flow manager support
  2. offloading part of the flow manager’s tasks to a new class of management threads

The multiple flow managers simply divide up the hash table. Each thread manages its own part of it. The new class of threads is called ‘Flow Recycler’. It takes care of the actual flow cleanup and recycling, taking over a part of the old Flow Manager’s tasks. In addition, if enabled, these threads perform the actual flow logging.
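
Both thread counts are configurable in the flow section of suricata.yaml. A sketch of what that looks like (check the default config shipped with your version for the exact keys):

flow:
  managers: 2      # threads walking their share of the flow hash
  recyclers: 2     # threads doing cleanup, recycling and, if enabled, flow logging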

As the flow logging follows the ‘eve’ format, passing it into Elasticsearch, Logstash and Kibana (ELK) is trivial. If you already run such a setup, the only thing needed is to enable the feature in your suricata.yaml, as shown above.
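
To give an idea of what ends up being indexed, a single flow record looks roughly like this (all values invented for illustration):

{
  "timestamp": "2014-10-15T12:25:17.000000",
  "event_type": "flow",
  "src_ip": "192.168.1.5",
  "src_port": 52944,
  "dest_ip": "203.0.113.7",
  "dest_port": 80,
  "proto": "TCP",
  "flow": {
    "pkts_toserver": 12,
    "pkts_toclient": 9,
    "bytes_toserver": 1548,
    "bytes_toclient": 4821,
    "start": "2014-10-15T12:20:12.000000",
    "end": "2014-10-15T12:20:14.000000",
    "age": 2,
    "state": "closed",
    "reason": "timeout"
  }
}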

[Screenshot: Kibana dashboard for the ‘flow’ log]

[Screenshot: Kibana dashboard for the ‘netflow’ log]

The black netflow dashboard is available here: http://www.inliniac.net/files/NetFlow.json

Many thanks to the FireEye Forensics Group (formerly nPulse Technologies) for funding this work.

Closing in on Suricata 1.4

I just made Suricata 1.4rc1 available with some pretty exciting features: unix socket mode and IP reputation.

Unix socket

First of all, Eric Leblond’s work on the unix socket was merged. The unix socket work consists of two parts: the unix socket protocol implementation and a new runmode.

The protocol implementation is based on JSON messages over a unix socket. Eric will be fully documenting it soon. Currently the commands are limited to shutting down and getting some basic stats. This part isn’t very exciting yet, but the groundwork for many future extensions has been laid.

The part that is exciting right now is the unix socket runmode. What this does is start Suricata with all the rules and such, and then wait for commands on the unix socket. The commands take a pcap filename and log directory pair. The pcap is then inspected against the rules and the logs go into the supplied log directory. As this can easily be scripted (a python script is provided), it’s a very fast way to test your pcap collections, as the overhead of starting and stopping is skipped.
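
A rough example of a session, assuming the provided client script is installed as suricatasc (paths invented):

# start Suricata in unix socket runmode
suricata -c /etc/suricata/suricata.yaml --unix-socket

# from another shell: submit a pcap and a log directory for its results
suricatasc -c "pcap-file /tmp/pcaps/sample.pcap /tmp/logs/sample"

# see how many files are still queued
suricatasc -c "pcap-file-number"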

This may initially appeal mostly to those of you doing sandnetting and malware analysis, where tens of thousands of pcaps are automatically processed every hour or day, but I think this could grow into a feature for a wider audience as well. For example, I could see Sguil or Snorby, or pretty much any event manager with full packet capture support, adding an option to scan the pcap associated with an event again. Maybe against _all_ rules, instead of the tuned set running on the live sensors. Maybe you could re-inspect old sessions against the current rules this way to find hits on attacks that were 0-days at the time, etc.

I think there could be many possibilities.

IP Reputation

A slightly more polished version of the code I discussed here is now available in this release. It’s one of those things where it will be very interesting to see how people will put it to use.

Matt Jonkman just wrote up some of his ideas on the Emerging Threats mailing list. One of them is to amend weak rules with reputation data. If you have a signature that is prone to false positives, you probably have it disabled currently. But what if you combine it with reputation data? If the weak rule fires on a sketchy IP, it may be a more reliable alert.
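
To sketch the idea with the new iprep keyword (the category name, threshold and content match are invented here; the category has to exist in your reputation files):

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"Noisy sig, but bad reputation destination"; \
    flow:established,to_server; content:"evil"; \
    iprep:dst,BadRep,>,50; sid:1000001; rev:1;)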

We’ll see how this plays out.

1.4 final

We’re hoping that if nothing big happens, we can do a mid-December 1.4 final release. So please consider running this new release. It’s running very stably in quite a number of places: ISP networks, lab networks, home networks, sandnetting networks, etc. But we need much more testing to find remaining issues and to gain confidence that the most important ones have been found. Thanks for helping out!

Recovering the email/username in Snorby

I use a Snorby setup that comes with Security Onion. Recently I had changed the username, but I couldn’t remember what I had set it to.

To recover the username, we can look it up in the database, like this:

mysql -uroot -B -e 'use snorby; select email from users;'

Thanks to Doug Burks and Dustin Webber for helping me recover it.