On Suricata performance

Lots of fuss in the media yesterday about Suricata’s performance versus Snort: some claiming Suricata is much faster, others claiming Snort is much faster.

At this point I really don’t care much. What the Suricata development by the OISF has shown, in my opinion, is that we’ve managed to create a very promising new Open Source project. In a little over a year, funded with about $600k by the US government and with heavy (and growing) industry support, we’ve produced a new IDS/IPS engine that is mostly compatible with Snort but built on an all-new code base, incorporating some very interesting fresh ideas. We’re already seeing a community form around the project, and it is giving us a lot of support.

So, about this performance fuss. Who to believe? Is Suricata faster than Snort? Yes, no, ehhh, it depends on how you look at it. Is Suricata faster than Snort on a single core, cycle for cycle, tick for tick? No. It’s pretty clear we aren’t, and I didn’t expect us to be either. But we scale. We’ve had reports of Suricata running on a 32-core box and scaling to use all cores; there it is much faster. As Martin Roesch wrote on the VRT blog, one can set up a box with one instance of Snort per core (or even multiple instances per core), and this is in fact how many appliance builders get to high speeds with it. While that may be feasible for appliance builders, the admins we talked to who run their own IDS/IPS consider it a management nightmare.
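
To make the multi-threading point more concrete, here is a minimal sketch of the one-worker-thread-per-core model, written as a plain pthreads program. This is illustrative only and assumes Linux/glibc; it is not Suricata’s actual threading code, which is far more involved.

    /* Minimal sketch: one worker thread per core inside a single
       process, each pinned to its own CPU (Linux/glibc). Illustrative
       only; not Suricata's actual threading code. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NUM_WORKERS 4                /* in practice: one per core */

    static void *Worker(void *arg) {
        long core = (long)arg;

        /* Pin this thread to its own core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* A real engine would loop here, pulling packets from a
           shared queue and running decode/detect/output on them. */
        printf("worker pinned to core %ld\n", core);
        return NULL;
    }

    int main(void) {
        pthread_t tids[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tids[i], NULL, Worker, (void *)i);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

The multi-instance alternative means one full Snort process, configuration and rule load per core, which is exactly the management overhead those admins were complaining about.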

As we’re a new project with a fresh code base, there is going to be a lot of low-hanging fruit in performance optimization. I’ll give an example. On a test pcap with a reduced ruleset (about 10k rules), Suricata took about 400 seconds to complete its inspection. With a bigger ruleset (about 14k rules), it suddenly took 1600 seconds! A bit of cache profiling showed that the part of the engine that inspects the address portion of a signature was horribly cache inefficient. In less than an afternoon I rewrote it to be more efficient. Result: the same test now completes in under 600 seconds. This code is in the current git master and will be in 1.0.1.
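
To give a feel for what such a cache-oriented rewrite can look like, here is a simplified, hypothetical sketch; it is not the actual Suricata address-matching code. The point is only that chasing pointers through heap-scattered list nodes risks a cache miss on every hop, while scanning one packed array touches memory sequentially and lets the hardware prefetcher do its job.

    /* Hypothetical illustration of a cache-friendly rewrite; not the
       actual Suricata address-matching code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Before: address ranges in a linked list. The nodes end up
       scattered across the heap, so every 'next' hop risks a cache
       miss. */
    typedef struct AddrNode_ {
        uint32_t first, last;            /* IPv4 range, host order */
        struct AddrNode_ *next;
    } AddrNode;

    int MatchList(const AddrNode *n, uint32_t ip) {
        for (; n != NULL; n = n->next)
            if (ip >= n->first && ip <= n->last)
                return 1;
        return 0;
    }

    /* After: the same ranges packed into one contiguous array. The
       linear scan walks memory sequentially, which the hardware
       prefetcher handles well. */
    typedef struct AddrRange_ {
        uint32_t first, last;
    } AddrRange;

    int MatchArray(const AddrRange *r, size_t cnt, uint32_t ip) {
        for (size_t i = 0; i < cnt; i++)
            if (ip >= r[i].first && ip <= r[i].last)
                return 1;
        return 0;
    }

    int main(void) {
        AddrRange r[] = {
            { 0x0A000000, 0x0AFFFFFF },  /* 10.0.0.0/8     */
            { 0xC0A80000, 0xC0A8FFFF },  /* 192.168.0.0/16 */
        };
        printf("%d\n", MatchArray(r, 2, 0x0A010203)); /* 10.1.2.3 -> 1 */
        return 0;
    }

Both versions do the same work on the same data; only the memory layout differs, and that alone is the kind of difference cache profiling tends to expose.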

My point being that there is lots of room for optimization, and not just minor stuff. So far we’ve mostly focused on being accurate (we still have work to do here) and on getting the algorithms correct; hardly any tuning has been done. At our last OISF meeting we received some very interesting offers of help with serious performance testing and tuning: really big boxes, state-of-the-art CUDA hardware, 10 Gbit labs, and so on. So I expect a lot of progress in the months to follow.

It’s clear that we have work to do. What I’m really excited about is how fast that work is progressing, how much help we’re getting from both our brand new community and the industry, and how open our development process is.

On a final note, during the development of this project we’ve found a lot of bugs and issues in other tools. Will Metcalf, who runs our QA, has reported many issues in Snort and the VRT sigs to Sourcefire, and in the Emerging Threats sigs to the ET community. We’ve found bugs in other tools as well, for example in a neat library called libcap-ng. So everyone benefits from our work! 🙂

5 thoughts on “On Suricata performance”

  1. Pingback: Tweets that mention Blog post on Suricata performance: #oisf #suricata -- Topsy.com

  2. First of all, congratulations on releasing this platform. I’m sure that the development process and community support will help it evolve and improve in short order. I’ve seen (and participated in) several discussions about the Suricata vs Snort comparison. All things considered, especially the maturity of Snort, it doesn’t seem like an appropriate comparison yet. Snort has had years of development, and the VRT’s work on rule development is exceptional in my opinion. I think Suricata has as much chance as any other open source IPS available now and should be given the chance to make it without the pre-judgement and comparison to Snort.

    And herein lies the problem. The recent articles quoting Jonkman and Stiennon with commentary about Snort’s deficiencies don’t really speak to Suricata’s strengths (current or future). They merely invite an inevitable comparison to see “whose is bigger”, so to speak. Rather than waste time and energy trying to be “better than Snort” and openly campaigning on its problems, why not simply focus on being the best open source IPS platform Suricata can be, and perhaps even show some respect for the hard-won success of Snort and everyone who has contributed to its development over the past 12 years.

    Considering the amount of money invested in creating Suricata from scratch (as is often touted in the articles), surely you and the Suricata team can appreciate the level of effort and the resources it requires to bring a usable and worthwhile IPS to market. Snort (and Sourcefire) have endured the same challenges and have their success to show for it.

    If Suricata is all it’s made out to be then I’m confident it will stand on its own, improve with community support, and succeed. Don’t muddy that success by bashing other products to achieve it.

    Scott

  3. Hi,

    First of all, congratulations on the fresh new release of Suricata!
    A question came to me after reading this article. You give an example of the time taken by Suricata to inspect a pcap file…
    I was wondering what your testing model is… I mean, how do you extract the time taken by Suricata? I ask because I would like to run this kind of performance test myself, adding different sets of rules to figure out how much each slows things down.

    Thank you for your help, and all the best for the next release!

    Giuseppe

  4. Pingback: Security Advancements at the Monastery » Blog Archive » Three Open Source IDS/IPS Engines: The Setup

  5. Pingback: Capacity Planning for Snort IDS | Bulbous, Not Tapered
