Improving Snort_inline’s NFQ performance

When using Snort_inline with NFQ support, it’s likely that at some point you’ve seen messages like this on the console: “packet recv contents failure: No buffer space available”. While these messages appear, Snort_inline slows down significantly. I’ve been trying to find out why.

There are a number of settings that influence NFQ performance. One of them is the maximum length of the NFQ queue, which is a value in packets. Snort_inline takes an argument to modify it: --queue-maxlen 5000 (note: there are two dashes before queue-maxlen).
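
For example, starting Snort_inline could look roughly like this (the config file path is just an example, the --queue-maxlen flag is the one described above):

snort_inline -c /etc/snort_inline/snort_inline.conf --queue-maxlen 5000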

That’s not enough, though. The following settings increase the buffer that NFQ seems to use for its queue. Since I’ve set it this high, I haven’t seen a single read error anymore:

sysctl -w net.core.rmem_default='8388608'
sysctl -w net.core.wmem_default='8388608'

The values are in bytes. The following settings increase the buffers for TCP traffic.

sysctl -w net.ipv4.tcp_wmem='1048576 4194304 16777216'
sysctl -w net.ipv4.tcp_rmem='1048576 4194304 16777216'
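
To make all of these settings survive a reboot, they can also be put in /etc/sysctl.conf; running sysctl -p applies the file immediately:

net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.ipv4.tcp_wmem = 1048576 4194304 16777216
net.ipv4.tcp_rmem = 1048576 4194304 16777216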

For more details see this page: http://www-didc.lbl.gov/TCP-tuning/linux.html

Setting these values fixed all my NFQ related slowdowns. The values probably work for ip_queue as well. If you use other values, please put them in a comment below.

Thanks to Dave Remien for helping me track this down!

Tunnel unwrapping for Snort_inline 2.8.0.1

Not many people have native IPv6 connectivity, so most use some form of tunneling. For this reason Nitro Security asked me to develop a Snort preprocessor to unwrap various tunnels. This resulted in the preprocessor ‘ip6tunnel’, which I uploaded to Snort_inline’s SVN yesterday. The preprocessor is capable of unwrapping IPv6-in-IPv4, IPv6-in-IPv6, IPv4-in-IPv6, IPv4-in-IPv4 and finally IPv6-over-UDP. The latter is used by Freenet6.

I chose to develop it as a preprocessor because this allows Snort to inspect both the original packet and the tunnel packet(s). The preprocessor supports recursive unwrapping. The recursion depth is limited to 3 by default, but can be configured differently. Get the preprocessor from Snort_inline’s SVN by checking out the latest trunk:

svn co https://snort-inline.svn.sourceforge.net/svnroot/snort-inline/trunk

Then have a look at doc/README.IP6TUNNEL for configuration options.
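
To give an idea of what enabling it looks like, a line along these lines in snort_inline.conf should do it. This is only a sketch: the option name for the recursion depth is a guess on my part, the README has the real syntax.

preprocessor ip6tunnel
# with an explicit recursion depth (option name is a guess, check README.IP6TUNNEL):
# preprocessor ip6tunnel: max_depth 3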

Once again thanks to the great people of Nitro Security. I think it’s great to see this company giving back to the community!

Snort_inline updated to 2.8.0.1 in SVN

I’ve just committed an update to Snort_inline’s SVN. It brings it to the Snort 2.8.0.1 level. It supports both IPv4 and IPv6 on IPQ and NFQ. I have not been able to test IPFW on IPv6, so I don’t think that will work currently.

This update removes the libdnet dependency and replaces it with libnet 1.1. To be able to send ICMPv6 unreachable packets you will need the libnet 1.1 patch I wrote a while ago. You can find that here. Get the latest Snort_inline by checking out SVN:

svn co https://snort-inline.svn.sourceforge.net/svnroot/snort-inline/trunk
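
For completeness, a rough build sketch. I’m assuming a standard autotools setup here, and the configure flag name is an assumption, so check ./configure --help for the real options:

cd trunk
./configure --enable-nfqueue   # flag name is an assumption; see ./configure --help
make
make install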

Consider the code to be of beta quality for now, so be careful with it. Please report any problems with it!

Again, a big thank you to NitroSecurity for funding this work!

Working on Snort_inline 2.8.0.1

The last week I’ve been working on bringing Snort_inline to the Snort 2.8.0.1 level, including its IPv6 support. I’m almost ready to commit it to SVN; there are just some issues I need to fix in the inline-specific code. The code gets rid of libdnet and uses libnet 1.1 for sending reset/reject packets for both IPv4 and IPv6. After committing I will start working on getting the IPv6 features I wrote for NitroSecurity into this tree. This includes more matches and tunnel decoding (for example of the Freenet6 tunnel). So stay tuned!

New Snort_inline TCP window normalization code in SVN

A while ago I wrote about why the TCP window scaling normalization in Snort_inline was broken by design. I also wrote about a new solution I was working on and testing that would be uploaded to SVN soon. I just committed the patch to SVN. What it does is add two new options to stream4:

norm_window: normalize the TCP window (disabled by default). This is to protect Snort_inline from being forced to queue too many packets.
max_win_size: maximum size of the scaled TCP window. Packets increasing the window beyond the limit are modified.

The new normalization is disabled by default, and the old wscale normalization code has been removed, as have the options that configured it. It runs fine on my gateway without noticeable slowdowns, but I haven’t done any benchmarking so far. Please try this and let me know how it works for you!
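
As an illustration, a stream4 line in the config could look roughly like this; the syntax follows stream4’s usual comma-separated option format, and the window size value is only an example, so pick one that fits your setup:

preprocessor stream4: norm_window, max_win_size 1048576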

Matt Jonkman leaves Bleeding Edge

Matt Jonkman is stepping away from the Bleeding Edge project. He announced this here. Apparently Sensory Networks, one of the sponsors of the project, now owns it. It will be interesting to see if they will continue it, and if so, how. Honestly, I’m a bit skeptical, since to my knowledge not many Sensory people are directly involved at this moment. Still, I believe Sensory consists of good people. I did a contract job for them about a year ago and enjoyed working with them.

I think I speak for many if I say “Thanks” for all the hard work Matt has done for Bleeding Edge, and I really look forward to the new projects he will start or get involved in. Thanks Matt!

Multiple Snort_inline processes with Vuurmuur

One of the cool things about the Snort_inline project is its support for NFQUEUE. NFQUEUE is the new queueing mechanism for pushing packets from the kernel to userspace so that a userspace program can issue a verdict on them. What makes NFQUEUE cooler than its predecessor ip_queue is that it supports multiple queues. This means there can be more than one Snort_inline process inspecting and judging traffic. The challenge is to make sure that each Snort_inline instance sees all traffic belonging to a certain connection, so it can do stateful inspection on it. Luckily, Vuurmuur makes this very easy.

Normally an ‘accept’ rule in Vuurmuur looks like this:

accept service http from local.lan to world.inet options log

The NFQUEUE equivalent of this rule is:

nfqueue service http from local.lan to world.inet options log,nfqueuenum="1"

To have ftp handled by another Snort_inline instance, just add a new rule:

nfqueue service ftp from local.lan to world.inet options log,nfqueuenum="2"

Easy, no? 🙂 Vuurmuur creates the iptables rules that are required. It uses some advanced connmark-fu so that the right Snort_inline process receives all packets of a connection. It uses the helper match to make sure related connections, such as the ftp data channel, are handled by the right queue. Of course you also need Snort_inline itself to be ready for it; see this post for more info on that.
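
To give an idea of what happens under the hood, the generated rules look roughly like the following. This is a simplified approximation, not Vuurmuur’s exact output:

# mark new http connections with connmark 1, restore the mark on later packets
iptables -t mangle -A FORWARD -p tcp --dport 80 -m state --state NEW -j CONNMARK --set-mark 1
iptables -t mangle -A FORWARD -j CONNMARK --restore-mark
# send all packets of marked connections to the Snort_inline instance on queue 1
iptables -A FORWARD -m connmark --mark 1 -j NFQUEUE --queue-num 1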

The Snort_inline configuration part takes some work. You have to set up your init scripts to start all instances, with different configs and logging to different locations. You will need multiple Barnyard instances and, if using Sguil, multiple snort_agent.tcl instances. When updating the rules you need to take care of the multiple processes as well. As said, it takes some work, but it’s rewarding. You can, for example, set up an extra Snort_inline instance for testing purposes only and send all traffic from a certain IP to it to try out new rules, config changes, etc. I have set it up so that separate processes monitor my dmz and my lan.
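
As a sketch of what the init script side can look like: the paths below are only examples, and the flag for selecting the queue number is an assumption, so check snort_inline --help for the real option.

snort_inline -D -c /etc/snort_inline/lan/snort_inline.conf -l /var/log/snort_inline/lan --queue-num 1
snort_inline -D -c /etc/snort_inline/dmz/snort_inline.conf -l /var/log/snort_inline/dmz --queue-num 2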

What is also possible, but not with Vuurmuur so far, is a form of poor man’s load balancing: sending new connections to one of multiple processes. This could be done using the iptables ‘statistic’ match (formerly the ‘random’ match), which allows a rule to match only a given percentage of the time. With some more connmark-fu it’s possible to have multiple Snort_inline instances handle different connections of the same type of traffic. I’ll add support for that to a future Vuurmuur release.
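
Something along these lines should work; it’s untested, and the mark numbers and the probability are just examples:

# roughly half of the new connections get mark 1, the rest mark 2
iptables -t mangle -A FORWARD -m state --state NEW -m statistic --mode random --probability 0.5 -j CONNMARK --set-mark 1
iptables -t mangle -A FORWARD -m state --state NEW -m connmark --mark 0 -j CONNMARK --set-mark 2
# hand each connection to the Snort_inline instance that owns its mark
iptables -A FORWARD -m connmark --mark 1 -j NFQUEUE --queue-num 1
iptables -A FORWARD -m connmark --mark 2 -j NFQUEUE --queue-num 2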