New Snort_inline TCP window normalization code in SVN

A while ago I wrote about why the TCP window scaling normalization in Snort_inline was broken by design. I also wrote about a new solution I was working on and testing that would be uploaded to SVN soon. I just committed the patch to SVN. What it does is add two new options to stream4:

norm_window: normalize the TCP window (disabled by default). This is to protect Snort_inline from being forced to queue too many packets.
max_win_size: maximum size of the scaled TCP window. Packets increasing the window beyond the limit are modified.

The normalization is disabled by default, and the old wscale normalization code has been removed, as have the options that configured it. It runs fine on my gateway without noticeable slowdowns, but I haven’t done any benchmarking so far. Please try it and let me know how it works for you!
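
For example, enabling the new normalization could look like this on the stream4 line (the max_win_size value here is purely illustrative, pick something that fits your link):

preprocessor stream4: norm_window, max_win_size 1048576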

Snort_inline and out of order packets

One of the changes in Snort_inline’s stream4 modifications is that out of order TCP packets are treated differently than in unmodified stream4. This can cause some new alerts to appear, and some unexpected behaviour, so I’ll try to explain what happens here.

First of all, let me quickly explain what out of order packets are. To put it simply, TCP packets are sent out by the source host in a specific order, but can arrive in a different order at the destination. Packet loss, link saturation and routing issues are among the many things that can cause this. A Snort_inline specific issue is that when Snort_inline can’t keep up with the packets it needs to process, it drops packets, which causes packet loss. These packets then have to be resent by the sending host.

Out of order packets become a problem when dealing with stream reassembly. Stream reassembly is basically putting all the data from the packets back in the right order, to recover the original data as it was sent. We can’t do stream reassembly if we don’t have all the packets. Unmodified stream4 basically ignores gaps in the stream: designed for passive listening to traffic, it has to deal with packet loss differently than Snort_inline does.

Next, some definitions of this functionality in Snort_inline:

  • Out-of-order packets: the number of packets we have in queue that are out of order for a stream, meaning they have a higher sequence number than the next in-sequence packet we are expecting.
  • Out-of-order bytes: the number of bytes of the combined data of the out-of-order packets in the stream.
  • Sequence number hole: a gap between two packets that can be closed by one or more missing packets.

For example, if the next in-sequence packet we expect starts at sequence number 1000 and a packet arrives with sequence number 1300, that packet is out of order, and the bytes 1000-1299 form a sequence number hole.

To prevent Snort_inline from using too much memory on bad connections, or when an attacker sends lots of out of order packets, Snort_inline can enforce limits to protect itself. It can even force a stream to be completely in-order, by dropping all packets that are out of order. Sadly, this hurts the performance of the connections, so you can set limits that balance performance against protection.

When Snort_inline hits these limits, it will (optionally) fire alerts that look like this:

(spp_stream4) TCP out-of-order packets limit reached for stream
(spp_stream4) TCP out-of-order bytes limit reached for stream
(spp_stream4) TCP sequence number holes limit reached for stream

You can disable the alerts by adding the disable_ooo_alerts option to the preprocessor stream4 line. The limits themselves can be adjusted with the following options: max_seq_holes 2, max_ooo_pkts 25, max_ooo_bytes 7000. These are the values I currently use on my home gateway. I got the idea of implementing these limits from this paper by Vern Paxson. However, his suggestion that there is at most one sequence number hole per stream (even per host) seems a bit optimistic to me. Maybe DSL has more packet loss than the university links he studied.
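
Putting it together, a stream4 line using my gateway’s values might look like this (assuming the usual comma-separated stream4 option syntax; add disable_ooo_alerts if you want the limits without the alerts):

preprocessor stream4: max_seq_holes 2, max_ooo_pkts 25, max_ooo_bytes 7000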

The default settings in Snort_inline were chosen a bit arbitrarily, so they may not fit your usage. As with the wscaling, please let me know in a comment what values you use!

TCP Window scaling in Snort_inline

The TCP window field in the TCP header is only 16 bits, so the maximum window size it can express is only 64KB. A long time ago this was enough, but nowadays it isn’t, by far. Luckily, this is something the window scaling option fixes. Window scaling is very common these days; your PC or laptop probably uses it by default. Snort’s stream4, however, does not support it. This means that when tracking and reassembling streams, Snort has, for most connections, no idea which data is in window and which is out of window. To make matters worse, the packets that are in window when wscaling is taken into account, but appear out of window when it is not, are never used in the reassembly process. This makes Snort evadable.

One of the goals when creating the stream4inline modifications was to be able to drop on all TCP anomalies stream4 detects. For this, support for window scaling was added to stream4, so Snort_inline would be able to drop out of window packets. There is, however, a big problem with window scaling: the TCP window can grow to a maximum of 1GB (with the maximum wscale value of 14). Stream4 would thus theoretically have to queue up to 1GB of packet data, per stream. While this is unlikely to happen on normal connections, it is possible, and it could be used by an attacker to attack Snort_inline itself.
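
To see where that 1GB figure comes from: the effective window is the 16-bit window field shifted left by the wscale value, so a full 65535 window with the maximum shift of 14 is roughly 1GB. A minimal illustration in C:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t raw_window = 65535; /* largest value the 16-bit window field can hold */
    uint8_t  wscale     = 14;    /* maximum shift the window scale option allows */

    /* The effective window is the raw window shifted left by the wscale. */
    uint32_t effective = (uint32_t)raw_window << wscale;

    /* Prints: effective window: 1073725440 bytes (~1GB) */
    printf("effective window: %u bytes\n", effective);
    return 0;
}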

To prevent this, I added an option to stream4inline that allows the administrator to set a maximum allowed wscale setting. Any higher setting is normalized away: the packet is modified and the wscale lowered to the maximum that is allowed. The hosts talking to each other then both think the other only accepts the lower wscale, and use that setting. This can however have some unexpected consequences. If the link that Snort_inline deals with is high speed, high latency, or both, setting the wscale value too low can result in serious performance degradation. This issue shows up as connections that are (way) slower than usual. In these cases the wscale value needs to be increased.
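
Conceptually the normalization is simple: walk the TCP options of a SYN or SYN/ACK and, if the window scale option advertises a shift larger than the configured maximum, rewrite it in place. Below is a rough sketch of the idea in C. It is not the actual Snort_inline code, and it leaves out work the real code has to do, such as recalculating the TCP checksum after the modification:

#include <stdint.h>
#include <stddef.h>

#define TCPOPT_EOL    0
#define TCPOPT_NOP    1
#define TCPOPT_WSCALE 3

/* Clamp the window scale option in a TCP options block to max_wscale.
 * opts points at the start of the options, opts_len is their total length.
 * Returns 1 if the packet was modified, so the caller knows the TCP
 * checksum has to be recomputed; 0 otherwise. */
static int clamp_wscale(uint8_t *opts, size_t opts_len, uint8_t max_wscale)
{
    size_t i = 0;

    while (i < opts_len) {
        uint8_t kind = opts[i];

        if (kind == TCPOPT_EOL)
            break;                /* end of option list */
        if (kind == TCPOPT_NOP) {
            i++;                  /* one-byte padding option */
            continue;
        }
        if (i + 1 >= opts_len || opts[i + 1] < 2)
            break;                /* malformed option list, stop */

        if (kind == TCPOPT_WSCALE && opts[i + 1] == 3 && i + 2 < opts_len &&
            opts[i + 2] > max_wscale) {
            opts[i + 2] = max_wscale;  /* lower the advertised shift count */
            return 1;
        }
        i += opts[i + 1];         /* skip to the next option */
    }
    return 0;
}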

The default in Snort_inline 2.6.1.5 is a wscale of 2, which is quite low, but works fine on my home DSL connection. To change the setting, add ‘norm_wscale_max 5’ to your stream4 configuration line. This will allow for a wscale of up to 5. The maximum value is 14. I’d be interested in what values people use on what types and speeds of lines, so please let me know! We can use it to suggest values in the docs, or to set a less insane default value 🙂

Memory leak fixed in stream4inline

A few days ago William told me that if he enabled stream4inline on a busy gateway, Snort_inline would consume all memory within hours. The problem went away when disabling stream4inline, so it made sense that the problem would be in there somewhere.

The first suspect was the reassembly cache, which keeps a per-stream copy of the reassembled packet in memory. While memory expensive, it greatly speeds up the sliding window stream reassembly process, especially with small packets. The reason for it being the first and primary suspect is that it is the only place where stream4inline code allocates memory. Reviewing the code, however, showed no leaks, and adding a debug counter to monitor the memory usage also showed that the leak was not in that code.

Next, my investigation focused on the parts where stream4 behaves differently in stream4inline mode. I initially looked at what happens when stream4 hits its memory limit: the memcap. When the configurable memcap is reached, stream4 nukes 5 random sessions. In stream4inline, the option was added to instead truncate 15 of the sessions: an attempt is made to free memory by removing stored packets that are no longer needed from a stream. If that fails, 5 random sessions are nuked anyway.

Reviewing the truncating of the sessions didn’t show anything obvious to me, so I went on to the killing of the sessions. Descending down the code, I finally reached the DropSession function, where the memory cleanup for a session is handled. Here it turned out that the DeleteSpd function, used to clear the stored packets in a stream, was not called in stream4inline mode. The reason for this mistake is that Snort 2.6.1 added UDP support to stream4, and the merge with the Snort_inline code went wrong because of the extra checks that were added to the DropSession function.

The stupid thing is that when I did the merge, I was already in doubt about it as a comment showed:

/* XXX did I merge this right??? VJ */

Guess I know the answer now: No 😉

Differences between Snort and Snort_inline

Every few weeks the same question comes up: what is the difference between Snort in inline mode and Snort_inline? This makes sense, because the Snort_inline documentation and website fail to explain it. In this post I will try to highlight the main differences. In general, we try to develop Snort_inline as a patchset on top of Snort, focused on improving the inline part of Snort. Originally, of course, Snort’s inline capabilities were developed in the Snort_inline project; with Snort 2.3.0RC1 they were merged into mainline Snort.

Convenience

We did a number of things to make Snort_inline a little more convenient for inline users.

  • inline mode is enabled by default in ./configure
  • we got rid of libnet 1.0.2a and switched to libdnet 1.1 instead
  • a snort_inline specific manual page was added, as well as some extra docs
  • an example configuration file for inline use is supplied

Added functionality

  • we support Linux’s new queueing mechanism, nfqueue. This support was contributed by Nitro Security. Nfqueue supports running multiple copies of Snort_inline, to take advantage of SMP and to reduce the risk of denial of service should Snort_inline crash (see the example below this list)
  • the stickydrop preprocessor enables you to add options to the rules to block an IP address for a configurable amount of time
  • the bait-and-switch preprocessor (Linux only) allows you to redirect traffic from a host to a honeypot, based on the rules
  • the clamav preprocessor is included (you still need to pass --enable-clamav to ./configure)
  • a reinject action for FreeBSD: reinjects an accepted packet into the ipfw list at a specific rule number
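
To illustrate how the multiple-copies setup with nfqueue can work: the kernel splits the traffic over several queues, and one Snort_inline process attaches to each queue number. A minimal sketch of the iptables side, assuming an internal network of 192.168.1.0/24 split in two halves; both directions of a connection stay in the same queue, which matters for stream reassembly (the Snort_inline side of binding to a queue number is not shown here):

iptables -A FORWARD -s 192.168.1.0/25 -j NFQUEUE --queue-num 0
iptables -A FORWARD -d 192.168.1.0/25 -j NFQUEUE --queue-num 0
iptables -A FORWARD -s 192.168.1.128/25 -j NFQUEUE --queue-num 1
iptables -A FORWARD -d 192.168.1.128/25 -j NFQUEUE --queue-num 1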

Improved for inline use

  • reject action can send RST packets to both source and destination
  • stream4 can drop attacks detected in the reassembled stream. It also enforces the TCP window. It implements a number of ideas from Vern Paxson on TCP reassembly, such as a limit on the number of out of order packets and bytes that are accepted in a stream.
  • some fixes for FreeBSD

As the list shows, if you are interested in running Snort inline, Snort_inline might be the better choice for you!

Snort_inline updated to 2.6.1.4 in SVN

After moving, which went fine, I now finally have some real coding time again. Over the last week I have been updating and fixing various parts of Snort_inline. The most important change was the update to Snort version 2.6.1.4, which contains security fixes. William also found an issue with the stream4inline code: the memcap that the admin sets to limit the amount of memory used by stream4 wasn’t properly enforced.

Other fixes: Snort_inline in nfqueue mode now properly honors signals, and it no longer needs the libipq library and headers. A few more changes will be committed soon. One concerns clamav, which can sometimes return an error when parsing a malformed file. Until now the spp_clamav preprocessor would issue a FatalError in that case and cause Snort_inline to die. This is obviously not desirable, so the patch makes sure that Snort_inline no longer dies, and gives the admin the option to either drop or pass traffic that can’t be inspected.

Last but not least, there will be a fix to the nfqueue code that appears to solve the ‘stuck packet problem’ we were seeing under heavy load. A number of people are currently testing my patch, so if all goes well that will be committed soon as well.

Checking out the latest code is done with the following command:

svn co https://snort-inline.svn.sourceforge.net/svnroot/snort-inline/trunk