Snort_inline and out of order packets

One of the changes in Snort_inline’s stream4 modifications is that out-of-order TCP packets are treated differently than in unmodified stream4. This can cause new alerts to appear and some unexpected behaviour, so I’ll try to explain what happens here.

First of all, let me quickly explain what out of order packets are. To put it simply, TCP packets are sent out by the source host in a specific order but can arrive in a different order at the destination. Packet loss, link saturation and routing issues are among the many things that can cause this. A Snort_inline specific issue is that when Snort_inline can’t keep up with the packets it needs to process, it drops packets, which causes packet loss. These packets then have to be resent by the sending host.

Out of order packets become a problem when dealing with stream reassembly. Stream reassembly basically means putting the data from all packets back into the right order to get the original data as it was sent. We can’t do stream reassembly if we don’t have all the packets. Unmodified stream4 basically ignores gaps in the stream. Designed for passively listening to traffic, it has to deal with packet loss differently than Snort_inline does.

Next, some definitions of this functionality in Snort_inline.

Out-of-order packets: the number of packets we have queued for a stream that are out of order, meaning they have a higher sequence number than the next in-sequence packet we are expecting.
Out-of-order bytes: the number of bytes of the combined data of the out-of-order packets in the stream.
Sequence number hole: a gap between two packets that can be closed by one or more missing packets.

To prevent Snort_inline from using too much memory on bad connections, or when an attacker sends lots of out of order packets, Snort_inline can enforce limits to protect itself. Snort_inline can even force a stream to be completely in order by dropping all packets that are out of order. Sadly, this has a bad effect on the performance of the connections, so you can set limits that strike a balance between performance and protection.

When Snort_inline hits these limits, it will (optionally) fire alerts that look like this:

(spp_stream4) TCP out-of-order packets limit reached for stream
(spp_stream4) TCP out-of-order bytes limit reached for stream
(spp_stream4) TCP sequence number holes limit reached for stream

You can disable the alerts by adding the following option to the preprocessor stream4 line: disable_ooo_alerts. The limits themselves can be adjusted with the following options: max_seq_holes 2, max_ooo_pkts 25, max_ooo_bytes 7000. These are the values I currently use on my home gateway. I got the idea of implementing these limits from this paper by Vern Paxson. However, his suggestion of at most one sequence hole per stream (or even per host) seems a bit optimistic to me. Maybe DSL has more packet loss than the university links he studied.
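
To give an idea of what that looks like in the configuration, here is a sketch of the relevant part of a stream4 preprocessor line using the values above (keep whatever other stream4 options you already have, and append disable_ooo_alerts if you want the limits enforced silently):

preprocessor stream4: max_seq_holes 2, max_ooo_pkts 25, max_ooo_bytes 7000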

The default settings in Snort_inline were chosen somewhat arbitrarily, so they may not fit your usage. Like with the wscaling, please let me know in a comment what values you use!

TCP Window scaling in Snort_inline

The TCP window field in the TCP header is only 16 bits, so the maximum window size it can express is only 64KB. A long time ago this was enough, but nowadays it isn’t, by far. Luckily, this is something the window scaling option fixes. Window scaling is very common these days; your PC or laptop probably uses it by default. Snort’s stream4, however, does not support it. This means that when tracking and reassembling streams, Snort for most connections has no idea which data is in window and which is out of window. To make matters worse, packets that are in window when wscaling is used, but appear out of window when the wscaling is not accounted for, are never used in the reassembly process. This makes Snort evadable.

One of the goals when creating the stream4inline modifications was to be able to drop on all TCP anomalies stream4 detects. For this, support for window scaling was added to stream4, so Snort_inline would be able to drop out of window packets. There is however a big problem with window scaling: the TCP window can grow to a maximum of 1GB (with the maximum wscale value of 14). Stream4 would thus theoretically have to queue up to 1GB of packet data per stream. While this is unlikely to happen on normal connections, it is possible, and it could be used by an attacker to attack Snort_inline itself.
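
To put a number on that worst case: the 16-bit window field maxes out at 65535, so with a wscale of 14 the effective window is 65535 * 2^14 = 1,073,725,440 bytes, just under 1GB, per direction of a single stream.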

To prevent this, I added an option to stream4inline that allows the administrator to set a maximum allowable wscale setting. Any higher setting is normalized away: the packet is modified and the wscale lowered to the maximum that is allowed. Each host then thinks the other only accepts the lower wscale and uses that setting. This can however have some unexpected consequences. If the link that Snort_inline deals with is high speed, high latency or both, setting the wscale value too low can result in serious performance degradation. The issue shows up as connections that are (way) slower than usual. In these cases the wscale value needs to be increased.
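
As an example of what that normalization means in practice: with the maximum set to 5, a host sending a SYN that advertises a wscale of 9 would have that option rewritten to 5, so both ends scale their windows by a factor of 2^5 = 32 instead of 2^9 = 512.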

The default value in Snort_inline 2.6.1.5 is a wscale of 2, which is quite low but works fine on my home DSL connection. To change the setting add ‘norm_wscale_max 5’ to your stream4 configuration line. This will allow for a wscale of up to 5. The maximum value is 14. I’d be interested in what values people use on what types and speeds of lines, so please let me know! We can use it to suggest values in the docs or to set a less insane default value 🙂
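
For example, combined with the out-of-order limits from earlier, the stream4 line could look something like this:

preprocessor stream4: max_seq_holes 2, max_ooo_pkts 25, max_ooo_bytes 7000, norm_wscale_max 5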

Snort_inline and TCP Segmentation Offloading

For a short while now I have had a gigabit setup at home. My laptop has an Intel e1000 NIC, my desktop a Broadcom NIC. While playing with Snort_inline and netpipe-tcp, I noticed something odd. I saw TCP packets that had the ‘Don’t Fragment’ bit set, but were still bigger than the MTU of the link. Snort_inline read packets of up to 26KB from the queue, and wireshark and tcpdump were seeing the packets as well. This only happened for outgoing packets on the e1000 NIC. The receiving PC saw the data split up into multiple packets that honored the MTU. This got me thinking that some form of offloading must be taking place, and indeed this was the case:

# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on

It can be disabled by using the following command:

# ethtool -K eth0 tso off
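
Note that this change is not persistent: after a reboot or a driver reload TSO is back on, so the command has to be run again, for example from the scripts that bring up the interface.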

The large packets caused problems in my stream4inline modifications of Stream4. The code can’t handle the case where the packet is bigger than the sliding window size. So I have some work to do 😉