Snort_inline patch updated to 2.6.1.2

With the recent Snort vulnerabilities we had to choose between backporting the fixes to our Snort_inline 2.6.0.2 patch and upgrading to 2.6.1.2. Upgrading makes the most sense, since SourceFire improves Snort with every release, but because the upgrade process has been very painful for the last couple of releases, we weren’t really looking forward to it.

Earlier I wrote about my testing with Subversion for Snort_inline, and I found that using Subversion made the upgrade procedure much easier and much less time consuming. So upgrading it was. In general, few changes to the Snort_inline patch were required.

One thing, however, messed up the way the new stream4inline code works: session dropping, a new option in Snort’s Stream4 that is enabled by default. The way it works is that when a packet is dropped, the session it belongs to is marked so that every subsequent packet in that session is dropped as well.

This makes sense in many cases, but not in all. In stream4inline, we have created options to drop out-of-order packets, out-of-window packets, bad reset packets, and more. Generally, in these cases we want to just drop those individual packets, not kill off the session.

Killing the session on bad reset packets, especially, would make it easier for third parties to kill sessions. One might argue that sessions writing outside of the window deserve to be killed, but for the out-of-order limits this reasoning doesn’t hold.

The out-of-order limits are enforced not because the traffic is bad, but to prevent resource starvation attacks against Snort_inline’s stream reassembler. Out-of-order packets have to be put in the right order before processing, which takes CPU time. They also have to be queued for reordering, which takes memory.

By setting out-of-order limits, the burden of getting the stream in order is placed on the sender of the packets, who will have to retransmit the right packets first before sending more out-of-order packets. In this case, we don’t want InlineDrop() to kill the entire session. To deal with this, we introduced InlineDropPacketOnly(), which drops just the packet.

An official beta should be out RealSoonNow(tm) 😉

Setting up Subversion for Snort_inline

One reason for the slow development of Snort_inline is that we still weren’t using a version control system. Being sick of this, I decided to set up a private Subversion server to see how we could best use one. One thing that complicates the use of such a system is the fact that we maintain a patch on top of source code not maintained by ourselves, so the system must be able to deal with upstream source code updates.

In the excellent book Practical Subversion, Garrett Rooney suggests the use of so-called vendor branches. In this setup the vanilla sources of upstream Snort live in the svn repository as well. I decided to experiment with this, and this is how I found it to work.

There are two trees in the repository:

vendor/
trunk/

In vendor, the vanilla source is imported, with tags to the specific releases. So for Snort you will have:

vendor/current
vendor/2.6.0.2

The trunk is first initialized as a copy of vendor/current, after which the Snort_inline specific code is added to the trunk. All modifications to our Snort_inline patch will be done in trunk/.

Where this approach shines is when there is a new upstream version. The procedure is this:

  1. checkout vendor/current
  2. update your working copy to the new version
  3. commit
  4. create a new tag for the new version.
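
Assuming the Subversion command-line tools are available, the four steps can be rehearsed against a throwaway local repository (all paths and version numbers here are examples, not our real repository):

```shell
# scratch repository standing in for the real one (example only)
REPO_DIR="$(mktemp -d)/repo"
svnadmin create "$REPO_DIR"
REPO="file://$REPO_DIR"
svn mkdir -q -m "initial layout" --parents "$REPO/vendor/current" "$REPO/trunk"

# 1. checkout vendor/current
svn checkout -q "$REPO/vendor/current" wc

# 2. update the working copy to the new version
#    (in reality: unpack the new tarball over it, svn add/rm changed files)
echo "snort 2.6.1.2 sources would go here" > wc/README
svn add -q wc/README

# 3. commit
svn commit -q -m "update vendor/current to snort 2.6.1.2" wc

# 4. create a new tag for the new version
svn copy -q -m "tag 2.6.1.2" "$REPO/vendor/current" "$REPO/vendor/2.6.1.2"

svn ls "$REPO/vendor"    # lists 2.6.1.2/ and current/
```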

For going to 2.6.1.2, this also meant removing a few files. After this, you have:

vendor/current
vendor/2.6.0.2
vendor/2.6.1.2

After this, check out the trunk and merge the two vanilla trees (2.6.0.2 and 2.6.1.2) into it. This updates our Snort_inline code with the new ‘vendor’ version. It will create a number of conflicts that have to be resolved manually (because of our changes in Snort_inline), but resolving them turns out to be a lot simpler and less time consuming than our old method of just copy-pasting the Snort_inline code into the new Snort release.
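
The whole cycle, merge included, can be sketched against a scratch local repository (file names, contents, and version numbers are invented for illustration):

```shell
# throwaway repository with a vendor branch and a trunk (example only)
TMP="$(mktemp -d)"
svnadmin create "$TMP/repo"
REPO="file://$TMP/repo"
svn mkdir -q -m "layout" --parents "$REPO/vendor/current" "$REPO/trunk"

# vendor/current holds the vanilla 2.6.0.2 source; tag it
svn checkout -q "$REPO/vendor/current" vcur
printf 'upstream line\nmore upstream\nend of file\n' > vcur/stream4.c
svn add -q vcur/stream4.c
svn commit -q -m "import snort 2.6.0.2" vcur
svn copy -q -m "tag 2.6.0.2" "$REPO/vendor/current" "$REPO/vendor/2.6.0.2"

# trunk starts as a copy of vendor/current, plus our inline changes
svn copy -q -m "init trunk" "$REPO/vendor/current" "$REPO/trunk/src"
svn checkout -q "$REPO/trunk" trunk-wc
printf 'upstream line\nmore upstream\nend of file\ninline changes\n' > trunk-wc/src/stream4.c
svn commit -q -m "add Snort_inline code" trunk-wc

# new upstream release lands in vendor/current and gets its own tag
printf 'upstream line CHANGED in 2.6.1.2\nmore upstream\nend of file\n' > vcur/stream4.c
svn commit -q -m "import snort 2.6.1.2" vcur
svn copy -q -m "tag 2.6.1.2" "$REPO/vendor/current" "$REPO/vendor/2.6.1.2"

# merge the delta between the two vanilla trees into the trunk working copy
cd trunk-wc
svn merge "$REPO/vendor/2.6.0.2" "$REPO/vendor/2.6.1.2" src
cat src/stream4.c   # should contain both the upstream change and our inline line
```

In the real trees the merge hits conflicts wherever upstream touched code we also changed; those are resolved by hand before committing.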

Anyway, since Will and I were happy about this approach, we have decided to move to the SourceForge.net SVN server, which now contains a trunk with Snort_inline code, soon to be released as Snort_inline 2.6.1.2 BETA 1. But don’t wait for us, you can also checkout your own copy from:

https://snort-inline.svn.sourceforge.net/svnroot/snort-inline/trunk/

Check it out! 🙂

Snort_inline 2.6 development update

Development of Snort_inline 2.6 experienced a bit of a setback when William and I discovered that the new Stream4inline code had some issues with detecting certain attacks. Since we were scanning the reassembled stream, certain detection plugins didn’t work as expected. Basically, every detection plugin that uses absolute offsets from the packet start is messed up when we scan only the reassembled stream.

This is because the start of the reassembled stream doesn’t match the start of the last packet added to the stream. Most TCP sigs use offsets that match against the start of the stream, or relative matches. For example, a rule like:

alert tcp any any -> $HTTP_SERVERS 80 (msg:"GET request"; content:"GET"; offset:0; depth:3; sid:12345678; rev:1;)

matches against the start of the stream, since ‘GET’ will be the first data on the stream. In this case scanning only the reassembled stream would have worked fine, because the start of the reassembled stream matches the start of the stream. So offset:0 in the reassembled stream points to the stream start, which is what we want here.

Things are different, however, when we try to match against midstream packets where the rule matches against the actual packet start. One might argue that this is a bad idea in most cases, and I agree: since TCP moves data as a stream and not as packets, hardly any assumptions can be made about packet sizes, etc. Most TCP rules don’t do this, so the problem is fairly limited. An example of a rule that does is the eDonkey detection sigs in the Bleeding ruleset.

As a solution we came up with the following: we scan every packet both individually and in its reassembled stream. This is certainly more expensive, but the only way to avoid the evasion problem. I think we can probably add an option to make this behaviour optional, so the admin can choose to be extra safe at the cost of some performance.

Detecting and blocking Phishing with Snort and ClamAV

ClamAV is a great open source virus scanner that can be used to detect viruses from Snort or Snort_inline through the ClamAV preprocessor. However, by using the anti-phishing and anti-scam signatures from SaneSecurity, this combination can also be used to detect and block phishing and scam attempts. Here is how to set it up.

I’ve decided to run this on my gateway, which is a slow machine. Because I don’t want all my traffic to slow down too much, I’m not going to run the regular ClamAV defs, only the anti-phishing ones. The default location of the defs on my Debian Sarge system is /var/lib/clamav, so I created a new directory called /var/lib/clamav-phish. Next I downloaded the defs from SaneSecurity; after unzipping them, the defs were ready.

Next was setting up the ClamAV preprocessor, with this line in my snort config:

preprocessor clamav: ports 80, dbreload-time 3600, dbdir /var/lib/clamav-phish, action-drop, toclientonly

This line says that spp_clamav should look for traffic on port 80 that flows to the client. It should use the signatures in /var/lib/clamav-phish/ and it should drop the traffic if a phishing attempt is detected. It also checks once an hour to see if the defs in the directory have been updated, and reloads them if so.

William Metcalf pointed me to a site where you can test this setup: MillerSmiles.co.uk, an anti-phishing site with many examples. Opening an example shows this in my snort_inline log:

11/12-18:44:29.581771 [**] [129:1:1] (spp_clamav) Virus Found: Html.Phishing.Bank.Gen636.Sanesecurity.06051701 [**] {TCP} 209.85.50.12:80 -> 192.168.1.2:34915

The site failed to open, so it works just fine!

Update on Snort_inline 2.6.0.2 development

I have spent the last week trying to find a very annoying bug that caused Snort_inline to go to 100% CPU usage on certain traffic. It kept working, but my P3 500Mhz home gateway slowed down to between 2kb/s and 25kb/s, while it normally handles the full 325kb/s of my DSL line at around 25% CPU.

Snort comes with a number of performance measurement options. In 2.6, --enable-perfprofiling was introduced. Also, --enable-profile builds Snort for use with gprof. Besides those, you can use strace and ltrace with the -c option to see the amount of time spent in system and library calls.

I already knew the problem was related to my new Stream4 code, since running Snort_inline without the ‘stream4inline’ option made the problem go away. So my performance debugging and code reviews were focused on that code. However, the performance statistics showed no functions in Stream4 that took large amounts of time.

New ClamAV patch for Snort 2.6.0.2

Okay, so I’m fired from patch making because I screwed up the last patch: I never bothered to test it with Snort in inline mode. It didn’t work, because we included all kinds of Snort_inline-specific features in the preprocessor. I have updated the patch.

Get it here: http://www.inliniac.net/files/061106-snort-2.6.0.2-clamav.diff.gz

Will, am I re-hired now? Pretty please??? 😉

Rules for reported Tikiwiki vulnerabilities

Yesterday there was a mail to the bugtraq mailing list about two kinds of vulnerabilities in Tikiwiki 1.9.5. The most serious is a claimed MySQL password disclosure through a special URI. The second is an XSS, also through a special URI. The message can be found here.

I wrote ‘claimed password disclosure’ because I could not reproduce it on the Tikiwiki server I run. By that I mean the password disclosure itself: I do see that Tikiwiki gives an error that reveals other information, most notably the location of the website on the local filesystem.

Anyway, since I’m running Tikiwiki I was eager to protect myself, so I started to write some rules.

XSS

Since I run ModSecurity on this server, I started with a rule for that:

SecFilterSelective REQUEST_URI "/tiki-featured_link.php?type" "chain,status:403,msg:'LOCAL tikiwiki featured link XSS attempt',severity:6"
SecFilterSelective REQUEST_URI "/iframe>" log,deny,status:403

I did the same for Snort, and submitted it to the Bleeding Edge ruleset, see here.

Passwd/filesystem disclosure

This one is much harder to catch in a rule. The problem is in how Tikiwiki handles the sort_mode option in a URI. Only if the argument to sort_mode is valid (such as hits_asc or hits_desc for sorting on the number of hits) is the error prevented. If the argument to sort_mode is empty or invalid, the disclosure condition triggers.

The only way I can think of to write rules for this is by adding some positive security filtering. In other words, create a rule that defines the valid arguments to sort_mode and drop anything else. Below is an example of one of the affected pages in Tikiwiki:

SecFilterSelective REQUEST_URI "tiki-listpages.php" chain
SecFilterSelective REQUEST_URI "sort_mode=(pageName|hits|lastModif|creator|user|version|comment|flag|versions|links|backlinks|size)_(asc|desc)" pass,skip:2

SecFilterSelective REQUEST_URI "tiki-listpages.php" "chain,msg:'LOCAL tikiwiki listpages mysql passwd disclosure attempt',severity:7"
SecFilterSelective REQUEST_URI "sort_mode=" log,deny,status:403

As you can see, there are two logical rules, each consisting of two chained rules. The first defines all the valid arguments to sort_mode and ends with the action ‘pass,skip:2’, which says that this rule should not use the default deny action and that the next two rules should be skipped. Those next two rules deny every remaining use of the sort_mode option, thus blocking the attack.

I have not yet looked at doing this in Snort. According to the advisory, there are 21 different vulnerable URIs in Tikiwiki, which all take different arguments to sort_mode. So only 20 more to go! 😉

Snort_inline: getting closer to 2.6.0.2

I’m back from my vacation which was very nice. Hardly did any geek stuff, other than meeting up with Philippe, who lives in Paris. It was the first time I met someone I got to know through the Vuurmuur project 🙂

So with Snort_inline things aren’t moving as fast as I hoped, but there is certainly progress. I’m currently hunting a few bugs. First of all, I’ve seen it segfault on me once; sadly I had forgotten to enable coredumps, so there’s no clue as to why. Second, William and I have been ironing out some issues where the new Stream4 mode was getting mixed up with the old one. I think these are pretty much taken care of now. Third, there is a bug where a unified alert fired by http_inspect doesn’t contain a payload. Finally, I’m hunting what appears to be a heisenbug in the new stream reassembly, because I haven’t encountered it once since I started actually looking for it.

Still it has been running on my gateway with good stability and performance for a few weeks now. So I think that if we can find the http_inspect issue, we should be ready for a beta release…

Snort_inline: running Snort_inline 2.6.0.2

No, it’s not released. But it will be soon… really!

William has done most of the hard work of porting our Snort_inline patch from 2.4.5 to 2.6. I have mostly been working on improving the stream4inline modification, which I have written about before. Like the stream4inline modification in Snort_inline 2.4.5, it scans the stream in a sliding window, making it possible to drop an attack detected in the reassembled stream. The new code does the same but is much faster, at the cost of higher memory usage.

Another interesting feature is that it keeps track of the number of sequence holes in a stream and can force a stream to get back in order. Limits can be enforced on the number of out-of-order packets and bytes, and also on the number of simultaneous sequence number holes. This was inspired by the paper by Sarang Dharmapurikar and Vern Paxson.

Last but not least, it adds window scaling support to Stream4. Since window scaling makes window sizes of up to a gigabyte possible, I’ve added a normalization function as well, which can force all streams to use a configurable maximum wscale setting.

But it is running on my gateway now, which is also the gateway leading to this blog, so if it is unavailable to you, you’ve hit a bug 😉