ModSecurity rule for Tikiwiki XSS

I just read about a Tikiwiki XSS here. Since the Vuurmuur wiki runs Tikiwiki I created a ModSecurity rule for it:

SecDefaultAction "log,deny,phase:2,status:403,t:urlDecodeUni,t:lowercase"

# XSS in remind password field
SecRule REQUEST_METHOD "^post$" "chain,msg:'TIKIWIKI lost password XSS'"
SecRule REQUEST_FILENAME "tiki-remind_password.php" "chain"
SecRule ARGS:/\s*username/ "!^(?:[a-z0-9-_]{1,37})$"

This allows only valid usernames to be entered.

Update: Ivan Ristic privately pointed me at some possible problems with the rule:

  1. the escaping of the - and _ chars is not needed, although it seems to be harmless.
  2. the $ at the end of the filename is dangerous, because Apache treats tiki-remind_password.php/xxx as tiki-remind_password.php. In this case the rule is evaded.
  3. PHP (which Tikiwiki uses) ignores leading spaces in request arguments. So it treats ' username' the same as 'username'. The rule needs to deal with that.

Thanks for your feedback Ivan!

Old rule:

SecDefaultAction "log,deny,phase:2,status:403,t:urlDecodeUni,t:lowercase"

# XSS in remind password field
SecRule REQUEST_METHOD "^post$" "chain,msg:'TIKIWIKI lost password XSS'"
SecRule REQUEST_FILENAME "tiki-remind_password.php$" "chain"
SecRule ARGS:username "!^(?:[a-z0-9\-\_]{1,37})$"

Using Modsec2sguil for HTTP transaction logging revisited

Recently I wrote about the idea to log all HTTP transactions into Sguil using my Modsec2sguil agent. I’ve implemented this in the current 0.8-dev5 release and it works very well. All events go into Sguil smoothly and I’ve not experienced slowdowns on the webserver. I’ve been running it for almost a week now, so I’d like to share my first experiences here.

I find it to be quite useful. When receiving an alert, it is perhaps more interesting to see what else was done from that IP address than to see what was blocked (unless you suspect a false positive, of course). One area where it helps is when I’m creating rules against comment spam on this blog: by seeing all the properties of a spam message I can create better rules, for example matching on broken user-agents or on weird codes inserted into the WordPress comment field.
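For example, a rule keyed on a broken user-agent might look like the sketch below. The user-agent pattern is a made-up example and should be based on what the logged transactions actually show.

# hypothetical pattern; with the Core Ruleset's t:lowercase default the match must be lowercase
SecRule REQUEST_FILENAME "wp-comments-post\.php$" "chain,log,deny,status:403,msg:'LOCAL comment spam broken user-agent'"
SecRule REQUEST_HEADERS:User-Agent "^mozilla/4\.0 \(compatible ;"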

It’s easy to search and filter on HTTP response codes because the code is a part of the RT message. For example, when searching for all HTTP 500 error codes, add the following ‘WHERE’ clause to a query:

WHERE event.signature like "%MSc 500%"

This works quite fast, although it’s best to also limit the query on properties like date and port. To get all the HTTP 500 alerts from the last few days, do something like:

WHERE event.timestamp > '2007-08-18' AND (event.dst_port = 80 OR event.dst_port = 443) AND event.signature like "%MSc 500%"

One thing that is disappointing is the inability to search the event payloads stored in the database. Technically it’s possible to create MySQL queries that search for certain strings, but this is so slow that it’s hardly usable in practice. The problem is that the database field containing the payload is not indexed. I’ll show the query I used here (ripped from David Bianco’s blog):

WHERE event.timestamp >= '2007-08-18' AND (event.dst_port = 80 OR event.dst_port = 443) AND data.data_payload like CONCAT("%", HEX("Mozilla/5.0"), "%")

If you know a more efficient query, please let me know!

Update on using realtime blacklists with ModSecurity

A few days ago I posted a blog article about stopping comment spam with ModSecurity using realtime blacklists (RBLs). While the approach was working, I had problems with rules when I tried to match on the POST method in HTTP requests.

Luckily, ModSecurity creator Ivan Ristic was quick to point out where the problem was. I’m using the Core Ruleset for ModSecurity, and one thing that ruleset does is use the ‘lowercase’ transformation. This converts the text of all arguments to lowercase, so my ^POST$ match would never be able to match. As Ivan suggested, using ^post$ solved this part.
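To illustrate, using the same SecDefaultAction line as in the Tikiwiki rules above:

SecDefaultAction "log,deny,phase:2,status:403,t:urlDecodeUni,t:lowercase"

# t:lowercase transforms the variable before the match, so this never matches:
SecRule REQUEST_METHOD "^POST$"
# while this does:
SecRule REQUEST_METHOD "^post$"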

Next, Ivan pointed out a weakness in the rules. My rules looked for /blog/wp-comments-post.php, and would be easily evaded by just using /blog//wp-comments-post.php. He suggested using the ‘normalisePath’ transformation. I did this, but I also slightly changed the rules to not look for the /blog/ part at all (maybe this makes normalisePath useless, but I decided to rather be safe than sorry).

The rules I’m using now look like this:

SecRule REQUEST_METHOD "^post$" "log,deny,chain,msg:'LOCAL comment spammer at rbl list.dsbl.org'"
SecRule REQUEST_URI "wp-(comments-post|trackback)\.php$" "chain,t:normalisePath"
SecRule REMOTE_ADDR "@rbl list.dsbl.org"

SecRule REQUEST_METHOD "^post$" "log,deny,chain,msg:'LOCAL comment spammer at rbl bl.spamcop.net'"
SecRule REQUEST_URI "wp-(comments-post|trackback)\.php$" "chain,t:normalisePath"
SecRule REMOTE_ADDR "@rbl bl.spamcop.net"

SecRule REQUEST_METHOD "^post$" "log,deny,chain,msg:'LOCAL comment spammer at rbl sbl-xbl.spamhaus.org'"
SecRule REQUEST_URI "wp-(comments-post|trackback)\.php$" "chain,t:normalisePath"
SecRule REMOTE_ADDR "@rbl sbl-xbl.spamhaus.org"

Thanks a lot Ivan Ristic for your comments!

Blocking comment spam using ModSecurity and realtime blacklists

Spammers are known to use compromised hosts from all over the world to send their messages. Many people block or score email spam based on realtime blacklists (RBLs), which contain the IP addresses of these known bad hosts. In my experience this works fairly well for email. A while ago I noticed in the ModSecurity documentation for version 2.0 that ModSecurity features an operator called rbl, which can be used to check a visitor’s IP address against an RBL. So I decided to see if I could use realtime blacklists to prevent comment spam on my blog. Turns out this works great! In this post I’ll show how to get it working.

Migrating from ModSecurity 1.9.4 to 2.0.4

ModSecurity 2 has been out for a while now, and although I have played with it some, I never found the time to upgrade my own servers. The upgrade generally went quite smoothly, even though ModSecurity 2 changed quite a bit.

First of all, there are now 5 phases where you can filter. Actually, one of them only applies to logging, so you can filter in 4 phases: the headers and the body of both the request and the response. Filtering on specific URIs can be done in phase 1 (request headers), while inspecting a POST payload requires phase 2 (request body).
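A minimal sketch to illustrate the difference (the URI and the payload string are just placeholders):

# phase 1: the request headers are in, so the URI can be inspected
SecRule REQUEST_URI "/secret-page" "phase:1,log,deny,status:403"

# phase 2: the request body has been read, so a POST payload can be inspected
SecRule REQUEST_METHOD "^POST$" "phase:2,chain,log,deny,status:403"
SecRule REQUEST_BODY "evil"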

Next, some shortcuts were removed. In 1.9.4 there was a variable called POST_PAYLOAD that enabled the user to easily match against payloads of POST requests. Now there is REQUEST_BODY, but since that can be part of non-POST requests as well, you have to use:

SecRule REQUEST_METHOD "POST" chain
SecRule REQUEST_BODY "evil"

instead of:

SecFilterSelective POST_PAYLOAD "evil"

One other change is visible above already. The keyword to create a rule has been changed from SecFilterSelective to SecRule. Many rules can be converted by just replacing the keyword, but certainly not all. A simple find/replace should not be done without a manual review!

I use a number of custom rules to protect certain parts of my server, so I needed to convert them. For most it was enough to simply replace SecFilterSelective with SecRule. For a few I had to replace the OUTPUT_STATUS variable with RESPONSE_STATUS, as it is called now.
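For example, such a conversion comes down to this (a sketch based on the 404-hiding rule from an older post below; note that RESPONSE_STATUS is only available in the response phases, hence the phase:3):

# ModSecurity 1.9.4
SecFilterSelective OUTPUT_STATUS 404 deny,status:404

# ModSecurity 2.x: same idea, but the variable is now called RESPONSE_STATUS
SecRule RESPONSE_STATUS "^404$" "phase:3,log,deny,status:404"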

For one rule, however, I had quite some trouble getting it to run correctly. This was the rule in 1.9.4 syntax:

# block wp-login.php
SecFilterSelective REMOTE_ADDR "!10.1.1.1" chain
SecFilterSelective REQUEST_URI "/wp-login.php" log,deny,redirect:http://www.inliniac.net/nologin.html

This rule makes sure only 10.1.1.1 can open the login page; everyone else is redirected to a simple HTML page containing an ‘Access Denied - Logins disabled’ message. I converted it to the following:

SecRule REMOTE_ADDR "!10.1.1.1" chain
SecRule REQUEST_URI "/wp-login.php" log,deny,redirect:http://www.inliniac.net/nologin.html

Guess what? It didn’t work. I spent quite some time trying all kinds of variations of the rule, and finally I found out what the issue was. In 1.9.4 the rule actions, like deny and redirect, could be placed on the final rule of a series of chained rules. With 2.0.4 this doesn’t work correctly: the actions have to go on the first rule of the chain. So when I changed the rules to the following, it worked:

# block wp-login.php
SecRule REMOTE_ADDR "!10.1.1.1" "chain,phase:1,log,deny,redirect:http://www.inliniac.net/nologin.html"
SecRule REQUEST_URI "/wp-login.php$"

I haven’t looked into this further to find out whether this is a bug or a feature.

The last interesting thing is the modsec2sguil script: there have been some changes to the alert files, so expect a new version of the script soon!

Detecting and blocking Phishing with Snort and ClamAV

ClamAV is a great open source virus scanner that can be used to detect viruses from Snort or Snort_inline using the ClamAV preprocessor. However, by using the anti-phishing and anti-scam signatures by SaneSecurity, this combination can also be used to detect and block phishing and scam attempts. Here is how to set it up.

I’ve decided to run this on my gateway, which is a slow machine. Because I don’t want all my traffic to slow down too much, I’m not going to run the full ClamAV defs, only the anti-phishing ones. The default location of the defs on my Debian Sarge system is /var/lib/clamav, so I’ve created a new directory called ‘/var/lib/clamav-phish’. Next I downloaded the defs from SaneSecurity. After unzipping them, the defs were ready.

Next was setting up the ClamAV preprocessor. For this I used the following line in my Snort config:

preprocessor clamav: ports 80, dbreload-time 3600, dbdir /var/lib/clamav-phish, action-drop, toclientonly

This line says that spp_clamav should look for traffic on port 80 that flows to the client. It should use the signatures in /var/lib/clamav-phish/ and it should drop the traffic if a phishing attempt is detected. It also checks once an hour to see if the defs in the directory have been updated, and reloads them if so.

William Metcalf pointed me to a site where you can test this setup. It’s called MillerSmiles.co.uk, an anti-phishing site with many examples. Opening an example shows this in my snort_inline log:

11/12-18:44:29.581771 [**] [129:1:1] (spp_clamav) Virus Found: Html.Phishing.Bank.Gen636.Sanesecurity.06051701 [**] {TCP} 209.85.50.12:80 -> 192.168.1.2:34915

The site failed to open, so it works just fine!

Rules for reported Tikiwiki vulnerabilities

Yesterday there was a mail to the bugtraq mailinglist about two types of vulnerabilities in Tikiwiki 1.9.5. The most serious is a claimed MySQL password disclosure through a special URI. The second is an XSS, also through a special URI. The message can be found here.

I wrote ‘claimed password disclosure’ because I could not reproduce it on the Tikiwiki server I run. With that I mean the password disclosure itself; I do see that Tikiwiki gives an error that reveals other information, most notably the location of the website on the local filesystem.

Anyway, since I’m running Tikiwiki I was eager to protect myself, so I started to write some rules.

XSS

Since I run ModSecurity on this server, I started with a rule for that:

SecFilterSelective REQUEST_URI "/tiki-featured_link\.php\?type" "chain,status:403,msg:'LOCAL tikiwiki featured link XSS attempt',severity:6"
SecFilterSelective REQUEST_URI "/iframe>" log,deny,status:403

I did the same for Snort, and submitted it to the Bleeding Edge ruleset, see here.

Passwd/filesystem disclosure

This one is much harder to catch in a rule. The problem is in how Tikiwiki handles the sort_mode option in a URI. The error is prevented only if the argument to sort_mode is valid (such as hits_asc or hits_desc for sorting on the number of hits). If the argument to sort_mode is empty or invalid, the disclosure condition triggers.

The only way I can think of to write rules for this is by adding some positive security filtering. In other words, create a rule that defines the valid arguments to sort_mode and drop anything else. Below is an example for one of the affected pages in Tikiwiki:

SecFilterSelective REQUEST_URI "tiki-listpages.php" chain
SecFilterSelective REQUEST_URI "sort_mode=(pageName|hits|lastModif|creator|user|version|comment|flag|versions|links|backlinks|size)_(asc|desc)" pass,skip:2

SecFilterSelective REQUEST_URI "tiki-listpages.php" "chain,msg:'LOCAL tikiwiki listpages mysql passwd disclosure attempt',severity:7"
SecFilterSelective REQUEST_URI "sort_mode=" log,deny,status:403

As you can see, there are two logical rules here, each consisting of two chained rules. The first rule defines all the valid options to sort_mode and then has the action ‘pass,skip:2’. This says that this rule should not use the default action of deny and that the next two rules should be skipped. Those next two rules drop every other use of the sort_mode option, thus blocking the attack.

I have not yet looked at doing this in Snort. According to the advisory, there are 21 different vulnerable URIs in Tikiwiki, which all have different arguments to sort_mode. So only 20 more to go! 😉
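For illustration, the same construct for a second page might look like the rules below. Note that tiki-listblogs.php and its list of valid sort_mode arguments are assumptions on my part; check the advisory and the page itself for the real values.

# hypothetical second page; verify the valid sort_mode values first
SecFilterSelective REQUEST_URI "tiki-listblogs.php" chain
SecFilterSelective REQUEST_URI "sort_mode=(title|hits|lastModif|user)_(asc|desc)" pass,skip:2

SecFilterSelective REQUEST_URI "tiki-listblogs.php" "chain,msg:'LOCAL tikiwiki listblogs mysql passwd disclosure attempt',severity:7"
SecFilterSelective REQUEST_URI "sort_mode=" log,deny,status:403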

ModSecurity: rule for latest Tikiwiki vulnerability

A few days ago a new vulnerability was reported in Tikiwiki 1.9.x, the software I use for the Vuurmuur Wiki. Luckily, the Snort.org Community rules quickly had a rule for detecting the attack. Because I also run ModSecurity on the webserver, I wanted to have protection there as well. This rule should block the attack:

SecFilterSelective POST_PAYLOAD "jhot\.php" "log,deny,status:403,msg:'LOCAL tikiwiki jhot.php attempt'"

Let’s see if I ever get a hit on it. An update for Tikiwiki has been released, so that should fix the issue completely.

ModSecurity: rules against comment spam

Lately the wiki of my Vuurmuur project has been receiving quite a lot of comment spam. Although removing the spam manually is boring work, I still don’t really mind it, because it enables me to practice with ModSecurity rules to fight it off. So far the spam seems to follow a pattern: it is posted by bots and keeps the same general layout for longer periods of time. That makes it worthwhile to spend time on creating rules against it. Yesterday a new type of spam emerged on the wiki. The following audit_log entry is for one of them; I had to edit it slightly for layout reasons.

--9075cb7a-A--
[22/Aug/2006:20:20:46 +0200] SPO4w5FhwZUAADItDEgAAAAC 195.225.177.131 34189 192.168.1.101 80
--9075cb7a-B--
POST /tiki/tiki-index.php HTTP/1.1
Host: wiki.vuurmuur.org
Referer: http://wiki.vuurmuur.org/tiki/tiki-index.php?page=Vuurmuur
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Content-Length: 1304
Content-Type: application/x-www-form-urlencoded

--9075cb7a-C--
page=Vuurmuur&cols=60&comments_sort_mode=points_desc&
comments_parentId=0&rows=6&comments_postComment=
post&comments_threshold=0&comments_threadId=0&
comments_data=Chevrolet+-+%5BURL%3Dhttp%3A%2F%2F
canonprnbf.kickme.to%2F+%5D+Chevrolet+%5B%2FURL%5D+
home+based+businesses+-+%5BURL%3Dhttp%3A%2F%2F
1964cadulj.blog.kataweb.it%2F1964cadulj%2Findex-home-
based-businesses.html+%5D+home+based+businesses+
%5B%2FURL%5D+diet+pills+-+%5BURL%3Dhttp%3A%2F%2F
aemerge4li.web1000.com%2Findex-diet-pills.html+%5D+diet+
pills+%5B%2FURL%5D+Cash+Advance+-+%5BURL%3D
http%3A%2F%2Fseamlesmnp.su.pl%2F+%5D+Cash+Advance
+%5B%2FURL%5D+Azithromycin+-+%5BURL%3D
http%3A%2F%2Fthelacefab7ne.kickme.to%2F+%5D+
Azithromycin+%5B%2FURL%5D+allergy+-+%5BURL%3D
http%3A%2F%2Fgeo.ya.com%2Faeggcelpzg%2Findex-
allergy.html+%5D+allergy+%5B%2FURL%5D+Cash+Advance+
-+%5BURL%3Dhttp%3A%2F%2Faofficeuov.ir.pl%2F+%5D+
Cash+Advance+%5B%2FURL%5D+free+games+-+%5BURL
%3Dhttp%3A%2F%2Fwww.geocities.com%2Fthebestjwb
%2Findex-free-games.html+%5D+free+games+%5B%2F
URL%5D+Ambien+-+%5BURL%3Dhttp%3A%2F%2F
beadedluwu.su.pl%2F+%5D+Ambien+%5B%2FURL%5D+altace
+-+%5BURL%3Dhttp%3A%2F%2Facellphonehy4.tripod.com
%2Findex-altace.html+%5D+altace+%5B%2FURL%5D
+&comments_reply_threadId=0&comments_title=Chevrolet&
comments_offset=0&comments_previewComment=preview&
comments_grandParentId=
--9075cb7a-F--
HTTP/1.1 200 OK
X-Powered-By: PHP/4.3.10-16
Set-Cookie: PHPSESSID=b2497fd593f56c5af4f3613ba78a7619; path=/tiki
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

--9075cb7a-H--
Stopwatch: 1156270844524739 1475657 (17717* 38503 0)
Producer: ModSecurity v1.9.2 (Apache 2.x)
Server: Apache/2.0.54 (Debian GNU/Linux) PHP/4.3.10-16 mod_ssl/2.0.54 OpenSSL/0.9.7e

--9075cb7a-Z--

A general rule against the comment spam would probably not be that hard. Just blocking "http://" would probably do the trick. However, I’m not quite willing to do this, since people might actually post real and interesting links to the wiki. I noticed that the above comment and the 30 other already posted messages all contained the uppercase word URL, so I decided to block on that. Below are the rules; as you can see they are quite an improvement over my earlier rules, because they now only match on actual comment posts 😉

SecFilterSelective REQUEST_URI "/tiki/tiki-index.php" "chain,msg:'LOCAL comment spam'"
SecFilterSelective POST_PAYLOAD "URL" chain
SecFilterSelective POST_PAYLOAD "comments_postComment=post" log,deny,status:403

An Apache restart and just wait… but not for long:

[Tue Aug 22 21:35:44 2006] [error] [client 195.225.177.131] mod_security: Access denied with code 403. Pattern match "comments_postComment=post" at POST_PAYLOAD [msg "LOCAL comment spam"] [hostname "wiki.vuurmuur.org"] [uri "/tiki/tiki-index.php"] [unique_id "VTGBGJFhwZUAADMnAWAAAAAA"]

This morning 59 attempts had already been blocked, which I would have had to remove manually without these rules. So taking the time to set up the rules really pays off.

ModSecurity: more security by obscurity

Yesterday, Philippe Baumgart showed me that my obscurity setup is not yet perfect. In fact, he could very easily enter a URL that didn’t exist, causing the webserver behind my proxy to respond with a 404. In this 404 the name and version of the webserver were exposed.

After some testing I found that adding the following to my config worked very well.

# enable output scanning in Mod Security.
SecFilterScanOutput On

# hide outgoing 404 by webserver behind proxy
SecFilterSelective OUTPUT_STATUS 404 deny,status:404

This catches outgoing 404 errors and replaces them with the 404 from the proxy. For some reason, this still didn’t look exactly like the 404 from the proxy itself, because it contained a message that an additional 404 was encountered. I solved this by changing the ErrorDocument in the Apache config:

ErrorDocument 404 “<html><head><title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested URL was not found on this server.</p></body></html>”

After this, there was no longer any difference between the 404s produced by the proxy and those produced by the webserver behind it.

Next, Phil showed me that I also leaked the version number of PHP. Since I use WordPress, hiding the fact that I use PHP is impossible and pointless, but hiding the exact version still looks like a good idea. The version was leaked in a server response header: X-Powered-By: PHP/4.4.1build1. Solving this requires the mod_headers module again:

# unset X-Powered-By to prevent leaking the PHP version
Header unset X-Powered-By

This removes the header. Thanks to Phil for doing some pen-testing 🙂