Sguil 0.7.0 CVS client on HeX 1.0.1

For the last few days I’ve been playing with the HeX live-cd. It boots fine on my Lenovo T60 laptop, and after about a minute a nice graphical interface awaits me. I really love the artwork of this project.

There are many security tools installed, including the Sguil client, but that is the 0.6.1 version. As I have written before, I’m running 0.7.0 CVS here, so I needed the 0.7.0 CVS client. Luckily, it’s easy to install.

^^analyzt@raWPacket ~ ->
[HeX]$ cvs -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil login
Logging in to :pserver:anonymous@sguil.cvs.sourceforge.net:2401/cvsroot/sguil
CVS password:
cvs login: warning: failed to open /home/analyzt/.cvspass for reading: No such file or directory
^^analyzt@raWPacket ~ ->
[HeX]$ cvs -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil co sguil
cvs checkout: Updating sguil
U sguil/README

U sguil/web/lib/geoip.inc
^^analyzt@raWPacket ~ ->
[HeX]$

Before starting the client, remove the sguil.conf in /home/analyzt/, or change the SGUILLIB setting in it. It took me quite some time and some help to figure this out. Many thanks to Bamm Visscher and David Bianco! One last thing before starting the client: remove /home/analyzt/.sguilrc. If I didn’t, I got an error when logging in: “unable to write preferences to /home/analyzt/.sguilrc”. The fonts also looked very ugly, and trying to change the font resulted in an error as well.
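For reference, the cleanup boils down to something like this (assuming the default analyzt home directory on the HeX live-cd):

# remove the stale 0.6.1 config and preferences before the first 0.7.0 run
rm -f /home/analyzt/sguil.conf /home/analyzt/.sguilrc
# alternatively, keep sguil.conf and adjust its SGUILLIB setting instead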

Starting the client by entering sguil/client and issuing ./sguil.tk doesn’t work:

^^analyzt@raWPacket ~ ->
[HeX]$ ./sguil.tk
exec: wish: not found
^^analyzt@raWPacket ~ ->
[HeX]$

The solution is simple: just explicitly call sguil.tk with the installed wish:

^^analyzt@raWPacket ~ ->
[HeX]$ /usr/local/bin/wish8.4 sguil.tk

Then it works. Next I will try to make sure it somehow survives a reboot, and figure out how to enable the wireless LAN.
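As an alternative to typing the full path every time, a symlink should do the trick as well. I haven’t tried this on HeX, and it assumes /usr/local/bin is in the default PATH:

# run as root: make "wish" resolvable so ./sguil.tk can be started directly
ln -s /usr/local/bin/wish8.4 /usr/local/bin/wish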

Sguil 0.7-CVS client on Ubuntu Gutsy

Last week I installed Ubuntu Gutsy on my laptop. I did a clean install, which went fine. Of course, I needed the Sguil client on it as well. Gutsy has all the required libraries in its repositories. Install the following packages (see the apt-get one-liner after the list):

tcl8.4
tclx8.4
tcllib
tk8.4
iwidgets4
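
In one go that would be something like the following (some of these packages may require the universe repository to be enabled):

sudo apt-get install tcl8.4 tclx8.4 tcllib tk8.4 iwidgets4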

Checking out the Sguil client is easy (make sure you have ‘cvs’ installed):

cvs -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil login
cvs -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil co sguil

After this the client runs fine on my system.
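
For reference, starting the client from the checkout should then be a matter of this (after pointing the SERVERHOST setting in sguil/client/sguil.conf at your Sguil server):

cd sguil/client
./sguil.tk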

Follow-up on Sguil security

In the discussion about my post on Sguil security, a number of ideas and general thoughts have come up. I’d like to write about them here so we can discuss them further. There seems to be consensus that when a sensor is rooted, there is nothing we can do to prevent the injection of bogus data, as long as that data isn’t malformed.

Having the agent authenticate itself is one solution, but it relies on the agent credentials remaining secret. When a webserver is rooted, the attacker will have access to the credentials, since they are stored on the webserver itself. This approach does provide an extra layer of defense, but local root exploits aren’t uncommon, so it remains risky. It may still be worth the effort though.

One idea that came up was a proxy, in two forms. The first is a Sguil proxy filtering the Sguil agent-server protocol. It would sit between the Sguil server and the agent to prevent the attacker from going after the server directly. The proxy would be able to filter the communications and could perhaps throttle the rate of events to protect the server against DoS. It might also block an agent completely if it sends malformed commands. Such a proxy could be positioned on the border between the monitored and management networks.

David Bianco suggested a different proxy-like setup, in which the actual sensor would receive the raw data over the network in some (secure) way from the webserver. This way the sensor talking to the Sguil server wouldn’t have to run on the webserver itself. I think this is a good idea for the webservers. It does add complexity, because there has to be some form of communication between the sensor and the webserver. At the same time it could simplify things, since a single sensor box can deal with multiple webservers. In this setup the agent credentials won’t be stored on the webserver, which is another plus. Much depends on how the webserver-to-sensor communication would be implemented.

Neither idea can prevent an attacker who owns the webserver from inserting bogus data. In both cases the proxy can, and probably will, become a target itself.

Even though the risk increases with agents running on webservers, it also exists for other types of sensors. If a hole is found in Snort, Snort_inline, Sancp or other tools Sguil uses, the effects could be the same.

More ideas and suggestions are welcome!

Thoughts on Sguil security

Sguil is built around a server and sensors. Traditionally the sensors are passive monitoring agents running Snort and a few other tools. Best practice was (and still is) to separate the management network of these sensors and the server from the monitored network(s). This way it would be fairly hard for an attacker to get a shot at the Sguil server.

Sguil, of course, would be an extremely interesting target for hackers. It contains a lot of information about the monitored network, and it has realtime access to all network traffic. A hacker may also be interested in shutting Sguil down to avoid detection.

Securing Sguil is therefore very important. Sguil has a number of defenses. It separates agent and client access, so an agent cannot issue commands that only a client should be able to issue. Clients need to authenticate to the server, and Sguil provides the option to run both the agent-server and the client-server communications over SSL. Finally, Sguil can be configured to only accept connections from certain IP addresses, separately for agents and clients.

With my Modsec2sguil agent, part of these defenses goes overboard. Modsec2sguil is likely to run directly on webservers, so the separation between the monitored network and the management network is no longer possible. Webservers tend to get hacked a lot more often than IDS systems, which adds risk. A user who gains access to the webserver can see that a Sguil agent is active and will be able to connect to the Sguil server directly.

When all of Sguil’s security measures are in place, the only thing this attacker can do is act as an agent towards Sguil. No authentication is required (or possible) for that. The risk is that the attacker can send bogus events to flood the analyst and Sguil itself, possibly even DoSsing the server. By sending malformed data, the attacker could also try to crash the server.

So what can be done about it? Well, I really can’t think of much other than adding authentication for agents in addition to the clients. This would provide an extra hurdle for the attacker, because he has to get the right credentials. However, these credentials have to be in the agent configuration on the sensor, so if the attacker manages to escalate his privileges he will get access to them and this defense will fail. Still, it’s another layer of defense, so I think it’s better than nothing.

I’m very much interested in hearing other opinions about this!

Using Modsec2sguil for HTTP transaction logging revisited

Recently I wrote about the idea of logging all HTTP transactions into Sguil using my Modsec2sguil agent. I’ve implemented this in the current 0.8-dev5 release and it works very well. All events go into Sguil smoothly and I haven’t experienced slowdowns on the webserver. I’ve been running it for almost a week now, so I’d like to share my first experiences here.

I find it to be quite useful. When receiving an alert, it is perhaps more interesting to see what else was done from that IP address than to see what was blocked (unless you suspect a false positive, of course). One area where it is particularly useful is creating rules against comment spam on this blog. By seeing all the properties of a spam message I can create better rules, for example on broken user-agents or weird codes inserted into the comment field of WordPress.

It’s easy to search and filter on HTTP response codes, because the code is part of the RT message. For example, to search for all HTTP 500 error codes, add the following ‘WHERE’ clause to a query:

WHERE event.signature like "%MSc 500%"

This works quite fast, although you’d best limit the query on properties like date and port as well. To get all the HTTP 500 alerts from the last few days, do something like:

WHERE event.timestamp > '2007-08-18' AND (event.dst_port = 80 OR event.dst_port = 443) AND event.signature like "%MSc 500%"

One thing that is disappointing is the inability to search the event payloads stored in the database. Technically it’s possible to create MySQL queries that search for certain strings, but the process is so slow that it’s hardly usable in practice. The problem is that the database field containing the payload is not indexed. I’ll show the query I used here (ripped from David Bianco’s blog):

WHERE event.timestamp >= '2007-08-18' AND (event.dst_port = 80 OR event.dst_port = 443) AND data.data_payload like CONCAT("%", HEX("Mozilla/5.0"), "%")

If you know a more efficient query, please let me know!

Using Modsec2sguil for HTTP transaction logging

Modsec2sguil is currently configured to send alerts to Sguil. ModSecurity can be configured to log any event or transaction, including 200 OK, 302 Redirect, etc. Modsec2sguil distinguishes between alerts and other events by only processing HTTP codes of 400 and higher. Since 0.8-dev2 there is a configuration directive to prevent certain codes, such as 404, from being treated as an alert.

Now I have the following idea. Since ModSecurity can log all events with details of the request headers, response headers and POST message body, it may be interesting to just send all of these events to Sguil. They should not appear as alerts, but having them in the database could be interesting. I know the same data can be accessed using flow data and full packet captures, but having it in the database makes querying a lot easier and keeps it available for longer.

The possible problems are mostly the performance hit the webserver may take from sending all these events to Sguil, and the storage requirements in Sguil’s database. I estimate the events to be about 1 KB in size on average, so on a busy site this may cause the database to grow very rapidly. Of course this behavior would be optional, so it can be disabled.

Any thoughts on this idea?