One area of interest in the development of Suricata is hardware acceleration. GPUs are particularly interesting, as they are cheap and widely available. As a first step, we’ve been looking at using the GPU to speed up pattern matching. Since OpenCL promises to be a cross-platform, multi-vendor API for this, we looked at OpenCL first. However, we were never able to get anything stable out of it, at least not with the NVIDIA drivers on Linux. As that didn’t go anywhere, we decided to use CUDA for the time being. CUDA is, of course, NVIDIA-only. Once we have CUDA fully working we may revisit OpenCL, or look at other implementations such as AMD/ATI’s Stream API.
What we have so far is an implementation of our 2-gram SBNDM pattern matching algorithm in CUDA. The detection thread(s) currently send packets one by one to a central dispatcher thread that controls the GPU. This setup is far from ideal performance-wise, but our first goal was to get it working at all. Currently, on my desktop, CUDA actually slows things down.
In the next weeks and months we plan to do some redesigning of the CUDA implementation and its integration into the engine. We plan to send the packets to the dispatcher thread in batches, right after the decoders have determined what the payload portion of a packet is. The (separate) detection thread(s) can then process the results of the GPU when they get to a packet. By using the CUDA scanning asynchronously like this, we hope to reduce the cost of transferring packets to and from the card.
Currently the code in the tree can be activated by passing the “--enable-cuda” option to ./configure. Next, in the configuration file, enable the CUDA pattern matcher by setting the “mpm-algo” option to “b2g_cuda”. As a first test, run the CUDA unittests (assuming you enabled the building of the unittests too) by using “suricata -uUCuda”. Please note that currently running all unittests will fail if CUDA is enabled.
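Put together, the steps above look roughly like this. The “--enable-unittests” flag is an assumption on my part for how the unittest build is enabled; the rest follows the text:

```shell
# Build with CUDA support (and unittests, so the CUDA tests can be run):
./configure --enable-cuda --enable-unittests
make

# In suricata.yaml, select the CUDA pattern matcher:
#   mpm-algo: b2g_cuda

# Run the CUDA unittests as a first sanity check:
suricata -uUCuda
```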
The code is only tested on 32-bit Linux at the moment. There are some issues with 64-bit that we’re resolving right now. We expect to be updating this code continuously, so be sure to work with the most current version of the git repo at all times!
Let us know your experiences!