This patch introduces wrapper functions around the allocation functions
so that a global HTP memcap can be enforced. A simple substitution of
the functions was not enough because the allocated size needs to be known
when freeing and reallocating.
The value of the memcap can be set in the YAML and defaults to
unlimited (0) to avoid surprising users.
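As an illustration, a minimal sketch of how such a memcap might look in the
YAML; the exact option name and its placement under the http protocol section
are assumptions, not part of this patch:
app-layer:
  protocols:
    http:
      memcap: 64mb    # 0 or unset keeps the default, unlimited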
This commit reworks app-layer.[ch], app-layer-detect-proto.[ch] and app-layer-parser.[ch].
Things addressed in this commit:
- Introduces a proper separation between the protocol detection phase and the
parser phase.
- The dns app layer is now registered such that we don't use "dnstcp" and
"dnsudp" in the rules. A user who previously wrote a rule like this -
"alert dnstcp....." or
"alert dnsudp....."
would now have to use,
alert dns (ipproto:tcp;) or
alert udp (app-layer-protocol:dns;) or
alert ip (ipproto:udp; app-layer-protocol:dns;)
The same change applies to another such protocol, dcerpc.
- The app layer parser api now takes in the ipproto while registering
callbacks.
- The app inspection/detection engine also takes an ipproto.
- All app layer parser functions now take the direction as STREAM_TOSERVER or
STREAM_TOCLIENT, as opposed to the 0 or 1 that some of the functions
previously took.
- FlowInitialize() and FlowRecycle() now reset proto to 0. This is
needed by unittests, which try to clean the flow by calling
AppLayerParserCleanupParserState() to clean the app state; the app layer
now needs an ipproto to figure out which api to call internally to clean
the state, and if the ipproto is 0 it returns without trying to clean
the state.
- A lot of unittests are now updated so that, if they use a flow and
need the app layer, the flow ipproto is set.
- The "app-layer" section in the yaml conf has also been updated as well.
Add two new mPIPE load-balancing configuration options in suricata.yaml.
1) "sticky" which keep sending flows to one CPU, but if that queue is full,
don't drop the packet, move the flow to the least loaded queue.
2) Round-robin, which always picks the least full input queue for each
packet.
Allow configuring the number of packets in the input queue (iqueue) in
suricata.yaml.
For the mpipe.buckets configuration, which must be a power of 2, round
up to the next power of two rather than reporting an error.
Added mpipe.min-buckets, which defaults to 256, so if the requested number
of buckets can't be allocated, Suricata will keep dividing by 2 until either
it succeeds in allocating buckets, or reaches the minimum number of buckets
and fails.
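Putting these options together, a hypothetical mpipe section could look like
the sketch below; only 'buckets' and 'min-buckets' are named by this change,
the 'load-balance' and 'iqueue-packets' key names are assumptions:
mpipe:
  load-balance: sticky    # or: round-robin, static
  iqueue-packets: 2048    # number of packets in the input queue
  buckets: 4000           # not a power of 2, so rounded up to 4096
  min-buckets: 256        # give up once dividing by 2 reaches this value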
emerging-virus.rules is no longer present in the ET ruleset downloaded
by 'make install-rules'. This patch removes it from the list to avoid
an error message.
This patch adds support for checksum-checks in the pcap-file running
mode. This is the same functionality as the one already available for
live interfaces.
It can be setup in the YAML:
pcap-file:
  checksum-checks: auto
A message is displayed for small pcaps to warn that the rate of invalid
checksums in the pcap file is high and that checksum-checks could
be set to no.
The meta-field-option allows setting a hard limit on the size of request
and response fields in HTTP. In requests this applies to the request
line and headers, not the body. In responses, this applies to the
response line and headers, not the body.
Libhtp uses a default limit of 18k. If this limit is reached an event is
raised.
Ticket 986.
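A sketch of how the limit could be raised per personality; the option name
'meta-field-limit' and its placement under the libhtp default-config are assumptions:
libhtp:
  default-config:
    meta-field-limit: 18432    # bytes, hard limit for request/response lines and headers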
Strip the 'proxy' parts from the normalized uri as inspected by http_uri,
urilen, pcre /U and others.
In a request line like:
GET http://suricata-ids.org/blah/ HTTP/1.1
the normalized URI will now be:
/blah/
This doesn't affect http_raw_uri, so matching the hostname, etc. is still
possible through that keyword.
Additionally, a new per HTTP 'personality' option was added to change
this behavior: "uri-include-all":
uri-include-all: <true|false>
Include all parts of the URI. By default the
'scheme', username/password, hostname and port
are excluded. Setting this option to true adds
all of them to the normalized uri as inspected
by http_uri, urilen, pcre with /U and the other
keywords that inspect the normalized uri.
Note that this does not affect http_raw_uri.
So adding uri-include-all:true to all personalities in the yaml will
restore the old default behavior.
Ticket 1008.
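For example, restoring the old behavior for the default personality could look
like the following sketch (the surrounding libhtp layout is assumed):
libhtp:
  default-config:
    personality: IDS
    uri-include-all: true    # keep scheme, user info, hostname and port in the normalized uri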
Take VLAN IDs into account when re-assembling fragments.
Prevents fragments that would otherwise match, but are on different
VLANs, from being reassembled with each other.
This patch updates the reject code to send the packet on the interface
it came from when 'host-mode' is set to 'sniffer-only'. When
'host-mode' is set to 'router', the reject packet is sent via
the routing interface.
This should fix #957.
This variable can be used to indicate to Suricata whether the host
is running as a router or in sniffing-only mode.
It will be used, at least, to determine which interfaces are used to
send reject messages.
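A sketch of the corresponding YAML setting; the 'auto' value shown here is an
assumption, only 'router' and 'sniffer-only' are named above:
host-mode: auto    # router | sniffer-only | auto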
For app layer protocols, the yaml conf now covers:
1. Proto detection
2. Parsers
libhtp has now been moved to the section under app-layer.protocols.http,
but we still provide backward compatibility with older conf files.
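Schematically, the new location looks roughly like this sketch (a top-level
libhtp section in an older conf file keeps working):
app-layer:
  protocols:
    http:
      enabled: yes
      libhtp:
        default-config:
          personality: IDS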
The TILE-Gx processor includes a packet processing engine, called
mPIPE, that can deliver packets directly into user space memory. It
handles buffer allocation and load balancing (either static 5-tuple
hashing or dynamic flow-affinity hashing is used here). The new
packet source code is in source-mpipe.c and source-mpipe.h.
A new Tile runmode is added that configures the Suricata pipelines in
worker mode, where each thread does the entire packet processing
pipeline. It scales across all the Gx chip sizes of 9, 16, 36 or 72
cores. The new runmode is in runmode-tile.c and runmode-tile.h.
The configure script detects the TILE-Gx architecture and defines
HAVE_MPIPE, which is then used to conditionally enable the code to
support mPIPE packet processing. Suricata runs on TILE-Gx even without
mPIPE support enabled.
The Suricata Packet structures are allocated by the mPIPE hardware by
allocating the Suricata Packet structure immediately before the mPIPE
packet buffer and then pushing the mPIPE packet buffer pointer onto
the mPIPE buffer stack. This way, mPIPE writes the packet data into
the buffer and returns the mPIPE packet buffer pointer, which is then
converted into a Suricata Packet pointer for processing inside
Suricata. When the Packet is freed, the buffer is returned to mPIPE's
buffer stack by setting ReleasePacket to an mPIPE-specific release
function.
The code checks for the largest Huge page available in Linux when
Suricata is started. TILE-Gx supports Huge page sizes of 16MB, 64MB,
256MB, 1GB and 4GB. Suricata then divides one of those pages into
packet buffers for mPIPE.
The code is not yet optimized for high performance. Performance
improvements will follow shortly.
The code was originally written by Tom Decanio and then further
modified by Tilera.
This code has been tested with Tilera's Multicore Development
Environment (MDE) version 4.1.5 on the TILEncore-Gx36 (PCIe card) and
the TILEmpower-Gx (1U rack mount).
In some cases using the vlan id(s) in flow hashing is problematic. Cases
of broken routers have been reported. So this option allows disabling
the use of vlan id(s) while calculating the flow hash, and in the future
other hashes.
Vlan tracking for flows is enabled by default.
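A sketch of the option; the key name 'use-for-tracking' is an assumption:
vlan:
  use-for-tracking: true    # set to false to exclude vlan id(s) from the flow hash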
Use per-thread pools to store and retrieve SSNs, using the PoolThread
API.
Remove the max-sessions setting. Pools are set to unlimited, but the TCP
memcap limits the number of sessions.
The prealloc_session setting now applies to each thread, so the default
was lowered from 32k to 2k.
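In the YAML this could look roughly like the following sketch; the hyphenated
key name is an assumption:
stream:
  memcap: 64mb              # the TCP memcap now bounds the number of sessions
  prealloc-sessions: 2048   # per thread, instead of the old 32k global default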
Normally, there is one verdict per packet, i.e., we receive a packet,
process it, and then tell the kernel what to do with that packet (e.g.
DROP or ACCEPT).
recv(), packet id x
send verdict v, packet id x
recv(), packet id x+1
send verdict v, packet id x+1
[..]
recv(), packet id x+n
send verdict v, packet id x+n
An alternative is to process several packets from the queue, and then send
a batch-verdict.
recv(), packet id x
recv(), packet id x+1
[..]
recv(), packet id x+n
send batch verdict v, packet id x+n
A batch verdict affects all previous packets (packet_id <= x+n),
we thus only need to remember the last packet_id seen.
Caveats:
- can't modify payload
- verdict is applied to all packets
- nfmark (if set) will be set for all packets
- increases latency (packets remain queued by the kernel
until batch verdict is sent).
To solve this, we only defer verdicts for up to 20 packets and
send the pending batch-verdict immediately if:
- no packets are currently queued
- current packet should be dropped
- current packet has different nfmark
- payload of packet was modified
This patch adds configurable batch verdict support for the workers runmode.
Batch verdicts are turned off by default.
The problem is that batch verdicts only work with kernels >= 3.1, i.e.
using a newer libnetfilter_queue with an old kernel means a non-working
Suricata. So the functionality has to be disabled by default.
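A sketch of how this might be enabled in the nfq section; the option name
'batchcount' is an assumption:
nfq:
  batchcount: 20    # defer verdicts for up to 20 packets, 0/unset keeps one verdict per packet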
By randomizing the chunk size around the chosen value, it is possible
to defeat some evasion techniques that rely on knowing the chunk size
to split the attack at the right place.
This patch activates randomization by default and sets the random
interval to the chunk size value +- 10%.
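A sketch of the related reassembly settings; the exact key names are assumptions:
stream:
  reassembly:
    randomize-chunk-size: yes
    randomize-chunk-range: 10    # vary the chunk size by +/- 10%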
Until now, when processing the TCP 3 way handshake (3whs), retransmissions
of SYN/ACKs are silently accepted, unless they are different somehow. If
the SEQ or ACK values are different they are considered wrong and events
are set. The stream events rules will match on this.
In some cases, this is wrong. If the client missed the SYN/ACK, the server
may send a different one with a different SEQ. This commit deals with this.
As it is impossible to predict which one the client will accept, each is
added to a list. Then on receiving the final ACK from the 3whs, the list
is checked and the state is updated according to the queued SYN/ACK.
Added a napatech section in the yaml configuration.
hba - host buffer allowance
use-all-streams - whether all streams should be used
streams - list of stream numbers to use when use-all-streams is no
The source-napatech.* files were modified to support the host buffer allowance configuration.
The runmode-napatech.c file was modified to support both the host buffer allowance configuration and the stream configuration.
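A sketch of what the section might look like; the values are only illustrative:
napatech:
  hba: -1                 # host buffer allowance
  use-all-streams: yes    # when set to no, only the streams listed below are used
  streams: [1, 2, 3]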
Signed-off-by: Matt Keeler <mk@npulsetech.com>
This patch introduces 'snaplen', a new YAML variable in the pcap section.
It can be set per-interface to force the pcap capture snaplen. If not set,
it defaults to the interface MTU if the MTU can be determined via an ioctl
call, and to full capture if not.
The main objective of this patch is to use a dynamic snaplen to avoid
truncating packets at the currently fixed snaplen.
It sets the snaplen to the MTU length if the MTU can be retrieved. If not, it
does not set the snaplen, which results in a 65535 snaplen being used.
libpcap tries to use mmaped capture and sets up the ring using buffer_size
as the total memory. It also uses the "rounded" snaplen as the frame size. So
if we set the snaplen to the MTU when available, the ring is built optimally.
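A sketch of a per-interface setting; interface name and value are only illustrative:
pcap:
  - interface: eth0
    snaplen: 1514    # if omitted, the MTU is used when available, otherwise 65535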
This patch introduces a unix command socket. JSON formatted messages
can be exchanged between suricata and a program connecting to a
dedicated socket.
The protocol is the following:
* Client connects to the socket
* It sends a version message: { "version": "$VERSION_ID" }
* Server answers with { "return": "OK|NOK" }
If the server returns OK, the client is then allowed to send commands.
The format of a command is the following:
{
  "command": "pcap-file",
  "arguments": { "filename": "smtp-clean.pcap", "output-dir": "/tmp/out" }
}
The server will try to execute the "command" specified with the
(optional) provided "arguments".
The answer from the server is the following:
{
  "return": "OK|NOK",
  "message": JSON_OBJECT or information string
}
A simple script is provided and is available under scripts/suricatasc. It
is not intended to be an enterprise-grade tool but rather proof of
concept/example code. The first command line argument of suricatasc is
used to specify the socket to connect to.
Configuration of the feature is made in the YAML under the 'unix-command'
section:
unix-command:
  enabled: yes
  filename: custom.socket
The path specified in 'filename' is not absolute and is relative to the
state directory.
A new running mode called 'unix-socket' is also added.
When starting in this mode, only a unix socket manager
is started. When it receives a 'pcap-file' command, the manager
starts a 'pcap-file' running mode which does not cause Suricata to exit
at the end of the file but simply returns. The manager is then able to
start a new running mode with a new file.
To start this mode, Suricata must be started with the --unix-socket
option, which has an optional argument that sets the file name of the
socket. The path is not absolute and is relative to the state directory.
The 'pcap-file' command adds a file to the list of files to process.
For each pcap file, a pcap-file running mode is started and the output
directory is changed to the one specified in the command. The running
mode specified in the 'runmode' YAML setting is used to select which
running mode must be used to process the pcap file.
This requires modifications in the suricata.c file, where initialisation
code is now conditional on the 'unix-socket' mode not being used.
Two other commands exist to get information on the remaining tasks:
* pcap-file-number: returns the number of files in the waiting queue
* pcap-file-list: returns the list of waiting files
'pcap-file-list' returns a structured object as message. The
structure is the following:
{
  'count': 2,
  'files': ['file1.pcap', 'file2.pcap']
}
It is now possible to use the 'daemon-directory' configuration
variable to specify the working directory of Suricata in daemon
mode. This makes it possible to specify where core dumps and other
related files are placed.
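For example (the path is only illustrative):
daemon-directory: /var/run/suricata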
The stream.inline YAML configuration variable now supports the 'auto' value.
In this case, inline mode is activated for IPS running modes (NFQ and
IPFW) and deactivated for IDS modes. This patch should fix bug #592.
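A sketch of the setting:
stream:
  inline: auto    # yes | no | auto (inline for NFQ/IPFW, off for IDS modes)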