Due to an error at initialization, the stream engine would not disable
'raw' reassembly automatically when --disable-detection was used.
This led to segments not getting cleared from the segment lists.
Don't kill flow manager and recyclers before the rest of the threads. The
packet threads may still have packets from their pools. As the flow threads
would destroy their pools, the packets would be lost.
This patch doesn't kill the threads; it just pulls them out of their run
loop and into a wait loop. The packet pools won't be cleared until all
threads are killed.
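A minimal sketch of the idea, assuming hypothetical flag and helper names
(the real code uses the engine's own ThreadVars flags):

    #include <unistd.h>

    /* Hypothetical thread state; the real code uses ThreadVars flags. */
    typedef struct ManagementThread_ {
        volatile int stop_requested;  /* leave the run loop */
        volatile int kill;            /* really exit the thread */
    } ManagementThread;

    static void DoManagementWork(ManagementThread *mt) { (void)mt; /* ... */ }

    void *ManagementThreadLoop(void *arg)
    {
        ManagementThread *mt = arg;

        /* Run loop: do the real work until shutdown starts. */
        while (!mt->stop_requested)
            DoManagementWork(mt);

        /* Wait loop: stay alive so packet threads can still return packets
         * to the pools this thread owns; only exit on the final kill. */
        while (!mt->kill)
            usleep(10000);

        return NULL;
    }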
Wait for flow management threads to close before moving on to the
next steps in the shutdown process.
Don't destroy flow force reassembly packet pool too early. Worker
threads may still want to return packets to it.
Load the YAML into a prefix "detect-engine-reloads.N" where N is the
reload counter. This way we can load the updated config w/o overwriting
the current one.
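Roughly, each reload loads the file under a numbered prefix. The sketch
below is illustrative and assumes a loader that accepts a config-tree
prefix; the exact function name and signature are not guaranteed:

    #include <stdio.h>

    /* Assumed interface: load a YAML file under a config tree prefix. */
    int ConfYamlLoadFileWithPrefix(const char *filename, const char *prefix);

    static int reload_cnt = 1;

    int LoadReloadConfig(const char *conffile)
    {
        char prefix[128];

        /* "detect-engine-reloads.1", "detect-engine-reloads.2", ... so the
         * updated config doesn't overwrite the currently loaded one. */
        snprintf(prefix, sizeof(prefix), "detect-engine-reloads.%d", reload_cnt);

        if (ConfYamlLoadFileWithPrefix(conffile, prefix) != 0)
            return -1;

        reload_cnt++;
        return 0;
    }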
Use new DetectEngineReload() function. It's called from the main loop
instead of being spawned into its own temporary thread. This greatly
simplifies the signal handling.
An added advantage is that this seems to improve the memory usage.
Related to bug #1358
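In outline, the main loop just polls a flag set by the signal handler and
runs the reload inline; the handler and loop below are a simplified sketch,
and the DetectEngineReload() signature shown is an assumption:

    #include <signal.h>
    #include <unistd.h>

    int DetectEngineReload(const char *conffile);  /* builds and swaps the new engine */

    static volatile sig_atomic_t reload_requested = 0;

    static void SigUsr2Handler(int sig)
    {
        (void)sig;
        reload_requested = 1;
    }

    void MainLoop(const char *conffile)
    {
        signal(SIGUSR2, SigUsr2Handler);

        for (;;) {
            if (reload_requested) {
                /* The reload runs in the main loop itself: no temporary
                 * thread, no extra signal handling gymnastics. */
                reload_requested = 0;
                DetectEngineReload(conffile);
            }
            usleep(100000);
            /* ... check shutdown flags etc ... */
        }
    }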
The minimal detect engine has only minimal memory use and setup
time. It's to be used for 'delayed' detect, where the first detection
engine is essentially empty.
The thread setup is also minimal.
Instead of threading logic with dummy slots and all, use the regular
reload logic for delayed detect.
This means we pass an empty detect engine to the threads and then
reload (live swap) it as soon as the engine is running.
Update detect engine management to make it easier to reload the detect
engine.
The core of the new approach is a 'master' ctx that keeps a list of one or
more detect engines. The detect engines will not be passed to any thread
directly, but instead will only be accessed through the detect engine
thread contexts. As we can replace those atomically, replacing a detect
engine becomes easier.
Each thread keeps a reference to its detect context. When a detect engine
is replaced or removed, it's added to a free list. Once its reference
count reaches 0, it is freed.
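A rough sketch of the data relationships; the struct and function names are
illustrative, not the exact Suricata ones:

    #include <stdlib.h>

    typedef struct DetectEngineCtx_ {
        int ref_cnt;                    /* thread contexts referencing this engine */
        struct DetectEngineCtx_ *next;  /* list link (active or free list) */
        /* ... rules, mpm state, ... */
    } DetectEngineCtx;

    /* Threads never hold a DetectEngineCtx directly; they hold a thread ctx
     * whose engine pointer can be swapped atomically on reload. */
    typedef struct DetectEngineThreadCtx_ {
        DetectEngineCtx *de_ctx;
        /* ... per-thread scratch space ... */
    } DetectEngineThreadCtx;

    typedef struct DetectEngineMasterCtx_ {
        DetectEngineCtx *list;          /* one or more active detect engines */
        DetectEngineCtx *free_list;     /* replaced engines waiting for ref_cnt == 0 */
    } DetectEngineMasterCtx;

    /* Replaced or removed engines sit on the free list until no thread
     * context references them anymore, then they are actually freed. */
    void DetectEnginePruneFreeList(DetectEngineMasterCtx *m)
    {
        DetectEngineCtx **p = &m->free_list;
        while (*p != NULL) {
            if ((*p)->ref_cnt == 0) {
                DetectEngineCtx *dead = *p;
                *p = dead->next;
                free(dead);
            } else {
                p = &(*p)->next;
            }
        }
    }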
Update pktacq loop to process flow timeouts in a running engine.
Add a new step to the shutdown phase of packet acquisition loop
threads (pktacqloop).
The shutdown code lets the pktacqloop break out of its packet
acquisition loop. The thread then enters a flow timeout loop, where
it processes packets from its tv->stream_pq queue until it's
empty _and_ the KILL flag is set.
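Schematically, the thread's loop then looks something like this; the flag
names and queue helpers are simplified stand-ins:

    #include <stdbool.h>
    #include <unistd.h>

    typedef struct Packet_ Packet;

    typedef struct PacketQueue_ {
        Packet *top;
        /* ... lock, length, ... */
    } PacketQueue;

    typedef struct ThreadVars_ {
        volatile int flags;
        PacketQueue stream_pq;   /* flow timeout packets injected for this thread */
    } ThreadVars;

    #define THV_KILL_PKTACQ 0x01   /* stop acquiring packets */
    #define THV_KILL        0x02   /* final kill */

    bool PacketQueueIsEmpty(PacketQueue *pq);
    Packet *PacketDequeue(PacketQueue *pq);
    void ProcessPacket(ThreadVars *tv, Packet *p);
    void AcquireAndProcessOnePacket(ThreadVars *tv);

    void *PktAcqLoop(void *arg)
    {
        ThreadVars *tv = arg;

        /* Normal packet acquisition loop. */
        while (!(tv->flags & THV_KILL_PKTACQ))
            AcquireAndProcessOnePacket(tv);

        /* Flow timeout loop: keep handling injected packets until the
         * queue is empty AND the final kill flag is set. */
        while (!(PacketQueueIsEmpty(&tv->stream_pq) && (tv->flags & THV_KILL))) {
            Packet *p = PacketDequeue(&tv->stream_pq);
            if (p != NULL)
                ProcessPacket(tv, p);
            else
                usleep(1000);
        }
        return NULL;
    }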
Make sure receive threads are done before moving on to flow hash
cleanup (recycle all). Without this the flow recycler could start
its unconditional hash cleanup while detect threads are still
running on the flows.
Update unix socket to match live modes.
Have -T / --init-errors-fatal process all rules so that it's easier
to debug problems in a ruleset. Otherwise it can be a lengthy
fix/test/error cycle if multiple rules have issues.
Convert empty rulefile error into a warning.
Bug #977
Convert regular 'stats.log' output to this new API.
In addition to the current stats value, also give the last value. This
makes it easy to display the difference.
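For instance, an output module that gets both values can print the
per-interval delta directly; a simplified sketch, not the actual counters
API:

    #include <inttypes.h>
    #include <stdio.h>

    /* Simplified: each counter carries the current value and the value it
     * had at the previous dump, so the output can show the difference. */
    typedef struct StatValue_ {
        const char *name;
        uint64_t value;        /* current value */
        uint64_t last_value;   /* value at the previous stats dump */
    } StatValue;

    static void StatsLogLine(FILE *fp, const StatValue *sv)
    {
        fprintf(fp, "%-30s | %12" PRIu64 " | delta: %12" PRIu64 "\n",
                sv->name, sv->value, sv->value - sv->last_value);
    }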
The global variable suricata_ctl_flags needs to be volatile, otherwise
the compiler may not actually read the variable every time, as it
doesn't know other threads might write to it.
This was causing Suricata to not exit under some conditions.
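The problem in miniature (a generic example; the actual declaration and
flag values in Suricata may differ):

    #include <signal.h>

    /* Without 'volatile' the compiler may read the flags once, keep the
     * value in a register and spin forever, since nothing in this loop
     * appears to modify them. */
    volatile sig_atomic_t suricata_ctl_flags = 0;

    #define SURICATA_STOP 0x01

    void WaitForStop(void)
    {
        while (!(suricata_ctl_flags & SURICATA_STOP)) {
            /* a signal handler or another thread sets the flag */
        }
    }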
AF_PACKET was not setting the engine mode to IPS when some
interfaces are peered and use IPS mode. This is because it is
possible to peer two interfaces and run IPS on them while a third
interface runs in normal IDS mode.
That choice turned out to be the wrong one: the unwanted side effects
are that there is no drop log and that stream inline is not used.
To fix this, this patch puts Suricata in IPS mode as soon as
two interfaces are in IPS mode, and it displays an error message to
warn the user that the accuracy of detection on the IDS-only
interfaces will be low.
This patch adds a new Log API for streaming data such as TCP reassembled
data and HTTP body data. It could also replace the Filedata API.
Each time a new chunk of data is available, the callback will be called.
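Conceptually, a logger registers a callback that fires once per chunk. The
registration function and callback signature below are only an assumed
shape, not the real API:

    #include <stdint.h>

    typedef struct Flow_ Flow;

    /* Assumed callback shape: invoked for every new chunk of streaming
     * data (TCP reassembled data, HTTP body data, ...). */
    typedef int (*StreamingLogger)(void *thread_data, const Flow *f,
                                   const uint8_t *data, uint32_t data_len,
                                   uint8_t flags);

    /* Assumed registration hook. */
    int OutputRegisterStreamingLogger(const char *name, StreamingLogger fn);

    /* Example logger: just count the bytes seen per chunk. */
    static int LogTcpDataChunk(void *thread_data, const Flow *f,
                               const uint8_t *data, uint32_t data_len,
                               uint8_t flags)
    {
        (void)f; (void)data; (void)flags;
        uint64_t *total_bytes = thread_data;
        *total_bytes += data_len;
        return 0;
    }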
Use new management API to run the flow recycler.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
  recyclers: 2
This sets up 2 flow recyclers.
Use new management API to run the flow manager.
Support multiple flow managers, where each of them works with its
own part of the flow hash.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
This sets up 2 flow managers.
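Each manager instance then only walks its own slice of the hash table; an
illustrative calculation of the split, not the exact implementation:

    #include <stdint.h>
    #include <stdio.h>

    /* Split the flow hash rows evenly over the configured number of
     * managers. Manager 'instance' (0-based) gets rows [min, max). */
    static void FlowManagerHashRange(uint32_t hash_size, uint32_t managers,
                                     uint32_t instance,
                                     uint32_t *min, uint32_t *max)
    {
        uint32_t range = hash_size / managers;
        *min = instance * range;
        *max = (instance == managers - 1) ? hash_size : *min + range;
    }

    int main(void)
    {
        /* flow.hash-size: 65536, flow.managers: 2 */
        for (uint32_t i = 0; i < 2; i++) {
            uint32_t min, max;
            FlowManagerHashRange(65536, 2, i, &min, &max);
            printf("manager %u scans hash rows %u..%u\n", i + 1, min, max);
        }
        return 0;
    }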
Handle misc tasks only in instance 1: defrag hash timeout handling,
host hash timeout handling and flow spare queue updating are done
only from the first instance.
The thread was killed by the generic TmThreadKillThreads instead of
the FlowKillFlowRecyclerThread. The latter wakes the thread up, so
that shutdown is quite a bit faster.
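The difference boils down to signalling the condition the recycler sleeps
on after setting the kill flag, so it notices immediately instead of
waiting out its sleep interval; a generic pthread sketch, not the actual
Suricata code:

    #include <pthread.h>

    typedef struct RecyclerCtl_ {
        pthread_mutex_t m;
        pthread_cond_t cond;   /* the recycler sleeps on this between runs */
        int kill;
    } RecyclerCtl;

    /* Generic kill: only sets the flag; the thread doesn't notice until
     * its next scheduled wakeup, so shutdown is slow. */
    void GenericKill(RecyclerCtl *ctl)
    {
        pthread_mutex_lock(&ctl->m);
        ctl->kill = 1;
        pthread_mutex_unlock(&ctl->m);
    }

    /* Recycler-aware kill: also wakes the thread so it sees the flag
     * right away and shutdown completes much faster. */
    void RecyclerKill(RecyclerCtl *ctl)
    {
        pthread_mutex_lock(&ctl->m);
        ctl->kill = 1;
        pthread_cond_signal(&ctl->cond);
        pthread_mutex_unlock(&ctl->m);
    }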
Using a stack for free Packet storage causes recently freed Packets to be
reused quickly, while there is more likelihood of the data still being in
cache.
The new structure has a per-thread private stack for allocating Packets
which does not need any locking. Since Packets can be freed by any thread,
there is a second stack (return stack) for freeing packets by other threads.
The return stack is protected by a mutex. Packets are moved from the return
stack to the private stack when the private stack is empty.
Returning packets back to their "home" stack keeps the stacks from getting out
of balance.
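Structurally, something like the following; simplified, the real pool has
more bookkeeping:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct Packet_ {
        struct Packet_ *next;
        /* ... packet data ... */
    } Packet;

    /* One pool per packet thread. */
    typedef struct PktPool_ {
        /* Private stack: only the owning thread touches it, no locking. */
        Packet *head;

        /* Return stack: other threads push freed packets back to the
         * owning thread here, protected by a mutex. */
        pthread_mutex_t return_mutex;
        Packet *return_head;
    } PktPool;

    /* Owning thread: pop from the private stack; when it's empty, take
     * the whole return stack in one locked operation. */
    Packet *PacketPoolGet(PktPool *pool)
    {
        if (pool->head == NULL) {
            pthread_mutex_lock(&pool->return_mutex);
            pool->head = pool->return_head;
            pool->return_head = NULL;
            pthread_mutex_unlock(&pool->return_mutex);
        }
        Packet *p = pool->head;
        if (p != NULL)
            pool->head = p->next;
        return p;
    }

    /* Owning thread freeing one of its own packets: lock free. */
    void PacketPoolReturnLocal(PktPool *pool, Packet *p)
    {
        p->next = pool->head;
        pool->head = p;
    }

    /* Another thread returning a packet to its home pool. */
    void PacketPoolReturnRemote(PktPool *pool, Packet *p)
    {
        pthread_mutex_lock(&pool->return_mutex);
        p->next = pool->return_head;
        pool->return_head = p;
        pthread_mutex_unlock(&pool->return_mutex);
    }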
The PacketPoolInit() function is now called by each thread that will be
allocating packets. Each thread allocates max_pending_packets, which is a
change from before, where that was the total number of packets across all
threads.
EVE logging is a really good substitute for pcapinfo. Suriwire now
supports EVE output, so it is no longer necessary to have pcapinfo
in Suricata.
Add profiling to a logfile. The default is $log_dir/pcaplog_stats.log.
The counters for open, close, rotate, write and handles are written
to it, as well as:
- total bytes written
- cost per MiB
- cost per GiB
Option is disabled by default.
If a live reload signal was given before the engine was fully started
up (e.g. pcap file thread waiting for a disk to spin up), a segv could
occur.
This patch only enables live reloads after the threads have been
started up completely.
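In essence, the reload path is gated on an 'engine started' flag that is
only set once all threads are up; a simplified sketch:

    #include <signal.h>

    static volatile sig_atomic_t engine_started = 0;
    static volatile sig_atomic_t reload_requested = 0;

    static void SigUsr2Handler(int sig)
    {
        (void)sig;
        /* Before startup has completed the reload signal is ignored. */
        if (engine_started)
            reload_requested = 1;
    }

    /* Called once all threads have been created and are running. */
    void OnEngineStarted(void)
    {
        engine_started = 1;
    }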