For netflow logging, track TCP flags per stream direction. As the struct
had no space left without expanding it, the flags and wscale fields are
now stored compressed.
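A minimal sketch of the idea, assuming bit-packing; the field names and
widths below are illustrative, not the actual Flow struct layout:

    #include <stdint.h>

    /* Illustrative only: per-direction TCP flags plus both window scale
     * values fit in four bytes, so the flow struct does not grow. */
    typedef struct FlowTcpState_ {
        uint8_t ts_flags;      /* cumulative TCP flags, to-server */
        uint8_t tc_flags;      /* cumulative TCP flags, to-client */
        uint8_t ts_wscale:4;   /* window scale, to-server (max 14 fits in 4 bits) */
        uint8_t tc_wscale:4;   /* window scale, to-client */
    } FlowTcpState;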
Most flows are marked for cleanup by the flow manager, which then
passes them to the recycler. The recycler logs and cleans them up. However,
under resource stress the packet threads can recycle an existing flow
directly. In that case the recycler has no role to play, as the flow is
immediately reused.
For this reason, the packet threads need to be able to invoke the
flow logger directly.
The flow logging thread ctx will be stored in the DecodeThreadVars
structure. Therefore, this patch makes DecodeThreadVars an argument
to FlowHandlePacket.
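A hedged sketch of the resulting call shape; the parameter order is an
assumption for illustration, not verified against the tree:

    /* Opaque here; these are real Suricata types. */
    typedef struct ThreadVars_ ThreadVars;
    typedef struct DecodeThreadVars_ DecodeThreadVars;
    typedef struct Packet_ Packet;

    /* Assumed prototype: DecodeThreadVars now travels along so flow handling
     * can reach the flow logger thread ctx stored in it. */
    void FlowHandlePacket(ThreadVars *tv, DecodeThreadVars *dtv, Packet *p);

    /* Call sites in the decoders then become, roughly:
     *     FlowHandlePacket(tv, dtv, p);
     */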
Now that we use 'filetype' instead of 'type', we should also
use 'regular' instead of 'file'.
Added a fallback to make sure we stay compatible with old configs.
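A small sketch of the fallback idea, using a hypothetical helper rather
than the actual config-reading code:

    #include <string.h>

    /* Hypothetical helper: treat the legacy value 'file' as 'regular'. */
    static int FiletypeIsRegular(const char *filetype)
    {
        if (filetype == NULL)
            return 1;                            /* default */
        if (strcmp(filetype, "regular") == 0)
            return 1;                            /* new value */
        return (strcmp(filetype, "file") == 0);  /* legacy value still accepted */
    }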
At -O1 and above, both GCC and Clang optimized the wait loop in
PacketPoolWait in the wrong way. Add a compiler barrier to prevent
this mis-optimization.
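A sketch of the fix, assuming the loop spins until the pool has packets;
the types and the barrier macro name are illustrative:

    typedef struct PktPool_ {
        struct Packet_ *head;   /* refilled by other threads */
    } PktPool;

    /* Without the barrier the compiler may hoist the emptiness check out of
     * the loop, since nothing in the loop body appears to change the pool. */
    #define cc_barrier() __asm__ __volatile__("" ::: "memory")

    static void PacketPoolWait(PktPool *pool)
    {
        while (pool->head == NULL)
            cc_barrier();   /* force pool->head to be re-read each pass */
    }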
Fix the pcap packet acquisition methods passing 0 to pcap_dispatch.
Previously they passed the packet pool size, but the packet_q_len
variable had since been hardcoded to 0.
This patch sets packet_q_len to 64. If the packet pool is empty, we fall
back to direct allocation. As pcap_dispatch is only called when the
packet pool is not empty, we allocate at most 63 packets.
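Roughly, the call becomes a bounded batch request; the callback and loop
below are illustrative stand-ins for the pcap receive code:

    #include <pcap/pcap.h>

    #define PCAP_PACKET_Q_LEN 64   /* upper bound per pcap_dispatch() call */

    /* Illustrative callback; the real one hands the packet to decoding. */
    static void PcapCallback(u_char *user, const struct pcap_pkthdr *h,
                             const u_char *pkt)
    {
        (void)user; (void)h; (void)pkt;
    }

    static int ReceiveLoopOnce(pcap_t *handle, u_char *thread_ctx)
    {
        /* Only called when the packet pool is non-empty, so at most
         * PCAP_PACKET_Q_LEN - 1 packets need a direct fallback allocation. */
        return pcap_dispatch(handle, PCAP_PACKET_Q_LEN, PcapCallback, thread_ctx);
    }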
Better handle the autofp case where one thread allocates the majority
of the packets and other threads free those packets.
Add a list of locally pending packets. The first freed packet starts the
pending list; subsequent freed packets for the same Packet Pool are added
to this list until it reaches a fixed size, at which point the entire list
is pushed onto that pool's return stack. A freed packet that is not for the
pending pool is returned immediately to its own pool's return stack, as
before.
For the autofp case, since there is only one Packet Pool doing all the
allocation, every other thread will keep a list of pending packets for
that pool.
For the worker run mode, most packets are allocated and freed locally. For
the case where packets are being returned to a remote pool, a pending list
is kept for one of the other threads; all other packets are returned as before.
The remote pool for which a pending list is kept changes each time the
pending list is returned. Since the pending pool is cleared when its list is
returned, the next freed packet chooses the new pending pool.
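A condensed sketch of the remote-return path described above (locally owned
packets would go straight to the thread's private stack); the threshold,
types and helper functions are illustrative:

    /* Batch returns to a single "pending" remote pool; everything else goes
     * straight to the owning pool's locked return stack. */
    typedef struct PktPool_ PktPool;
    typedef struct Packet_ {
        PktPool *pool;              /* the pool that allocated this packet */
        struct Packet_ *next;
    } Packet;

    /* Assumed helpers: push one packet / a pre-linked batch under the lock. */
    void PacketPoolPush(PktPool *pool, Packet *p);
    void PacketPoolPushBatch(PktPool *pool, Packet *head, Packet *tail, int n);

    #define MAX_PENDING_RETURN_PACKETS 32   /* illustrative threshold */

    static __thread PktPool *pending_pool;
    static __thread Packet *pending_head, *pending_tail;
    static __thread int pending_count;

    static void PacketPoolReturnPacket(Packet *p)
    {
        if (pending_pool == NULL) {
            /* first remote free: start a pending list for this packet's pool */
            pending_pool = p->pool;
            p->next = NULL;
            pending_head = pending_tail = p;
            pending_count = 1;
        } else if (p->pool == pending_pool) {
            /* same pool: extend the pending list */
            p->next = pending_head;
            pending_head = p;
            if (++pending_count >= MAX_PENDING_RETURN_PACKETS) {
                PacketPoolPushBatch(pending_pool, pending_head, pending_tail,
                                    pending_count);
                pending_pool = NULL;   /* next free picks the new pending pool */
                pending_head = pending_tail = NULL;
                pending_count = 0;
            }
        } else {
            /* different pool: return immediately to its return stack */
            PacketPoolPush(p->pool, p);
        }
    }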
Using a stack for free Packet storage causes recently freed Packets to be
reused quickly, while the data is more likely to still be in cache.
The new structure has a per-thread private stack for allocating Packets,
which does not need any locking. Since Packets can be freed by any thread,
there is a second, mutex-protected stack (the return stack) for packets
freed by other threads. Packets are moved from the return stack to the
private stack when the private stack is empty.
Returning packets to their "home" pool keeps the stacks from getting out
of balance.
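A minimal sketch of the two-stack layout, with illustrative field names:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct Packet_ {
        struct Packet_ *next;
    } Packet;

    typedef struct PktPool_ {
        /* Private stack: only touched by the owning thread, no locking. */
        Packet *head;

        /* Return stack: other threads push freed packets here under the
         * mutex; the owner drains it when the private stack runs dry. */
        pthread_mutex_t return_mutex;
        Packet *return_head;
    } PktPool;

    static Packet *PacketPoolGetPacket(PktPool *pool)
    {
        if (pool->head == NULL) {
            /* Private stack empty: take the whole return stack in one lock. */
            pthread_mutex_lock(&pool->return_mutex);
            pool->head = pool->return_head;
            pool->return_head = NULL;
            pthread_mutex_unlock(&pool->return_mutex);
        }
        Packet *p = pool->head;
        if (p != NULL)
            pool->head = p->next;   /* may still be empty: caller falls back */
        return p;
    }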
The PacketPoolInit() function is now called by each thread that will be
allocating packets. Each thread allocates max_pending_packets, which is a
change from before, where that was the total number of packets across all
threads.
Whenever DETECT_CONTENT_NOCASE is set for a BoyerMoore matcher, the
function BoyerMooreCtxToNocase() must be called. This call was missing
in AppLayerProtoDetectPMRegisterPattern().
Also created BoyerMooreNocaseCtxInit() that calls BoyerMooreCtxToNocase()
to make some code cleaner and safer.
Rather than converting the search string to lowercase while searching,
convert it to lowercase during initialization.
Change the Boyer Moore search API to take a BmCtx.
Change the API for BoyerMoore to take a BmCtx rather than the two parts that
are stored in the context, which is how it is mostly used. This enforces
always calling BoyerMooreCtxToNocase() to convert to no-case.
Use CtxInit and CtxDeinit functions to create and destroy the context,
even in unit tests.
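A hedged usage sketch of the reworked API; the prototypes are assumed from
the function names above and may differ slightly from the actual header:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct BmCtx_ BmCtx;   /* opaque here */

    /* Assumed prototypes, for illustration only. */
    BmCtx *BoyerMooreNocaseCtxInit(uint8_t *needle, uint16_t needle_len);
    uint8_t *BoyerMooreNocase(uint8_t *needle, uint16_t needle_len,
                              uint8_t *haystack, uint32_t haystack_len,
                              BmCtx *ctx);
    void BoyerMooreCtxDeInit(BmCtx *ctx);

    static int ContainsNocase(uint8_t *haystack, uint32_t haystack_len,
                              uint8_t *needle, uint16_t needle_len)
    {
        /* The needle is lowercased once in CtxInit, not on every search. */
        BmCtx *ctx = BoyerMooreNocaseCtxInit(needle, needle_len);
        if (ctx == NULL)
            return 0;
        uint8_t *found = BoyerMooreNocase(needle, needle_len,
                                          haystack, haystack_len, ctx);
        BoyerMooreCtxDeInit(ctx);
        return found != NULL;
    }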
EVE logging is a good substitute for pcapinfo. Suriwire now supports
EVE output, so pcapinfo is no longer necessary in Suricata.