Goals:
- reduce locking
- take advantage of 'hot' caches
- better locality
Locking reduction
New flow spare pool. The global pool is implemented as a list of blocks,
where each block holds 100 spare flows. Worker threads fetch a block at
a time, storing the block in local thread storage.
The Flow Recycler now returns flows to the pool in blocks as well.
Flow Recycler fetches all flows to be processed in one step instead of
one at a time.
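A minimal sketch of the block idea (struct and function names are
illustrative, not the actual implementation):

  #include <pthread.h>
  #include <stdint.h>

  typedef struct Flow_ Flow;          /* stand-in for the real Flow */

  #define SPARE_BLOCK_SIZE 100

  typedef struct SpareBlock_ {
      struct SpareBlock_ *next;       /* next block in the global list */
      uint32_t cnt;                   /* flows currently in this block */
      Flow *flows[SPARE_BLOCK_SIZE];
  } SpareBlock;

  typedef struct SparePool_ {
      pthread_mutex_t m;
      SpareBlock *blocks;             /* global list of blocks */
  } SparePool;

  /* Worker: grab a whole block under a single lock operation and keep
   * it in thread local storage, so flow allocations after that are
   * lock free until the block runs out. */
  static SpareBlock *SparePoolGetBlock(SparePool *p)
  {
      pthread_mutex_lock(&p->m);
      SpareBlock *b = p->blocks;
      if (b != NULL)
          p->blocks = b->next;
      pthread_mutex_unlock(&p->m);
      return b;
  }

  /* Flow Recycler: return freed flows to the pool a block at a time. */
  static void SparePoolReturnBlock(SparePool *p, SpareBlock *b)
  {
      pthread_mutex_lock(&p->m);
      b->next = p->blocks;
      p->blocks = b;
      pthread_mutex_unlock(&p->m);
  }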
Cache 'hot'ness
Worker threads now check the timeout of flows they evaluate during lookup.
The worker will have to read the flow into cache anyway, so the added
overhead of checking the timeout value is minimal. When a flow is considered
timed out, one of two things happens (see the sketch after this list):
- if the flow is 'owned' by the thread it is handled locally. Handling means
checking if the flow needs 'timeout' work.
- otherwise, the flow is added to a special 'evicted' list in the flow
bucket where it will be picked up by the flow manager.
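A rough sketch of the lookup-time check; the types and fields below are
simplified stand-ins, not the real code:

  #include <stdint.h>

  typedef struct Flow_ {
      struct Flow_ *next;
      uint32_t lastts;      /* assumed: last activity, in seconds */
      uint32_t timeout;     /* assumed: timeout policy for this flow */
      uint16_t thread_id;   /* assumed: owning worker thread */
  } Flow;

  typedef struct FlowBucket_ {
      Flow *head;
      Flow *evicted;        /* picked up later by the flow manager */
  } FlowBucket;

  static int FlowIsTimedOut(const Flow *f, uint32_t now)
  {
      return (f->lastts + f->timeout) <= now;
  }

  /* Timeout checks done while walking the bucket during lookup (the
   * key compare itself is omitted): the flow is pulled into cache for
   * the compare anyway, so the extra check is nearly free. */
  static void BucketCheckTimeouts(FlowBucket *fb, uint16_t my_tid, uint32_t now)
  {
      Flow *prev = NULL;
      Flow *f = fb->head;
      while (f != NULL) {
          Flow *next = f->next;
          if (FlowIsTimedOut(f, now)) {
              if (f->thread_id == my_tid) {
                  /* owned by this worker: the 'timeout' work is done
                   * locally (placeholder for the real handling) */
              } else {
                  /* unlink and park on the bucket's evicted list for
                   * the flow manager */
                  if (prev != NULL)
                      prev->next = next;
                  else
                      fb->head = next;
                  f->next = fb->evicted;
                  fb->evicted = f;
                  f = next;
                  continue;
              }
          }
          prev = f;
          f = next;
      }
  }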
Flow Manager timing
By default the flow manager now tries to do passes of the flow hash in
smaller steps, where the goal is to do a full pass in 8 x the lowest timeout
value it has to enforce. So if the lowest timeout value is 30s, a full pass
will take 4 minutes. The goal here is to reduce locking overhead and not
get in the way of the workers.
In emergency mode each pass is full, and lower timeouts are used.
Timing of the flow manager also no longer relies on pthread condition
variables, as these generally cause wake ups much sooner than the desired
timeout. Instead a simple (u)sleep loop is used.
Both changes reduce the number of hash passes a lot.
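The per-pass budget then works out roughly like this (simplified sketch,
names illustrative; the manager sleeps with usleep between slices):

  #include <stdint.h>

  /* With a lowest enforced timeout of 30s the target for a full hash
   * pass is 8 * 30s = 240s, i.e. 4 minutes. */
  static uint32_t RowsPerSlice(uint32_t hash_size, uint32_t min_timeout_s,
                               uint32_t wakeup_interval_ms)
  {
      uint32_t full_pass_ms = 8 * min_timeout_s * 1000;
      uint32_t slices = full_pass_ms / wakeup_interval_ms;
      /* scan this many hash rows per (u)sleep wake-up so the full
       * pass completes in roughly full_pass_ms */
      return slices ? (hash_size / slices) + 1 : hash_size;
  }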
Emergency behavior
In emergency mode there are a number of changes to the workers (a sketch of
these checks follows the list below). In this scenario the flow memcap is
fully used up and it is unavoidable that some flows won't be tracked.
1. flow spare pool fetches are reduced to once a second. This avoids locking
   overhead in a situation where the chance of success is very low.
2. getting an active flow directly from the hash skips flows that had very
   recent activity, to avoid the scenario where all flows only reach the
   NEW state before getting reused. Instead, some flows are allowed a
   chance of completing.
3. TCP packets that are not SYN packets will not get a used flow, unless
stream.midstream is enabled. The goal here is again to avoid evicting
active flows unnecessarily.
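A sketch of these checks (names, fields and the 'very recent' threshold
are assumptions, not the real code):

  #include <stdbool.h>
  #include <stdint.h>

  /* 1. rate limit spare pool fetches to roughly once per second */
  static bool SpareFetchAllowed(uint32_t now_s, uint32_t *last_fetch_s)
  {
      if (now_s == *last_fetch_s)
          return false;
      *last_fetch_s = now_s;
      return true;
  }

  /* 2./3. decide whether an existing flow may be forcefully reused */
  static bool FlowMayBeReused(uint32_t flow_lastts_s, uint32_t now_s,
                              bool tcp, bool tcp_syn, bool midstream)
  {
      /* skip flows with very recent activity so some flows get the
       * chance to complete instead of everything staying in NEW
       * ("very recent" threshold is assumed here) */
      if (flow_lastts_s + 1 >= now_s)
          return false;
      /* non-SYN TCP packets don't get a used flow unless
       * stream.midstream is enabled */
      if (tcp && !tcp_syn && !midstream)
          return false;
      return true;
  }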
Better Locality
The Flow Manager now injects flows into the worker threads, instead of one or
two packets. The advantage of this is that the worker threads can get packets
from their local packet pools, avoiding the constant overhead of packets
returning to 'foreign' pools.
Counters
A lot of flow counters have been added and some have been renamed.
Overall the worker threads increment 'flow.wrk.*' counters, while the flow
manager increments 'flow.mgr.*' counters.
Additionally, none of the counters are snapshots anymore; they all increment
over time, with the flow.memuse and flow.spare counters as exceptions.
Misc
FlowQueue has been split into a FlowQueuePrivate (unlocked) and FlowQueue.
Flow no longer has 'prev' pointers and uses a unified 'next' pointer for
both hash and queue use.
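Roughly, the resulting layout (simplified, not the exact definitions):

  #include <pthread.h>
  #include <stdint.h>

  typedef struct Flow_ {
      /* single 'next' pointer, shared between hash bucket chaining
       * and queue membership; no 'prev' pointer anymore */
      struct Flow_ *next;
      /* ... rest of the flow ... */
  } Flow;

  /* unlocked variant, for batches handled inside a single thread */
  typedef struct FlowQueuePrivate_ {
      Flow *top;
      Flow *bot;
      uint32_t len;
  } FlowQueuePrivate;

  /* locked variant, for queues shared between threads */
  typedef struct FlowQueue_ {
      FlowQueuePrivate priv;
      pthread_mutex_t m;
  } FlowQueue;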
This commit adds MAC address output to the EVE-JSON format. We follow the
remarks made in Redmine ticket #962: for packets, log MAC src/dst as a
scalar field in EVE; for flows, log MAC src/dst as lists in EVE. Field names
are different between flow and packet context to avoid type confusion
(src_mac vs. src_macs). The configuration approach and JSON representation are
taken from previous GitHub PR #2700.
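For illustration, the resulting records look roughly like this; the values
are made up, and the enclosing 'ether' object and the dest_* fields are an
assumption here, only src_mac/src_macs come from the description above:

  packet context (e.g. an alert):
    "ether": {"src_mac": "00:11:22:33:44:55", "dest_mac": "66:77:88:99:aa:bb"}
  flow context:
    "ether": {"src_macs": ["00:11:22:33:44:55"], "dest_macs": ["66:77:88:99:aa:bb"]}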
This commit checks whether pre-6.x settings for ERSPAN Type I are
present. ERSPAN Type I is no longer enabled/disabled through a
configuration setting -- it's always enabled.
When a setting exists to enable/disable ERSPAN Type I decoding, a
warning message is logged. Enabling/disabling ERSPAN Type I decoding
has been deprecated in 6.x.
Previously each 'TmSlot' had its own packet queue that was passed
to the registered SlotFunc as an argument. This was used mostly for
tunnel packets by the decoders and by defrag.
This patch removes that in favor of a single queue in the ThreadVars:
decode_pq. This is the non-locked version of the queue as this is
only a temporary store for handling packets within a thread.
This patch removes the PacketQueue pointer argument from the API.
The new queue can be accessed directly through the ThreadVars
pointer.
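A sketch of the change with simplified, approximate signatures (not the
exact prototypes):

  /* minimal stand-in types so the sketch is self contained */
  typedef struct ThreadVars_ ThreadVars;
  typedef struct Packet_ Packet;
  typedef struct PacketQueue_ PacketQueue;
  typedef int TmEcode;

  /* before: the slot's queue was passed to every SlotFunc */
  typedef TmEcode (*SlotFuncOld)(ThreadVars *, Packet *, void *,
                                 PacketQueue *);

  /* after: no queue argument; decoders and defrag reach the single
   * per-thread unlocked queue through the ThreadVars pointer, e.g.
   * by enqueueing tunnel packets onto tv->decode_pq */
  typedef TmEcode (*SlotFuncNew)(ThreadVars *, Packet *, void *);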
Set the livedev on reassembled packets to that of the parent
packet. Fixes issues with multidetect, specifically a segfault
as reported in issue 3380.
Bug #3380.
Replace index by strchr and rindex by strrchr.
index(3) states "POSIX.1-2008 removes the specifications of index() and
rindex(), recommending strchr(3) and strrchr(3) instead."
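The replacement is mechanical, for example:

  #include <string.h>

  /* before (removed by POSIX.1-2008):
   *     first = index(name, '.');
   *     last  = rindex(name, '.');
   * after: */
  static void Example(const char *name)
  {
      const char *first = strchr(name, '.');    /* first occurrence */
      const char *last  = strrchr(name, '.');   /* last occurrence */
      (void)first;
      (void)last;
  }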
Add index/rindex to banned function check so they don't get reintroduced.
Bug #1443.
This define is used to remove references to capture bypass when no
capture method implementing it is active.
This patch also introduces CAPTURE_OFFLOAD_MANAGER that is defined
if we need the flow bypass manager code.
Fill in the vlan_id fields unconditionally. We can now remove the check
for the vlan.use-for-tracking setting in decode.c. The debug log message
is moved to suricata.c.
Implement port config handling. Also check both src port and dest
port for tunnels that only set the destination port to the VXLAN
port. At the point of the check we don't know the packet direction
yet.
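A sketch of the check; the helper stands in for the configured port list,
with 4789 (the IANA VXLAN port) as an example value:

  #include <stdbool.h>
  #include <stdint.h>

  static bool VxlanPortConfigured(uint16_t port)
  {
      return port == 4789;   /* stand-in for the configured ports */
  }

  /* Direction is not known yet at this point, so both source and
   * destination port are checked against the configured ports. */
  static bool VxlanPortsMatch(uint16_t sport, uint16_t dport)
  {
      return VxlanPortConfigured(sport) || VxlanPortConfigured(dport);
  }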
Implement as Suricata tunnel similar to Teredo.
Cleanups.
This patch introduces and uses a new bypass strategy
based on a callback. EBPF bypass implementation is
updated to use this new strategy.
Once the flow manager detects that a flow should be timed out,
it asks the capture method whether it has seen packets in the interval.
If that is the case, the lastts of the flow is updated and the timeout
is postponed.
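A rough shape of the callback strategy; the typedef and function names
below are made up for illustration, not the real API:

  #include <stdbool.h>
  #include <sys/time.h>

  typedef struct Flow_ Flow;

  /* Capture method hook: returns true and fills 'last_seen' if the
   * capture method saw packets for this flow since 'since'. */
  typedef bool (*CaptureBypassCheck)(Flow *f, const struct timeval *since,
                                     struct timeval *last_seen);

  static CaptureBypassCheck g_bypass_check;   /* set by the capture method */

  /* Flow manager side: before timing out a bypassed flow, ask the
   * capture method; on activity, refresh lastts and postpone. */
  static bool BypassedFlowStillActive(Flow *f, struct timeval *flow_lastts,
                                      const struct timeval *since)
  {
      struct timeval last_seen;
      if (g_bypass_check != NULL && g_bypass_check(f, since, &last_seen)) {
          *flow_lastts = last_seen;
          return true;
      }
      return false;
  }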
For capture methods that have their own flow structure (not maintained
by Suricata), it can make sense to bypass a packet even if there is
no Flow in Suricata.
For AF_PACKET it does not make sense, as the eBPF map entry will
be destroyed as soon as it is checked by the flow bypass manager.
Thus we skip the bypass function if no Flow is attached to the packet.
This patch also removes the reference to the Flow in the bypass functions
for AF_PACKET. It was not necessary, and we could possibly benefit
from this if we ever change the bypass algorithm.
There is a synchronization issue occurring when a flow is
added to the eBPF bypass maps. The flow can have packets
in the ring buffer that have already passed the eBPF stage.
As a consequence, they are not accounted for in the eBPF counters
but are accounted for by the Suricata flow engine.
This was causing the counters to be completely wrong. This code
fixes the issue by avoiding the counter change in the invalid case.
To avoid adding four 64-bit integers to the Flow structure for the
bypass accounting, we use a FlowStorage instead. This limits the
memory usage to the size of a pointer.
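Roughly (struct and helper names are illustrative):

  #include <stdint.h>
  #include <stdlib.h>

  /* The four counters live behind a single pointer attached to the
   * flow via FlowStorage, instead of growing every Flow by 4 x 8
   * bytes. */
  typedef struct FlowBypassInfo_ {
      uint64_t todst_pkts;
      uint64_t todst_bytes;
      uint64_t tosrc_pkts;
      uint64_t tosrc_bytes;
  } FlowBypassInfo;

  /* allocated lazily, only when a flow is actually bypassed, and then
   * attached to the flow through the FlowStorage API */
  static FlowBypassInfo *FlowBypassInfoAlloc(void)
  {
      return calloc(1, sizeof(FlowBypassInfo));
  }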
In the eve log the decoder events are added as optional counters. This
behaviour is enabled by default. However, lots of the counters are
missing, as the names collide with other counters.
E.g.
decoder.ipv6 counts ipv6 packets
decoder.ipv6.unknown_next_header counts how often an unknown next
header is encountered.
In this example 'ipv6' would be both a json integer and a json object.
It appears that jansson favours the first that is generated, so the
event counters are mostly missing.
This patch registers them as 'decoder.events.<event>' instead. As
these names are generated on the fly, a hash table to contain the
allocated strings was added as well.
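A sketch of the name generation (simplified, not the exact code), e.g.
the 'ipv6.unknown_next_header' event becomes the counter
'decoder.events.ipv6.unknown_next_header':

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* The string is heap allocated because counter registration keeps a
   * reference to it; the allocations are tracked in a hash table so
   * they can be freed at shutdown. */
  static char *DecoderEventCounterName(const char *event)
  {
      size_t len = strlen("decoder.events.") + strlen(event) + 1;
      char *name = malloc(len);
      if (name != NULL)
          snprintf(name, len, "decoder.events.%s", event);
      return name;
  }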
Invalid Teredo can lead to valid DNS traffic (or other UDP traffic)
being misdetected as Teredo. This leads to false negatives in the
UDP payload inspection.
Make the teredo code only consider a packet teredo if the encapsulated
data was decoded without any 'invalid' events being set.
Bug #2736.
Change the decode handler signature to increase the size of its length
parameter from uint16 to uint32. This is necessary to let suricata use
interfaces with an MTU > 65535 (e.g. the lo interface has a default MTU
of 65536).
It is necessary to change several primitives for Packet manipulation to
unify the "packet length" parameter everywhere before IP decoding.
Add checks before calling the DecodeIPVX functions to avoid a possible
integer overflow of the len parameter.
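A sketch of such a guard (simplified; the real checks sit in the callers
of the DecodeIPVX functions):

  #include <stdint.h>

  /* The outer length is now uint32_t while parts of the IP path still
   * work with 16 bit lengths, so the value is checked before it is
   * narrowed. */
  static int DecodeIPv4Guarded(const uint8_t *pkt, uint32_t len)
  {
      (void)pkt;
      if (len > UINT16_MAX) {
          /* would overflow the narrower length: treat as invalid
           * instead of decoding */
          return -1;
      }
      /* ... call into the IPv4 decoder with (uint16_t)len ... */
      return 0;
  }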
When switching protocol from http to tls the following corner case
was observed:
pkt 6, TC "200 connection established"
pkt 7, TS acks pkt 6 + adds "client hello"
pkt 8 TC, acks pkt 7
pkt 8 is where normally the detection on the "200 connection established"
would run; however, before detection runs the app-layer is called
and it resets the state.
So the issue is missed detection on the last data in the original
protocol before the switch.
Another case was:
TS -> STARTTLS
TC -> Ack "STARTTLS data"
220
TS -> Ack "220 data"
Client Hello
In IDS mode, this made a rule that wanted to look at content:"STARTTLS"
in combination with the protocol SMTP 'alert smtp ... content:"STARTTLS";'
impossible. By the time the content would match, the protocol was already
switched.
This patch fixes this case by creating a 'Detect/Log Flush' packet in
both directions. This will force final inspection and logging of the
pre-upgrade protocol (SMTP in this example) before doing the final
switch.
Set flags by default:
-Wmissing-prototypes
-Wmissing-declarations
-Wstrict-prototypes
-Wwrite-strings
-Wcast-align
-Wbad-function-cast
-Wformat-security
-Wno-format-nonliteral
-Wmissing-format-attribute
-funsigned-char
Fix minor compiler warnings for these new flags on gcc and clang.
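Two illustrative examples of the kind of code these flags reject (not
taken from the actual diff):

  /* -Wwrite-strings: string literals are const char[], so this warns: */
  char *bad = "read only";            /* warning */
  const char *good = "read only";     /* ok */

  /* -Wmissing-prototypes / -Wmissing-declarations: non-static
   * functions without a prior prototype now warn; add the prototype
   * to a header or make the function static. */
  static int LocalHelper(void)
  {
      return 0;
  }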
Due to the use of AFL_LOOP and initialization/deinit outside of it,
part of the fuzzing relied on the global 'state' in flow and defrag.
Because of this, crashes that were found could not be reproduced. The
saved crash input was only the last in the series.
This patch addresses that. It requires a new output directory 'dump'
where the packet fuzzers will store all their input. If the AFL_LOOP
fails the files will not be removed and this 'serie' can be read
again for reproducing the issue.
e.g.: AFL would work with:
--afl-decoder-ppp=@@
and after a crash is found the produced serie can be read with:
--afl-decoder-ppp-serie=1486656919-514163
The series have a timestamp as their name and a suffix that controls the
order in which the files will be 'replayed' in Suricata.
Call the packet bypass callback if necessary and update the flow
state: if the callback is successful we set the capture bypassed state,
and in case of failure we switch to the local bypassed state.