To simplify locking, move all locking out of the individual detect
code. Instead, lock the flow at the start of detection and unlock
it at the end.
The lua code can still be called without the lock held (from the
output code paths), so keep passing around a lock hint so it can
take the lock itself when needed.
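A minimal sketch of the intended locking shape, with stand-in types
and names (illustrative only, not the actual Suricata API):

    #include <pthread.h>

    /* stand-in flow holding a single lock */
    typedef struct Flow_ { pthread_mutex_t m; } Flow;

    /* lock once around the whole detection run instead of inside
     * each individual detect function */
    void DetectRunSketch(Flow *f)
    {
        pthread_mutex_lock(&f->m);
        /* ... run all detect checks; none of them lock ... */
        pthread_mutex_unlock(&f->m);
    }

    /* lua may be invoked from output paths without the lock held,
     * so the caller passes a hint telling it whether to lock */
    void LuaCallSketch(Flow *f, int flow_locked)
    {
        if (!flow_locked)
            pthread_mutex_lock(&f->m);
        /* ... run the lua script against the flow ... */
        if (!flow_locked)
            pthread_mutex_unlock(&f->m);
    }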
When we run on live traffic, time handling is simple. Packets have a
timestamp set by the capture method. Management threads can simply
use 'gettimeofday' to know the current time. There should never be
any serious gap between the two or major differences between the
threads.
In offline mode, things are dramatically different. Here we try to keep
the time from the pcap, which means that if the packets are recorded in
2011, the log output should also reflect this. This raises multiple issues:
1. merged pcaps might have huge time jumps or time going backward
2. slowly recorded pcaps may be processed much faster than their
'realtime'
3. management threads need a concept of what the 'current' time is for
enforcing timeouts
4. due to (1), individual threads may have very different views on what
the current time is. E.g. T1 processes packet 1 with TS X, while T2
at the very same time processes packet 2 with TS X+100000s.
The changes in flow handling make the problems worse. The capture thread
no longer handles the flow lookup, but it still set the global 'time'.
This meant that a worker may still be processing Packet 1 with TS 1,
while the capture thread has already seen packet 2 with TS 10000.
Management threads would take TS 10000 as the 'current time' and
consider a flow created by the first thread as timed out immediately.
This was less of a problem before the flow changes, as the capture
thread would also create a flow reference for each packet, meaning the
flow couldn't time out as easily. Packets in the queues between the
capture thread and the workers would all hold such references.
The patch updates the time handling to be as follows.
In offline mode we keep the timestamp per thread. If a management thread
needs the current time, it will get the minimum of the threads' values.
This avoids the problem that T2's time value might already trigger a
flow timeout: at the flow's last timestamp plus 100000s, the flow would
almost certainly be considered timed out.
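A rough sketch of the minimum-of-threads idea, with illustrative
names and a simplified seconds-only clock (not the actual code):

    #include <stdint.h>

    #define MAX_THREADS 64

    /* per worker-thread timestamp of the packet it is processing;
     * real code needs proper synchronization, elided here */
    static uint64_t thread_pkt_ts[MAX_THREADS];
    static int num_threads;

    /* workers record the timestamp of their current packet */
    void ThreadSetPacketTime(int tid, uint64_t ts)
    {
        thread_pkt_ts[tid] = ts;
    }

    /* management threads use the minimum of all per-thread values,
     * so a thread running far ahead (e.g. +100000s) cannot cause
     * premature flow timeouts */
    uint64_t OfflineGetCurrentTime(void)
    {
        uint64_t min = UINT64_MAX;
        for (int i = 0; i < num_threads; i++) {
            uint64_t ts = thread_pkt_ts[i];
            if (ts != 0 && ts < min)
                min = ts;
        }
        return (min == UINT64_MAX) ? 0 : min;
    }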
Instead of handling the packet update during flow lookup, handle
it in the stream/detect threads. This lowers the load of the
capture thread(s) in autofp mode.
The decoders now set a flag in the packet if the packet needs a
flow lookup; the workers then take care of it. The decoders also
already calculate the raw flow hash value, so that it can be used
for flow balancing in autofp.
Because the flow lookup/creation is now done in the worker threads,
the flow balancing can no longer use the flow. It's not yet
available. Autofp load balancing uses raw hash values instead.
Along the same lines, move the UDP AppLayer handling out of the
DecodeUDP module and into the stream/detect threads as well.
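A sketch of the deferred lookup and raw-hash balancing; the struct,
field and flag names are illustrative stand-ins:

    #include <stdint.h>

    /* stand-in packet */
    typedef struct Packet_ {
        uint32_t flow_hash;           /* raw 5-tuple hash set by decoder */
        uint32_t flags;
    } Packet;
    #define PKT_WANTS_FLOW (1u << 0)  /* decoder requests a flow lookup */

    /* decoder: no lookup anymore, just flag the packet and compute
     * the raw hash over the 5-tuple (hash details elided) */
    void DecodeSketch(Packet *p, uint32_t five_tuple_hash)
    {
        p->flags |= PKT_WANTS_FLOW;
        p->flow_hash = five_tuple_hash;
    }

    /* autofp balancer: the flow doesn't exist yet at this point,
     * so balance on the decoder-computed raw hash instead */
    int AutofpPickQueue(const Packet *p, int nqueues)
    {
        return (int)(p->flow_hash % (uint32_t)nqueues);
    }

    /* worker/stream thread: perform the lookup the decoder deferred */
    void WorkerHandlePacket(Packet *p)
    {
        if (p->flags & PKT_WANTS_FLOW) {
            /* flow lookup/creation, then stream, app-layer, detect */
        }
    }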
Handle TCP session reuse inside the flow engine itself. If a looked-up
flow matches the packet, but the packet is a TCP stream starter (SYN),
check if the ssn needs to be reused. If that is the case, handle it
within the lookup function. This simplifies the locking and removes
potential race conditions.
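Roughly, the reuse check moves into the lookup itself; the helper
names below are illustrative placeholders, not the real functions:

    typedef struct Flow_ Flow;
    typedef struct FlowBucket_ FlowBucket;
    typedef struct Packet_ Packet;

    /* helpers assumed for the sketch */
    Flow *BucketFindMatch(FlowBucket *fb, const Packet *p);
    int   PacketIsTcpStreamStarter(const Packet *p);   /* SYN check */
    int   SsnNeedsReuse(const Flow *f, const Packet *p);
    Flow *FlowReplaceForReuse(FlowBucket *fb, Flow *f, const Packet *p);

    /* called with the bucket lock held: if the matching flow holds a
     * ssn that must be reused, replace it here, so no other thread
     * can race between the lookup and the reuse handling */
    Flow *FlowLookupSketch(FlowBucket *fb, Packet *p)
    {
        Flow *f = BucketFindMatch(fb, p);
        if (f != NULL && PacketIsTcpStreamStarter(p) &&
            SsnNeedsReuse(f, p)) {
            f = FlowReplaceForReuse(fb, f, p);
        }
        return f;
    }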
Update the Flow lookup functions to take a flow reference during
lookup. This reference is set under the FlowBucket lock.
This paves the way to no longer taking the flow lock during lookups.
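A minimal sketch of taking the reference under the bucket lock
(stand-in types, matching logic elided):

    #include <pthread.h>

    typedef struct Flow_ { int use_cnt; /* ... */ } Flow;
    typedef struct FlowBucket_ {
        pthread_mutex_t m;
        Flow *head;
    } FlowBucket;

    /* the reference is taken while the bucket lock is held, so the
     * flow can't be freed between the lookup and its use; keeping
     * the flow alive no longer requires taking the flow lock */
    Flow *FlowLookupRefSketch(FlowBucket *fb)
    {
        pthread_mutex_lock(&fb->m);
        Flow *f = fb->head;            /* matching logic elided */
        if (f != NULL)
            f->use_cnt++;              /* reference under bucket lock */
        pthread_mutex_unlock(&fb->m);
        return f;
    }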
Match on the server name indication (SNI) extension in TLS using the
tls_sni keyword, e.g.:
alert tls any any -> any any (msg:"SNI test"; tls_sni;
content:"example.com"; sid:12345;)
This new API allows for different SPM implementations, using a function
pointer table like that used for MPM.
This change also switches over the paths that make use of
DetectContentData (which previously used BoyerMoore directly) to the new
API.
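The table could look roughly like the following; the struct and
member names here are illustrative, mirroring the MPM-style layout
rather than quoting the actual definition:

    #include <stdint.h>

    typedef struct SpmCtx_ SpmCtx;   /* per-pattern context, opaque */

    /* one entry per SPM implementation (e.g. Boyer-Moore, or an
     * accelerated variant) */
    typedef struct SpmTableElmt_ {
        const char *name;
        SpmCtx *(*InitCtx)(const uint8_t *needle, uint16_t needle_len,
                           int nocase);
        void (*DestroyCtx)(SpmCtx *ctx);
        uint8_t *(*Scan)(const SpmCtx *ctx, const uint8_t *haystack,
                         uint32_t haystack_len);
    } SpmTableElmt;

Callers select an implementation once and then go through the
function pointers, just as with MPM.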
Bug #1771.
Direct leak of 1834 byte(s) in 1 object(s) allocated from:
#0 0x4e2e65 in realloc ??:?
#1 0xcec27b in LogTlsLogPem /home/mats/suricata/src/log-tlsstore.c:130
#2 0xcea4f5 in LogTlsStoreLogger /home/mats/suricata/src/log-tlsstore.c:303
#3 0xd8b99c in OutputPacketLog /home/mats/suricata/src/output-packet.c:104
Sometimes we want to log when we reach a specified state instead of
waiting for the session to end. E.g. for TLS we want to log as soon
as the handshake is done.
To do this, a new logger is added, where it is possible to specify
a custom "ProgressCompletionStatus".
Add function AppLayerParserRegisterLoggerFuncs for registering
a callback function for checking if a specific logger has logged
a transaction, and a callback function for specifying that it has.
Also add functions AppLayerParserGetTxLogged and
AppLayerParserSetTxLogged to invoke these callback functions.
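In simplified form, the pair of callbacks looks like this
(signatures are illustrative simplifications):

    #include <stdint.h>

    /* has a given logger already logged this tx / mark it as logged */
    typedef int  (*GetTxLoggedFn)(void *tx, uint32_t logger_id);
    typedef void (*SetTxLoggedFn)(void *tx, uint32_t logger_id);

    typedef struct LoggerFuncsSketch_ {
        GetTxLoggedFn GetTxLogged;
        SetTxLoggedFn SetTxLogged;
    } LoggerFuncsSketch;

    static LoggerFuncsSketch logger_funcs;

    /* registration stores the callbacks; the Get/Set wrappers then
     * simply invoke them for a given tx and logger */
    void RegisterLoggerFuncsSketch(GetTxLoggedFn get, SetTxLoggedFn set)
    {
        logger_funcs.GetTxLogged = get;
        logger_funcs.SetTxLogged = set;
    }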
Change AppLayerParserRegisterGetStateProgressCompletionStatus to
only store one ProgressCompletionStatus callback function for each
alproto, instead of storing one for each ipproto.
This enables us to use AppLayerParserGetStateProgressCompletionStatus
in functions where we do not know the ipproto used.
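The effect of the change, sketched with illustrative names and
sizes:

    #include <stdint.h>

    #define ALPROTO_MAX_SKETCH 32
    typedef int (*CompletionFn)(uint8_t direction);

    /* one callback per alproto, no longer one per (ipproto, alproto) */
    static CompletionFn completion_status[ALPROTO_MAX_SKETCH];

    void RegisterCompletionSketch(int alproto, CompletionFn fn)
    {
        completion_status[alproto] = fn;
    }

    /* callable without knowing the ipproto */
    int GetCompletionSketch(int alproto, uint8_t direction)
    {
        return completion_status[alproto](direction);
    }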