Store the tenant id in the flow and use the stored id when setting
up pseudo packets.
For tunnel and defrag packets, get the tenant from the parent packet. This
will only pass on tenant_ids set at capture time.
For defrag packets, the tenant selector based on vlan id will still
work as the vlan id(s) are stored in the defrag tracker before being
passed on.
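As a rough illustration (not the literal code in the tree), setting up a
flow timeout pseudo packet now takes the id straight from the flow,
assuming both the Flow and Packet structs carry a tenant_id field:

    /* sketch: reuse the tenant id stored on the flow for the pseudo packet */
    Packet *p = PacketGetFromAlloc();
    if (p != NULL)
        p->tenant_id = f->tenant_id;   /* instead of re-running the tenant selector */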
Set noinspection flags for payloads and packets on flow and stream
pseudo packets. Without these, the pseudo packets could trigger
inspection even though inspection was disabled for the flow.
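A minimal sketch of what this looks like on a pseudo packet, using the
existing decode.h helpers:

    /* sketch: mark the pseudo packet so neither the packet nor its
     * payload is run through inspection */
    DecodeSetNoPacketInspectionFlag(p);    /* sets PKT_NOPACKET_INSPECTION */
    DecodeSetNoPayloadInspectionFlag(p);   /* sets PKT_NOPAYLOAD_INSPECTION */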
The flow timeout mechanism called both from the flow manager at run time
and at shutdown creates pseudo packets. For this it has its own packet
pool, which can be depleted if the timeout logic is faster than the packet
processing threads. In this case the flow timeout would enter a wait loop.
The problem, however, is that this wait loop would happen while keeping a
flow locked. This could lead to a deadlock when the packet thread(s)
are waiting for the lock that the flow manager holds.
This patch introduces a new packet pool call, 'PacketPoolWaitForN', meant
to make sure that the thread's packet pool has at least N available
packets. The flow timeout paths use this to make sure enough packets are
available *before* grabbing the flow lock. If there aren't enough packets
available yet, the wait happens before the lock as well.
This still means the wait can happen while the flow hash row is locked, so
we also make sure some extra packets are available before entering that.
But perhaps in the future we need more precise logic there as well.
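A minimal sketch of the intended call order, assuming two pseudo packets
(one per direction) are needed for a flow:

    /* sketch: reserve the packets first, then take the flow lock, so any
     * waiting on the packet pool happens without the lock held */
    PacketPoolWaitForN(2);
    FLOWLOCK_WRLOCK(f);
    /* ... build and inject the timeout pseudo packets ... */
    FLOWLOCK_UNLOCK(f);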
Don't kill the flow manager and recycler threads before the rest of the
threads. The packet threads may still have packets from their pools. As the
flow threads would destroy their pools, those packets would be lost.
This patch doesn't kill the threads; it just pulls them out of their run
loop and into a wait loop. The packet pools won't be cleared until all
threads are killed.
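Conceptually the run loop now ends in something like this (illustrative
only):

    /* sketch: park the flow manager until shutdown really kills it, so
     * its packet pool stays intact while workers can still return packets */
    while (!TmThreadsCheckFlag(tv, THV_KILL)) {
        usleep(100);
    }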
Wait for flow management threads to close before moving on to the
next steps in the shutdown process.
Don't destroy flow force reassembly packet pool too early. Worker
threads may still want to return packets to it.
The code was not checking if we had enough room in the direct
data. In case default_packet_size was set really small, this was
resulting in data being written past the end of the direct data and
causing a crash.
The patch fixes the issue by forcing an allocation if the direct
data size in the Packet is too small.
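One way the guard can look, using the existing Packet copy helpers (a
sketch, not the exact patch):

    /* sketch: route the copy through PacketCopyData(), which checks the
     * room in the direct data (default_packet_size) and falls back to an
     * allocated ext_pkt buffer when the data is larger */
    if (PacketCopyData(p, data, datalen) == -1)
        return -1;   /* allocation failed, don't write into direct data */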
Update pktacq loop to process flow timeouts in a running engine.
Add a new step to the shutdown phase of packet acquisition loop
threads (pktacqloop).
The shutdown code lets the pktacqloop break out of its packet
acquisition loop. The thread then enters a flow timeout loop, where
it processes packets from its tv->stream_pq queue until that queue is
empty _and_ the KILL flag is set.
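A rough sketch of that flow timeout loop (queue locking and error handling
elided):

    /* sketch: keep servicing injected flow timeout packets until the
     * queue is drained and the kill flag is set */
    while (1) {
        if (TmThreadsCheckFlag(tv, THV_KILL) && tv->stream_pq->len == 0)
            break;

        Packet *p = PacketDequeue(tv->stream_pq);
        if (p == NULL) {
            usleep(10);   /* nothing queued yet, wait for the kill flag */
            continue;
        }
        TmThreadsSlotProcessPkt(tv, tv->tm_slots, p);
    }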
Make sure receive threads are done before moving on to flow hash
cleanup (recycle all). Without this the flow recycler could start
its unconditional hash cleanup while detect threads are still
running on the flows.
Update unix socket to match live modes.
Add function TmThreadsInjectPacketById that is to be used to inject flow
timeout packets into the thread's stream_pq queue.
TmThreadsInjectPacketById will also wake up listening threads if
applicable.
All packets are passed together in a NULL terminated array
to reduce locking overhead.
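For illustration, injecting the two pseudo packets of a flow could look
like this; the exact signature is an assumption based on the description
above, and p_ts/p_tc are hypothetical names for the per-direction packets:

    /* sketch: hand all pseudo packets to the owning thread in one call */
    Packet *pkts[3] = { p_ts, p_tc, NULL };       /* NULL terminated */
    TmThreadsInjectPacketById(pkts, thread_id);   /* also wakes the thread */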
Flow timeout code keeps track of the thread module running detect, and
fails (hard) if it doesn't find it.
This changeset retrieves the global g_detect_disabled and passes
it to the timeout handling code during setup.
If FlowForceReassemblyForFlowV2 can't get packets to inject into the
engine, until now it would bail and retry later. In case of resource
starvation issues, this would cause a lot of lock contention, as the
flow manager would try over and over again.
This patch limits FlowForceReassemblyForFlowV2 to one try per flow;
if it fails... bad luck. It will only fail in serious conditions,
which means we must prefer the health of the engine over the proper
inspection of the flow in question.
StreamMsgs would be stored in a per thread queue before being
attached to the tcp ssn. This is unnecessary, so this patch
removes this queue and puts the smsgs into the ssn directly.
Large patch as it affects a lot of tests.
Preparation for removing flow pointer from StreamMsg. Instead of
getting the ssn indirectly through StreamMsg->flow, we pass it
directly as all callers have it already.
The main files affected are app-layer.[ch], app-layer-detect-proto.[ch]
and app-layer-parser.[ch].
Things addressed in this commit:
- Brings out a proper separation between protocol detection phase and the
parser phase.
- The dns app layer now is registered such that we don't use "dnstcp" and
"dnsudp" in the rules. A user who previously wrote a rule like this -
"alert dnstcp....." or
"alert dnsudp....."
would now have to use,
alert dns (ipproto:tcp;) or
alert udp (app-layer-protocol:dns;) or
alert ip (ipproto:udp; app-layer-protocol:dns;)
The same change applies to another such protocol, dcerpc.
- The app layer parser api now takes in the ipproto while registering
callbacks.
- The app inspection/detection engine also takes an ipproto.
- All app layer parser functions now take direction as STREAM_TOSERVER or
STREAM_TOCLIENT, as opposed to the 0 or 1 that some of the
functions previously took.
- FlowInitialize() and FlowRecycle() now reset proto to 0. This is
needed by unittests, which try to clean up the flow; that calls
AppLayerParserCleanupParserState(), which tries to clean the app state,
but the app layer now needs an ipproto to figure out which api to
internally call to clean the state, and if the ipproto is 0, it returns
without trying to clean the state.
- A lot of unittests are updated so that if they use a flow and need
the app layer, they now set the flow's ipproto (see the sketch after
this list).
- The "app-layer" section in the yaml conf has been updated as well.
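For the unittest changes mentioned above, the setup pattern is roughly
(illustrative, not a specific test):

    /* sketch: unittests that exercise the app layer now have to set the
     * flow's ipproto (and app proto) explicitly */
    Flow f;
    memset(&f, 0, sizeof(Flow));
    FLOW_INITIALIZE(&f);
    f.proto = IPPROTO_TCP;
    f.alproto = ALPROTO_HTTP;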
Flow timeout code worked by luck when checking if a flow still needed
reassembly for app layer inspection or logging. It would check for a
part of raw reassembly (smsg list) to determine if detection was
needed. In this case it would also process app layer cleanup,
including logging.
Introduced AppLayerTransactionGetActive which returns the lowest tx_id
in a direction that still needs some work.
FlowForceReassemblyNeedReassmbly now uses it to determine if the
applayer still needs work.
Converted FlowForceReassemblyForHash to use the checking function
FlowForceReassemblyNeedReassmbly as well, so that checking if a flow
needs work is now unified.
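For illustration only, the unified check boils down to something like the
following; the exact signatures of AppLayerTransactionGetActive and the
transaction count helper are assumptions here, not copied from the tree:

    /* sketch: a direction still needs app layer work if the lowest
     * active tx id is below the total number of transactions */
    uint64_t active = AppLayerTransactionGetActive(f, STREAM_TOSERVER);
    uint64_t total  = AppLayerGetTxCnt(f->alproto, f->alstate);
    int needs_work  = (active < total);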
In FFRv2, dereference the flow from a packet using the new reference/dereference
util macros. This lets decrementing the flow's use_cnt and resetting the
pseudo packet's flow pointer to NULL happen together, in case we fail to
retrieve a pseudo packet and have to return the already obtained pseudo
packets back to the packet pool.
This patch modifies the init of Detect threads. They are now started
with a dummy function and their initialisation is done after the
signatures are loaded. Just after this, the dummy function is switched
to the normal one.
In IPS mode, this permits routing packets without waiting for the
signatures to load, and should fix #488.
Offline modes such as pcap file don't use this, to be sure all
packets in the file are analysed.
The patch introduces a "delayed-detect" configuration variable
under detect-engine. It can be used to activate the feature
(set to "yes" to have signatures loaded after capture has started).
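An illustrative yaml snippet (the exact surrounding layout of the
detect-engine section may differ):

    detect-engine:
      - delayed-detect: yes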
Major redesign of the flow engine. Remove the flow queues that turned
out to be major choke points when using many threads. Flow manager now
walks the hash table directly. Simplify the way we get a new flow in
case of emergency.
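A rough sketch of the new flow manager walk (timeout and eviction details
elided):

    /* sketch: walk the hash row by row instead of popping flows from a
     * queue; expired flows are timed out in place */
    for (uint32_t idx = 0; idx < flow_config.hash_size; idx++) {
        FlowBucket *fb = &flow_hash[idx];
        FBLOCK_LOCK(fb);
        for (Flow *f = fb->head; f != NULL; f = f->hnext) {
            /* check the flow's timeout and recycle it if expired (elided) */
        }
        FBLOCK_UNLOCK(fb);
    }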