This replaces the tcp.reused_ssn counter. The flow engine now
enforces the TCP flow reuse logic.
The counter is incremented only when the flow is timed out, that is,
after the "tcp closed" timeout has expired for the flow.
Until now, TCP session reuse was handled in the TCP stream engine.
If the state was TCP_CLOSED, a new SYN packet was received, and a few
other conditions were met, the flow was 'reset' and reused for the
'new' TCP session.
There are a number of problems with this approach:
- it breaks the normal flow lifecycle wrt timeout, detection, logging
- new TCP sessions could come in on different threads due to mismatches
in timeouts between suricata and flow balancing hw/nic/drivers
- cleanup code was often causing problems
- it complicated locking because of the possible thread mismatch
This patch implements a different solution, where a new TCP session also
gets a new flow. To do this, two main things needed to be done:
1. the flow engine needed to be aware of when the TCP reuse case was
happening
2. the flow engine needed to be able to 'skip' the old flow once it was
replaced by a new one
To handle (1), a new function TcpSessionPacketSsnReuse() is introduced
to check for the TCP reuse conditions. It's called from 'FlowCompare()'
for TCP packets / TCP flows that are candidates for reuse. FlowCompare
returns FALSE for the 'old' flow in the case of TCP reuse.
This in turn will lead to the flow engine not finding a flow for the TCP
SYN packet, resulting in the creation of a new flow.
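A minimal sketch of such a check, using stand-in types instead of
Suricata's real Packet and TcpSession structs (the conditions in the
actual TcpSessionPacketSsnReuse() are more involved):

    #include <stdint.h>

    #define TH_SYN 0x02
    enum TcpState { TCP_NONE, TCP_ESTABLISHED, TCP_CLOSED };
    typedef struct { enum TcpState state; uint32_t isn; } TcpSession;
    typedef struct { uint8_t tcp_flags; uint32_t tcp_seq; } Packet;

    /* A SYN arriving on a CLOSED session signals session reuse. The
     * sequence number test (an assumption here) avoids treating a
     * retransmit of the old session's SYN as reuse. */
    static int TcpSessionPacketSsnReuse(const Packet *p, const TcpSession *ssn)
    {
        if ((p->tcp_flags & TH_SYN) && ssn != NULL &&
            ssn->state == TCP_CLOSED && p->tcp_seq != ssn->isn)
            return 1;
        return 0;
    }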
To handle (2), FlowCompare flags the 'old' flow. This flag causes future
FlowCompare calls to always return FALSE on it. In other words, the flow
can't be found anymore. It can only be accessed by:
1. existing packets with a reference to it
2. flow timeout handling as this logic gets the flows from walking the
hash directly
3. flow timeout pseudo packets, as they are set up by (2)
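Building on the types of the previous sketch, FlowCompare() could
combine the two mechanisms like this (the FLOW_TCP_REUSED name follows
the description; its value and the helper are illustrative):

    #define FLOW_TCP_REUSED 0x01

    typedef struct Flow_ {
        uint32_t flags;
        void *protoctx;               /* TcpSession for TCP flows */
    } Flow;

    /* 5-tuple comparison, elided for brevity */
    static int FlowCompareKeys(const Flow *f, const Packet *p)
    {
        (void)f; (void)p;
        return 1;
    }

    /* A flagged flow never matches again; a reuse-triggering SYN flags
     * the old flow and reports 'no match', so a fresh flow is created. */
    static int FlowCompare(Flow *f, const Packet *p)
    {
        if (f->flags & FLOW_TCP_REUSED)
            return 0;                 /* 'unfindable' old flow */
        if (!FlowCompareKeys(f, p))
            return 0;
        if (TcpSessionPacketSsnReuse(p, f->protoctx)) {
            f->flags |= FLOW_TCP_REUSED;
            return 0;
        }
        return 1;
    }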
The old flow will time out normally, as governed by the "tcp closed"
flow timeout setting. At timeout, the normal detection, logging and
cleanup code will process it.
The flagging of a flow to make it 'unfindable' in the flow hash is a bit
of a hack. The reason for choosing this approach over, for example,
putting the old flow into a forced-timeout queue, is that such a queue
could easily become a contention point. The TCP session reuse case can
easily be created by an attacker, so with multiple packet handlers this
could lead to contention on such a flow timeout queue.
The flow keyword used flag names that were shared with the
Packet::flowflags field. Some of the flags weren't used by the packet
though. This led to wasting some 'flag space'.
This patch defines dedicated flags for the flow keyword and removes
the otherwise unused flags from the FLOW_PKT_* space.
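A sketch of what the split could look like; the exact names and bit
values in the patch may differ:

    /* Packet::flowflags keeps only what packets actually set */
    #define FLOW_PKT_TOSERVER            0x01
    #define FLOW_PKT_TOCLIENT            0x02
    #define FLOW_PKT_ESTABLISHED         0x04

    /* the detect 'flow' keyword gets its own flag space, so keyword-only
     * states no longer occupy bits in FLOW_PKT_* */
    #define DETECT_FLOW_FLAG_TOSERVER    0x01
    #define DETECT_FLOW_FLAG_TOCLIENT    0x02
    #define DETECT_FLOW_FLAG_ESTABLISHED 0x04
    #define DETECT_FLOW_FLAG_STATELESS   0x08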
FilePrune would clear the files, but not free them and remove them
from the list. This led to ever growing lists in some cases.
Especially in HTTP sessions with many transactions, this could slow
us down.
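A sketch of the fix, assuming a simple singly linked file list;
Suricata's File and FileContainer structures carry more state:

    #include <stdlib.h>

    typedef struct File_ {
        struct File_ *next;
        int done;                /* fully processed: safe to prune */
        /* ... data chunks ... */
    } File;

    typedef struct FileContainer_ {
        File *head, *tail;
    } FileContainer;

    /* Free and unlink completed files instead of merely clearing their
     * data, so the list no longer grows with every transaction. */
    static void FilePrune(FileContainer *fc)
    {
        while (fc->head != NULL && fc->head->done) {
            File *f = fc->head;
            fc->head = f->next;
            if (fc->head == NULL)
                fc->tail = NULL;
            free(f);             /* previously the File stayed on the list */
        }
    }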
Until this point, the flow manager would check for timed out flows
by walking the flow hash, locking first the hash row and then each
individual flow to get its state and timestamp. To not be too
intrusive, trylocks were used so that a busy flow wouldn't cause the
flow manager to wait for a long time while holding the hash row lock.
Building on the changes in handling of the flow state and lastts
fields, this patch changes the flow manager's behavior.
It can now get a flow's state atomically, and lastts can be safely
read while holding just the flow hash row lock. This allows the flow
manager to do the basic timeout check much more cheaply:
1. it doesn't have to wait to get a lock
2. it doesn't interrupt the packet path
As a consequence the trylock is now also gone. A flow that returns
'true' on the timeout check is almost certainly not busy, so we can
safely lock it unconditionally. This also means the flow manager now
walks the entire row unconditionally and is guaranteed to inspect each
flow in the row.
To make sure the functions called before taking the flow lock don't
accidentally change the flow (which would require a lock), the args
to these functions are changed to const pointers.
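A sketch of the resulting check, with stand-in types and illustrative
names; the const pointer lets the compiler enforce that the pre-lock
check cannot modify the flow:

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/time.h>

    enum FlowState { FLOW_STATE_NEW, FLOW_STATE_ESTABLISHED, FLOW_STATE_CLOSED };

    typedef struct Flow_ {
        _Atomic int flow_state;  /* readable without the flow lock */
        struct timeval lastts;   /* stable while the row lock is held */
    } Flow;

    /* Illustrative per-state timeouts; real values come from the config. */
    static uint32_t FlowGetTimeout(const Flow *f, int state)
    {
        (void)f;
        return (state == FLOW_STATE_CLOSED) ? 10 : 60;
    }

    /* Called with only the hash row lock held. Reading an _Atomic field
     * is an atomic load, so no flow lock is needed. */
    static bool FlowManagerFlowTimedOut(const Flow *f, const struct timeval *ts)
    {
        int state = f->flow_state;
        return (uint32_t)(ts->tv_sec - f->lastts.tv_sec) >= FlowGetTimeout(f, state);
    }

Only flows for which this returns true are then locked, unconditionally.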
The flow's lastts timeval field records the timestamp of the last
packet that updated the flow. This allows for tracking the timeout
of the flow. So far, this value was updated under the flow lock and also
read under the flow lock.
This patch moves the updating of this field to the FlowGetFlowFromHash
function, where it is updated at the point where both the Flow lock and
the Flow Hash Row lock are held. This guarantees that the field is only
updated when both locks are held.
This makes reading the field safe when either lock is held, which is the
purpose of this patch.
The flow manager, while holding the flow hash row lock, can now safely
read the lastts value. This allows it to do the flow timeout check
without actually locking the flow.
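A compilable sketch of the idea, using plain pthread mutexes in place
of Suricata's lock macros and leaving out the actual hash lookup:

    #include <pthread.h>
    #include <stddef.h>
    #include <sys/time.h>

    typedef struct Flow_ {
        pthread_mutex_t m;       /* flow lock */
        struct timeval lastts;
        struct Flow_ *hnext;
    } Flow;

    typedef struct FlowBucket_ {
        pthread_mutex_t m;       /* hash row lock */
        Flow *head;
    } FlowBucket;

    /* lastts is written only here, while BOTH locks are held; a reader
     * holding either lock therefore sees a stable value. */
    Flow *FlowGetFlowFromHash(FlowBucket *fb, const struct timeval *ts)
    {
        pthread_mutex_lock(&fb->m);       /* row lock */
        Flow *f = fb->head;               /* FlowCompare-based lookup elided */
        if (f != NULL) {
            pthread_mutex_lock(&f->m);    /* flow lock: both held now */
            f->lastts = *ts;              /* the only write site */
        }
        pthread_mutex_unlock(&fb->m);
        return f;                         /* caller unlocks the flow */
    }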
A flow has 3 states: NEW, ESTABLISHED and CLOSED.
For all protocols except TCP, a flow is in state NEW as long as just one
side of the conversation has been seen. When both sides have been
observed the state is moved to ESTABLISHED.
TCP has different logic, controlled by the stream engine; there the TCP
state is authoritative.
Until now, when parts of the engine needed to know the flow state, they
would invoke a per-protocol callback 'GetProtoState'. For TCP this would
return the state based on the TcpSession.
This patch changes this logic. It introduces an atomic variable in the
flow 'flow_state'. It defaults to NEW and is set to ESTABLISHED for non-
TCP protocols when we've seen both sides of the conversation.
For TCP, the state is updated from the TCP engine directly.
The goal is to allow access to the state without holding the Flow's
main mutex lock. This will later allow the Flow Manager(s) to evaluate
the Flow w/o interrupting it.
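A minimal sketch with C11 atomics; Suricata wraps atomics in its own
SC_ATOMIC_* macros, and the helper names below are illustrative:

    #include <stdatomic.h>

    enum FlowState {
        FLOW_STATE_NEW = 0,       /* only one side seen (non-TCP) */
        FLOW_STATE_ESTABLISHED,   /* both sides seen / TCP established */
        FLOW_STATE_CLOSED,        /* TCP only, set by the stream engine */
    };

    typedef struct Flow_ {
        _Atomic int flow_state;
        /* ... */
    } Flow;

    /* Writers: the packet path for non-TCP, the TCP engine for TCP. */
    static void FlowUpdateState(Flow *f, enum FlowState s)
    {
        atomic_store(&f->flow_state, s);
    }

    /* Readers such as the flow manager need no flow lock. */
    static enum FlowState FlowGetState(const Flow *f)
    {
        return (enum FlowState)f->flow_state;   /* atomic load */
    }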
The option sets in bytes the value at which segment data is passed to
the app layer API directly. Data sizes equal to or higher than the
value set are passed on directly.
Default is 128.
Create 2 'fast paths' for app layer reassembly. Both are about reducing
copying. In the cases described below, we pass the segment's data
directly to the app layer API, instead of first copying it into a buffer
that we then pass. This saves a copy.
The first is for the case when we have just one single segment that was
just ack'd. As we know that we won't use any other segment this round,
we can just use the segment data.
The second case is more aggressive. When the segment meets a certain
size limit (currently hardcoded at 128 bytes), we pass it to the
app-layer API directly, invoking the app layer somewhat more often
to save some copies.
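A sketch of the resulting decision logic; the names and the buffer
handling are illustrative, and the threshold matches the 128-byte
limit mentioned above:

    #include <stdint.h>
    #include <string.h>

    #define ZERO_COPY_SIZE 128

    /* stand-in for the app-layer entry point */
    static void AppLayerFeed(const uint8_t *data, uint32_t len)
    {
        (void)data; (void)len;
    }

    static void ReassembleSegment(const uint8_t *seg, uint32_t seg_len,
                                  int only_segment_this_round,
                                  uint8_t *buf, uint32_t buf_size,
                                  uint32_t *buf_off)
    {
        if (only_segment_this_round || seg_len >= ZERO_COPY_SIZE) {
            /* fast paths: pass the segment data directly, no copy */
            AppLayerFeed(seg, seg_len);
        } else if (*buf_off + seg_len <= buf_size) {
            /* slow path: copy into the buffer that is passed later */
            memcpy(buf + *buf_off, seg, seg_len);
            *buf_off += seg_len;
        }
    }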
When reaching the end of either list, merging is no longer required;
simply walk down the other list.
If the non-MPM list can't have duplicates, it would be worth removing
the duplicate check for the non-MPM list when it is the only non-empty list
remaining.
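A sketch of the merge with this tail fast path, simplified to sorted
arrays of signature ids:

    #include <stddef.h>
    #include <stdint.h>

    static size_t MergeSigIdLists(const uint32_t *mpm, size_t nm,
                                  const uint32_t *nonmpm, size_t nn,
                                  uint32_t *out)
    {
        size_t i = 0, j = 0, n = 0;
        while (i < nm && j < nn) {
            if (mpm[i] < nonmpm[j])
                out[n++] = mpm[i++];
            else if (nonmpm[j] < mpm[i])
                out[n++] = nonmpm[j++];
            else {
                out[n++] = mpm[i++];   /* duplicate across lists: emit once */
                j++;
            }
        }
        /* end of either list reached: no merging, walk the remainder.
         * No duplicate check here, assuming neither list repeats an id. */
        while (i < nm) out[n++] = mpm[i++];
        while (j < nn) out[n++] = nonmpm[j++];
        return n;
    }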