Create 2 'fast paths' for app layer reassembly. Both are about reducing
copying. In the cases described below, we pass the segment's data
directly to the app layer API, instead of first copying it into a
buffer that we then pass. This saves a copy.
The first is for the case where we have just a single segment that was
just ack'd. As we know that we won't use any other segment this round,
we can use the segment's data directly.
The second case is more aggressive. When the segment meets a certain
size limit (currently hardcoded at 128 bytes), we pass it to the
app-layer API directly, thus invoking the app layer somewhat more
often to save some copies.
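A minimal sketch of the two fast paths, with hypothetical names (the
real segment layout and app-layer entry point differ):

    #include <stdint.h>
    #include <string.h>

    #define FASTPATH_SIZE_LIMIT 128     /* the hardcoded size limit */

    typedef struct Segment_ {
        uint8_t *payload;
        uint32_t payload_len;
    } Segment;

    /* stand-in for the app-layer API entry point */
    void AppLayerHandleData(const uint8_t *data, uint32_t len);

    void ReassembleUpdate(Segment *seg, int only_segment_this_round,
                          uint8_t *buf, uint32_t *buf_offset)
    {
        if (only_segment_this_round ||
            seg->payload_len <= FASTPATH_SIZE_LIMIT) {
            /* fast path: hand the segment's data to the app layer
             * directly, saving the copy into 'buf' */
            AppLayerHandleData(seg->payload, seg->payload_len);
        } else {
            /* regular path: copy into the intermediate buffer first */
            memcpy(buf + *buf_offset, seg->payload, seg->payload_len);
            *buf_offset += seg->payload_len;
        }
    }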
Implement StreamTcpSetDisableRawReassemblyFlag() which stops raw
reassembly for _NEW_ segments in a stream direction.
It is used only by TLS/SSL now, to flag the streams as encrypted.
Existing segments will still be reassembled and inspected, while
new segments won't be. This allows for pattern based inspection
of the TLS handshake.
As with completely disabled 'raw' reassembly, the logic is that the
new segments are flagged as completed for 'raw' right away, so they
are never considered in raw reassembly. As no new segments will be
considered, the chunk limit check will return true on the next call.
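A sketch of what setting the flag looks like; the flag value and the
struct layout here are assumptions, not the actual Suricata definitions:

    #include <stdint.h>

    #define STREAMTCP_STREAM_FLAG_NEW_RAW_DISABLED (1 << 0) /* assumed value */

    typedef struct TcpStream_ { uint16_t flags; } TcpStream;
    typedef struct TcpSession_ { TcpStream client; TcpStream server; } TcpSession;

    /* stop raw reassembly for new segments in one direction only;
     * segments already in the stream remain inspectable */
    void StreamTcpSetDisableRawReassemblyFlag(TcpSession *ssn, char direction)
    {
        if (direction)
            ssn->server.flags |= STREAMTCP_STREAM_FLAG_NEW_RAW_DISABLED;
        else
            ssn->client.flags |= STREAMTCP_STREAM_FLAG_NEW_RAW_DISABLED;
    }

The TLS/SSL parser would call this per direction once the handshake is
done and the stream turns into encrypted records.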
Have a single function StreamTcpReturnSegmentCheck determine if a
segment is ready to be removed from the stream.
Handle FLOW_NOPAYLOAD_INSPECT in raw reassembly.
Change GAP detection logic. If we encounter missing data before
last_ack, we know that data is gone for good: the receiving host has
ack'd it already, so a retransmission of the missing data is highly
unlikely.
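The check boils down to wrap-around safe sequence number comparisons;
a sketch with assumed variable and function names:

    #include <stdint.h>

    /* wrap-around safe TCP sequence number comparison */
    #define SEQ_GT(a, b) ((int32_t)((a) - (b)) > 0)

    /* next_seq: where contiguous data should continue;
     * last_ack: highest ack seen from the receiving host */
    int StreamDetectGap(uint32_t seg_seq, uint32_t next_seq, uint32_t last_ack)
    {
        /* data between next_seq and seg_seq is missing; if the receiver
         * already ack'd past next_seq, a retransmission is highly
         * unlikely, so declare a gap */
        return SEQ_GT(seg_seq, next_seq) && SEQ_GT(last_ack, next_seq);
    }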
AppLayer reassembly correctly only flags a segment as done when it's
completely used in reassembly. Raw reassembly could flag a partially
used segment as complete as well. In this case the segment could be
discarded early. Further reassembly would miss data, leading to a
gap. Due to this, up to 'window size' bytes worth of segments could
remain in the session for a long time, leading to memory resource
pressure.
This patch sets the flag correctly.
The reassembly gap detection makes use of the window. However, in
midstream mode the window size we use is unreliable, as we have to
assume window scaling is in place. This patch improves midstream
handling of those cases.
In midstream mode we may encounter a case where the data we receive is
beyond the isn, but below last_ack. This means we're missing data that
is already ack'd, so it won't be retransmitted. Therefore, we can
conclude it's a data gap.
Until now a PoolInit failure for the segment pools would result in
an abort() through BUG_ON(). This patch adds a proper error message,
then exits.
Bug #1108.
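A minimal sketch of the change described above, with PoolInit's
argument list simplified and a hypothetical wrapper function name:

    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the Pool API setup call that can fail */
    extern void *PoolInit(void);

    static void *segment_pool;

    void StreamTcpReassembleInitSegmentPool(void)
    {
        segment_pool = PoolInit();
        if (segment_pool == NULL) {
            /* proper error message and clean exit instead of BUG_ON() */
            fprintf(stderr, "segment pool init failed, check "
                            "stream.reassembly.memcap\n");
            exit(EXIT_FAILURE);
        }
    }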
When TcpSegmentPoolInit fails (e.g. because the memcap is set too low),
it would free the segment. However, the segment memory is managed
by the Pool API, which would also free the same memory location.
This patch fixes that double free.
Also, memset the structure before any checks are done, as the segment
memory is passed to TcpSegmentPoolCleanup in case of error as well.
Bug #1108
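A sketch of the corrected init order described in this patch, using a
simplified segment type and a stand-in memcap check:

    #include <stdint.h>
    #include <string.h>

    typedef struct TcpSegment_ {
        uint8_t *payload;
        uint16_t payload_len;
    } TcpSegment;

    /* stand-in for the real memcap check */
    extern int StreamTcpReassembleCheckMemcap(uint32_t size);

    static int TcpSegmentPoolInit(void *data, void *initdata)
    {
        TcpSegment *seg = data;
        uint16_t size = *(uint16_t *)initdata;

        /* memset first: on error the same memory is handed to
         * TcpSegmentPoolCleanup, which must see a defined state */
        memset(seg, 0, sizeof(*seg));

        if (StreamTcpReassembleCheckMemcap(size) == 0)
            return 0;   /* fail, but do NOT free 'seg': the Pool API
                         * owns and frees that memory itself */

        /* ... payload buffer setup elided ... */
        seg->payload_len = size;
        return 1;
    }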
The stream chunk pool contains preallocated stream chunks (StreamMsg).
These are used for raw reassembly, which the detection engine uses for
raw content inspection. The default prealloc setting so far has been
250, which was hardcoded. This meant that in setups that needed more,
allocs and frees would happen constantly.
This patch introduces a yaml option to set the 'prealloc' value in the
pool. The default is still 250.
stream.reassembly.chunk-prealloc
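For example, a setup that needs more chunks could set:

    stream:
      reassembly:
        chunk-prealloc: 500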
Related to feature #1093.
The stream reassembly engine uses a set of pools in which preallocated
segments are stored. There are various pools, each for a different
packet size. The goal is to lower memory pressure. Until now, these
pools were hardcoded.
This patch introduces the ability to configure them fully from the yaml.
There can be at most 256 of these pools.
Yaml layout is as follows:

stream:
  reassembly:
    segments:
      - size: 2048
        prealloc: 3000
      - size: 4
        prealloc: 1000
      - size: 1024
        prealloc: 2000
The size is the packet size. The prealloc value indicates how many
segments are set up at startup.
The pools have no limit with regard to how many segments of a certain
size can be used. If the engine needs more than the prealloc size,
segments are malloc'd and free'd. The only limit here is
stream.reassembly.memcap.
If the yaml part is omitted, the default values are the same as before.
Feature #1093
Move app layer event handling into app-layer-event.[ch].
Convert 'Set' macros to functions.
Get rid of duplication in Set and SetRaw; Set now calls SetRaw.
Fix a potential int overflow condition in the event storage.
Update callers.
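A sketch of the deduplication and the overflow guard; the struct layout
and signatures here are simplified illustrations of the description
above:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct AppLayerDecoderEvents_ {
        uint8_t *events;    /* event id storage (allocation elided) */
        uint8_t cnt;
    } AppLayerDecoderEvents;

    static void AppLayerDecoderEventsSetEventRaw(AppLayerDecoderEvents *d,
                                                 uint8_t event)
    {
        if (d == NULL || d->events == NULL || d->cnt == UINT8_MAX)
            return;             /* the int overflow guard */
        d->events[d->cnt++] = event;
    }

    /* 'Set' is now a thin wrapper calling SetRaw, removing duplication */
    void AppLayerDecoderEventsSetEvent(AppLayerDecoderEvents *d, uint8_t event)
    {
        AppLayerDecoderEventsSetEventRaw(d, event);
    }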
To be able to register counters from AppLayerGetCtxThread, the
ThreadVars pointer needs to be available in it, and thus in its
callers:
- AppLayerGetCtxThread
- DecodeThreadVarsAlloc
- StreamTcpReassembleInitThreadCtx
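In terms of prototypes, the tv pointer now travels down the call chain
(stub types, simplified signatures):

    typedef struct ThreadVars_ ThreadVars;
    typedef struct AppLayerThreadCtx_ AppLayerThreadCtx;
    typedef struct DecodeThreadVars_ DecodeThreadVars;

    /* each now takes the ThreadVars so counters can be registered
     * against the right thread */
    AppLayerThreadCtx *AppLayerGetCtxThread(ThreadVars *tv);
    DecodeThreadVars *DecodeThreadVarsAlloc(ThreadVars *tv);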
StreamMsgs would be stored in a per thread queue before being
attached to the tcp ssn. This is unnecessary, so this patch
removes this queue and puts the smsgs into the ssn directly.
Large patch as it affects a lot of tests.
StreamMsg's flow reference was used mostly to make sure a flow would
not get removed from the hash before inspection. For this it needed
to hold a flow use_cnt reference. Nowadays we have more advanced flow
timeout handling, which will make sure that if there are still
pending smsgs in a flow, these will still be processed.
Preparation for removing flow pointer from StreamMsg. Instead of
getting the ssn indirectly through StreamMsg->flow, we pass it
directly as all callers have it already.
StreamMsgs are used for raw stream reassembly only. They could also
be used to tell the rest of the engine about sequence gaps. This was
a leftover from the older implementation, where the app layer used
the smsgs as well.
Rework the app layer API in app-layer.[ch], app-layer-detect-proto.[ch]
and app-layer-parser.[ch].
Things addressed in this commit:
- Brings out a proper separation between protocol detection phase and the
parser phase.
- The dns app layer is now registered such that we don't use "dnstcp"
and "dnsudp" in the rules. A user who previously wrote a rule like
  "alert dnstcp....." or
  "alert dnsudp....."
would now have to use
  alert dns (ipproto:tcp;) or
  alert udp (app-layer-protocol:dns;) or
  alert ip (ipproto:udp; app-layer-protocol:dns;)
The same rules extend to another such protocol, dcerpc.
- The app layer parser api now takes in the ipproto while registering
callbacks.
- The app inspection/detection engine also takes an ipproto.
- All app layer parser functions now take direction as STREAM_TOSERVER
or STREAM_TOCLIENT, as opposed to the 0 or 1 that some of the functions
took before.
- FlowInitialize() and FlowRecycle() now reset proto to 0. This is
needed by unittests, which try to clean the flow. That calls the api
AppLayerParserCleanupParserState(), which tries to clean the app state,
but the app layer now needs an ipproto to figure out which internal api
to call; if the ipproto is 0, it returns without trying to clean the
state.
- A lot of unittests are updated: where they use a flow and need the
app layer, we now set the flow's ipproto.
- The "app-layer" section in the yaml conf has been updated as well.
stream-tcp-reassemble.c:2569:17: warning: Value stored to 'seg' is never read
seg = seg->next;
^ ~~~~~~~~~
stream-tcp-reassemble.c:2587:17: warning: Value stored to 'seg' is never read
seg = seg->next;
The uint8_t *pkt in the Packet structure always points to the memory
immediately following the Packet structure. It is better to simply
calculate that value every time than to store the 8 byte pointer.
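A sketch of the replacement: derive the data pointer from the struct
address instead of storing it (macro name illustrative):

    #include <stdint.h>

    typedef struct Packet_ {
        /* ... flags, headers, counters ... */
        int placeholder;
    } Packet;

    /* packet data always sits immediately after the Packet struct,
     * so compute the pointer instead of spending 8 bytes storing it */
    #define GET_PKT_DIRECT_DATA(p) ((uint8_t *)((p) + 1))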
Raw reassembly is used only by the detection engine. For users only
caring about logging it's a significant overhead, both in cpu and
memory usage.
The option is called 'raw' and lives under the stream.reassembly
options.
stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
    raw: false                  # <- new option
When checking the reassembly limit for raw reassembly, treat
STREAMTCP_STREAM_FLAG_NOREASSEMBLY as an immediate trigger. We won't
process any more segments in the reassembly engine anyway.
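In pseudo-C, the limit check gains an early trigger (flag value and
signature are assumptions, not the real function):

    #include <stdint.h>

    #define STREAMTCP_STREAM_FLAG_NOREASSEMBLY (1 << 3) /* assumed value */

    typedef struct TcpStream_ { uint16_t flags; } TcpStream;

    int StreamTcpReassembleRawCheckLimit(const TcpStream *stream,
                                         uint32_t avail, uint32_t chunk_size)
    {
        /* no more segments will be processed for this stream, so
         * trigger immediately instead of waiting for the chunk limit */
        if (stream->flags & STREAMTCP_STREAM_FLAG_NOREASSEMBLY)
            return 1;

        return avail >= chunk_size;
    }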
When multiple segments were put into a smsg, the seq would be updated
each time a segment was added. Because of this, the seq wasn't pointing
to the start of the data.
This caused some false negatives when the fast_pattern was in the raw
stream, but another part of the inspection was in the state. Because of
the wrong seq, the inspection of the smsg could be delayed. This, in
turn, could make the inspection engine consider a TX inspected, even
though it wasn't fully inspected yet.
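A sketch of the fix: record the seq only when the first segment is
appended, so it keeps pointing at the start of the data (names are
illustrative, bounds checks elided):

    #include <stdint.h>
    #include <string.h>

    typedef struct StreamMsg_ {
        uint32_t seq;       /* must be the seq of the first byte of data */
        uint32_t data_len;
        uint8_t  data[4096];
    } StreamMsg;

    void StreamMsgAppend(StreamMsg *smsg, uint32_t seg_seq,
                         const uint8_t *payload, uint32_t len)
    {
        if (smsg->data_len == 0)
            smsg->seq = seg_seq;    /* first segment only; previously this
                                     * was updated on every append */
        memcpy(smsg->data + smsg->data_len, payload, len);
        smsg->data_len += len;
    }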