Switch the StreamBufferBlocks implementation to use an RBTREE instead of
a list. This makes inserts/removals and lookups a lot cheaper if
the number of data gaps is large.
Use separate compare functions for inserts and regular lookups.
Inserts care about the offset, while lookups care about the block's
right edge as well.
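As an illustration of that split, here is a minimal sketch (not the actual
Suricata code), assuming a hypothetical StreamBufferBlock with an offset and
a len and the BSD <sys/tree.h> red-black tree macros: the insert compare
orders blocks purely by offset, while the lookup compare also treats any
offset that falls before the block's right edge as a match, so a search
returns the block covering that offset.

```c
#include <stdint.h>
#include <sys/tree.h>   /* BSD red-black tree macros */

typedef struct StreamBufferBlock_ {
    uint64_t offset;                     /* left edge in the stream */
    uint32_t len;                        /* block length */
    RB_ENTRY(StreamBufferBlock_) rb;
} StreamBufferBlock;

/* insert compare: order strictly by left edge (offset) */
static int SBBInsertCmp(const StreamBufferBlock *a, const StreamBufferBlock *b)
{
    if (a->offset < b->offset)
        return -1;
    if (a->offset > b->offset)
        return 1;
    return 0;
}

/* lookup compare: 'a' is the search key; any offset between b's left
 * and right edge counts as a match */
static int SBBLookupCmp(const StreamBufferBlock *a, const StreamBufferBlock *b)
{
    if (a->offset < b->offset)
        return -1;
    if (a->offset >= b->offset + b->len)
        return 1;
    return 0;
}

RB_HEAD(SBBTree, StreamBufferBlock_);
RB_PROTOTYPE(SBBTree, StreamBufferBlock_, rb, SBBInsertCmp);
RB_GENERATE(SBBTree, StreamBufferBlock_, rb, SBBInsertCmp);

/* range-aware lookup: descend manually using the lookup compare */
static StreamBufferBlock *SBBFindByOffset(struct SBBTree *tree, uint64_t offset)
{
    StreamBufferBlock key = { .offset = offset, .len = 0 };
    StreamBufferBlock *node = RB_ROOT(tree);
    while (node != NULL) {
        int r = SBBLookupCmp(&key, node);
        if (r < 0)
            node = RB_LEFT(node, rb);
        else if (r > 0)
            node = RB_RIGHT(node, rb);
        else
            return node;
    }
    return NULL;
}
```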
Convert to an rbtree from a linked list. These ranges, of which there
can be multiple per packet, are fully controlled by an attacker. The
attacker could craft a stream of packets in such a way that the list
would grow very large. This would make inserts/removals very expensive,
as well as the list walks done during size calculation and pruning
operations.
The RBTREE makes inserts/removals much cheaper, at a slight overhead
for 'normal' operations and slightly higher per-record memory use.

Don't try to do a 'fast path' by checking RB_MAX. RB_MAX walks the
tree, which means it can be quite expensive. This cost would be paid
for virtually every data segment, and the actual insert that follows
would walk the tree again.
Instead, simply insert it. There is a slight cost from the unnecessary
overlap check, but this is much less than the tree walk in a full
tree.
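A rough sketch of the 'just insert' approach, reusing the hypothetical
SBBTree/StreamBufferBlock types from the sketch above: rather than probing
RB_MAX() first to detect the common append-at-the-end case, insert
unconditionally and only then look at the direct neighbours. RB_INSERT
returns the existing element on an exact key collision.

```c
static int SBBAdd(struct SBBTree *tree, StreamBufferBlock *sbb)
{
    /* no RB_MAX() fast path: that walks the tree, and the insert below
     * would walk it again; just insert */
    StreamBufferBlock *exists = RB_INSERT(SBBTree, tree, sbb);
    if (exists != NULL)
        return 0;                        /* exact duplicate, nothing added */

    /* overlap check against the direct neighbours only; usually a no-op,
     * and far cheaper than an extra full-tree walk per data segment */
    StreamBufferBlock *prev = RB_PREV(SBBTree, tree, sbb);
    StreamBufferBlock *next = RB_NEXT(SBBTree, tree, sbb);
    if (prev != NULL && prev->offset + prev->len > sbb->offset) {
        /* handle overlap with the previous block (merge/trim, elided) */
    }
    if (next != NULL && sbb->offset + sbb->len > next->offset) {
        /* handle overlap with the next block (merge/trim, elided) */
    }
    return 1;
}
```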
Now that the RBTREE gives us a properly sorted segment tree, where
segments with an exact SEQ match are ordered by payload_len from
smallest to largest, we can avoid walking backwards when checking
for overlaps. Our direct RB_PREV either overlaps or it doesn't, and
that verdict is reliable for the rest of the tree.
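For illustration, a minimal sketch under those assumptions, with a
hypothetical TcpSegment type and plain integer compares instead of
wrap-around safe sequence-number math:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct TcpSegment_ {
    uint32_t seq;            /* TCP sequence number of the first payload byte */
    uint16_t payload_len;    /* segment payload length */
} TcpSegment;

/* 'prev' is the direct RB_PREV of 'cur' in the sorted tree; per the
 * reasoning above, its verdict stands in for the rest of the tree, so
 * no further backwards walk is needed. */
static bool SegmentPrevOverlaps(const TcpSegment *prev, const TcpSegment *cur)
{
    /* real code would use wrap-around safe SEQ_GT()-style compares */
    return (prev->seq + prev->payload_len) > cur->seq;
}
```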
To improve worst case performance, turn the segments list into an rbtree.
This greatly improves inserts, lookups and removals if the number of
segments gets very large.
The tree is sorted by the segment sequence number as its primary key.
If two segments have the same seq, the payload_len (segment length) is
used; the larger segment is then placed after the smaller segment.
Exact matches are not added to the tree.
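A hedged sketch of such a compare (hypothetical TcpSegment type, BSD
<sys/tree.h> macros, plain integer compares rather than wrap-around safe
sequence math): seq is the primary key, payload_len breaks ties so the
larger segment sorts after the smaller one, and a 0 return marks an exact
match that the caller does not add to the tree.

```c
#include <stdint.h>
#include <sys/tree.h>   /* BSD red-black tree macros */

typedef struct TcpSegment_ {
    uint32_t seq;
    uint16_t payload_len;
    RB_ENTRY(TcpSegment_) rb;
} TcpSegment;

static int TcpSegmentCompare(const TcpSegment *a, const TcpSegment *b)
{
    if (a->seq != b->seq)                      /* primary key: sequence number */
        return (a->seq < b->seq) ? -1 : 1;
    if (a->payload_len != b->payload_len)      /* tie-break: segment length */
        return (a->payload_len < b->payload_len) ? -1 : 1;
    return 0;                                  /* exact match */
}

RB_HEAD(TcpSegTree, TcpSegment_);
RB_PROTOTYPE(TcpSegTree, TcpSegment_, rb, TcpSegmentCompare);
RB_GENERATE(TcpSegTree, TcpSegment_, rb, TcpSegmentCompare);

/* exact (seq, payload_len) duplicates are not added; returns 1 on insert */
static int TcpSegTreeAdd(struct TcpSegTree *tree, TcpSegment *seg)
{
    return (RB_INSERT(TcpSegTree, tree, seg) == NULL) ? 1 : 0;
}
```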
Add another function to get the TLS version, since 'TlsGetCertInfo' only
works when a TLS session contains a cleartext certificate, which is
not the case in TLSv1.3 or when a session is resumed.
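Purely as an illustration (not the actual Suricata API; the type and field
names below are invented), such a getter only needs the version recorded in
the parser state, so it works even when no cleartext certificate is
available:

```c
#include <stdint.h>

typedef struct TlsState_ {
    uint16_t version;   /* negotiated version as seen on the wire, e.g. 0x0304 */
} TlsState;

/* returns 0 if the handshake has not progressed far enough to know */
static uint16_t TlsGetVersion(const TlsState *state)
{
    return (state != NULL) ? state->version : 0;
}
```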
In case we exceed the number of simultaneously open
files, we can reach a state where we will not close the file
after writing.
Thanks to Steve Grubb <sgrubb@redhat.com> for the analysis.
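A generic sketch of the corrected pattern (hypothetical code, not the
actual file-store logic): once a chunk has been written, the handle is
closed in every path, so hitting the open-file limit can no longer leave
descriptors behind.

```c
#include <stdio.h>

#define MAX_OPEN_FILES 32
static int g_open_files = 0;   /* files currently open for extraction */

/* hypothetical per-chunk writer: open, write, and always close again */
static int WriteChunk(const char *path, const void *data, size_t len)
{
    if (g_open_files >= MAX_OPEN_FILES)
        return -1;                          /* over budget, nothing opened */

    FILE *fp = fopen(path, "ab");
    if (fp == NULL)
        return -1;
    g_open_files++;

    int ret = (fwrite(data, 1, len, fp) == len) ? 0 : -1;

    /* close after writing in every path, so the limit being reached
     * cannot leave the handle (and the counter) stuck */
    fclose(fp);
    g_open_files--;
    return ret;
}
```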
Fix mpm progress being updated by irrelevant engines. Especially in the
case of file_data engines, a signature can contain multiple versions
of the same engine, registered for different 'progress' values.
This would lead to signatures being considered 'can't match' even
in cases where they clearly could still match.
Only consider those progress values that apply to the protocol in
use.
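A hedged sketch of the idea (types and names are hypothetical, not the
Suricata detect-engine structures): when tracking the minimal progress a
signature needs, engines registered for a different app-layer protocol than
the one in use on the flow are simply skipped.

```c
#include <stdint.h>

typedef struct MpmEngine_ {
    uint16_t alproto;            /* app-layer protocol this engine applies to */
    uint8_t progress;            /* progress value at which it can inspect */
    struct MpmEngine_ *next;
} MpmEngine;

/* returns the lowest relevant progress value, or 0xff if none applies */
static uint8_t MinProgressForProto(const MpmEngine *list, uint16_t alproto_in_use)
{
    uint8_t min = 0xff;
    for (const MpmEngine *e = list; e != NULL; e = e->next) {
        if (e->alproto != alproto_in_use)
            continue;            /* irrelevant engine: must not update progress */
        if (e->progress < min)
            min = e->progress;
    }
    return min;
}
```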