The headers table from client to server and the one from server to
client may have different maximum sizes (even though both endpoints
have to keep both tables).
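A minimal sketch of the idea, assuming this refers to HTTP/2's HPACK
dynamic headers tables (the entry-size rule follows RFC 7541; the names
are illustrative, not Suricata's):

#[derive(Default)]
struct DynamicTable {
    entries: Vec<(Vec<u8>, Vec<u8>)>, // (name, value), newest first
    current_size: usize,
    max_size: usize,
}

impl DynamicTable {
    // Per RFC 7541, an entry costs name.len() + value.len() + 32 octets.
    fn entry_size(name: &[u8], value: &[u8]) -> usize {
        name.len() + value.len() + 32
    }

    fn insert(&mut self, name: Vec<u8>, value: Vec<u8>) {
        self.current_size += Self::entry_size(&name, &value);
        self.entries.insert(0, (name, value));
        // Evict oldest entries until the table fits its own maximum again.
        while self.current_size > self.max_size {
            match self.entries.pop() {
                Some((n, v)) => self.current_size -= Self::entry_size(&n, &v),
                None => break,
            }
        }
    }
}

// Both peers keep both tables, but each direction's table has its own
// independently set maximum size.
struct HpackState {
    table_to_server: DynamicTable,
    table_to_client: DynamicTable,
}

fn main() {
    let mut st = HpackState {
        table_to_server: DynamicTable { max_size: 4096, ..Default::default() },
        table_to_client: DynamicTable { max_size: 40, ..Default::default() },
    };
    st.table_to_server.insert(b"host".to_vec(), b"example.org".to_vec());
    st.table_to_client.insert(b"server".to_vec(), b"nginx".to_vec());
    assert_eq!(st.table_to_server.entries.len(), 1); // fits in 4096
    assert_eq!(st.table_to_client.entries.len(), 0); // 43 octets > 40: evicted
}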
The scenario is the use of dummy padding in a write AndX request, or in
other similar commands that use a data offset. Parsing now skips these
dummy bytes and generates a single event.
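A hedged sketch of the skipping logic (the command layout, names, and
event are illustrative, not the actual SMB parser):

#[derive(Debug, PartialEq)]
enum Event { UnusualPadding }

// `data_offset` is relative to the start of `msg`. Any bytes between the
// end of the fixed header and the declared offset are dummy padding.
fn extract_write_data(msg: &[u8], header_len: usize, data_offset: usize)
    -> Result<(&[u8], Option<Event>), &'static str>
{
    if data_offset < header_len || data_offset > msg.len() {
        return Err("invalid data offset");
    }
    // Skip the padding, but flag it with a single event.
    let event = if data_offset > header_len {
        Some(Event::UnusualPadding)
    } else {
        None
    };
    Ok((&msg[data_offset..], event))
}

fn main() {
    // 4-byte "header", 2 dummy pad bytes, then the real payload.
    let msg = [0x10, 0x11, 0x12, 0x13, 0xAA, 0xAA, b'd', b'a', b't', b'a'];
    let (data, event) = extract_write_data(&msg, 4, 6).unwrap();
    assert_eq!(data, b"data");
    assert_eq!(event, Some(Event::UnusualPadding));
}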
The evasion scenario is:
- a first dummy write of one byte at offset 0 is done
- a second, full write of EICAR at offset 0 is then done and does not
  trigger detection

The last write holds the final value, and since we cannot "cancel" the
previous write, we set an event, which is then transformed into an
app-layer decoder alert.
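A simplified sketch of the overlap check, assuming a plain chunk
tracker (not Suricata's filetracker API):

struct FileTracker {
    written: Vec<(u64, u64)>, // (offset, len) of chunks seen so far
    events: Vec<&'static str>,
}

impl FileTracker {
    fn new() -> Self {
        FileTracker { written: Vec::new(), events: Vec::new() }
    }

    fn write(&mut self, offset: u64, data: &[u8]) {
        let len = data.len() as u64;
        // Detection may already have run on the earlier data, so an
        // overlapping rewrite cannot be "canceled"; flag it instead.
        let overlaps = self.written.iter()
            .any(|&(o, l)| offset < o + l && o < offset + len);
        if overlaps {
            // Later turned into an app-layer decoder alert.
            self.events.push("file.overlapping_write");
        }
        self.written.push((offset, len));
    }
}

fn main() {
    let mut ft = FileTracker::new();
    ft.write(0, b"X");                 // dummy one-byte write at offset 0
    ft.write(0, b"full file content"); // full rewrite at the same offset
    assert_eq!(ft.events, vec!["file.overlapping_write"]);
}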
This commit adds a buffer validator for the compress_whitespace
transform. Buffers containing two or more consecutive whitespace
characters are invalid with this transform.
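A minimal sketch of the validation rule (ASCII whitespace is assumed
here; the function name is illustrative):

// A buffer is invalid for this transform if it contains two or more
// consecutive whitespace characters: compress_whitespace would already
// have collapsed them, so such a pattern can never match.
fn is_valid_compressed(buf: &[u8]) -> bool {
    !buf.windows(2)
        .any(|w| w[0].is_ascii_whitespace() && w[1].is_ascii_whitespace())
}

fn main() {
    assert!(is_valid_compressed(b"a b c"));
    assert!(!is_valid_compressed(b"a  b")); // two consecutive spaces: invalid
}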
This commit changes the name of the file used with threaded eve logging
to better support log rotation.
Instead of using "eve.json.N" and creating potential issues with log
rotation (which also uses a ".N" suffix), the eve logs will be named
"eve.N.json" when threaded.
This commit makes the size of the reporting variables dynamic, based on
the buffer ids in use instead of a fixed value. This addresses a SEGV
that occurred when the fixed value was less than the max buffer/type id
in use.
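A hedged sketch of the sizing change (plain Rust, not the actual
detect-engine code):

// Size the per-buffer reporting array from the highest buffer id actually
// registered, instead of a fixed compile-time constant that could be
// smaller than the ids in use (leading to out-of-bounds writes: the SEGV).
fn alloc_report_slots(registered_ids: &[usize]) -> Vec<u64> {
    let max_id = registered_ids.iter().copied().max().unwrap_or(0);
    vec![0u64; max_id + 1] // one slot per possible id, no fixed cap
}

fn main() {
    let slots = alloc_report_slots(&[0, 3, 17, 42]);
    assert_eq!(slots.len(), 43); // indexing slots[42] is now safe
}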
If a pattern matches in the other direction after the probing parser
has finished without finding a protocol, we rerun the probing parser,
which will now include the protocol newly found by its pattern.
If a protocol is found in a first direction, we should run the probing
parser even if the port is not in the known ports. That can happen for
HTTP2, where the client magic is detected, and the server probe can
then be run.
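A simplified model of both decisions (this is not Suricata's detection
state machine; the types and fields are made up for illustration):

#[derive(Clone, Copy, PartialEq, Debug)]
enum Proto { Unknown, Http2 }

struct DetectCtx {
    port_has_probe: bool, // is this a "known port" for probing?
    probe_done: [bool; 2], // per direction: probing already exhausted
    found: [Proto; 2],     // per direction: detection result so far
}

fn should_run_probe(ctx: &DetectCtx, dir: usize) -> bool {
    let other = 1 - dir;
    if ctx.found[dir] != Proto::Unknown {
        return false; // nothing left to do for this direction
    }
    // A protocol found in the other direction (e.g. HTTP2 client magic)
    // justifies probing this direction, even off the known ports, and
    // even if a previous probe finished without a result.
    if ctx.found[other] != Proto::Unknown {
        return true;
    }
    ctx.port_has_probe && !ctx.probe_done[dir]
}

fn main() {
    let ctx = DetectCtx {
        port_has_probe: false,
        probe_done: [true, false],
        found: [Proto::Unknown, Proto::Http2], // client magic seen to-server
    };
    assert!(should_run_probe(&ctx, 0)); // rerun the probe for the reply side
}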
TCPProtoDetect can set f->alproto and change f->alstate while still
returning an error. When the original alstate gets freed, we must set
the pointer to NULL, as it can otherwise get reused.
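A sketch of the fix in spirit; the real code is C with raw pointers,
and here assigning None stands in for "free and set to NULL":

struct AppState { _data: Vec<u8> }

struct Flow {
    alproto: Option<&'static str>,
    alstate: Option<Box<AppState>>,
}

fn reset_proto_detect(f: &mut Flow) {
    // The C code frees the old state, then sets f->alstate = NULL so the
    // dangling pointer cannot be reused; dropping the Option models both.
    f.alstate = None;
    f.alproto = None;
}

fn main() {
    let mut f = Flow {
        alproto: Some("http2"),
        alstate: Some(Box::new(AppState { _data: vec![0; 16] })),
    };
    reset_proto_detect(&mut f);
    assert!(f.alstate.is_none()); // no stale state left to reuse
}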
This patch addresses issues discovered by Redmine ticket 3896. With the
approach of finding the latest record, there was a chance that no
record was found at all, so that consumed + needed no longer exceeded
the input length.
e.g.
input_len = 1000
input = 01 05 00 02 00 03 a5 56 00 00 .....
There exists no |05 00| identifier in the rest of the record. After
having parsed |05 00|, there was a search for another record in the
leftover data. The current data length at this point would be 997.
Since the identifier was not found in the data, the consumed bytes were
calculated at this point as consumed = current_data.len() - 1, which
would be 996. Needed bytes stay at a constant of 2. So, consumed +
needed = 996 + 2 = 998, which is less than the initial input length of
1000, hence the assertion fails.
There were two possible fixes to this problem:
1. Keep finding the latest record, but fall back to the last record
   found in case no new record is found.
2. Always use the earliest record.

This patch takes approach (2). It also makes sure that the gap and the
current direction are the same.
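A minimal sketch of approach (2), assuming records are located by the
|05 00| identifier as in the example above (the helper and the
consumed/needed convention are illustrative, not the actual parser):

const NEEDED: usize = 2; // size of the |05 00| record identifier

// Returns Ok(offset) of the earliest record, or Err((consumed, needed))
// when more input is required.
fn find_earliest_record(input: &[u8]) -> Result<usize, (usize, usize)> {
    if let Some(pos) = input.windows(NEEDED).position(|w| w == [0x05, 0x00]) {
        return Ok(pos);
    }
    // No identifier found: everything but the last byte is consumed (the
    // last byte may be the first half of an identifier split across reads).
    let consumed = input.len().saturating_sub(NEEDED - 1);
    let needed = NEEDED;
    debug_assert!(consumed + needed > input.len()); // invariant now holds
    Err((consumed, needed))
}

fn main() {
    assert_eq!(find_earliest_record(&[0x01, 0x05, 0x00, 0x02]), Ok(1));
    // 4 bytes, no identifier: consumed 3, need 2 more -> 3 + 2 > 4.
    assert_eq!(find_earliest_record(&[0x01, 0x02, 0x03, 0x04]), Err((3, 2)));
}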