Use per tx detect_flags to track prefilter. Detect flags are used for 2
things:
1. marking tx as fully inspected
2. tracking already run prefilter (incl mpm) engines
This supersedes the MpmIDs API for directionless tracking
of the prefilter engines.
When we have no SGH we have to flag the txs that are 'complete'
as inspected as well.
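As a rough sketch of the idea (bit positions and names below are illustrative assumptions, not the actual definitions), a per-direction detect_flags value could be laid out and queried like this:
```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative bit layout only: one u64 of detect flags per tx direction,
 * with the top bit marking "fully inspected" and the lower bits forming a
 * bitmap of prefilter/mpm engines that have already run against this tx. */
#define TX_DETECT_FLAG_INSPECTED      (1ULL << 63)
#define TX_DETECT_FLAG_PREFILTER(id)  (1ULL << (id))   /* id < 63 */

static bool TxIsInspected(uint64_t detect_flags)
{
    return (detect_flags & TX_DETECT_FLAG_INSPECTED) != 0;
}

static bool TxPrefilterAlreadyRan(uint64_t detect_flags, unsigned int engine_id)
{
    return (detect_flags & TX_DETECT_FLAG_PREFILTER(engine_id)) != 0;
}
```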
Special handling for the stream engine:
If a rule mixes TX inspection and STREAM inspection, we can encounter
the case where the rule is evaluated against multiple transactions
during a single inspection run. As the stream data is exactly the same
for each of those runs, it's wasteful to rerun inspection of the stream
portion of the rule.
This patch enables caching of the stream 'inspect engine' result in
the local 'RuleMatchCandidateTx' array. This is valid only during the
lifetime of a single inspection run.
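A minimal sketch of what such a per-candidate cache entry could look like; the struct and field names here are assumptions for illustration, not the actual definition:
```c
#include <stdint.h>

/* Illustrative only: a per-run candidate entry caching the stream inspect
 * engine verdict so it is computed at most once per inspection run, even
 * when the same rule is evaluated against several transactions. */
typedef struct RuleMatchCandidateTxSketch_ {
    uint32_t sig_id;        /* candidate signature id */
    uint8_t stream_stored;  /* non-zero if stream_result below is valid */
    uint8_t stream_result;  /* cached result of the stream inspect engine */
} RuleMatchCandidateTxSketch;
```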
Remove stateful inspection from 'mask' (SignatureMask). The mask wasn't
used in most cases for those rules anyway, as there we rely on the
prefilter. Add an alproto check to catch the remaining cases.
When building the active non-mpm/non-prefilter list check not just
the mask, but also the alproto. This especially helps stateful rules
with negated mpm.
Simplify AppLayerParserHasDecoderEvents usage in detection to only
return true if protocol detection events are set. Other detection is done
in inspect engines.
Move rule group lookup and handling into its own function. Handle
'post lookup' tasks immediately, instead of after the first detect
run. The tasks were independent of the initial detection.
Many cleanups and much refactoring.
Add API meant to replace the MpmIDs API. It uses a u64 for each direction
in a tx to keep track of 2 things:
1. is inspection done?
2. which prefilter engines (like mpm) have already completed
Avoid looping in transaction output.
Update app-layer API to store the bits in one step
and retrieve the bits in a single step as well.
Update users of the API.
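A sketch of the single-step store/retrieve pattern (types and function names are illustrative): detection fetches the whole u64 once before inspecting a tx and writes the updated value back once afterwards, rather than toggling individual bits through the app-layer API:
```c
#include <stdint.h>

/* Illustrative tx with one detect-flags word per direction. */
typedef struct TxSketch_ {
    uint64_t detect_flags_ts;   /* to-server direction */
    uint64_t detect_flags_tc;   /* to-client direction */
} TxSketch;

/* Retrieve all bits in one step. */
static uint64_t TxGetDetectFlags(const TxSketch *tx, int dir_toserver)
{
    return dir_toserver ? tx->detect_flags_ts : tx->detect_flags_tc;
}

/* Store all bits in one step. */
static void TxSetDetectFlags(TxSketch *tx, int dir_toserver, uint64_t flags)
{
    if (dir_toserver)
        tx->detect_flags_ts = flags;
    else
        tx->detect_flags_tc = flags;
}
```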
Create a bitmap of the loggers per protocol. This is done at runtime
based on the loggers that are enabled. Take the logger_id for each
logger and store it as a bitmap in the app-layer protocol storage.
Goal is to be able to use it as an expectation later.
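A sketch of how such an expectation bitmap could be built and checked at runtime (the array bound, names and types are assumptions for illustration):
```c
#include <stdint.h>

#define ALPROTO_MAX_SKETCH 32          /* illustrative bound */

/* One bitmap per app-layer protocol: a bit is set for every logger that
 * is enabled for that protocol at startup. */
static uint32_t logger_bits[ALPROTO_MAX_SKETCH];

static void RegisterLoggerBit(uint16_t alproto, uint32_t logger_id)
{
    logger_bits[alproto] |= (1UL << logger_id);
}

/* The bitmap then serves as the expectation: a tx is fully logged once
 * the bits recorded for it cover the bits registered for the protocol. */
static int TxFullyLogged(uint16_t alproto, uint32_t tx_logged_bits)
{
    return (tx_logged_bits & logger_bits[alproto]) == logger_bits[alproto];
}
```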
Since the parser now also does nfs2, the name nfs3 became confusing.
As it's still in beta, we can rename so this patch renames all 'nfs3'
logic to simply 'nfs'.
TCP reassembly is now deactivated more frequently, and triggering a
bypass on it resulted in missed alerts because packet-based signatures
were forgotten.
So this patch introduces a dedicated flag that can be set in the app
layer and passed on to the streaming engine to trigger bypass.
It is currently used by the SSL app layer to trigger bypass when
the stream becomes encrypted.
A parser can now set a flag that will tell the application
layer that it is capable of handling gaps. If enabled, and a
gap occurs, the app-layer needs to be prepared to accept
input that is NULL with a length, where the length is the
number of bytes lost. It is up to the app-layer to
determine if it can sync up with the input data again.
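A minimal sketch of a gap-aware parse entry point under this contract; the state struct and resynchronization details are illustrative assumptions:
```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: when a gap occurs, input is NULL and input_len holds the
 * number of bytes lost; the parser records the loss and resynchronizes. */
typedef struct ParserStateSketch_ {
    int in_sync;             /* cleared when a gap forces a resync */
    uint64_t bytes_skipped;  /* total bytes lost to gaps */
} ParserStateSketch;

static int ParseSketch(ParserStateSketch *s, const uint8_t *input, uint32_t input_len)
{
    if (input == NULL && input_len > 0) {
        /* gap: note how much was lost and stop trusting the stream until
         * a new record boundary is found */
        s->in_sync = 0;
        s->bytes_skipped += input_len;
        return 0;
    }
    /* normal data: if out of sync, scan for the next record header first
     * (omitted here), then parse as usual */
    (void)input;
    (void)input_len;
    return 0;
}
```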
In various scenarios buffers would be checked by MPM more than
once. This was because the buffers would be inspected for a
certain progress value or higher.
For example, for each packet in a file upload, the engine would
not just rerun the 'http client body' MPM on the new data, it
would also rerun the method, uri, headers, cookie, etc MPMs.
This was obviously inefficient, so this patch changes the logic.
The patch only runs the MPM engines when the progress is exactly
the intended progress. If the progress is beyond the desired
value, it is run once. A tracker is added to the app layer API,
where the completed MPMs are tracked.
Implemented for HTTP, TLS and SSH.
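A sketch of the resulting decision logic (names are illustrative); the per-tx "done" bits correspond to the tracker added to the app-layer API:
```c
#include <stdint.h>
#include <stdbool.h>

/* Run a tx MPM engine only when the tx sits at exactly the progress the
 * engine was registered for; if the tx is already past that point and the
 * engine never ran, run it once and record that in the per-tx tracker. */
static bool RunTxMpmEngine(int tx_progress, int engine_progress,
                           uint64_t *done_bits, unsigned int engine_id)
{
    const uint64_t bit = 1ULL << engine_id;

    if (tx_progress == engine_progress)
        return true;                        /* exact match: inspect new data */

    if (tx_progress > engine_progress && !(*done_bits & bit)) {
        *done_bits |= bit;                  /* past the target: run exactly once */
        return true;
    }
    return false;                           /* not ready yet, or already done */
}
```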
Prevents the case where the logged id is incremented if a newer
transaction is complete and an older one is still outstanding.
For example, dns request0, unsolicited dns response, dns response0
would result in the valid response0 never being logged.
Similarly this could happen for:
request0, request1, response1, response0
which would end up having request0, request1 and response1 logged,
but response0 would not be logged.
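A sketch of the corrected bookkeeping (structure and names are illustrative): the log id only advances while the oldest outstanding transaction has been fully logged, so a later, already complete tx cannot pull it forward:
```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_TX_SKETCH 8                  /* illustrative bound */

typedef struct TxLogStateSketch_ {
    bool fully_logged[MAX_TX_SKETCH];    /* per tx: all enabled loggers ran */
} TxLogStateSketch;

/* Advance the "logged up to" id, stopping at the first tx that still has
 * loggers outstanding (e.g. response0 in the example above). */
static uint64_t AdvanceLogId(const TxLogStateSketch *s, uint64_t log_id,
                             uint64_t tx_count)
{
    for (uint64_t id = log_id; id < tx_count && id < MAX_TX_SKETCH; id++) {
        if (!s->fully_logged[id])
            break;
        log_id = id + 1;
    }
    return log_id;
}
```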
This permits setting a stream depth value for each
app-layer protocol.
By default, the stream depth specified for tcp is used;
an app-layer module can then override it with its own value
through a dedicated API.
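As a sketch of the registration-time override (the setter name and storage below are stand-ins, not the actual API), every protocol inherits the tcp stream depth unless its parser sets its own value:
```c
#include <stdint.h>

#define ALPROTO_MAX_SKETCH 32            /* illustrative bound */

/* In this sketch, 0 means "fall back to the tcp stream depth". */
static uint32_t alproto_stream_depth[ALPROTO_MAX_SKETCH];

static void SetAppLayerStreamDepthSketch(uint16_t alproto, uint32_t depth)
{
    alproto_stream_depth[alproto] = depth;
}

static uint32_t GetEffectiveStreamDepthSketch(uint16_t alproto, uint32_t tcp_depth)
{
    uint32_t d = alproto_stream_depth[alproto];
    return d != 0 ? d : tcp_depth;       /* per-protocol value wins if set */
}
```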
Add support for the ENIP/CIP Industrial protocol
This is an app-layer implementation that adds the "enip" protocol
and the "cip_service" and "enip_command" keywords.
It also implements AFL entry points.
This patch adds a transaction counter for application layers
supporting it. Analysis is done after the parsing by the
different application layers.
This results in new data in the stats output, which looks like:
```
"app-layer": {
"tx": {
"dns_udp": 21433,
"http": 12766,
"smtp": 0,
"dns_tcp": 0
}
},
```
To be able to add a transaction counter we will need a ThreadVars
in the AppLayerParserParse function.
This function is used extensively in unittests,
which results in a long commit.
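A sketch of the counter plumbing (the ThreadVars and stats helpers below are simplified stand-ins): a counter is registered per protocol, and after each parser call the newly created transactions are added to it:
```c
#include <stdint.h>

/* Simplified stand-ins for the thread-local stats machinery. */
typedef struct ThreadVarsSketch_ {
    uint64_t counters[32];
} ThreadVarsSketch;

static void StatsAddSketch(ThreadVarsSketch *tv, uint16_t id, uint64_t value)
{
    tv->counters[id] += value;
}

/* Called after an AppLayerParserParse-style invocation: credit the
 * per-protocol tx counter with the number of new transactions. */
static void CountNewTxs(ThreadVarsSketch *tv, uint16_t counter_id,
                        uint64_t tx_cnt_before, uint64_t tx_cnt_after)
{
    if (tx_cnt_after > tx_cnt_before)
        StatsAddSketch(tv, counter_id, tx_cnt_after - tx_cnt_before);
}
```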
Add a function that globally checks whether the protocol is registered
and enabled by testing for the per-alproto callback
StateGetProgressCompletionStatus.
This check is to be used before enabling Tx-aware code, like loggers.
Add function AppLayerParserRegisterLoggerFuncs for registering
a callback function for checking if a specific logger has logged
a transaction, and a callback function for specifying that it has.
Also add functions AppLayerParserGetTxLogged and
AppLayerParserSetTxLogged to invoke these callback functions.
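A sketch of the callback pair a parser might supply (the tx struct and callback bodies are illustrative; only the registration function name comes from the text above):
```c
#include <stdint.h>

/* Illustrative per-tx logged state; how it is stored is parser-specific. */
typedef struct TxLoggedSketch_ {
    uint32_t logged;                     /* bit(s) set by loggers that ran */
} TxLoggedSketch;

static int SketchGetTxLogged(void *state, void *tx, uint32_t logger)
{
    (void)state;
    return (((TxLoggedSketch *)tx)->logged & logger) != 0;
}

static void SketchSetTxLogged(void *state, void *tx, uint32_t logger)
{
    (void)state;
    ((TxLoggedSketch *)tx)->logged |= logger;
}

/* Registration would then look roughly like (signature shown as a sketch):
 * AppLayerParserRegisterLoggerFuncs(IPPROTO_UDP, ALPROTO_DNS,
 *                                   SketchGetTxLogged, SketchSetTxLogged);
 */
```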
Change AppLayerParserRegisterGetStateProgressCompletionStatus to
only store one ProgressCompletionStatus callback function for each
alproto, instead of storing one for each ipproto.
This enables us to use AppLayerParserGetStateProgressCompletionStatus
in functions where we do not know the ipproto used.
Add support for AFL persistent mode when Suricata is compiled with
a supported compiler (only afl-clang-fast for now).
This gives a ~10x performance boost when fuzzing.
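For reference, a minimal sketch of the afl-clang-fast persistent-mode loop this relies on; RunOneInput() is a stand-in for feeding one test case into the app-layer fuzz entry point:
```c
/* Persistent mode: when built with afl-clang-fast, __AFL_LOOP() lets the
 * fuzzer run many test cases in one process instead of forking per case. */
static void RunOneInput(void)
{
    /* feed the current input to the parser under test (stand-in) */
}

int main(void)
{
#ifdef __AFL_HAVE_MANUAL_CONTROL
    __AFL_INIT();
    while (__AFL_LOOP(1000)) {
        RunOneInput();
    }
#else
    RunOneInput();               /* plain single-shot mode */
#endif
    return 0;
}
```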
This patch introduces a new set of commandline options meant for
assisting in fuzz testing the app layer implementations.
Per protocol, 2 commandline options are added:
--afl-http-request=<filename>
--afl-http=<filename>
In the former case, the contents of the file are passed directly to
the HTTP parser as request data.
In the latter case, the data is divided between requests and responses:
the first 64 bytes are request data, the next 64 are response data, the
next 64 are request data again, and so on.
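A sketch of that interleaving (FeedParser() stands in for the actual parse call):
```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for handing a chunk to the parser in the given direction. */
static void FeedParser(int to_server, const uint8_t *data, size_t len)
{
    (void)to_server; (void)data; (void)len;
}

/* Feed the fuzz input in 64 byte chunks, alternating between the
 * request (to-server) and response (to-client) direction. */
static void SplitAndFeed(const uint8_t *buf, size_t size)
{
    int to_server = 1;                       /* first chunk is request data */
    for (size_t off = 0; off < size; off += 64) {
        size_t chunk = (size - off < 64) ? (size - off) : 64;
        FeedParser(to_server, buf + off, chunk);
        to_server = !to_server;              /* flip direction per chunk */
    }
}
```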
Stream GAPs and stream reassembly depth are tracked per direction. In
many cases they will happen in one direction, but not in the other.
Example:
HTTP requests are generally smaller than responses. So on the response
side we may hit the depth limit, but not on the request side.
The asynchronous 'disruption' has a side effect in the transaction
engine. The 'progress' tracking would never mark such transactions
as complete, and thus some inspection and logging wouldn't happen
until the very last moment: when EOF's are passed around.
Especially in proxy environments with _very_ many transactions in a
single TCP connection, this could lead to serious resource issues. The
EOF handling would suddenly have to handle thousands or more
transactions. These transactions would have been stored for a long time.
This patch introduces the concept of disruption flags. Flags passed to
the tx progress logic that are an indication of disruptions in the
traffic or the traffic handling. The idea is that the progress is
marked as complete on disruption, even if a tx is not complete. This
allows the detection and logging engines to process the tx after which
it can be cleaned up.
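A sketch of the idea (the flag names and progress helper are illustrative): per direction, conditions like a stream gap or hitting the reassembly depth are translated into disruption flags, and the progress query treats a disrupted direction as complete:
```c
#include <stdint.h>

/* Illustrative disruption flags, one set per direction. */
#define DISRUPT_GAP    0x01   /* data was lost in this direction */
#define DISRUPT_DEPTH  0x02   /* reassembly depth reached in this direction */

/* Report the tx progress for one direction: on disruption the tx is
 * considered complete so detection and logging can run and the tx can
 * be cleaned up, instead of lingering until EOF. */
static int TxProgressWithDisruption(int real_progress, int complete_progress,
                                    uint8_t disruption_flags)
{
    if (disruption_flags & (DISRUPT_GAP | DISRUPT_DEPTH))
        return complete_progress;
    return real_progress;
}
```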