Issue:
https://redmine.openinfosecfoundation.org/issues/2041
One approach to fixing this issue is to validate the
checksum instead of regenerating it and comparing it. This
method is used in some kernels and other network tools.
When validating, the current checksum is passed in as an
initial argument which will cause the final checksum to be 0
if OK. If generating a checksum, 0 is passed and the result
is the generated checksum.
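A minimal sketch of the idea in C, using a hypothetical helper (not
Suricata's actual function):

#include <stddef.h>
#include <stdint.h>

/* One's complement sum over 'len' bytes of 'data', starting from 'init'.
 * 'init' must be given in the same byte interpretation as the summed words.
 * Validation: pass the packet's stored checksum as 'init' and sum the data
 * with the checksum field zeroed; the result is 0 for a valid packet.
 * Generation: pass 0 as 'init'; the result is the checksum to store. */
static uint16_t Checksum16(const uint8_t *data, size_t len, uint32_t init)
{
    uint32_t sum = init;
    while (len > 1) {
        sum += (uint32_t)((data[0] << 8) | data[1]);
        data += 2;
        len -= 2;
    }
    if (len == 1)
        sum += (uint32_t)(data[0] << 8);    /* pad the odd trailing byte */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16); /* fold carries back into 16 bits */
    return (uint16_t)~sum;
}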
When detection is running flags are set on flows to indicate if file
hashing is needed. This is based on global output settings and rules.
In the case of --disable-detection this was not happening, so all
files were hashed with all methods. This has a significant
performance impact.
This patch adds logic to set the flow flags in --disable-detection mode.
Match on the TLS certificate serial number using the tls_cert_serial
keyword, e.g.:
alert tls any any -> any any (msg:"TLS cert serial test";
tls_cert_serial; content:"5C:19:B7:B1:32:3B:1C:A1";
sid:12345;)
Flowvars were already using a temporary store in the detect thread
ctx.
Use the same facility for pktvars. The reasons are:
1. packet is not always available, e.g. when running pcre on http
buffers.
2. setting of vars should be done post match, as sketched below. Until
now it was also possible that it was done on a partial match.
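A rough sketch of that pattern in C, with hypothetical names (the real
store lives in the detect thread ctx):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of deferring pktvar setting to post match. */
typedef struct PendingVar_ {
    uint32_t id;                 /* variable id */
    uint8_t *value;              /* captured value */
    uint16_t value_len;
    struct PendingVar_ *next;
} PendingVar;

typedef struct DetectThreadCtx_ {
    PendingVar *pending;         /* captures made during (partial) matching */
} DetectThreadCtx;

/* Called from a keyword's match function: only record the capture. */
static void PktVarStorePending(DetectThreadCtx *det_ctx, PendingVar *pv)
{
    pv->next = det_ctx->pending;
    det_ctx->pending = pv;
}

/* Called only once the full signature has matched: apply the pending
 * captures to the packet, then clear the list. */
static void PktVarCommitPending(DetectThreadCtx *det_ctx)
{
    for (PendingVar *pv = det_ctx->pending; pv != NULL; pv = pv->next) {
        /* set the var on the packet here */
    }
    det_ctx->pending = NULL;
}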
Until now variable names, such as flowbit names, were local to a detect
engine. This made sense as they were only ever used in that context.
For the purpose of logging these names, this needs a different approach.
The loggers live outside of the detect engine. Also, in the case of
reloads and multi-tenancy, there are even multiple detect engines, so
it would be even more tricky to access them from the outside.
This patch brings a new approach. At any time, there is a single active
hash table mapping the variable names to their id's. In multi-tenant
setups the table is shared between the tenants.
The table is set up in a 'staging' area, where locking makes sure that
multiple loading threads don't mess things up. Then, when the preparation
of a detection engine is complete, but before the detect threads are made
aware of the new detect engine, the active varname hash is swapped with
the staging instance.
For this to work, all the mappings from the 'current' or active mapping
are added to the staging table.
After the threads have reloaded and the new detection engine is active,
the old table can be freed.
For multi-tenancy things are similar. The staging area is used for
setting up until the new detection engines / tenants are applied to
the system.
This patch also changes the variable 'id'/'idx' field to uint32_t. Due
to data structure padding and alignment, this should have no practical
drawback while allowing for a lot more vars.
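A simplified sketch of the active/staging swap in C, with hypothetical
names (locking, reference counting and the id mappings are omitted):

#include <stddef.h>

/* Hypothetical illustration of the approach described above. */
typedef struct VarNameStore_ {
    void *name_to_id;    /* hash: variable name -> id */
    void *id_to_name;    /* hash: id -> variable name */
} VarNameStore;

static VarNameStore *active = NULL;   /* consulted by loggers and detect threads */
static VarNameStore *staging = NULL;  /* built under lock during (re)load */

/* Called after the new detect engine is prepared, but before the detect
 * threads are made aware of it. The staging store already contains all
 * mappings copied over from the previously active store. */
static VarNameStore *VarNameStoreActivateStaging(void)
{
    VarNameStore *old = active;
    active = staging;     /* in real code this swap needs to be protected */
    staging = NULL;
    return old;           /* freed once the threads run on the new engine */
}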
Matches on the start of an HTTP request or response.
Uses a buffer constructed from the request line and normalized request
headers, including the Cookie header.
For the response side, it uses the response line plus the
normalized response headers, including the Set-Cookie header.
Both buffers are terminated by an extra \r\n.
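For example, for a simple request the constructed buffer would look like
(illustrative):
GET /index.html HTTP/1.1\r\nHost: example.com\r\nCookie: a=b\r\n\r\n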
A sticky buffer that allows content inspection on a constructed buffer
of HTTP header names. The buffer starts with \r\n, the names are
separated by \r\n and the end of the buffer contains an extra \r\n.
E.g. \r\nHost\r\nUser-Agent\r\n\r\n
The leading \r\n is to make sure one can match on a full name in all
cases.
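Assuming the sticky buffer keyword is named http_header_names, a rule
matching a full header name could look like:
alert http any any -> any any (msg:"header names test";
http_header_names; content:"|0d 0a|Host|0d 0a|"; sid:1;)
where the |0d 0a| pieces are the \r\n separators described above.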
Some keywords need a scratch space where they can store the results
of expensive operations that remain valid for the duration of a packet's
journey through the detection engine.
An example is the reconstructed 'http_header' field, which is needed
in MPM, and then for each rule that manually inspects it. Storing this
data in the flow is a waste, and so is reconstructing it multiple times
on demand.
This API allows for registering a keyword with an init and free function.
It is meant to be used at initialization time, when the keyword is
registered.
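A rough sketch of how using the API could look, with hypothetical
function names and signatures (not the exact API):

/* At keyword registration time: register per thread init/free callbacks
 * and remember the returned id. */
int id = KeywordThreadCtxRegister("http_header", HttpHeaderCtxInit,
                                  HttpHeaderCtxFree);

/* At packet time: fetch the per thread scratch data for this keyword;
 * it stays valid for the rest of this packet's pass through detection. */
void *scratch = KeywordThreadCtxGet(det_ctx, id);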
To replace the hardcoded SigMatch list id's, use this API to register
and query lists by name.
Also allow for registering descriptions and whether mpm is supported.
Registration is only allowed at startup.
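Illustration of the intended usage, with hypothetical names:

/* Startup only: register a list by name, optionally with a description
 * and whether mpm is supported on it. */
DetectListRegister("http_header_names");
DetectListSetDescription("http_header_names", "http header names");
DetectListSupportsMpm("http_header_names");

/* Elsewhere: query the list id by name instead of using a hardcoded id. */
int list_id = DetectListGetByName("http_header_names");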
The code to get the rule group (sgh) would return the group for
IP proto 0 instead of nothing. This led to certain types of rules
unintentionally matching (False Positive).
Since the packets weren't actually IP, the logged alert records
were missing the IP header.
Bug #2017.
Luajit has a strange memory requirement: its 'states' need to be in the
first 2G of the process' memory.
This patch improves the pool approach by moving its setup to the very
start of the startup process.
A new config option 'luajit.states' is added to control how many states
are preallocated. It defaults to 128.
Add a warning when more states are used than were preallocated, as
allocating additional states may fail if the flow/stream/detect engines
use a lot of memory. Add a hint at exit that gives the max number of
states in use if it's higher than the default.
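In suricata.yaml this would look something like:

luajit:
  states: 128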
Introduce 'Protocol detection'-only rules. These rules will only be
fully evaluated when protocol detection has completed. To allow
mixing of the app-layer-protocol keyword with other types of matches,
the keyword can also inspect the flow's app-protos per packet.
Implement prefilter for the 'PD-only' rules.
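A rule that only needs protocol detection could look like (illustrative):
alert tcp any any -> any any (msg:"TLS on any port";
app-layer-protocol:tls; sid:1;)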
Add support for the ENIP/CIP industrial protocol.
This is an app layer implementation which uses the "enip" protocol
and the "cip_service" and "enip_command" keywords.
Implements AFL entry points.
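Example rules using these keywords (values illustrative):
alert enip any any -> any any (msg:"ENIP command test"; enip_command:108; sid:1;)
alert enip any any -> any any (msg:"CIP service test"; cip_service:75; sid:2;)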
The order of keyword registration currently affects inspect engine
registration order and ultimately the order of inspect engines per
rule, which in turn affects state keeping.
This patch makes sure the ordering is the same as with older
releases.
Inspect engines are called per signature per sigmatch list. Most
wrap around DetectEngineContentInspection, but the mechanism is more generic.
Until now, the inspect engines were set up in a large per ipproto,
per alproto, per direction table. For stateful inspection each
engine needed a global flag.
This approach had a number of issues:
1. inefficient: each inspection round walked the table and then
checked if the inspect engine was even needed for the current
rule.
2. clumsy registration, due to the global flag registration.
3. global flag space was approaching the need for 64 bits
4. duplicate registration for alprotos supporting both TCP and
UDP (DNS).
This patch introduces a new approach.
First, it does away with the per ipproto engines. This wasn't used.
Second, it adds a per signature list of inspect engines (sketched
below), containing only those engines that actually apply to the rule.
Third, it gets rid of the global flags and replaces them with flags
assigned per rule per engine.
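A simplified sketch in C of the per signature engine list, with a
hypothetical struct (details omitted):

#include <stdint.h>

/* Hypothetical illustration: each signature carries only the inspect
 * engines that apply to it, each with its own per rule flag. */
typedef struct SigInspectEngine_ {
    int sm_list;                     /* which sigmatch list this engine inspects */
    int dir;                         /* direction: to server / to client */
    uint32_t flag;                   /* per rule, per engine flag for state keeping */
    int (*Inspect)(void *det_ctx, void *sig, void *flow, uint8_t flags);
    struct SigInspectEngine_ *next;  /* next engine that applies to this signature */
} SigInspectEngine;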
Register keywords globally at startup.
Create a map of the registry per detection engine. This is needed because
the sgh_mpm_context value is set per detect engine.
Remove APP_MPMS_MAX.
Introduce the prefilter keyword to force a keyword to be used as the prefilter.
e.g.
alert tcp any any -> any any (content:"A"; flags:R; prefilter; sid:1;)
alert tcp any any -> any any (content:"A"; flags:R; sid:2;)
alert tcp any any -> any any (content:"A"; dsize:1; prefilter; sid:3;)
alert tcp any any -> any any (content:"A"; dsize:1; sid:4;)
In sids 2 and 4 the content keyword is used in the MPM engine.
In sids 1 and 3 the flags and dsize keywords, respectively, are used
as the prefilter.