This introduces a new parser registration function for LDAP/UDP, and updates
the ldap configuration so that a single parser can be enabled/disabled
independently (as is already possible for dns).
Also, GAPs are accepted only by the TCP parser, not by the UDP one.
Ticket #7203
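Illustrative sketch of the resulting suricata.yaml layout, assuming it mirrors
the dns per-protocol pattern (the exact key layout is an assumption, not taken
from the change itself):
  app-layer:
    protocols:
      ldap:
        tcp:
          enabled: yes   # assumed layout, mirroring the dns tcp/udp split
        udp:
          enabled: yes   # UDP parser can be toggled on its own; no GAP support here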
When replaying a pcap file, it is not possible to get rules
profiling, because profiling collection has to be activated from the
unix socket. This patch adds a new option to activate profiling
collection at startup, so that a pcap run can produce rules profiling
information.
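A minimal sketch of how such a startup switch could sit in the existing
profiling section; the `active` key name below is a hypothetical placeholder
for the new option, while the other keys come from the default config:
  profiling:
    rules:
      enabled: yes
      active: yes            # hypothetical name: start collecting rule profiling at startup
      filename: rule_perf.log
      append: yes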
Ticket: 3958
- transactions are now bidirectional
- there is now an EVE logger (see the sketch after this list)
- gap support is improved, with probing for resync
- frames support
- app-layer events
- the enip_command keyword now accepts string enumeration values
- add the enip.status keyword
- add keywords:
enip.product_name, enip.protocol_version, enip.revision,
enip.identity_status, enip.state, enip.serial, enip.product_code,
enip.device_type, enip.vendor_id, enip.capabilities,
enip.cip_attribute, enip.cip_class, enip.cip_instance,
enip.cip_status, enip.cip_extendedstatus
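To enable the logger mentioned above, something like the following could be
added to eve-log; the `enip` type name is an assumption here:
  outputs:
    - eve-log:
        enabled: yes
        filename: eve.json
        types:
          - enip             # assumed EVE type name for the new ENIP logger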
When we added feature #5976 (72146b969), we overlooked that we also have
a stats config option for the human-readable stats log to output
zero-valued counters.
Due to not seeing this before, we ended up with two different setting names
for basically the same thing, but in different logs:
- zero-valued-counters for EVE
- null-values for stats.log
This ensures we use the same terminology, renaming the recently added
one to `null-values`, as that name has been around for longer.
Task #6962
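For reference, a hedged sketch of the stats.log output block where the older
name lives; the surrounding keys follow the default config layout and are not
part of this change:
  outputs:
    - stats:
        enabled: yes
        filename: stats.log
        null-values: no      # skip zero-valued counters; the EVE stats option now uses the same name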
The stats config for EVE logs has a comment about exception policy
stats counters. This went in with 72146b969c, but shouldn't have,
as there are no such options there.
Some stats can be quite verbose when logging all zero-valued counters.
This allows users to disable logging of such counters. The default is
still true, as that is the expected behavior for the engine.
Task #5976
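A hedged sketch of where the new option sits in the EVE stats record; the
nesting is assumed from the default eve-log layout:
  outputs:
    - eve-log:
        types:
          - stats:
              totals: yes
              # Set to no to skip zero-valued counters (default: yes).
              # Later renamed to 'null-values' (see Task #6962 above).
              zero-valued-counters: yes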
While our documentation indicated the possible configuration
settings for exception policies, our yaml only explicitly mentioned
exception policy for the master switch. Clearly indicate which config
settings relate to exception policies.
Related to
Task #5816
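As an illustration, a hedged sketch of the master switch alongside a few of
the per-subsystem settings it governs; which settings end up annotated in the
shipped yaml is not spelled out above:
  # master switch for exception policies
  exception-policy: auto
  stream:
    memcap-policy: ignore      # exception policy setting
    midstream-policy: ignore   # exception policy setting
  flow:
    memcap-policy: ignore      # exception policy setting
  defrag:
    memcap-policy: ignore      # exception policy setting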
Ticket: 5926
HTTP2 continuation frames are defined in RFC 9113.
They allow header blocks to be split over multiple HTTP2 frames.
For Suricata to process these header blocks correctly, it
must reassemble the payload of these HTTP2 frames.
Otherwise, we get incomplete decoding of header names and/or
values when decoding a single frame.
The design is to add a field to the HTTP2 state, as the RFC states that
these continuation frames form a discrete unit:
> Field blocks MUST be transmitted as a contiguous sequence of frames,
> with no interleaved frames of any other type or from any other stream.
So, we do not have to duplicate this reassembly field per stream id.
Another design choice is to wait for the reassembly to be complete
before doing any decoding, to avoid the quadratic complexity of
repeatedly decoding partial data.
When the packet load is low, Suricata can run in interrupt
mode. This more closely resembles the classic approach to processing
packets: CPU usage stays low and cores only fetch packets
on interrupt.
Ticket: #5839
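Assuming this refers to the DPDK capture method (not stated above), a
hypothetical per-interface toggle could look like the following; the
`interrupt-mode` key name is an assumption:
  dpdk:
    interfaces:
      - interface: 0000:3b:00.0
        threads: auto
        interrupt-mode: true   # assumed key: sleep and wait for interrupts under low load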
Add a new configuration flag, "datasets.rules.allow-write", to control
whether rules may use dataset "save" or "state" options, which allow
write access to the file system.
Ticket: #6123
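The flag name comes straight from the message above; how it nests in
suricata.yaml is assumed:
  datasets:
    rules:
      # Disallow rules that use dataset 'save'/'state' options,
      # which would let rules write to the filesystem.
      allow-write: false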
For dataset filenames coming from rules, do not allow filenames that
are absolute or contain a directory traversal with "..". This prevents
datasets from escaping the defined data-directory, which could otherwise
allow a bad rule to overwrite any file that Suricata has permission to
write to.
Add a new configuration option,
"datasets.rules.allow-absolute-filenames", to allow absolute filenames
in dataset rules. This provides a way to revert to the pre-6.0.13
behavior where save/state rules could use any filename.
Ticket: #6118
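Again the option name is given above, with the nesting assumed to follow the
same datasets.rules block:
  datasets:
    rules:
      # Permit absolute dataset filenames in rules (pre-6.0.13 behavior).
      allow-absolute-filenames: false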
To protect against possible supply chain attacks, disable Lua rules by
default. They can be enabled under the "security" section of
suricata.yaml.
Ticket: #6122
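A hedged sketch of re-enabling Lua rules under the security section; the
`lua.allow-rules` key name is an assumption based on the description above:
  security:
    lua:
      # Lua rules are disabled by default; set to true to allow them.
      allow-rules: true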
For the rules profiling, we really want to limit the performance
impact as much as possible. So let's use a hash size that is a power
of 2. This allows us to avoid the costly modulo operation and instead
use a single bitwise AND, since hash % size equals hash & (size - 1)
when size is a power of 2.
This code is only active for rules profiling, so we remain backward
compatible.