If we see packets for a capture bypassed flow after some time, it
means that the capture method is not handling the bypass correctly,
so it is better to switch to the local bypass method.
If a packet triggers a rule which contains both
the bypass and filestore keywords,
the file won't be stored since the flow is no longer inspected.
To avoid that, when a rule contains the filestore keyword
we make sure that the bypass keyword is not also present.
This adds a new keyword which permits calling the
bypass callback when a signature matches.
The callback must only be called once the match of the signature
is complete.
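For instance, a rule using the new keyword might look like this
(addresses, ports and sid are illustrative):
```
alert tcp any any -> any 443 (msg:"bypass established flows to 443"; flow:established,to_server; bypass; sid:1000001; rev:1;)
```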
This patch activates bypass for encrypted flows and for flows
that have reached the stream depth on both sides.
For encrypted flows, Suricata stops the inspection, so
we can just get them out of the way via bypass. The same logic applies
to flows that have reached the stream depth.
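A minimal configuration sketch, assuming the stream-depth bypass is
driven by the 'bypass' option of the stream section in suricata.yaml:
```
stream:
  bypass: yes        # bypass a flow once stream.reassembly.depth is reached
  reassembly:
    depth: 1mb       # inspect at most 1mb per flow direction
```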
For a basic test of the feature, use the following nftables ruleset:
```
table ip filter {
    chain output {
        type filter hook output priority 0; policy accept;
        ct mark 0x1 counter accept
        oif lo counter queue num 0
    }

    chain connmark_save {
        type filter hook output priority 1; policy accept;
        mark 0x1 ct mark set mark counter
        ct mark 0x1 counter
    }
}
```
And use a bypass mark and mask of 1 in the nfq configuration. Then you
can test the system by scp'ing a big file to 127.0.0.1. You can also
use iperf to measure the performance on localhost. It is recommended
to lower the MTU to 1500, which increases the number of packets and
gives something more realistic.
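For reference, the corresponding nfq section would look something like
this (assuming the bypass-mark/bypass-mask option names):
```
nfq:
  mode: accept
  bypass-mark: 1
  bypass-mask: 1
```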
Call the packet bypass callback if necessary and update the flow
state: if the callback is successful we set the capture bypassed
state, and in case of failure we switch to the local bypassed state.
As a capture method like nfq will cut both sides of the flow instantly,
we will not get the ACK for the last data that was received. So
it is better to force reassembly to make sure the entry times out
properly.
This patch adds two new states to the flow:
* local bypass: Suricata-only bypass; packets belonging to
a flow in this state will be discarded fast
* capture bypass: the capture method is handling the bypass, and Suricata
will discard packets that are currently queued
This adds a bypassed state to the flow that will be set when a bypass
decision is taken. In the case of capture bypass this will allow
the flow entry to be removed from the flow table faster, instead of
waiting for the "established" timeout.
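Sketched as C, the state set could look like this (names follow the
description above and are illustrative, not the exact definition):
```
/* sketch: flow states, including the two new bypass states */
enum FlowState {
    FLOW_STATE_NEW = 0,
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_CLOSED,
    FLOW_STATE_LOCAL_BYPASSED,   /* suricata-only: discard packets fast */
    FLOW_STATE_CAPTURE_BYPASSED, /* capture method handles the bypass;
                                  * entry can leave the flow table early */
};
```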
Previously the src/dest IPs in TLS events would differ between
IDS and IPS modes. Make the header creation direction-sensitive
so they are identical in both modes.
This patch adds a transaction counter for application layers
that support it. The counting is done after parsing by the
different application layers.
This results in new data in the stats output, which looks like:
```
"app-layer": {
"tx": {
"dns_udp": 21433,
"http": 12766,
"smtp": 0,
"dns_tcp": 0
}
},
```
To be able to add a transaction counter we need a ThreadVars
argument in the AppLayerParserParse function.
This function is used massively in unittests,
which results in a long commit.
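For illustration, the signature change amounts to roughly this
(parameter list abbreviated; treat it as a sketch):
```
/* before: no ThreadVars, so no per-thread counter can be updated */
int AppLayerParserParse(AppLayerParserThreadCtx *tctx, Flow *f,
                        AppProto alproto, uint8_t flags,
                        uint8_t *input, uint32_t input_len);

/* after: ThreadVars passed in, allowing the tx counter to be incremented */
int AppLayerParserParse(ThreadVars *tv, AppLayerParserThreadCtx *tctx,
                        Flow *f, AppProto alproto, uint8_t flags,
                        uint8_t *input, uint32_t input_len);
```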
When a segment only partially fits within the stream depth, the stream
depth reached flag was not set, resulting in continuous
inspection of the rest of the session.
By setting the stream depth reached flag when the segment partially
fits, we avoid re-entering the code and no longer take a code
path that results in the flag not being set.
Detection plugin for TLS certificate fields notBefore and notAfter.
Supports equal to, less than, greater than, and range operations
for both keywords. Dates can be represented as either ISO 8601 or
epoch (Unix time).
Examples:
alert tls [...] tls_cert_notafter:1445852105; [...]
alert tls [...] tls_cert_notbefore:<2015-10-22T23:59:59; [...]
alert tls [...] tls_cert_notbefore:>2015-10-22; [...]
alert tls [...] tls_cert_notafter:2000-10-22<>2020-05-15; [...]
If a rule option value starts with a double quote, ensure it
ends with a double quote, exclusive of white space, which gets
trimmed anyway.
This catches errors like 'filemagic:"picture" sid:5555555;', reporting
that a missing semicolon may be the cause.
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used runs into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
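Condensed into a C sketch (field and macro names are assumptions in
the style of the code base, not the exact implementation):
```
/* one flow manager pass: rows whose earliest possible timeout is in
 * the future are skipped without taking the bucket lock. Empty rows
 * store INT_MAX in next_ts, so the same comparison skips them too. */
static void FlowManagerHashWalk(FlowBucket *hash, uint32_t hash_size,
                                int32_t now_sec)
{
    for (uint32_t i = 0; i < hash_size; i++) {
        FlowBucket *fb = &hash[i];
        if (SC_ATOMIC_GET(fb->next_ts) > now_sec)
            continue;                 /* nothing here can time out yet */
        FBLOCK_LOCK(fb);
        /* walk the flow list, evict timed-out flows and recompute
         * fb->next_ts from the remaining flows' states */
        FBLOCK_UNLOCK(fb);
    }
}

/* any flow state change invalidates the row timestamp so the manager
 * recomputes it on the next pass */
void FlowUpdateState(Flow *f, const enum FlowState s)
{
    f->flow_state = s;
    if (f->fb != NULL)
        SC_ATOMIC_SET(f->fb->next_ts, 0);
}
```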
This patch also adds more performance counters:
flow_mgr.flows_checked       | Total | 929
flow_mgr.flows_notimeout     | Total | 391
flow_mgr.flows_timeout       | Total | 538
flow_mgr.flows_removed       | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked        | Total | 1000000
flow_mgr.rows_skipped        | Total | 998835
flow_mgr.rows_empty          | Total | 290
flow_mgr.rows_maxlen         | Total | 2
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did reach the timeout
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the flows
would time out anyway
flow_mgr.rows_busy: rows skipped because they were locked by another thread
The counters below only relate to rows that were not skipped:
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best to keep this
low, so increase hash-size if needed.
Instead of a single big FlowProto array containing timeouts separately
for normal and emergency cases, plus the 'Free' pointer for the
protoctx, split up these arrays.
An array of FlowProtoTimeout structs for just the normal timeouts, and
a mirror of it for the emergency timeouts, are used through a pointer
that is set at init and swapped by the emergency logic. It's swapped
back when the emergency is over.
The free funcs are moved to their own array.
This simplifies the timeout lookup code and shrinks the data that is
commonly used.
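In C, the layout change could be sketched like this (struct fields and
names are illustrative):
```
#include <stdint.h>

#define FLOW_PROTO_MAX 4   /* illustrative */

/* per-protocol timeouts for one mode (normal or emergency) */
typedef struct FlowProtoTimeout_ {
    uint32_t new_timeout;
    uint32_t est_timeout;
    uint32_t closed_timeout;
} FlowProtoTimeout;

static FlowProtoTimeout flow_timeouts_normal[FLOW_PROTO_MAX];
static FlowProtoTimeout flow_timeouts_emerg[FLOW_PROTO_MAX];

/* all timeout lookups go through this pointer ... */
static FlowProtoTimeout *flow_timeouts = flow_timeouts_normal;

/* ... which the emergency logic swaps in O(1) */
static void FlowTimeoutsEmergency(void) { flow_timeouts = flow_timeouts_emerg; }
static void FlowTimeoutsReset(void)     { flow_timeouts = flow_timeouts_normal; }

/* the free funcs live in their own, rarely touched array */
static void (*flow_freefuncs[FLOW_PROTO_MAX])(void *protoctx);
```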
Add a generic 'capture' section to the YAML:

# general settings affecting packet capture
capture:
  # disable NIC offloading. It's restored when Suricata exits.
  # Enabled by default.
  #disable-offloading: false
  #
  # disable checksum validation. Same as setting '-k none' on the
  # commandline.
  #checksum-validation: none
Moved and adapted code from detect-filemd5 to util-detect-file-hash,
generalised the code to work with SHA-1 and SHA-256, and added the
necessary flags and other constants.
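For example, the generalised keywords can then be used like this (the
list file names are hypothetical):
```
alert http any any -> any any (msg:"SHA-1 blacklist match"; filesha1:sha1-blacklist.list; sid:1000002; rev:1;)
alert http any any -> any any (msg:"SHA-256 blacklist match"; filesha256:sha256-blacklist.list; sid:1000003; rev:1;)
```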
Many rules have the same address vars, so instead of parsing them
each time use a hash to store the string and the parsed result.
Rules now reference the stored result in the hash table.
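A rough sketch of the caching scheme (generic C; the hash helper and
insert function are assumed, not the exact engine code):
```
#include <stdint.h>
#include <string.h>

typedef struct DetectAddressHead_ DetectAddressHead; /* engine type */

/* one cache entry: the raw address string and its parse result */
typedef struct AddressStringCache_ {
    char *string;                      /* e.g. "[$HOME_NET,!10.0.0.5]" */
    DetectAddressHead *parsed;         /* shared parse result */
    struct AddressStringCache_ *next;  /* bucket chain */
} AddressStringCache;

uint32_t StringHash(const char *s);                    /* assumed helper */
const DetectAddressHead *ParseAndInsert(AddressStringCache **b,
        uint32_t n, uint32_t h, const char *s);        /* assumed helper */

/* look up the parse result for 'str', parsing only on a cache miss */
const DetectAddressHead *
AddressGetParsed(AddressStringCache **buckets, uint32_t nbuckets,
                 const char *str)
{
    uint32_t h = StringHash(str) % nbuckets;
    for (AddressStringCache *e = buckets[h]; e != NULL; e = e->next) {
        if (strcmp(e->string, str) == 0)
            return e->parsed;          /* rule reuses the stored result */
    }
    return ParseAndInsert(buckets, nbuckets, h, str);
}
```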
Remove the quote from the end of the boundary= string. This was throwing
off the MIME parser so that it wouldn't always catch MIME boundaries,
causing things like missed attachments.
When running in live mode, the new default 'auto' value of
unix-command.enabled causes unix-command to be activated. This
allows users of live capture to benefit from the feature, with
no side effect for users running in offline capture.
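In suricata.yaml this corresponds to:
```
unix-command:
  enabled: auto    # activated for live capture, off for offline runs
  #filename: custom.socket
```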
We used to ignore additional USR2 signals while a rule reload was
running. This changes the counter to be incremented with every
additional USR2 signal, so we no longer ignore them, but it is still
limited to prevent huge overload or even overflow.
The rules were using the wrong decoder event type, which was
only set in the unlikely event of a complete overlap and
really had nothing to do with being too large.
Remove FRAG_TOO_LARGE as it's no longer being used; an overlap
event is already set in the case where this event would be set.
As the logging modules are no longer threading modules, rename
them so they don't look like they are being registered as
threading modules.
Also, move the registration to output.c, which will handle
registration of the loggers.
Introduces a new thread module, TMM_LOGGER, which is the
root-most logger.
It only handles loggers in the packet path; stats and flow
logging are not included.
The loggers are made up of a hierarchy of loggers. At the top we
have the root logger, which is the main entry point to
logging. Under the root there exist parent loggers that are the
entry points for specific types of loggers, such as packet loggers,
transaction loggers, etc. Each parent logger may have 0 or more
loggers that actually handle the job of producing output to something
like a file.
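A compressed C sketch of that hierarchy (illustrative names, not the
actual structures):
```
/* sketch: root -> parent (per logger type) -> concrete output loggers */
typedef struct Logger_ {
    const char *name;                 /* e.g. "eve-log", "fast" */
    void (*LogFunc)(void *thread_data, const void *data);
    struct Logger_ *next;             /* sibling under the same parent */
} Logger;

typedef struct ParentLogger_ {        /* e.g. packet or tx loggers */
    const char *type;
    Logger *children;                 /* loggers doing the real output */
    struct ParentLogger_ *next;
} ParentLogger;

typedef struct RootLogger_ {          /* single entry point, packet path */
    ParentLogger *parents;
} RootLogger;
```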
Thread restarts never worked well, and the rest of the engine
never really expected errors to lead to thread restarts. Either
an error is recoverable within the thread, or not at all.
So this patch removes the functionality completely.
Fix improper fread string handling. Improve error handling.
Skip trailing spaces for slightly prettier printing.
Coverity CID 400763.
Thanks to Steve Grubb for helping address this issue.
Address issue https://redmine.openinfosecfoundation.org/issues/1889
for hostbits. This involves updating the regular expression
to capture any trailing data, as the regex already keeps
spaces out of the name.
A unit test was converted to the new macros to find out which
line it was failing at after updating the regex.
Fixes issue: https://redmine.openinfosecfoundation.org/issues/1889
To catch the case where the ';' is missing, we have to expand the
regex to capture the whole name string, not just the leading
valid part. Then verify that there are no spaces in the name
(Snort has the same restriction) and fail if there are.
- drop:
    alerts: yes      # log alerts that caused drops
    flows: all       # start or all: 'start' logs only a single drop
                     # per flow direction, 'all' logs each dropped pkt