If there are many rules checking TCP flags, each of these rules would be
inspected against every TCP packet. Even though a single flags check is
not expensive, the combined cost of inspecting many rules against each
and every packet is high.
This patch implements a prefilter engine for flags. If a rule group
has rules looking for specific flags, an engine for that flag or
flag combination is set up. This way those rules are only inspected
if the flags are actually present in the packet.
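Conceptually the engine boils down to a per-flags-value callback that only
feeds its rules into the candidate list when the flags match. A rough sketch
with illustrative names (PrefilterFlagsCtx, PrefilterFlagsMatch and the sids
array are made up for this example, not the actual implementation):
```
/* Sketch: one prefilter engine per distinct flags value used by the rules.
 * All names here are illustrative. */
typedef struct PrefilterFlagsCtx_ {
    uint8_t flags;        /* e.g. TH_RST for flags:R */
    SigIntId *sids;       /* rules interested in exactly these flags */
    uint32_t sids_cnt;
} PrefilterFlagsCtx;

static void PrefilterFlagsMatch(DetectEngineThreadCtx *det_ctx,
                                Packet *p, const void *pectx)
{
    const PrefilterFlagsCtx *ctx = pectx;

    /* only hand the rules to the detection engine if the flags are
     * actually present in this packet */
    if (PKT_IS_TCP(p) && (TCP_GET_FLAGS(p) & ctx->flags) == ctx->flags)
        PrefilterAddSids(&det_ctx->pmq, ctx->sids, ctx->sids_cnt);
}
```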
Introduce the prefilter keyword to force a keyword to be used as the prefilter.
e.g.:
```
alert tcp any any -> any any (content:"A"; flags:R; prefilter; sid:1;)
alert tcp any any -> any any (content:"A"; flags:R; sid:2;)
alert tcp any any -> any any (content:"A"; dsize:1; prefilter; sid:3;)
alert tcp any any -> any any (content:"A"; dsize:1; sid:4;)
```
In sids 2 and 4 the content keyword is used in the MPM engine.
In sids 1 and 3 the flags and dsize keywords are used as the prefilter instead.
In preparation for the introduction of more general purpose prefilter
engines, rename PatternMatcherQueue to PrefilterRuleStore. The new
engines will fill this structure in a similar way to the current MPM
prefilters.
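The renamed structure stays a simple growable array of internal rule ids;
roughly along these lines (a sketch, field names may differ in detail):
```
/* Sketch of the renamed structure: a growable array of internal rule ids
 * that prefilter engines (MPM and the new ones) append candidates to. */
typedef struct PrefilterRuleStore_ {
    SigIntId *rule_id_array;     /* rule ids of candidate matches */
    uint32_t rule_id_array_cnt;  /* number of ids currently stored */
    uint32_t rule_id_array_size; /* allocated capacity, in ids */
} PrefilterRuleStore;
```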
In the case of a bypassed flow we add a 'bypass' key that can
be 'local' or 'capture'. This allows the user to know if the
capture bypass method is failing by looking at the 'bypass' key.
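An EVE flow record for a bypassed flow then carries something like this
(illustrative excerpt; the other flow fields are omitted):
```
"flow": {
    ...
    "bypass": "capture"
}
```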
This adds a new timeout value for the local bypassed state. For user
simplicity it is simply called `bypassed`. The patch also adds
an emergency value so we can clean up bypassed flows a bit faster.
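In suricata.yaml this appears under flow-timeouts, along these lines
(the numbers here are only example values):
```
flow-timeouts:
  default:
    bypassed: 100            # timeout for locally bypassed flows
    emergency-bypassed: 50   # shorter cleanup when in emergency mode
```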
If we still see packets for a capture bypassed flow after some time, it
means that the capture method is not handling the bypass correctly,
so it is better to switch to the local bypass method.
If a packet triggers a rule that contains both the bypass and
filestore keywords, the file won't be stored since the bypassed
flow is no longer inspected.
To avoid that, when a rule contains the filestore keyword
we make sure the bypass keyword is not also present.
This adds a new keyword which permits calling the
bypass callback when a sig is matched.
The callback is only called once the match of the sig
is complete.
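For example (an illustrative rule; the flow is bypassed once the whole
signature has matched):
```
alert tcp any any -> any any (content:"SSH-"; bypass; sid:1000001; rev:1;)
```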
This patch activates bypass for encrypted flows and for flows
that have reached the stream depth on both sides.
For encrypted flows, suricata stops inspection, so
we can just get them out of the way via bypass. The same logic applies
to flows that have reached the stream depth.
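This is tied to the stream engine's bypass switch in suricata.yaml
(assuming the stream.bypass option this work relies on):
```
stream:
  # once encryption or stream depth makes further inspection pointless,
  # hand the flow over to the bypass mechanism
  bypass: yes
```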
For a basic test of the feature, use the following nftables ruleset:
```
table ip filter {
    chain output {
        type filter hook output priority 0; policy accept;
        ct mark 0x1 counter accept
        oif lo counter queue num 0
    }
    chain connmark_save {
        type filter hook output priority 1; policy accept;
        mark 0x1 ct mark set mark counter
        ct mark 0x1 counter
    }
}
```
And use a bypass mark and mask of 1 in the nfq configuration. Then you
can test the system by scp'ing a big file to 127.0.0.1. You can also
use iperf to measure the performance on localhost. It is recommended
to lower the MTU to 1500 to get something more realistic, as this
increases the number of packets.
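For reference, the corresponding nfq section would look something like this
(option names as introduced by this work; the values match the ruleset above):
```
nfq:
  mode: accept
  bypass-mark: 1   # packet mark set on bypassed flows, saved to ct mark above
  bypass-mask: 1   # mask used when applying and checking the mark
```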
Call the packet bypass callback if necessary and update the flow
state: in case of failure we switch to the local bypassed state, and
we set the capture bypassed state if the callback is successful.
As a capture method like nfq will cut both sides of the flow instantly,
we will not get the ack for most of the data that has been received. So
it is better to force reassembly to be sure the entry reaches its
timeout.
This patch adds two new states to the flow:
* local bypass: suricata-only bypass; packets belonging to
  a flow in this state will be discarded quickly
* capture bypass: the capture method is handling the bypass and suricata
  will discard packets that are currently queued
A bypassed state is set on the flow when a bypass decision is
taken. In the case of capture bypass this allows the flow entry to
be removed from the flow table faster, instead of waiting for the
"established" timeout.
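In code the two states simply extend the flow state enum, roughly
(a sketch; the exact definition in the tree may differ):
```
/* Sketch: the flow state enum with the two new bypass states appended. */
enum FlowState {
    FLOW_STATE_NEW = 0,
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_CLOSED,
    FLOW_STATE_LOCAL_BYPASSED,   /* suricata-only bypass: discard fast */
    FLOW_STATE_CAPTURE_BYPASSED, /* capture method handles the bypass */
};
```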
Previously the src/dest IPs in TLS events would differ between
IDS and IPS modes. Make the header creation direction-sensitive
so they are identical in both modes.
This patch adds a transaction counter for application layers
that support it. Counting is done after parsing by the
different application layers.
This results in new data in the stats output, which looks like:
```
"app-layer": {
"tx": {
"dns_udp": 21433,
"http": 12766,
"smtp": 0,
"dns_tcp": 0
}
},
```
To be able to add a transaction counter, we need a ThreadVars
pointer in the AppLayerParserParse function.
This function is used extensively in unittests,
which results in a long commit.
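The signature change itself is mechanical; after the patch the function
takes the ThreadVars as its first argument, roughly (the argument list
beyond the added ThreadVars may differ in detail):
```
int AppLayerParserParse(ThreadVars *tv, AppLayerParserThreadCtx *alp_tctx,
                        Flow *f, AppProto alproto, uint8_t flags,
                        uint8_t *input, uint32_t input_len);
```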
When a segment only partially fit within the stream depth, the stream
depth reached flag was not set, resulting in continuous
inspection of the rest of the session.
By setting the stream depth reached flag when the segment partially
fits, we avoid re-entering that code and no longer take the code
path that left the flag unset.
Detection plugin for TLS certificate fields notBefore and notAfter.
Supports equal to, less than, greater than, and range operations
for both keywords. Dates can be represented as either ISO 8601 or
epoch (Unix time).
Examples:
```
alert tls [...] tls_cert_notafter:1445852105; [...]
alert tls [...] tls_cert_notbefore:<2015-10-22T23:59:59; [...]
alert tls [...] tls_cert_notbefore:>2015-10-22; [...]
alert tls [...] tls_cert_notafter:2000-10-22<>2020-05-15; [...]
```
If a rule option value starts with a double quote, ensure it
also ends with a double quote, excluding trailing white space,
which gets trimmed anyway.
This catches errors like 'filemagic:"picture" sid:5555555;',
reporting that a missing semicolon may be the cause.
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used runs into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
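A minimal sketch of the helper (simplified; the sentinel value and the
exact field handling are illustrative):
```
/* Sketch: every state change goes through this helper, which also
 * invalidates the bucket timestamp so the flow manager recomputes it. */
void FlowUpdateState(Flow *f, const enum FlowState s)
{
    f->flow_state = s;
    if (f->fb != NULL) {
        /* 0 here means "unknown, recompute on the next pass" */
        SC_ATOMIC_SET(f->fb->next_ts, 0);
    }
}
```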
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
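Put together, the manager's walk looks roughly like this (a sketch with
simplified locking; the counters struct and the row timeout helper name
are illustrative):
```
for (uint32_t idx = 0; idx < flow_config.hash_size; idx++) {
    FlowBucket *fb = &flow_hash[idx];

    /* empty rows hold a sentinel (e.g. INT_MAX) in next_ts, so this
     * check also skips them without taking the bucket lock */
    if ((int32_t)now.tv_sec < SC_ATOMIC_GET(fb->next_ts)) {
        counters->rows_skipped++;
        continue;
    }

    /* otherwise lock the row, time out eligible flows, and recompute
     * the row's next_ts from the flows that remain */
    FBLOCK_LOCK(fb);
    FlowManagerHashRowTimeout(fb, now, counters);
    FBLOCK_UNLOCK(fb);
}
```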
This patch also adds more performance counters:
```
flow_mgr.flows_checked       | Total | 929
flow_mgr.flows_notimeout     | Total | 391
flow_mgr.flows_timeout       | Total | 538
flow_mgr.flows_removed       | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked        | Total | 1000000
flow_mgr.rows_skipped        | Total | 998835
flow_mgr.rows_empty          | Total | 290
flow_mgr.rows_maxlen         | Total | 2
```
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did reach the timeout
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the flows
would time out anyway
The counters below relate only to rows that were not skipped.
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best to keep this
low, so increase hash-size if needed.
flow_mgr.rows_busy: row skipped because it was locked by another thread