Address the issue where a DNS response would not be logged when
the traffic looks like:
- Request 1
- Request 2
- Response 1
- Response 2
which can happen on dual stack machines where the requests for A
and AAAA records are sent out at the same time on the same UDP "session".
A "window" is used to set the maximum number of outstanding
responses before considering the oldest ones lost.
The advancement through the buffer did not take the size of the
length field into account, resulting in the second request being
detected as bad data.
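A minimal sketch of the "window" idea (names and sizes illustrative, not the actual DNS parser code):
```
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define DNS_WINDOW 8 /* illustrative max number of outstanding requests */

typedef struct DnsTx {
    uint16_t tx_id;   /* transaction id from the DNS header */
    bool replied;     /* response seen (and logged)? */
} DnsTx;

/* On a response, look back at most DNS_WINDOW outstanding requests for
 * a matching transaction id, so "request 1, request 2, response 1,
 * response 2" still pairs up. Requests older than the window are
 * considered lost. */
static DnsTx *DnsFindRequest(DnsTx *txs, int tx_cnt, uint16_t tx_id)
{
    int start = tx_cnt > DNS_WINDOW ? tx_cnt - DNS_WINDOW : 0;
    for (int i = tx_cnt - 1; i >= start; i--) {
        if (txs[i].tx_id == tx_id && !txs[i].replied)
            return &txs[i];
    }
    return NULL; /* no match within the window */
}
```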
In the case of a tunnel packet, adding a mark to the root packet would
have the consequence of bypassing all the flows hosted in this tunnel.
This is not the intended behavior, so as an initial fix let's simply tell
Suricata that NFQ bypass is not possible for this kind of packet.
This patch also fixes a segfault. The root packet was accessed even when
it is NULL, causing a NULL pointer dereference:
ASAN:SIGSEGV
=================================================================
==24408==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000060 (pc 0x00000076f948 bp 0x7f435c000240 sp 0x7f435c000220 T5)
ASAN:SIGSEGV
==24408==AddressSanitizer: while reporting a bug found another one. Ignoring.
#0 0x76f947 in NFQBypassCallback /home/victor/dev/suricata/src/source-nfq.c:510
#1 0x4d0f02 in PacketBypassCallback /home/victor/dev/suricata/src/decode.c:395
#2 0x7b8a95 in StreamTcpPacket /home/victor/dev/suricata/src/stream-tcp.c:4661
#3 0x7b9ddd in StreamTcp /home/victor/dev/suricata/src/stream-tcp.c:4913
#4 0x68fa50 in FlowWorker /home/victor/dev/suricata/src/flow-worker.c:194
#5 0x7f0abd in TmThreadsSlotVarRun /home/victor/dev/suricata/src/tm-threads.c:128
#6 0x7f2958 in TmThreadsSlotVar /home/victor/dev/suricata/src/tm-threads.c:585
#7 0x7f436368e6f9 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76f9)
#8 0x7f4362802b5c in clone (/lib/x86_64-linux-gnu/libc.so.6+0x106b5c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /home/victor/dev/suricata/src/source-nfq.c:510 NFQBypassCallback
Thread T5 (W#04) created by T0 (Suricata-Main) here:
#0 0x7f4364ff2253 in pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x36253)
#1 0x7f9c48 in TmThreadSpawn /home/victor/dev/suricata/src/tm-threads.c:1843
#2 0x8da7c0 in RunModeSetIPSAutoFp /home/victor/dev/suricata/src/util-runmodes.c:519
#3 0x73e3ff in RunModeIpsNFQAutoFp /home/victor/dev/suricata/src/runmode-nfq.c:74
#4 0x7503fa in RunModeDispatch /home/victor/dev/suricata/src/runmodes.c:382
#5 0x7e5cb3 in main /home/victor/dev/suricata/src/suricata.c:2547
#6 0x7f436271c82f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
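A minimal sketch of the fix, using stand-in types (the real code lives in NFQBypassCallback in source-nfq.c):
```
#include <stdint.h>
#include <stddef.h>

/* Minimal stand-ins for the real Suricata types; sketch only. */
typedef struct Packet_ {
    struct Packet_ *root;   /* set for tunnel packets, may be NULL */
    uint32_t nfq_mark;      /* mark handed to NFQ on verdict */
    int is_tunnel;          /* hypothetical tunnel indicator */
} Packet;

/* Return 1 if the packet could be flagged for bypass, 0 otherwise. */
static int NFQBypassCallback(Packet *p)
{
    if (p->is_tunnel) {
        /* Marking the root packet would bypass *all* flows hosted in
         * the tunnel, so refuse bypass for tunnel packets. This also
         * avoids the crash above: p->root can be NULL here and is no
         * longer dereferenced. */
        return 0;
    }
    p->nfq_mark = 1;        /* hypothetical bypass mark */
    return 1;
}
```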
For flow bypass, the flow timeout handling is triggered which may
create up to 3 pseudo packets that hold a reference to the flow.
However, in the bypass case the code signaled to the timeout logic
that the flow can be freed unconditionally by returning 1. This
led to packets going through the engine with a pointer to a now
freed/recycled flow.
This patch fixes the logic by removing the special bypass case,
which seemed redundant anyway. Effectively reverts 68d9677.
Bug #1928.
Fix rate_filter issues: if the action was modified it wouldn't be logged
in EVE. To address this, pass the PacketAlert structure to the threshold
code so it can flag the PacketAlert as modified. Use this in logging.
Update the API to use const where possible. Fix a timeout issue that this
uncovered.
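A sketch of the idea with hypothetical names: the threshold code now receives the PacketAlert and records the modification on it, which the EVE logger can check:
```
#include <stdint.h>

#define PACKET_ALERT_MODIFIED (1 << 0)   /* hypothetical flag */

typedef struct PacketAlert_ {
    uint8_t action;
    uint8_t flags;
} PacketAlert;

/* rate_filter changed the action: update the alert and flag it so the
 * EVE output logs the modified action instead of the original one. */
static void ThresholdRateFilterApply(PacketAlert *pa, uint8_t new_action)
{
    if (pa->action != new_action) {
        pa->action = new_action;
        pa->flags |= PACKET_ALERT_MODIFIED;
    }
}
```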
Introduce 'Protocol detection'-only rules. These rules will only be
fully evaluated when protocol detection has completed. To allow
mixing the app-layer-protocol keyword with other types of matches,
the keyword can also inspect the flow's app-protos per packet.
Implement prefilter for the 'PD-only' rules.
Add negated matches to match list instead of amatch.
Allow matching on 'failed'.
Introduce per packet flags for proto detection. Flags are used to
only inspect once per direction. Flag packet on PD-failure too.
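e.g. (hypothetical rules):
alert tcp any any -> any any (app-layer-protocol:failed; sid:1;)
alert tcp any any -> any 80 (app-layer-protocol:!http; content:"abc"; sid:2;)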
The Flow::data_al_so_far field was used for tracking data already
parsed when the protocol for the current direction wasn't known yet. As
this behaviour has changed, the tracking can be removed.
When the current direction doesn't get a protocol detection, but the
opposing direction did, previously we would send the current data to
the parser. Then when we'd be invoked again (until the protocol
detection finally failed) we'd get the same data + the new data. To
make sure we'd not send the same data to the parser again, the flow
kept track of how much was already sent to the app-layer using
data_al_so_far.
This patch changes the behaviour. Instead of sending the data for
the current direction right away, we only do this when protocol
detection is complete. This way we won't have to track anything.
Suricata should not completely bypass a flow before both ends of it
have reached the stream depth or have reached a certain state. The
justification is that Suricata needs the ACKs to process the other side,
so we can't really decide to cut only one side.
The SCPacketTimestamp function returns packet timestamps as two
numbers (seconds and microseconds).
Example:
local sec, usec = SCPacketTimestamp()
Signed-off-by: Nicolas Thill <ntl@p1sec.com>
A warning log is already emitted if eve-log is enabled in the
configuration but JSON support is not built in, so the logger
registration functions can be silent.
When memory allocations happened in HTTP body and general file
tracking, malloc/realloc errors (most likely in the form of memcap
reached conditions) could lead to an endless loop in the buffer
grow logic.
This patch implements proper error handling for all Append/Insert
functions for the streaming API, and it explicitly enables compiler
warnings if the results are ignored.
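The compiler-warning part can be done with a function attribute; a minimal sketch with a hypothetical function modeled on the streaming API:
```
#include <stdint.h>
#include <stddef.h>

/* Callers that ignore the result get a compile-time warning on
 * GCC/Clang. */
#define WARN_UNUSED __attribute__((warn_unused_result))

WARN_UNUSED int StreamingBufferAppend(const uint8_t *data, size_t len);

int StreamingBufferAppend(const uint8_t *data, size_t len)
{
    if (data == NULL || len == 0)
        return -1;          /* e.g. memcap reached: caller must handle */
    /* ... grow the buffer if needed, then copy the data ... */
    return 0;
}
```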
Some code won't work well when the OS doesn't allow RWX pages. This
patch introduces a check for runtime evaluation of the OS's policy on
this.
Thanks to Shawn Webb from HardenedBSD for suggesting this solution.
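A minimal sketch of such a runtime check using mmap (assuming POSIX; not necessarily the exact implementation):
```
#include <sys/mman.h>
#include <unistd.h>

/* Try to map one page readable, writable and executable. On systems
 * that forbid RWX mappings (e.g. HardenedBSD, PaX MPROTECT) the call
 * fails. */
static int PageSupportsRWX(void)
{
    void *ptr = mmap(0, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_ANON | MAP_PRIVATE, -1, 0);
    if (ptr == MAP_FAILED)
        return 0;   /* RWX pages not allowed */
    munmap(ptr, getpagesize());
    return 1;
}
```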
When a rule matches and fires filestore, we may want
to increase the stream reassembly depth for this specific flow.
This adds the 'depth' setting to the file-store config,
which permits specifying how much data we want to reassemble
into a stream.
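In the YAML this could look like (value illustrative):
```
file-store:
  enabled: yes
  # how much of a stream to reassemble when a filestore rule matched
  depth: 1mb
```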
Some protocols, like Modbus, require
an infinite stream depth because sessions
are kept open and we want to analyze everything.
Since we have a stream reassembly depth per stream,
we can also set a stream reassembly depth per proto.
This permits setting a stream depth value for each
app-layer protocol.
By default, the stream depth specified for TCP is used;
it's then possible for an app-layer module to specify
its own value via a proper API.
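A sketch of what such a registration could look like; the function name, ALPROTO value and the meaning of 0 as "no limit" are assumptions:
```
#include <stdint.h>
#include <netinet/in.h>   /* IPPROTO_TCP */

typedef uint16_t AppProto;                 /* stand-in type */
#define ALPROTO_MODBUS ((AppProto)10)      /* illustrative value */

/* Assumed API shape; stub here so the sketch is self-contained. */
static void AppLayerParserSetStreamDepth(uint8_t ipproto, AppProto alproto,
                                         uint32_t stream_depth)
{
    (void)ipproto; (void)alproto; (void)stream_depth;
    /* the real implementation would store the depth for this alproto */
}

void RegisterModbusStreamDepth(void)
{
    /* Modbus sessions stay open, so reassemble everything. */
    AppLayerParserSetStreamDepth(IPPROTO_TCP, ALPROTO_MODBUS, 0);
}
```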
detect-cipservice.c:161:29: warning: Assigned value is garbage or undefined
cipserviced->cipservice = input[0];
^ ~~~~~~~~
detect-cipservice.c:162:27: warning: Assigned value is garbage or undefined
cipserviced->cipclass = input[1];
^ ~~~~~~~~
detect-cipservice.c:163:31: warning: Assigned value is garbage or undefined
cipserviced->cipattribute = input[2];
^ ~~~~~~~~
3 warnings generated.
Add support for the ENIP/CIP industrial protocol.
This is an app-layer implementation which adds the "enip" protocol
and the "cip_service" and "enip_command" keywords.
Implements AFL entry points.
Currently the regular 'Header' inspection code will run each time
after the HTTP progress has moved beyond 'headers'. This will include
the trailers if there are any.
Leave the code in place as this model will change in the not too
distant future.
Instead of the linked list of engines, set up an array
of engines. This should provide better locality.
Also shrink the engine structure so that we can fit
2 on a cacheline.
Remove the FreeFunc from the runtime engines. Engines
now have a 'gid' (global id) that can be used to look
up the registered Free function.
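A sketch of the shape (field names and sizes illustrative):
```
#include <stdint.h>
#include <stddef.h>

/* Small enough that two engines fit in a 64-byte cacheline. */
typedef struct RuntimeEngine_ {
    uint16_t gid;       /* global id: index into the free func registry */
    uint16_t sm_list;
    int (*Callback)(void *ctx, const void *data);
    void *ctx;
} RuntimeEngine;        /* 24 bytes on LP64 */

/* Free funcs live in a registry indexed by gid, not in each engine. */
static void (*g_free_funcs[256])(void *ctx);

static void EngineFree(RuntimeEngine *e)
{
    if (g_free_funcs[e->gid] != NULL)
        g_free_funcs[e->gid](e->ctx);
}
```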
The order of keyword registration currently affects inspect engine
registration order and ultimately the order of inspect engines per
rule. Which in turn affects state keeping.
This patch makes sure the ordering is the same as with older
releases.
Move engine and registration into the keyword file.
Register as 'ALPROTO_UNKNOWN' instead of per alproto. The
registration will only apply it to those rules that have
events set.
Inspect engines are called per signature per sigmatch list. Most
wrap around DetectEngineContentInspection, but it's more generic.
Until now, the inspect engines were set up in a large per-ipproto,
per-alproto, per-direction table. For stateful inspection each
engine needed a global flag.
This approach had a number of issues:
1. inefficient: each inspection round walked the table and then
checked if the inspect engine was even needed for the current
rule.
2. clumsy registration with global flag registration.
3. global flag space was approaching the need for 64 bits
4. duplicate registration for alprotos supporting both TCP and
UDP (DNS).
This patch introduces a new approach.
First, it does away with the per ipproto engines. This wasn't used.
Second, it adds a per signature list of inspect engine containing
only those engines that actually apply to the rule.
Third, it gets rid of the global flags and replaces them with flags
assigned per rule per engine.
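A sketch of the new shape (names illustrative):
```
#include <stdint.h>
#include <stddef.h>

/* Per-signature inspect engine: a rule only carries the engines that
 * actually apply to it. */
typedef struct SigInspectEngine_ {
    uint16_t alproto;      /* app protocol the engine applies to */
    uint8_t dir;           /* direction: toserver or toclient */
    uint32_t flags;        /* per rule, per engine; replaces globals */
    int (*Callback)(void *de_ctx, void *det_ctx, void *s, void *f);
    struct SigInspectEngine_ *next;
} SigInspectEngine;

/* Inspection walks only the signature's own engines instead of the
 * old global per-ipproto/per-alproto/per-direction table. */
static void InspectSignature(SigInspectEngine *list, void *de, void *det,
                             void *s, void *f)
{
    for (SigInspectEngine *e = list; e != NULL; e = e->next)
        e->Callback(de, det, s, f);
}
```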
Register keywords globally at start up.
Create a map of the registry per detection engine. This is needed because
the sgh_mpm_context value is set per detect engine.
Remove APP_MPMS_MAX.
Instead of a type per buffer type, pass just 3 possible types:
packet, stream, state.
The individual types weren't used. State is just there to mean
'not packet and not stream'.
Many of the packet engines are very generic. Rules are generally more
limited.
A rule like 'alert tcp any any -> any 888 (flags:S; sid:1;)' would still
be inspected against every SYN packet in most cases (it depends a bit on
rule grouping though).
This extra match logic adds an additional check to these packet engines.
It can add a check based on alproto, source port and dest port. It uses
only one of these 3. Priority order is src port > alproto > dst port.
For the ports only 'single' ports are used at this time.
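A sketch of the check, with illustrative names; at setup time exactly one discriminator is chosen per engine:
```
#include <stdint.h>
#include <stdbool.h>

enum ExtraMatchType { EM_NONE, EM_SRC_PORT, EM_ALPROTO, EM_DST_PORT };

typedef struct PktEngine_ {
    enum ExtraMatchType type;  /* only one of the three is used */
    uint16_t value;            /* a single port, or an alproto id */
} PktEngine;

/* Run the packet engine only if the cheap extra check passes. */
static bool ExtraMatch(const PktEngine *e, uint16_t sp, uint16_t dp,
                       uint16_t alproto)
{
    switch (e->type) {
        case EM_SRC_PORT: return sp == e->value;
        case EM_ALPROTO:  return alproto == e->value;
        case EM_DST_PORT: return dp == e->value;
        default:          return true;
    }
}
```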
If there are many rules for TCP flags these rules would be inspected
against each TCP packet. Even though the flags check is not expensive,
the combined cost of inspecting multiple rules against each and every
packet is high.
This patch implements a prefilter engine for flags. If a rule group
has rules looking for specific flags, an engine for that flag or
flag combination is set up. This way those rules are only inspected
if the flag is actually present in the packet.
Introduce prefilter keyword to force a keyword to be used as prefilter.
e.g.
alert tcp any any -> any any (content:"A"; flags:R; prefilter; sid:1;)
alert tcp any any -> any any (content:"A"; flags:R; sid:2;)
alert tcp any any -> any any (content:"A"; dsize:1; prefilter; sid:3;)
alert tcp any any -> any any (content:"A"; dsize:1; sid:4;)
In sid 2 and 4 the content keyword is used in the MPM engine.
In sid 1 and 3 the flags and dsize keywords will be used.
In preparation of the introduction of more general purpose prefilter
engines, rename PatternMatcherQueue to PrefilterRuleStore. The new
engines will fill this structure in a similar way to the current MPM
prefilters.
In the case of a bypassed flow we add a 'bypass' key that can
be 'local' or 'capture'. This will allow the user to know if the
capture bypass method is failing by looking at the 'bypass' key.
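In a flow event this could look like (sketch; the exact placement of the key in the event is an assumption):
```
"flow": {
    "state": "bypassed",
    "bypass": "capture"
}
```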
This adds a new timeout value for the local bypassed state. For user
simplification it is called just `bypassed`. The patch also adds
an emergency value so we can clean up bypassed flows a bit faster.
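In the flow-timeouts section this could look like (values illustrative; the emergency key name is an assumption):
```
flow-timeouts:
  default:
    bypassed: 100
    emergency-bypassed: 50
```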
If we see packets for a capture bypassed flow after some time, it
means that the capture method is not handling the bypass correctly,
so it is better to switch to the local bypass method.
If a packet triggers a rule which contains both the
bypass and filestore keywords,
the file won't be stored since the flow is not inspected.
To avoid that, when a rule contains the filestore keyword
we make sure the bypass keyword is not also present.
This adds a new keyword which permits calling the
bypass callback when a sig is matched.
The callback must be called when the match of the sig
is complete.
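e.g. a hypothetical rule:
alert tcp any any -> any 443 (content:"|16 03|"; bypass; sid:1;)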
This patch activates bypass for encrypted flows and for flows
that have reached the stream depth on both sides.
For encrypted flows, Suricata stops the inspection, so
we can just get them out of the way via bypass. The same logic
applies to flows that have reached the stream depth.
For a basic test of the feature, use the following ruleset:
```
table ip filter {
chain output {
type filter hook output priority 0; policy accept;
ct mark 0x1 counter accept
oif lo counter queue num 0
}
chain connmark_save {
type filter hook output priority 1; policy accept;
mark 0x1 ct mark set mark counter
ct mark 0x1 counter
}
}
```
And use a bypass mark and mask of 1 in the nfq configuration. Then you
can test the system by scp'ing a big file to 127.0.0.1. You can also
use iperf to measure the performance on localhost. It is recommended
to lower the MTU to 1500 to get something more realistic by increasing
the number of packets.
Call the packet bypass callback if necessary and update the flow
state: in case of failure we switch to the local bypassed state, and
we set the capture bypassed state if the callback is successful.
As a capture method like NFQ will cut both sides of the flow instantly,
we will not get the ACK for most data that has been received. So
it is better to force reassembly to be sure to get the timeout of
the entry.
This patch adds two new states to the flow:
* local bypass: for Suricata-only bypass; packets belonging to
a flow in this state will be discarded fast
* capture bypass: the capture method is handling the bypass and Suricata
will discard packets that are currently queued
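A sketch of the states as an enum (names modeled on the description above):
```
enum FlowState {
    FLOW_STATE_NEW,
    FLOW_STATE_ESTABLISHED,
    FLOW_STATE_CLOSED,
    FLOW_STATE_LOCAL_BYPASSED,    /* Suricata-only: discard fast     */
    FLOW_STATE_CAPTURE_BYPASSED,  /* capture method handles the flow */
};
```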
A bypassed state is added to the flow and set when a bypass
decision is taken. In the case of capture bypass this allows
the flow entry to be removed from the flow table faster, instead
of waiting for the "established" timeout.
Previously the src/dest ips in TLS events would differ between
IDS and IPS modes. Make the header creation direction sensitive
so they are identical in both modes.
This patch adds a transaction counter for application layers
supporting it. Analysis is done after the parsing by the
different application layers.
This results in new data in the stats output, which looks like:
```
"app-layer": {
"tx": {
"dns_udp": 21433,
"http": 12766,
"smtp": 0,
"dns_tcp": 0
}
},
```
To be able to add a transaction counter we need a ThreadVars
in the AppLayerParserParse function.
This function is massively used in unittests,
which results in a long commit.
When a segment only partially fits within the stream depth, the stream
depth reached flag was not set, resulting in continuous
inspection of the rest of the session.
By setting the stream depth reached flag when the segment partially
fits, we avoid re-entering the code and no longer take a code
path that results in the flag not being set.
Detection plugin for TLS certificate fields notBefore and notAfter.
Supports equal to, less than, greater than, and range operations
for both keywords. Dates can be represented as either ISO 8601 or
epoch (Unix time).
Examples:
alert tls [...] tls_cert_notafter:1445852105; [...]
alert tls [...] tls_cert_notbefore:<2015-10-22T23:59:59; [...]
alert tls [...] tls_cert_notbefore:>2015-10-22; [...]
alert tls [...] tls_cert_notafter:2000-10-22<>2020-05-15; [...]
If a rule option value starts with a double quote, ensure it
ends with a double quote, exclusive of white space which gets
trimmed anyway.
Catches errors like 'filemagic:"picture" sid:5555555;', reporting
that a missing semicolon may be the error.
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used can run into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
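A sketch of the row-skip logic (names illustrative; the real code also deals with locking and the per-flow walk):
```
#include <stdint.h>

#define FB_TS_EMPTY INT32_MIN   /* special value: row has no flows, so
                                 * it can be skipped without locking  */

typedef struct FlowBucket_ {
    struct Flow_ *head;
    int32_t next_ts;            /* earliest possible flow timeout */
} FlowBucket;

/* Only rows whose earliest possible timeout has passed need work. */
static int FlowManagerRowNeedsCheck(const FlowBucket *fb, int32_t now)
{
    if (fb->next_ts == FB_TS_EMPTY)
        return 0;
    return fb->next_ts <= now;
}
```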
This patch also adds more performance counters:
flow_mgr.flows_checked | Total | 929
flow_mgr.flows_notimeout | Total | 391
flow_mgr.flows_timeout | Total | 538
flow_mgr.flows_removed | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked | Total | 1000000
flow_mgr.rows_skipped | Total | 998835
flow_mgr.rows_empty | Total | 290
flow_mgr.rows_maxlen | Total | 2
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did reach the time out
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the flows
would time out anyway
The counters below only relate to rows that were not skipped.
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best to keep low,
so increase hash-size if needed.
flow_mgr.rows_busy: rows skipped because they were locked by another thread
Instead of a single big FlowProto array containing timeouts separately
for normal and emergency cases, plus the 'Free' pointer for the
protoctx, split up these arrays.
An array made of FlowProtoTimeout for just the normal timeouts and a
mirror of that for emergency timeouts are used through a pointer that
is set at init and swapped by the emergency logic. It's swapped
back when the emergency is over.
The free funcs are moved to their own array.
This simplifies the timeout lookup code and shrinks the data that is
commonly used.
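A sketch of the split (illustrative):
```
#include <stdint.h>

typedef struct FlowProtoTimeout_ {
    uint32_t new_timeout;
    uint32_t est_timeout;
    uint32_t closed_timeout;
} FlowProtoTimeout;

#define FLOW_PROTO_MAX 4 /* illustrative */

static FlowProtoTimeout flow_timeouts_normal[FLOW_PROTO_MAX];
static FlowProtoTimeout flow_timeouts_emergency[FLOW_PROTO_MAX];

/* Lookups go through this pointer; emergency mode just swaps it. */
static FlowProtoTimeout *flow_timeouts = flow_timeouts_normal;

static void FlowSetEmergency(int on)
{
    flow_timeouts = on ? flow_timeouts_emergency : flow_timeouts_normal;
}

/* Free funcs moved out of the hot structure into their own array. */
static void (*flow_freefuncs[FLOW_PROTO_MAX])(void *protoctx);
```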
Add a generic 'capture' section to the YAML:
# general settings affecting packet capture
capture:
# disable NIC offloading. It's restored when Suricata exits.
# Enabled by default
#disable-offloading: false
#
# disable checksum validation. Same as setting '-k none' on the
# commandline
#checksum-validation: none
Moved and adapted code from detect-filemd5 to util-detect-file-hash,
generalised code to work with SHA-1 and SHA-256 and added necessary
flags and other constants.
Many rules have the same address vars, so instead of parsing them
each time use a hash to store the string and the parsed result.
Rules now reference the stored result in the hash table.
Remove the quote from the end of the boundary= string. This was throwing
off the MIME parser so that it wouldn't always catch MIME boundaries,
causing things like missed attachments.
When running in live mode, the new default 'auto' value of
unix-command.enabled causes the unix-command socket to be activated.
This will allow users of live capture to benefit from the feature,
with no side effect for users running in offline capture.
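In the YAML (sketch):
```
unix-command:
  enabled: auto
```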
We used to ignore additional USR2 signals while a rule reload was running.
This changes the counter to be incremented with every additional USR2
signal so we don't ignore them anymore, but it's still limited to prevent
huge overload or even overflow.
The rules were using the wrong decoder event type, which was
only set in the unlikely event of a complete overlap, and which
really had nothing to do with being too large.
Remove FRAG_TOO_LARGE as it's no longer being used; an overlap
event is already set in the case where this event would be set.
As the logging modules are no longer threading modules, rename
them so they don't look like they are being registered as
threading modules.
Also, move the registration to output.c which will handle
registration of the loggers.
Introduces a new thread module, TMM_LOGGER, which is the
root-most logger.
It only handles loggers in the packet path; stats and flow
logging are not included.
The loggers are made up of a hierarchy of loggers. At the top we
have the root logger which is the main entry point to
logging. Under the root there exist parent loggers that are the
entry points for specific types of loggers such as packet loggers,
transaction loggers, etc. Each parent logger may have 0 or more
loggers that actually handle the job of producing output to something
like a file.
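A sketch of the hierarchy (names illustrative):
```
#include <stddef.h>

/* Logger tree: root -> parent per logger type -> output loggers. */
typedef struct Logger_ {
    const char *name;
    void (*LogFunc)(void *thread_data, const void *data);
    struct Logger_ *children;  /* e.g. file or syslog outputs */
    struct Logger_ *next;      /* sibling in the parent's list */
} Logger;

/* The root logger is the single entry point in the packet path: it
 * dispatches to the parent loggers (packet, transaction, ...), each
 * of which runs its registered output loggers. */
static void RootLog(Logger *root, void *td, const void *data)
{
    for (Logger *parent = root->children; parent; parent = parent->next)
        for (Logger *leaf = parent->children; leaf; leaf = leaf->next)
            leaf->LogFunc(td, data);
}
```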