Currently alert logging relies on the ability to change existing
values in the json_t structure to overwrite addresses with XFF
data. This feature is also used for "target" logging.
As we can't do this with JsonBuilder, create a new struct to
hold the 5-tuple, with the values swapped as needed and
overwritten with XFF data if needed. This struct is now
used to write out the 5-tuple, as well as to cache the information
for log fields written out later in the log path.
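A minimal sketch of the idea (struct and field names here are illustrative, not the actual Suricata definitions): resolve the direction swap and the XFF override once per alert, then reuse the result for both the 5-tuple output and the later log fields.
```c
#include <stdint.h>

/* Illustrative only: holds the already-resolved 5-tuple for one alert. */
typedef struct AlertJsonAddrSketch_ {
    char src_ip[46];   /* string form, large enough for IPv6 */
    char dst_ip[46];   /* either address may be replaced by XFF data */
    uint16_t sp;
    uint16_t dp;
    uint8_t proto;
} AlertJsonAddrSketch;
```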
Remove the Jansson-specific feature of being able to delete
an object from json_t, in preparation for the refactor to JsonBuilder.
Instead, create a new header for each alert to be logged.
Move the logging of the rule text to where the alert object
is being logged, to remove the usage of json_object_get...
Getting previously logged objects will not be possible with
JsonBuilder.
When PCRE `jit` is available, store the JIT stack in the keyword context
instead of keeping it global. This ensures proper cleanup and
re-initialization across a rule reload.
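A rough sketch of how that can look with the PCRE1 API (the context struct and function names are made up for illustration; JIT support is assumed to be compiled in):
```c
#include <pcre.h>
#include <stddef.h>

/* Illustrative per-keyword context that owns its JIT stack. */
typedef struct PcreCtxSketch_ {
    pcre *re;
    pcre_extra *extra;
    pcre_jit_stack *jit_stack;
} PcreCtxSketch;

static int PcreCtxSketchSetupJit(PcreCtxSketch *ctx)
{
    ctx->jit_stack = pcre_jit_stack_alloc(32 * 1024, 512 * 1024);
    if (ctx->jit_stack == NULL)
        return -1;                           /* caller falls back to pcre_exec */
    /* NULL callback + data: use this stack for JIT matching on this regex. */
    pcre_assign_jit_stack(ctx->extra, NULL, ctx->jit_stack);
    return 0;
}

static void PcreCtxSketchFreeJit(PcreCtxSketch *ctx)
{
    if (ctx->jit_stack != NULL)
        pcre_jit_stack_free(ctx->jit_stack); /* also runs on rule reload */
}
```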
File store v1 has been deprecated and was scheduled for removal
by June 2020.
Log an error if a file-store configuration is loaded without
version set to 2.
Remove the dead C code left over after the Rust implementation.
Invalid/migrated tests have also been deleted.
All function calls in C have been replaced with appropriate calls to
Rust functions. The same has been done for smb/detect.rs as part of this
migration.
This commit improves how the parser handles the `RSET` command.
Termination of the transaction occurs when the `RSET` ack is seen (reply
code 250).
Bug: #3677
In case of bad IPv4, TCP or UDP, the per-packet ip4vars/tcpvars/udpvar
structures would not be cleaned up, because the cleanup depends on the
'header' pointer being set, but the error handling would unset that.
This could mean these structures were already filled with values before
the error was detected. As packets were recycled, the next packet decoding
would use this unclean structure.
To make things worse, these structures are part of unions. IPv4/IPv6 and
TCP/ICMPv4/ICMPv6 share the same memory location.
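A tiny self-contained illustration of the hazard (the struct is made up; the real Packet layout differs): because the per-protocol vars share storage, they have to be cleared independently of whether the 'header' pointer survived decoding.
```c
#include <stdint.h>
#include <string.h>

struct FakePacket {
    const uint8_t *ip4h;       /* unset again by the error handling */
    union {
        struct { uint8_t opt_cnt; } ip4vars;
        struct { uint8_t exthdr_cnt; } ip6vars;
    } l3;                       /* IPv4/IPv6 share this memory */
};

static void RecyclePacketSketch(struct FakePacket *p)
{
    /* Clear the shared vars unconditionally, not only when ip4h is set,
     * so a recycled packet never starts from stale union contents. */
    memset(&p->l3, 0, sizeof(p->l3));
    p->ip4h = NULL;
}
```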
LibFuzzer+UBSAN found this both locally and in OSS-Fuzz:
decode-ipv6.c:654:9: runtime error: load of value 6, which is not a valid value for type 'bool'
#0 0x6146f0 in DecodeIPV6 /src/suricata/src/decode-ipv6.c:654:9
#1 0x617e96 in DecodeNull /src/suricata/src/decode-null.c:70:13
#2 0x9dd8a4 in DecodePcapFile /src/suricata/src/source-pcap-file.c:412:9
#3 0x4c8ed2 in LLVMFuzzerTestOneInput /src/suricata/src/tests/fuzz/fuzz_sigpcap.c:158:25
#4 0x457e51 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:556:15
#5 0x457575 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:470:3
#6 0x459917 in fuzzer::Fuzzer::MutateAndTestOne() /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:698:19
#7 0x45a6a5 in fuzzer::Fuzzer::Loop(std::__Fuzzer::vector<fuzzer::SizedFile, fuzzer::fuzzer_allocator<fuzzer::SizedFile> >&) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:830:5
#8 0x448728 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:824:6
#9 0x472552 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:19:10
#10 0x7ff0d097b82f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
#11 0x41bde8 in _start (/out/fuzz_sigpcap+0x41bde8)
Bug: #3496
Support reassembling multi-frag certificates. For this, the cert queuing
code is changed to queue just the cert, not the entire TLS record.
Improve message tracking. Better track where a message starts and ends
before passing data around.
Add wrapper macros to check for 'impossible' conditions that are active
in debug validation mode. This helps fuzzers find input that might trigger
these conditions, if they exist.
To avoid future memcpy issues, introduce a wrapper and check its
result.
When compiled with --enable-debug-validation the wrapper will abort if
the input is wrong.
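A minimal sketch of the two pieces, with made-up names (the actual macro and function names in the tree may differ):
```c
#include <stdlib.h>
#include <string.h>

/* Only active when built with --enable-debug-validation. */
#ifdef DEBUG_VALIDATION
#define DEBUG_VALIDATE_BUG_ON(exp) do { if (exp) abort(); } while (0)
#else
#define DEBUG_VALIDATE_BUG_ON(exp)
#endif

/* Checked memcpy wrapper: callers must look at the return value. */
static inline int SafeMemcpySketch(void *dst, size_t dst_size,
                                   const void *src, size_t src_size)
{
    DEBUG_VALIDATE_BUG_ON(dst == NULL || src == NULL);
    DEBUG_VALIDATE_BUG_ON(src_size > dst_size);
    if (dst == NULL || src == NULL || src_size > dst_size)
        return -1;
    memcpy(dst, src, src_size);
    return 0;
}
```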
In some error conditions, or potentially in case of multiple 'certificate'
records, the extracted subject, issuerdn and serial could be overwritten
without freeing the original memory.
The 'trec' buffer was not grown properly when it was found to be too small.
After this it wasn't checked again, so copying into the buffer could
overflow it.
This commit checks whether pre-6.x settings for ERSPAN Type I are
present. ERSPAN Type I is no longer enabled/disabled through a
configuration setting -- it's always enabled.
When a setting exists to enable/disable ERSPAN Type I decoding, a
warning message is logged.
Enabling/disabling ERSPAN Type I decode has been deprecated in 6.x
This commit adds a warning value when ERSPAN Type I configuration
settings are detected; specifically, when ERSPAN Type I `enabled` is
specified.
Enabling/disabling ERSPAN Type I decode has been deprecated in 6.x
This commit fixes the conversion of timestamps. Without the extra
parens, the resulting timestamp value for usecs will be 1 or 0 due to
operator precedence (`+` takes precedence over `?:`).
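A standalone illustration of the precedence problem (not the actual Napatech conversion code):
```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t ts_ns = 1576000123456789ULL;   /* some nanosecond timestamp */
    int round_up = 1;

    /* Buggy: '+' binds tighter than '?:', so the whole sum becomes the
     * condition and the usec value ends up as 1 (or 0). */
    uint64_t usec_buggy = (ts_ns / 1000) % 1000000 + round_up ? 1 : 0;

    /* Fixed: parenthesize the conditional so it only contributes 0 or 1. */
    uint64_t usec_fixed = (ts_ns / 1000) % 1000000 + (round_up ? 1 : 0);

    printf("buggy=%" PRIu64 " fixed=%" PRIu64 "\n", usec_buggy, usec_fixed);
    return 0;
}
```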
The end-of-processing has been restructured so that Packet and Hostbuffer
data structures are now released within the NapatechReleasePacket() callback
function.
atoi() and related functions lack a mechanism for reporting errors for
invalid values. Replace them with calls to the appropriate
ByteExtractString* functions.
Partially closes redmine ticket #3053.
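The core of the problem as a sketch (helper name invented; the real ByteExtractString* signatures differ): atoi("junk") and atoi("0") both return 0, so failure is invisible, while a strtoul()-based extractor can report it.
```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

static int ExtractUint32Sketch(const char *str, uint32_t *res)
{
    char *end = NULL;
    errno = 0;
    unsigned long val = strtoul(str, &end, 10);
    if (errno != 0 || end == str || val > UINT32_MAX)
        return -1;          /* invalid or out-of-range: atoi() can't say this */
    *res = (uint32_t)val;
    return 0;
}
```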
This patch allows ORing multiple flowbits in the isset and isnotset flowbit
actions.
For example, previously, in order to check whether either fb1 or fb2 was set,
two rules were required:
```
alert ip any any -> any any (msg:"Flowbit fb1 isset"; flowbits:isset,fb1; sid:1;)
alert ip any any -> any any (msg:"Flowbit fb2 isset"; flowbits:isset,fb2; sid:2;)
```
now, the same can be achieved with
```
alert ip any any -> any any (msg:"Flowbit fb2 isset"; flowbits:isset,fb1|fb2; sid:23;)
```
This operator can be used to check whether any one of several flowbits is set,
and also whether any one of several flowbits is not set.
For a done transaction with the PORT command, we expect FTP_STATE_FINISHED
but got FTP_STATE_PORT_DONE instead, which prevented logging of these
transactions.
Change the order of the evaluations to get the right result.
Protects against evasion by TCP packet splitting.
The problem arose if the FTP response was split over multiple packets.
The fix is to buffer the content until we get a complete line.
Each 'add' and 'lookup' would increment the use_cnt, without anything
bringing it back down.
Since there is no removal yet, nothing is actually affected by it.
Support either the __thread GNUism or the C11 _Thread_local.
Use 'thread_local' to point to the one that is used. Convert existing
__thread users to 'thread_local'.
Remove non-thread-local code from the packet pool code.
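A sketch of the selection logic (simplified; the real definition lives in a Suricata header):
```c
#include <stdint.h>

/* Prefer the C11 keyword when available, otherwise the GNU extension. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && \
    !defined(__STDC_NO_THREADS__)
#define thread_local _Thread_local
#elif defined(__GNUC__) || defined(__clang__)
#define thread_local __thread
#else
#error "no thread-local storage support"
#endif

/* Example user: one counter per packet thread. */
static thread_local uint64_t pkts_seen;
```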
Until now both atomic ints and pointers were initialized by SC_ATOMIC_INIT
by setting them to 0. However, C11's atomic pointer type cannot be
initialized this way without causing compiler warnings.
As a preparation to supporting C11's atomics, this patch introduces a
new macro to initialize atomic pointers and updates the relevant callers
to use it.
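Illustrative only (the real SC_ATOMIC_* macros mangle the variable names): the point is that the pointer variant initializes with NULL rather than the integer 0, which C11 atomic pointers warn about.
```c
#include <stdatomic.h>
#include <stddef.h>

#define SC_ATOMIC_INIT_SKETCH(name)    atomic_init(&(name), 0)
#define SC_ATOMIC_INITPTR_SKETCH(name) atomic_init(&(name), NULL)

static _Atomic(unsigned int) flow_flags;
static _Atomic(void *)       flow_ptr;

static void InitSketch(void)
{
    SC_ATOMIC_INIT_SKETCH(flow_flags);     /* integer: 0 is fine */
    SC_ATOMIC_INITPTR_SKETCH(flow_ptr);    /* pointer: use NULL  */
}
```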
Until this point the SC_ATOMIC_ADD macro pointed to an 'add_fetch'
intrinsic. This patch changes it to a 'fetch_add'.
There are 2 reasons for this:
1. C11 stdatomic.h has only 'atomic_fetch_add' and no 'add_fetch',
so this patch prepares for adding support for C11 atomics.
2. It was not consistent with SC_ATOMIC_SUB, which did use 'fetch_sub'
and not 'sub_fetch'.
Most callers are not using the return value, so these are unaffected.
The callers that do use the return value are updated.
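The behavioural difference in a nutshell (plain C11 atomics, not the Suricata macros): fetch_add returns the value before the addition, so callers that relied on the post-add value have to compute it themselves.
```c
#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
    atomic_int cnt = 5;

    int old = atomic_fetch_add(&cnt, 3);   /* old == 5, cnt is now 8 */
    int new_val = old + 3;                 /* what add_fetch would have returned */

    printf("old=%d new=%d cnt=%d\n", old, new_val, atomic_load(&cnt));
    return 0;
}
```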
A deeply nested YAML file can cause a stack overflow while
reading in the configuration, due to the recursive parser. Limit
the recursion level to something sane (128) to prevent this
from happening.
The default Suricata configuration has a recursion level of 128
so there is still lots of room to grow (not that we should).
Redmine ticket:
https://redmine.openinfosecfoundation.org/issues/3630
Expose UTH flow builder to new 'FUZZ' define as well. Move UTHbufferToFile
as well and rename it to a more generic 'TestHelperBufferToFile'.
This way UNITTESTS can be disabled. This leads to smaller code size
and more realistic testing as in some parts of the code things
behave slightly differently when UNITTESTS are enabled.
Fixes runs with --enable-debug-validation. The target did not init a
packet pool, so for a tunnel packet it would try to get a packet from
an uninitialized pool. In non-debug mode, this silently works by
falling back to a packet from alloc.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff35a6801 in __GI_abort () at abort.c:79
#2 0x00007ffff359639a in __assert_fail_base (fmt=0x7ffff371d7d8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x555557fe7260 "!(pool->initialized == 0)",
file=file@entry=0x555557fe7220 "tmqh-packetpool.c", line=line@entry=253, function=function@entry=0x555557fe7500 <__PRETTY_FUNCTION__.21181> "PacketPoolGetPacket") at assert.c:92
#3 0x00007ffff3596412 in __GI___assert_fail (assertion=0x555557fe7260 "!(pool->initialized == 0)", file=0x555557fe7220 "tmqh-packetpool.c", line=253,
function=0x555557fe7500 <__PRETTY_FUNCTION__.21181> "PacketPoolGetPacket") at assert.c:101
#4 0x00005555577e24be in PacketPoolGetPacket () at tmqh-packetpool.c:253
#5 0x0000555556914ecd in PacketGetFromQueueOrAlloc () at decode.c:183
#6 0x00005555569161e1 in PacketTunnelPktSetup (tv=0x555559863980 <tv>, dtv=0x614000068e40, parent=0x61e0000fc080, pkt=0x61e0000fc470 "LL", len=72, proto=DECODE_TUNNEL_IPV4) at decode.c:286
#7 0x00005555569de694 in DecodeIPv4inIPv6 (tv=0x555559863980 <tv>, dtv=0x614000068e40, p=0x61e0000fc080, pkt=0x61e0000fc470 "LL", plen=72) at decode-ipv6.c:59
#8 0x00005555569e60b5 in DecodeIPV6ExtHdrs (tv=0x555559863980 <tv>, dtv=0x614000068e40, p=0x61e0000fc080, pkt=0x61e0000fc470 "LL", len=112) at decode-ipv6.c:522
#9 0x00005555569e846f in DecodeIPV6 (tv=0x555559863980 <tv>, dtv=0x614000068e40, p=0x61e0000fc080, pkt=0x61e0000fc420 "cLL", len=255) at decode-ipv6.c:641
#10 0x0000555556a032f9 in DecodeRaw (tv=0x555559863980 <tv>, dtv=0x614000068e40, p=0x61e0000fc080, pkt=0x61e0000fc420 "cLL", len=255) at decode-raw.c:70
#11 0x0000555557659ba8 in DecodePcapFile (tv=0x555559863980 <tv>, p=0x61e0000fc080, data=0x614000068e40) at source-pcap-file.c:412
#12 0x0000555556573401 in LLVMFuzzerTestOneInput (data=0x613000000047 "\241\262\315\064", size=339) at tests/fuzz/fuzz_sigpcap.c:158
#13 0x0000555557a4dc66 in main (argc=2, argv=0x7fffffffdfa8) at tests/fuzz/onefile.c:51
That line:
BUG_ON(pool->initialized == 0);
Ensure that the by_rule threshold table is initialized if a rule
is thresholded by_rule. Replace manual table reallocation with calls
to the common function.
The only difference between threshold calculations for by_src/by_dst,
by_rule or by_both is which table stores the DetectThresholdEntry.
Refactor the ThresholdHandlePacket* functions to do table lookup and
storage individually, but calculate thresholds in a common function.
Make sure to clean up expectations for a flow as the first step, before
parts of the flow itself get cleaned/freed.
Also mark the check as unlikely, as flows with expectations should be
relatively rare.
When a flow times out, there can still be expectations linked to this
flow. Given that there is a delay between the real end of the flow
and its destruction by Suricata, the expectation should already have
been honored, so we can accept the risk of cleaning the expectations
that were triggered by the to-be-deleted flow.
This patch introduces a limit on the number of
expectations attached to one IPPair. This is done using
a circular list (CIRCLEQ) so we have a FIFO approach to expectation
handling.
The CIRCLEQ code is copied from BSD code, as was the pre-existing code
in queue.h.
In case of transform issues (transform not consumed before pkt_data
for example), the code would hit an ugly BUG_ON.
Address this with a more graceful error message that will still
invalidate the sig but not crash the engine.
Implement support for limiting Teredo detection and decoding to specific
UDP ports, with 3544 as the default.
If no ports are specified, the old behaviour of detecting/decoding on any
port is still in place. This can also be forced by specifying 'any' as the
port setting.
StringParse* functions perform a stricter check than the
ByteExtractString* functions. These new functions also check whether
any extra characters follow the extracted numeric value, in addition to
the checks performed by ByteExtractString*, and return -1 in that case.
This is particularly important in parser, configuration and setup functions.
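A sketch of the stricter behaviour (invented helper name, simplified signature): on top of the numeric extraction, any trailing characters such as in "80abc" cause a -1 return.
```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

static int StringParseUint16Sketch(const char *str, uint16_t *res)
{
    char *end = NULL;
    errno = 0;
    unsigned long val = strtoul(str, &end, 10);
    if (errno != 0 || end == str || val > UINT16_MAX)
        return -1;              /* checks ByteExtractString* already performs */
    if (*end != '\0')
        return -1;              /* extra characters after the number */
    *res = (uint16_t)val;
    return 0;
}
```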
This commit adds support for the Remote Framebuffer Protocol (RFB) as
used, for example, by various VNC implementations. It targets the
official versions 3.3, 3.7 and 3.8 of the protocol and provides logging
for the RFB handshake communication for now. Logged events include
endpoint versions, details of the security (i.e. authentication)
exchange as well as metadata about the image transfer parameters.
Detection is enabled using keywords for:
- rfb.name: Session name as sticky buffer
- rfb.sectype: Security type, e.g. VNC-style challenge-response
- rfb.secresult: Result of the security exchange, e.g. OK, FAIL, ...
The latter could be used, for example, to detect brute-force attempts
on open VNC servers, while the name could be used to map unwanted VNC
sessions to the desktop owners or machines.
We also ship example EVE-JSON output and keyword docs as part of the
Sphinx source for Suricata's RTD documentation.
This commit causes `pcre_jit_exec` to be used when available. If it's
available but there are allocation errors preparing for it, things
fall back to `pcre_exec`.
This commit adds a warning used by the PCRE detect logic when it fails
to register initialization and free functions for per-thread JIT stack
handling.
This error code is only used when the platform has PCRE JIT exec
functionality.
When a TCP DNS flow would start with a GAP on the TS side, the successful
protocol detection on the TC side would trigger 'opposing side' reassembly
and app-layer processing. In this case the stream flags would indicate the
wrong direction and the wrong parser would be called.
This patch simplifies the return codes app-layer parsers use,
in preparation for a patch set overhauling the return type.
Introduce two macros:
APP_LAYER_OK (value 0)
APP_LAYER_ERROR (value -1)
Update all parsers to use this.
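A minimal sketch of the convention (the parser function shown is hypothetical):
```c
#include <stdint.h>

#define APP_LAYER_OK     0
#define APP_LAYER_ERROR -1

/* Hypothetical parser entry point using the simplified return codes. */
static int SomeProtoParseRequest(void *state, const uint8_t *input,
                                 uint32_t input_len)
{
    if (state == NULL || input == NULL || input_len == 0)
        return APP_LAYER_ERROR;
    /* ... incremental parsing of input ... */
    return APP_LAYER_OK;
}
```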
The idea of an OK signature parsing error is an error that is
allowed to occur and still lets test mode pass, unlike
silent errors, which will still fail testing.
This is introduced to allow app-layer event keywords to be
removed without having old rules fail out in this case. For example,
the Rust DNS parser removes DNS app-layer events that are
no longer used.
To signal that an error is OK, -3 is returned. This also implies
silent.
On an unknown app-layer event, return -3 for "silent OK fail". A
warning will still be emitted, but it's not considered a rule parse
error. This is to handle app-layer events being removed in a more
graceful manner for the user.
This allows -T to pass with an old app-layer events rule file
that may use removed app-layer event keywords.
This commit restricts the length of the file name specified
in attachments (`name=` or `filename=`) to `NAME_MAX`. Names exceeding
this limit will be truncated and processing will continue with the
truncated name.
This patch addresses two problems.
First, various parts of the engine, but most notably the flow manager (FM),
use a minimum of the time notion of the packet threads. This did not,
however, take into account the scenario where one or more of these
threads would be inactive for prolonged times. This could lead to the
time used by the FM becoming stale.
This is addressed by keeping track of the last time the per thread packet
timestamp was updated, and only considering it for the 'minimum' when it
is reasonably current.
Second, there was a minor race condition at start up, where the FM would
already inspect the hash table(s) while the packet threads weren't active
yet. Since FM gets the time from the packet threads, it would use a bogus
time of 0.
This is addressed by adding a wait loop to the start of the FM that waits
for 'time' to get ready.
A race condition during the start of pcap file processing could cause
missed alerts and logged events. This race happens between the packet
threads and the flow manager. It was observed on slower hardware, but in
theory could happen on any machine. It required the 'autofp' runmode.
In commit 6f560144c1 ("time: improve offline time handling") logic
was added to make the flow manager use a minimum of all the packet threads'
perception of time.
The race condition was that the flow manager may become active _before_
all of the packet threads have started processing packets and thus setting
their timestamp. The threads that had not yet initialized their timestamp
would not be considered when calculating the minimum.
As a result of this, older packets' timestamps would not yet be registered.
This would give the Flow Manager a timestamp too far in the future. While
the FM was running, the packet processing would start and a flow would
be created. This flow would then immediately be considered 'timed out' by
the FM, due to the timestamp too far in the future.
In the observed case, the thread processing packet 1 from the pcap had not
yet started processing while other threads had already started. The FM was
also already active. Due to the timestamps in the pcap this meant that the
time the FM used was about 500 seconds in the future compared to packet 1.
This patch fixes the issue by initializing all of the threads' timestamps
with the timestamp value of the first packet. This way the minimum will
always consider this timestamp.
This commit adds logic checking if the sticky buffer in effect provides
the required content.
If the sticky buffer doesn't, the rule will not load and a diagnostic
message with follow-on steps is displayed.
Improve flow file flags and file flags updates. Introduce a mask
that is set at startup to avoid lots of runtime checks.
Disable cocci flags check as it doesn't support the more dynamic
nature of the flag updates.
Using the run-as configuration option with the nflog capture method
results in the following error during the startup of suricata:
[ERRCODE: SC_ERR_NFLOG_BIND(248)] - nflog_bind_pf() for AF_INET failed
This is because SCDropMainThreadCaps does not have any capabilities
defined for the nflog runmode (unlike other runmodes). Therefore, apply
the same capabilities to the nflog runmode that are already defined for
the nfqueue runmode. This has been confirmed to allow suricata start
and drop its privileges in the nflog runmode.
Fixes redmine issue #3265.
Signed-off-by: Timo Sigurdsson <public_timo.s@silentcreek.de>
Previously each 'TmSlot' had its own packet queue that was passed
to the registered SlotFunc as an argument. This was used mostly for
tunnel packets by the decoders and by defrag.
This patch removes that in favor of a single queue in the ThreadVars:
decode_pq. This is the non-locked version of the queue as this is
only a temporary store for handling packets within a thread.
This patch removes the PacketQueue pointer argument from the API.
The new queue can be accessed directly through the ThreadVars
pointer.
Shrink ThreadVars by removing the queue handler names that are only
used at shutdown. Since this is not performance critical, we can use
the IDs to look up the queue handler.
When TmThreadsSlotProcessPkt fails it will return the packet that was
passed to it to the packetpool.
Some of the packet sources were doing this manually as well. This patch
fixes those sources.
Use a single packet queue per thread for flow timeout packet
injection. The per slot queue was unused except for this use
case. Having a single queue makes the logic and implementation
simpler.
In case of 'autofp', the per thread packet queue will actually
use the threads input queue. For workers/single a dedicated
queue will be set up.
Rename TmThreadsSlotHandlePostPQs to TmThreadsHandleInjectedPackets
to reflect the changed logic.
Set the livedev on reassembled packets to that of the parent
packet. Fixes issues with multidetect, specifically a segfault
as reported in issue 3380.
Bug #3380.
This patch splits the list of loggers the tx logging walks into lists per
alproto. The list was getting longer with each eve addition. The result
was that for each tx we would have to loop through multiple loggers that
did not apply to this tx as they were for the wrong protocol.
In some cases an SBB could be seen as overlapping with the requested
offset, when it was in fact precisely before it. In some special cases
this could lead to the stream engine not progressing the 'raw' progress.
Don't assume that the next block after the sbb head is after the
requested offset.
If the next block was before the offset, the returned data_len
would underflow and return a nonsense value to the app-layer.
Bug #2993.
This commit causes Suricata to exit when a buffer from the analyzer
table is not recognized.
Since the table must match what's registered, exiting will bring notice
to the condition.
Currently all failures in IPv4 option decode are ignored with
respect to continuing to handle the packet.
Change this to fail, and abort handling the packet if the
option length is invalid.
Ticket 3328:
https://redmine.openinfosecfoundation.org/issues/3328
Fix evasion possibility by rejecting packets with a broken ACK field.
These packets have a non-0 ACK field, but do not have an ACK flag set.
Bug #3324.
Reported-by: Nicolas Adba
RST injection during the SYN_SENT state could trick Suricata into marking
a session as CLOSED. The first way this was done is by using an invalid
TSECR value in a RST+ACK packet. The ACK was needed to force Linux into
considering the TSECR value and comparing it to the TSVAL from the SYN packet.
The second technique works only against Windows. The client would not use a
TSVAL but the RST packet would. Windows rejects this, but Suricata considered
the RST valid and triggered the CLOSED logic.
This patch addresses both. When the SYN packet used timestamp support,
the timestamp of the incoming packet is validated. Otherwise, the responding
packet should not have a timestamp.
Bug #3286
Reported-by: Nicolas Adba
Fix address negation detection not resolving variables when
looking for the negation.
This patch makes use of the actual parsing routines to relay this
information to the signature parser.
Bug #3389.
Fixes: 92f08d85aa ("detect/iponly: improve negation handling in parsing")
If file prune is called, inspection has already run. So if the file is closed
we can just prune it. No need to consider a window anymore.
When still in progress, fix the left_edge calculation.
Now that we call the file prune loop very regularly, we can move the
SMTP-specific inspection pruning logic into this loop. Helps with
cases where we don't (often) update a file's inspection trackers.
If we already know that the boundary exists, we can start looking
there. Otherwise, we can skip trying as the boundary is a subset
of the form end marker.
This commit ensures direction warnings for ICMP v4 and v6
are suppressed and corrects the check so that both protocols
are checked (instead of the same protocol being checked twice).
Update the default packet size computation to use LiveDeviceName
instead of LiveDevice as the LiveDevice list is not built when
the default packet size is built.
The Coccinelle test was producing false positives on the functions
AppLayerParserStateSetFlag and AppLayerParserStateIssetFlag.
To address that, this patch adds a new coccinelle markup:
/* coccinelle: AppLayerParserStateSetFlag():2,2:APP_LAYER_PARSER_ */
It indicates that AppLayerParserStateSetFlag is a setter and getter
and that the checks should be disabled inside the function.
Currently this markup is only used for that, but a following patch will
add some checks on the option value.
When transforms are part of a rule, improve information displayed with
fast patterns to include the original buffer name and whether any
transform(s) are applied.
When registering a detection engine, check that the app-layer
protocol supports tx detect flags. Exit with a fatal
error if it does not as this is a code implementation
error that should be resolved during development.
Add a method to check if a parser for an app-layer protocol
supports tx detect flags.
This is a bit of a hack for now, as where we need to run
this check we do not have the IP protocol.
When HTTP pipelining was in use, the transaction id used for events
and files could be off. If the request side was several requests ahead
of the responses, it would use the HtpState::transaction_cnt for events
and files, even though that is only incremented on complete requests.
Split request and response tx id tracking. The response is still handled
by the HtpState::transaction_cnt, but the request side is now handled by
its own logic.
Since ebcc4db84a the flow worker runs
file pruning after parsing, detection and logging. This means we can
simplify the pruning logic. If a file is in state >= CLOSED, we can
prune it. Detection and outputs will have had a final chance to
process it.
Remove the calls to the pruning code from Rust. They are no longer
needed.
If a protocol does not support TxDetectFlags, don't try to use them.
The consequence of trying to use them was that a TX would never be
considered done, and it would never be freed. This would lead to excessive
memory use and performance problems due to walking an ever increasing
list.
When a BPF filter is given on the command line when reading a
pcap file, the BPF filter is not honored.
The regression has been introduced in:
commit 3ab9120821
Author: Dana Helwig <dana.helwig@protectwise.com>
Date: Thu Apr 27 11:17:16 2017 -0600
source-pcap-file: Pcap Directory Mode (Feature #2222)
Reported-By: Tim Colin <tcolin@et.esiea.fr>
A BUG_ON statement would seemingly randomly trigger during the threading
shutdown logic. After a packet thread reached the THV_RUNNING_DONE state,
it would sometimes still receive flow timeout packets which would then
remain unprocessed.
1 main: TmThreadDisableReceiveThreads(); <- stop capturing packets
2 worker: -> TmThreadTimeoutLoop (THV_FLOW_LOOP) phase starts
3 main: FlowForceReassembly(); <- inject packets from flow engine
4 main: TmThreadDisablePacketThreads(); <- then disable packet threads
5 main: -> checks if 'worker' is ready processing packets
6 main: -> sends THV_KILL to worker
7 worker: breaks out of TmThreadTimeoutLoop and changes to THV_RUNNING_DONE.
Part of the problem was with (5) above. When checking if the worker was
already done with its work, TmThreadDisablePacketThreads would not consider
the injected flow timeout packets. The second part of the problem was with (7),
where the worker checked if it was ready with the TmThreadTimeoutLoop in a
thread-unsafe way.
As a result, TmThreadDisablePacketThreads would not wait long enough for the
worker(s) to finish their work, and would move the threads to the
THV_RUNNING_DONE phase by issuing the THV_KILL command.
When waiting for packet processing threads to process all in-flight packets,
also consider the 'stream_pq'. This will have received the flow timeout
packets.
Bug #1871.
This commit adds `ARRAY_SIZE` as a helper for determining the number of
elements in an initialized array. The calculation is the same but the
macro provides a convenient shortcut. The implementation was borrowed
from the kernel sources.
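A simplified form of the kernel-style definition, with a usage sketch:
```c
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

static const char *actions[] = { "alert", "drop", "pass" };
/* ARRAY_SIZE(actions) == 3, and stays correct when entries are added. */
```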
Create a single function to return the version string, to avoid lots
of ifdefs in multiple places.
Make the version determine the 'release' status. If the version from
autoconf has '-dev' in the name, it is not a release. If it doesn't,
it is considered a release version.
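A sketch of the release check (the macro here stands in for the autoconf-provided version string):
```c
#include <string.h>

#define PROG_VER_SKETCH "6.0.0-dev"   /* stand-in for the autoconf version */

static int VersionIsRelease(void)
{
    /* a "-dev" suffix marks a development build, anything else is a release */
    return strstr(PROG_VER_SKETCH, "-dev") == NULL;
}
```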
Avoids using uninitialized memory. This showed itself
in nonsense values in counters, and in nfq_handle_packet
errors that were likely the result of passing uninitialized
memory to the nfq API.
Bug 3263.
Bug 3120.
Fixes: b2a6c60dee ("source-nfq: increase maximum queues number to 65535")
NFQ can generate warnings/errors with a delay. After Suricata has
successfully passed a verdict to the kernel, there are still things
that can go wrong for that verdict. This is then passed to the
queue through a netlink error message, which leads to nfq_handle_packet
returning an error code.
Suppress the warning. Also remove the errno/strerror use as
nfq_handle_packet does not set the errno.
Thanks to Florian Westphal.
Bug 3120.
TCP_OPT_INVALID_LEN was set if the opt len was 2. While useless,
an empty SACK is not uncommon.
Seen on an iOS device talking to an Apple server.
Bug #3254.