A library/plugin user wanting to register a custom flow logger must
include "output-flow.h"; however, that header depends on some other
includes. One school of thought on include files in libraries is that
they should include all of their dependencies on behalf of the user.
To make writing a custom flow logger a little easier, have
"output-flow.h" include "flow.h" and "decode.h".
Ticket: #7227
The use of OutputCtx as the data type for initdata was leaking Eve
submodule logic into the low level flow logger. Instead use void *, as
the flow logging module is not concerned with the type of data here.
Also document this initdata parameter.
Ticket: #7227
The initdata argument to OutputFlowThreadInit was always NULL, remove
it. Internally the ThreadInit functions still get initdata, but this
is the data provided when that logging instance was registered.
Ticket: #7227
The lookup function was not taking into account that we can get an
IPv4 or an IPv6 address as a parameter, and that these addresses need
to be converted to Suricata's internal storage format.
Using the already defined dedicated parsing function fixes the issue.
Issue: #6969
As it fails to work correctly on FreeBSD and OpenBSD.
On FreeBSD, these are the errors:
Info: pcap: Pcap-file will use 4096 buffer size [PcapFileGlobalInit:source-pcap-file.c:159]
Error: pcap: failed to get first packet timestamp. pcap_next_ex(): -2 [PeekFirstPacketTimestamp:source-pcap-file-helper.c:186]
Warning: pcap: Failed to init pcap file input.pcap, skipping [ReceivePcapFileThreadInit:source-pcap-file.c:299]
Error: pcap: pcap file reader thread failed to initialize [ReceivePcapFileLoop:source-pcap-file.c:185]
This introduces a new parser registration function for LDAP/UDP, and
updates the ldap configuration so that a single parser can be
enabled/disabled independently (as is done for dns).
Also, GAPs are accepted only by the TCP parser, not by the UDP one.
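A hedged sketch of the resulting yaml layout (key names assumed,
mirroring how dns is configured):

    app-layer:
      protocols:
        ldap:
          tcp:
            enabled: yes
          udp:
            enabled: yes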
Ticket #7203
Ticket: 4863
On the way, convert some keywords to use the first-class integer
support.
And add helpers for multi-buffer support in pure Rust.
Move the C unit tests about the keyword mqtt.protocol_version
to unit tests for generic integer parsing, and test version 5
instead of testing version 3 twice.
Also, iterate over all of a tx's messages for the reason code, as is
done for other keywords.
And allow detection on empty topics.
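A hedged example of the kind of rule the first-class integer support
enables (rule is illustrative only):

    alert mqtt any any -> any any (mqtt.protocol_version:>3; sid:1;)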
Ticket: 7172
When parsing an integer for a rule keyword fails, return an error
straight away, without bothering to try to free the NULL pointer.
On the way, remove some one-line wrappers around DetectUxParse.
Bring back the pf-ring command line arguments, but instead of
initializing the pfring runmode, initialize the capture plugin runmode
with a plugin named "pfring".
Ticket: #7162
Introduce KiB, MiB and GiB. They are case-sensitive, as a lowercase 'b'
means bits in the IEEE 1541 scheme.
KiB = 1024
MiB = 1048576
GiB = 1073741824
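For illustration, a memcap that previously had to be spelled out in
bytes could then be written as (setting name and the exact spacing
accepted by the parser are assumptions):

    memcap: 64 MiB   # 64 * 1048576 = 67108864 bytes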
Ticket: #1457.
Don't set an ACK value if the ACK flag is no longer set. This avoids a
bogus `pkt_broken_ack` event being set.
Fixes: ebf465a11b ("tcp: do not assign TCP flags to pseudopackets")
Ticket: #7158.
The truncate fn is only active and used by the dcerpc and smb parsers.
In case stream depth is reached for either side, the truncate fn is
supposed to set the tx entity (request/response) in the same direction
as complete so the other side is not forever waiting for data.
However, whether the stream depth is reached is already checked by
AppLayerParserGetStateProgress fn which is called by:
- DetectTx
- DetectEngineInspectBufferGeneric
- AppLayerParserSetTransactionInspectId
- OutputTxLog
- AppLayerParserTransactionsCleanup
and, in such a case, StateGetProgressCompletionStatus is returned for
the respective direction. Following efc9a7a, this fn always returns 1
as long as the direction is valid, meaning that the progress for the
current direction is marked complete. So, there is no need for an
additional callback to mark the entities as done in case of depth or a
gap.
Remove all such glue code and callbacks for truncate fns.
Bug 7044
as its functionality is already covered by the generic code.
This removes the APP_LAYER_PARSER_TRUNC_TC and APP_LAYER_PARSER_TRUNC_TS
flags. In addition, FlowGetDisruptionFlags sets the STREAM_DEPTH flag in
case the respective stream depth was reached; this flag indicates
whether all the open files should be truncated or not.
Bug 7044
There is no sane way to override the DNS eve version in Suricata
tests without using a copy of the configuration file, and many of the
tests by design use the configuration file of the Suricata under test,
so making a copy would break this assumption.
To get around this, respect the SURICATA_EVE_DNS_VERSION environment
variable as a way to set the version if not explicitly set in the
configuration file.
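For example, a test could then do something like (invocation is
illustrative):

    SURICATA_EVE_DNS_VERSION=2 suricata -c suricata.yaml -r input.pcap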
DNS v3 logging fixes the discrepancies between request and response
logging with the main difference being queries always being placed in an
array.
Bug: #6281
V3 style DNS logging fixes the discrepancies between request and
response logging, as well as between dns records and alert records.
The main change is that queries and answers are always logged as
arrays, and header fields are not logged in array items.
For alerts this means that answers are now logged as arrays, queries
already were.
DNS records will get this new format as well, but with a configuration
parameter.
Bug: #6281
Getting the clock through the Time Stamp Counter (TSC) can be precise
and fast, however only for a short duration of time.
The implementation across CPUs seems to vary. The original idea is to
increment the counter with every tick. Then dividing the delta of CPU ticks
by the CPU frequency can return the time that passed.
However, the CPU clock/frequency can change over time, resulting in uneven
incrementation of TSC. On some CPUs this is handled by extra logic.
As a result, obtaining time through this method might drift from the real
time.
This commit therefore substitutes TSC time retrieval with the standard
system call wrapped in the GetTime function; on Linux that is
gettimeofday.
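A minimal sketch of the idea, not the exact Suricata implementation:

    #include <sys/time.h>

    /* wall clock read; on Linux this ends up in gettimeofday() */
    static struct timeval GetTime(void)
    {
        struct timeval now = { 0 };
        gettimeofday(&now, NULL);
        return now;
    }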
Ticket: 7115
So far, when the data size was passed to the THash API, it was sent as
a sizeof(Struct) which works fine for the other data types as they have
a fixed length but not for the StringType.
However, because of the sizeof construct, the length of a string type
dataset entry was always taken to be 16 bytes, which is only the size
of the struct itself. It did not accommodate the actual size of the
string that the StringType holds. Fix this so that the memuse used to
determine whether memcap was reached also takes the size of the actual
string into account.
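A hedged sketch of the fix's intent (struct and function names are
illustrative, not the actual dataset code):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct StringTypeExample_ {
        uint8_t *ptr;   /* string data */
        uint32_t len;   /* string length */
    } StringTypeExample;

    /* memuse must cover the struct plus the string it points to, not
       just sizeof(StringTypeExample), which is 16 bytes on 64 bit */
    static size_t StringTypeMemuseSketch(const StringTypeExample *s)
    {
        return sizeof(*s) + s->len;
    }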
Bug 3910
Implement new `type backoff` for thresholding. This allows alerts to be
limited.
A count of 1 with a multiplier of 10 would generate alerts for matching packets:
1, 10, 100, 1000, 10000, 100000, etc.
A count of 1 with a multiplier of 2 would generate alerts for matching packets:
1, 2, 4, 8, 16, 32, etc.
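A hedged example of what such a rule could look like (option syntax
assumed from the description above):

    alert tcp ... stream-event:pkt_broken_ack; \
        threshold:type backoff, track by_flow, count 1, multiplier 10;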
Like with other thresholds, rule actions like drop and setting of
flowbits will still be performed for each matching packet.
Current implementation is only for the by_flow tracker and for per rule
threshold statements.
Tracking is done using a uint32_t counter. When the counter reaches its
maximum value, the rest of the packets in the tracker will use the
silent match.
Ticket: #7120.
Instead of a Host and IPPair table thresholding layer, use a dedicated
THash to store both. This allows hashing on host+sid+tracker or
ippair+sid+tracker, to create more unique hash keys.
This allows for fewer hash collisions.
The per rule tracking also uses this, so that the single big lock is no
longer a single point of contention.
Reimplement storage for flow thresholds to reuse as much logic as
possible from the host/ippair/rule thresholds.
Ticket: #426.
Thresholding often has 2 stages:
1. recording matches
2. applying an action, like suppress
E.g. with something like:
threshold:type limit, count 10, seconds 3600, track by_src;
the recording state is about counting the first 10 hits for an IP,
followed by the "suppress" state that might last an hour.
By_src/by_dst are expensive, as they do a host table lookup and lock
the host. If many threads require this access, lock contention becomes
a serious problem.
This patch adds a thread local cache to avoid the synchronization
overhead. When the threshold for a host enters the "apply" stage,
a thread local hash entry is added. This entry knows the expiry
time and the action to apply. This way the action can be applied
w/o the synchronization overhead.
An rbtree is used to handle expiration.
Implemented for IPv4.
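A hedged sketch of what one thread local cache entry could hold (struct
is hypothetical, not the actual implementation):

    #include <stdint.h>
    #include <time.h>

    /* which host/sid the entry covers, when the "apply" stage ends,
       and what to do while it lasts */
    typedef struct ThresholdCacheEntrySketch_ {
        uint32_t ipv4_addr;   /* IPv4 only, per this patch */
        uint32_t sid;
        time_t expires;       /* expiry, handled via the rbtree */
        uint8_t action;       /* e.g. suppress */
    } ThresholdCacheEntrySketch;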
The limits propagation checked for DETECT_DEPTH as a content flag,
which appears to have worked by chance. After reshuffling the
keyword ids it no longer worked. This patch uses the proper
flag, DETECT_CONTENT_DEPTH.
Traffic variables (flowvars, flowbits, xbits, etc) use a smaller int for
their type than detection types. As a workaround, make sure the values
fit in a uint8_t.
Add support for 'by_flow' track option. This allows using the various
threshold options in the context of a single flow.
Example:
alert tcp ... stream-event:pkt_broken_ack; \
threshold:type limit, track by_flow, count 1, seconds 3600;
The example would limit the number of alerts to once per hour for
packets triggering the 'pkt_broken_ack' stream event.
Implemented as a special "flowvar" holding the threshold entries. This
means no synchronization is required, making this a cheaper option
compared to the other trackers.
Ticket: #6822.
Issue: 6487
To avoid ambiguity, a single definition for base 64 decoding modes will
be used. The Rust base64 transform contains the definitions for the
existing mode types: Strict, RFC2045, RFC4648
Issue: 6487
Implement the from_base64 transform:
[bytes value] [offset value] [mode strict|rfc4648|rfc2045]
The value for bytes and offset may be a byte_ variable or an
unsigned integer.
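A hedged example rule (keyword placement and option spelling assumed
from the syntax above):

    alert http any any -> any any (msg:"from_base64 example"; file.data; \
        from_base64: bytes 32, offset 4, mode rfc4648; \
        content:"secret"; sid:1;)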
Ticket: 6390
This can happen with the keyword filestore:both,flow.
If one direction does not have a signature group with a filestore,
the file is set to nostore on opening, until a signature in
the other direction tries to set it to store.
Subsequent files will be stored in both directions as flow flags
are now set.
Add an argument to the packet prefilter registration function to include
`SignatureMask` flags. This will be used at runtime to only call these
prefilter engines when the mask check passes.
If a signature uses a condition that requires a real packet, filter
out pseudo packets as early as possible. To do this, the SignatureMask
logic is used.
This allows for the removal of checks for pseudo packets in individual
keywords' `Match` functions, which will be done in a follow-up commit.
Update analyzer to output the new flag.
Ticket: #7002.
Ticket: 4863
On the way, convert unit test DetectSNMPCommunityTest to a SV test.
And also, make snmp.pdu_type use a generic uint32 for detection,
allowing operators, instead of just equality.
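A hedged example of a rule this enables (rule is illustrative only):

    alert snmp any any -> any any (snmp.pdu_type:>0; sid:1;)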
When replaying a pcap file, it is not possible to get rules
profiling because it has to be activated from the unix socket.
This patch adds a new option to activate profiling collection at
startup, so that a pcap run can produce rules profiling information.
Implement special "isset" and "isnotset" modes.
"isset" matches if an IP address is part of an iprep category with any
value.
It is internally implemented as ">=,0", which should always be true if
there is a value to evaluate, as valid reputation values are 0-127.
"isnotset" matches if an IP address is not part of an iprep category.
Internally it is implemented outside the uint support.
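A hedged example (category name and exact option form are assumptions):

    alert ip any any -> any any (iprep:src,badhosts,isset; sid:1;)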
Ticket: #6857.
Replaces the default "alert" logic and removes SIG_FLAG_NOALERT.
Instead, "noalert" unsets ACTION_ALERT. Same for flowbits:noalert and
friends.
In signature ordering, rules w/o an action are sorted as if they have
'alert', which is the same behavior as before, but now implemented
explicitly.
Ticket: #5466.
Ticket: 3958
- transactions are now bidirectional
- there is a logger
- gap support is improved with probing for resync
- frames support
- app-layer events
- enip_command keyword now accepts string enumerations as values.
- add enip.status keyword
- add keywords (see the example after this list):
enip.product_name, enip.protocol_version, enip.revision,
enip.identity_status, enip.state, enip.serial, enip.product_code,
enip.device_type, enip.vendor_id, enip.capabilities,
enip.cip_attribute, enip.cip_class, enip.cip_instance,
enip.cip_status, enip.cip_extendedstatus
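A hedged example using one of the new sticky buffers (rule is
illustrative only):

    alert enip any any -> any any (enip.product_name; \
        content:"1756-EN2T"; sid:1;)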
Before, the JsonBuilder object for the pgsql event was being created
from the C-side function that actually called the Rust logger.
This meant that if another module, such as the JSON Alert, called the
PGSQL logger, we wouldn't have the `pgsql` key present in the log
output, only its inner fields.
Bug #6983
Adds the following frames:
command_line
data
response_line
The *_line frames are per line, so in multi-line responses each line
will have its own frame.
Ticket: #4905.
Add new flags that cause FLOW_TS_APP_UPDATED/FLOW_TC_APP_UPDATED to be
set for the next packet in the relevant direction.
This allows app layer related work to be done on the next packet in
that direction.
EVE logging has a direction parameter that can cause the logging
of an application layer to be done in a direction that is not linked
to the packet. As a result the source IP address could be assigned the
MAC address of the destination IP and vice versa.
This patch addresses this by propagating the direction to the ethernet
logging function and using it there to define the correct mapping.
Issue #6405
Use quoted include style for Lua includes ("lua.h" instead of <lua.h>)
as this could result in system includes being picked up instead of the
includes from our vendor directory.
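That is, prefer the quoted form so the vendored headers win:

    /* picks up the vendored Lua headers first */
    #include "lua.h"
    #include "lualib.h"
    #include "lauxlib.h"

    /* may resolve to a system-installed Lua instead */
    /* #include <lua.h> */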
In certain conditions, it can take a long time for threads to start up.
For example in af-packet, setting up the socket, rings, etc has been
observed to take close to half a second per thread, and since the
threads go one by one in a preset order, this means the start up can
take a lot of time if there are many threads. The old logic would just
allow a hard coded 60s. This was not always enough when the number of
threads was high.
This patch makes the wait time take the number of threads into account.
It adds a second of time budget to the base 60s for each thread.
So as an example, if a system has 112 af-packet threads, it would wait
172 seconds (60 + 112) for the threads to get ready.
Ticket: #7048.
When starting a large amount of threads, the loop was inefficient. It
would loop over the threads and if one wasn't yet ready it would sleep a
bit and then reevaluate all the threads. This reevaluation of threads
already checked was inefficient, and could lead to the time budget
running out.
This patch splits the check, and keeps track of the threads that have
already passed. This avoids the rescanning of already checked threads.
Update the Lua allocator to set a code on memory allocation limit
exceeded errors so an appropriate error message can be logged and a
stat incremented.
Fixes the tracking of the allocated size by using the difference
between the original size and the new size, and tosses in some debug
validations.
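A hedged sketch of the approach, using the standard lua_Alloc shape
(names, the limit field and the error code handling are illustrative,
not the exact Suricata code):

    #include <stdlib.h>
    #include <stddef.h>

    struct AllocStateSketch {
        size_t used;     /* currently allocated bytes */
        size_t limit;    /* configured limit, 0 means unlimited */
        int over_limit;  /* code the caller can check to log and bump a stat */
    };

    static void *LuaAllocSketch(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        struct AllocStateSketch *s = ud;
        if (nsize == 0) {
            /* free: shrink tracking by the old size */
            if (ptr != NULL)
                s->used -= osize;
            free(ptr);
            return NULL;
        }
        /* track by the difference between original and new size; for a
           fresh allocation (ptr == NULL) osize is not a size in lua_Alloc */
        size_t new_used = s->used + nsize - (ptr != NULL ? osize : 0);
        if (s->limit != 0 && new_used > s->limit) {
            s->over_limit = 1;  /* let the caller log and increment a stat */
            return NULL;        /* Lua sees an allocation failure */
        }
        void *nptr = realloc(ptr, nsize);
        if (nptr != NULL)
            s->used = new_used;
        return nptr;
    }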
Distinguish between a generic Lua script error and an error created by
a function being blocked, so each is logged once, independently of the
other.
Also add a stat that is incremented when a script fails due to a
blocked function.
NOTE: This does not catch calls to functions that are blocked by not
having the library loaded, such as "io.open", as they are blocked by
not even loading the "io" library.
The Lua library surface area is small enough to manage an allow list,
which is generally better than a deny list, as we'll explicitly need
to opt-in to new functions provided by the Lua runtime.
Move prototypes for functions that exist in util-port-interval-tree.c
from detect-engine-port.h to util-port-interval-tree.h.
Fix header guard names while there.
Ticket: 6575
Multi-buffer keywords now use a single registration function,
DetectAppLayerMultiRegister, with a GetBuffer argument.
This GetBuffer function pointer is similar to the ones used by
single-buffer keywords, except that it takes an additional
parameter which is the index of the buffer to get.
Under the hood, an anonymous union between these two function
pointer types is used.
In the end, this deduplicates code, especially the calls to
DetectEngineContentInspection.
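A hedged sketch of the shape such a getter has (prototype is
illustrative; the exact parameter list in Suricata may differ):

    /* like a single-buffer getter, plus local_id to select which of
       the tx's buffers to return; returns NULL when local_id runs
       past the last buffer */
    InspectionBuffer *GetMyDataSketch(DetectEngineThreadCtx *det_ctx,
            const DetectEngineTransforms *transforms, Flow *f,
            const uint8_t flow_flags, void *txv,
            const int list_id, const uint32_t local_id);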
Bug: https://redmine.openinfosecfoundation.org/issues/6782
Callers of these allocators often use ``sc_errno`` to provide context
for the error. In the case of the above bug, they return ``sc_errno``,
but as it had not been set, ``sc_errno`` was still 0, i.e. SC_OK.
This patch simply sets this variable to ensure there is context provided
upon error.
The on-disk pcap pkthdr is 16 bytes. This was calculated using
`sizeof(struct pcap_pkthdr)`, which is 24 bytes on 64-bit Linux. On
macOS it's even worse, as a comment field grows the struct to 280
bytes.
Address this by hardcoding the value of 16.
Bug: #7037.
In offline mode, a timestamp is kept per thread, and the lowest
timestamp of the active threads is used. This was also considering the
non-packet threads, which could lead to the used timestamp being further
behind than needed. This would happen at the start of the program, as
the non-packet threads were set up the same way as the packet threads.
This patch both no longer sets up the timestamp for non-packet threads
as well as not considering non-packet threads during timestamp
retrieval.
Fixes: 6f560144c1 ("time: improve offline time handling")
Bug: #7034.
HttpRangeOpenFileAux may return NULL in different cases, including
when memcap is reached.
But its only caller did not check it before calling HttpRangeAppendData,
which would dereference the NULL value.
Ticket: 7029
So far, the SANs were available as a part of IssuerDN via the
x509_parser crate, but they were not available in SSLState* to be
directly used to set up and match against a sticky buffer.
Expose them to SSLStateConnp.
Feature 5234
If a rule script crashed, the return value was treated as a no
match. This would make a negated rule match and alert.
Instead cleanup and exit early if the rule script crashed and don't
run negation logic.
A stat, detect.lua.errors, has been added to count how many times a
script crashes.
Also consolidates the running of the Lua script and return value
handling to a common function.
Bug: #6940