For example, "requires: foo bar" is an unknown requirement, however
its not tracked, nor an error as it follows the syntax. Instead,
record these unknown keywords, and fail the requirements check if any
are present.
A future version of Suricata may introduce new "requires" keywords, for
example a check for the presence of specific rule keywords.
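To illustrate, a hypothetical rule (the rule itself is made up; "foo" is
not a requirement keyword Suricata knows about):

    alert tcp any any -> any any (msg:"unknown requires example"; \
        requires:foo bar; sid:1; rev:1;)

With this change, the unknown "foo" requirement is recorded, the
requirements check fails, and the rule is skipped instead of being
loaded as if the requirement were satisfied.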
Ticket: #7418
This prevents the clippy warning:
508 | #[derive(FromPrimitive, Debug)]
| ^------------
| |
| `FromPrimitive` is not local
| move the `impl` block outside of this constant `_IMPL_NUM_FromPrimitive_FOR_IsakmpPayloadType`
509 | pub enum IsakmpPayloadType {
| ----------------- `IsakmpPayloadType` is not local
|
= note: the derive macro `FromPrimitive` defines the non-local `impl`, and may need to be changed
= note: the derive macro `FromPrimitive` may come from an old version of the `num_derive` crate, try updating your dependency with `cargo update -p num_derive`
= note: an `impl` is never scoped, even when it is nested inside an item, as it may impact type checking outside of that item, which can be the case if neither the trait or the self type are at the same nesting level as the `impl`
= note: items in an anonymous const item (`const _: () = { ... }`) are treated as in the same scope as the anonymous const's declaration for the purpose of this lint
= note: this warning originates in the derive macro `FromPrimitive` (in Nightly builds, run with -Z macro-backtrace for more info)
To ensure that all calls to cargo use the same environment variables,
collect them in CARGO_ENV so every invocation of cargo can easily reuse
the same vars.
The Cargo build system is smarter than make: it can detect a change in
an environment variable that affects the build, so a change to
SURICATA_LUA_SYS_HEADER_DST could trigger a rebuild.
Also update suricata-lua-sys, which is smarter about copying headers. It
will only copy if the destination does not exist, or the source header
is newer than the target, which can also prevent unnecessary rebuilds.
This is mainly to fix an issue where subsequent builds may fail,
especially when running an editor with an LSP enabled:
Update lua crate to 0.1.0-alpha.5. This update will force a rewrite of
the headers if the env var SURICATA_LUA_SYS_HEADER_DST changes. This
fixes the issue where the headers may not be written.
The cause is that Rust dependencies are cached, and if your editor is
using rust-analyzer, it might cache the build without this var being
set, so these headers are not available to Suricata. This crate update
forces the re-run of the Lua build.rs if this env var changes, fixing
this issue.
The generic ssn2vec_map was a HashMap used for mapping a session key to
different types of vector data:
- GUID
- filename
- share name
Turn this into a bounded LruCache. Rename to ssn2vec_cache.
Size of the cache is 512 by default, and can be configured using:
`app-layer.protocols.smb.max-session-cache-size`
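A minimal sketch of the idea, assuming the `lru` crate; the types and
names here are illustrative, not the actual SMB code:

    use lru::LruCache;
    use std::num::NonZeroUsize;

    // Hypothetical stand-ins for the SMB session key and the per-session
    // vector data (GUID, filename or share name).
    type SmbSessionKey = u64;

    struct SsnCacheSketch {
        ssn2vec_cache: LruCache<SmbSessionKey, Vec<u8>>,
    }

    impl SsnCacheSketch {
        // 512 entries by default, configurable via
        // app-layer.protocols.smb.max-session-cache-size
        fn new(size: usize) -> Self {
            Self {
                ssn2vec_cache: LruCache::new(NonZeroUsize::new(size).unwrap()),
            }
        }

        fn store(&mut self, key: SmbSessionKey, data: Vec<u8>) {
            // a full cache evicts the least recently used entry, so memory
            // use stays bounded instead of growing per session
            self.ssn2vec_cache.put(key, data);
        }
    }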
Ticket: #5672.
Reimplement the ssnguid2vec_map HashMap as a LruCache.
Since this is a DCERPC record cache, name it as such.
Default size is 128. Can be controlled by
`app-layer.protocols.smb.max-dcerpc-frag-cache-size`.
Ticket: #5672.
Turn the map mapping the SMB session key to the SMB tree into an LRU
cache, limited to 1024 entries by default.
Add `app-layer.protocols.smb.max-tree-cache-size` option to control the
limit.
Ticket: #5672.
Don't tag the session as gap'd when the GAP is in a precise location:
1. in "skip" data, where the GAP just fits the skip data
2. in file data, where we pass the GAP on to the file
This reduces the load of GAP post-processing that is unnecessary in
these cases.
This crate lets us tell it where to copy the header files, instead of
our Makefile trying to find the correct ones and copy them into place.
This can prevent the simultaneous-copy errors sometimes seen on a make
without a clean.
Introduce a common function for mapping names to IDs that performs
bounds checking.
Note: for event IDs in the enum that are larger than a uint8_t, -1
will be returned instead of -4. -4 has a special meaning during
signature parsing: it means requirements were not met. -4 had no
special handling prior to requirements, or its meaning has been lost.
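A hedged sketch of the bounds check; the function name and table layout
are illustrative, not the actual helper:

    // Map an event name to its numeric ID. IDs that do not fit in a u8
    // cannot be passed back to the C side, so return -1 (generic failure)
    // rather than -4, which signature parsing reserves for "requirements
    // not met".
    fn event_name_to_id(name: &str, table: &[(&str, i64)]) -> i64 {
        for (n, id) in table {
            if n.eq_ignore_ascii_case(name) {
                if *id < 0 || *id > u8::MAX as i64 {
                    return -1;
                }
                return *id;
            }
        }
        -1
    }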
Add a pure Rust base64 decoder. This supports 3 modes of operation,
just like the C decoder:
1. RFC 2045
2. RFC 4648
3. Strict
One notable change is that "strict" mode is handled by the rust
base64 crate instead of the new native decoder. This crate was already
used for encoding in a few places like datasets of string type. As part
of this mode, only strings that can be reliably converted back are now
decoded.
The decoder fn is available to C via FFI.
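A minimal sketch of the strict-mode behavior, assuming the `base64`
crate; the function name is illustrative:

    use base64::{engine::general_purpose::STANDARD, Engine};

    // Strict mode: only decode strings that can be reliably converted
    // back, i.e. that round-trip to the same encoded form.
    fn decode_strict(input: &[u8]) -> Option<Vec<u8>> {
        let decoded = STANDARD.decode(input).ok()?;
        if STANDARD.encode(&decoded).as_bytes() == input {
            Some(decoded)
        } else {
            None
        }
    }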
Bug 6280
Ticket 7065
Ticket 7058
According to RFC 3261, a single header can be repeated one or more times,
and its name can also be specified using the 'compact form.'
This patch updates the hashmap used for storing headers to accommodate multiple
values instead of just one.
Additionally, if a header name is defined in the compact form, it is expanded
into its long form (i.e., the standard name).
This conversion simplifies the logic for matching a given header
and ensures 1:1 parity with keywords.
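A minimal sketch of the storage and the compact-form expansion; names
are illustrative and only a few of the RFC 3261 compact forms are shown:

    use std::collections::HashMap;

    // Expand an RFC 3261 compact header name to its long (standard) form.
    fn expand_header_name(name: &str) -> &str {
        match name {
            "v" => "Via",
            "f" => "From",
            "t" => "To",
            "i" => "Call-ID",
            "m" => "Contact",
            "l" => "Content-Length",
            _ => name,
        }
    }

    // Headers map from the expanded name to every value seen for it,
    // since a header may be repeated one or more times.
    fn insert_header(headers: &mut HashMap<String, Vec<String>>, name: &str, value: &str) {
        headers
            .entry(expand_header_name(name).to_string())
            .or_default()
            .push(value.to_string());
    }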
Ticket #6374
Once we are tracking tx progress per-direction for PGSQL, we can trigger
the raw stream reassembly, for detection purposes, as soon as the
transactions are completed in the given direction.
Task #7000
PGSQL's current implementation tracks the transaction progress without
taking into consideration flow direction, and also has indirections
that make it harder to understand how the progress is tracked, as well
as when a request or response is actually complete.
This patch introduces tracking such progress per direction and adds
completion status per direction, too. This will help when triggering
raw stream reassembly or for unidirectional transactions, and may be
useful when we implement sub-protocols that can have multiple requests
per transaction, as well.
CancelRequests and TerminationRequests are examples of unidirectional
transactions. There won't be any responses to those requests, so we can
also mark the response side as done, and set their transactions as
completed.
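A hedged sketch of what per-direction tracking can look like; the names
are illustrative, not the actual PGSQL code:

    #[derive(Debug, PartialEq)]
    enum TxProgressSketch {
        Init,
        Finished,
    }

    struct PgsqlTxSketch {
        // progress and completion are tracked per direction, so raw
        // stream reassembly can be triggered as soon as one side is done
        request_progress: TxProgressSketch,
        response_progress: TxProgressSketch,
    }

    impl PgsqlTxSketch {
        // unidirectional transactions (e.g. CancelRequest, Terminate)
        // never get a response, so both sides are marked done at once
        fn on_unidirectional_request(&mut self) {
            self.request_progress = TxProgressSketch::Finished;
            self.response_progress = TxProgressSketch::Finished;
        }
    }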
Bug #7113
If a stream-only rule matches, and we find a tx where we want to log
the app-layer data, record in the tx data that we already logged, so
that we do not log the app-layer metadata again.
Ticket: 7085
DCERPC/TCP tends to return the same values for invalid and incomplete
headers. As a result, invalid headers and any traffic following them
are buffered and processed later on, assumed to be valid DCERPC traffic.
Fix this by clearly defining error and incomplete data and taking
appropriate actions.
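A hedged sketch of the distinction; the types and the parse function
are illustrative, not Suricata's parser API:

    enum ParseOutcomeSketch {
        // a valid header was parsed, consuming this many bytes
        Complete(usize),
        // not enough data yet: buffer and retry once more bytes arrive
        Incomplete { consumed: usize, needed: usize },
        // the header is invalid: stop treating the following traffic as
        // valid DCERPC instead of buffering it indefinitely
        Error,
    }

    fn parse_dcerpc_header(input: &[u8]) -> ParseOutcomeSketch {
        const HDR_LEN: usize = 16;
        if input.len() < HDR_LEN {
            return ParseOutcomeSketch::Incomplete {
                consumed: 0,
                needed: HDR_LEN - input.len(),
            };
        }
        // the RPC major version must be 5 for DCERPC over TCP
        if input[0] != 5 {
            return ParseOutcomeSketch::Error;
        }
        ParseOutcomeSketch::Complete(HDR_LEN)
    }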
Bug 7230
warning: first doc comment paragraph is too long
--> src/detect/iprep.rs:57:1
|
57 | / /// value matching is done use `DetectUintData` logic.
58 | | /// isset matching is done using special `DetectUintData` value ">= 0"
59 | | /// isnotset matching bypasses `DetectUintData` and is handled directly
60 | | /// in the match function (in C).
| |_
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#too_long_first_doc_paragraph
= note: `#[warn(clippy::too_long_first_doc_paragraph)]` on by default
help: add an empty line
warning: you seem to be trying to use `match` for destructuring a single pattern. Consider using `if let`
--> src/dcerpc/log.rs:36:33
|
36 | DCERPC_TYPE_BIND => match &state.bind {
| _________________________________^
37 | | Some(bind) => {
38 | | jsb.open_array("interfaces")?;
39 | | for uuid in &bind.uuid_list {
... |
51 | | None => {}
52 | | },
| |_____________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_match
= note: `#[warn(clippy::single_match)]` on by default
In the DCERPC over TCP pcap, logging and rule matching are disrupted by adding a simple rule:
alert tcp any any -> any any (flow:to_server,established; \
dce_iface:5d2b62aa-ee0a-4a95-91ae-b064fdb471fc; dce_opnum:1; \
dce_stub_data; content:"|42 77 4E 6F 64 65 49 50 2E 65 78 65 20|"; \
content:!"|00|"; within:100; distance:97; sid:1; rev:1; )
Works: alert + 3 dcerpc records.
But when adding a trivial rule:
alert tcp any any -> any any (flow:to_server,established; \
dce_iface:5d2b62aa-ee0a-4a95-91ae-b064fdb471fc; dce_opnum:1; \
dce_stub_data; content:"|42 77 4E 6F 64 65 49 50 2E 65 78 65 20|"; \
content:!"|00|"; within:100; distance:97; sid:1; rev:1; )
alert tcp any any -> any any (dsize:3; sid:2; rev:1; )
The alert for sid:1 disappears and also there is one dcerpc event less.
In the single rule case we can aggressively free the transactions, as there
is only an sgh in the toserver direction.
This means that when we encounter the 2nd REQUEST, the first 2 transactions
have already been processed and freed. So for the 2nd REQUEST we open a new
TX and run inspection and logging on it.
When the 2nd rule is added, it adds toclient sgh as well. This means that we
will now slightly delay the freeing of the transactions.
As a consequence we still have the TX for the first REQUEST when the 2nd REQUEST
is parsed. This leads to the 2nd REQUEST re-using the TX. Since the TX is
already marked as inspected, it means the toserver rule now no longer matches.
Also we're not logging this TX correctly now.
This commit fixes the issue by not "finding" a TX that has already been
marked complete in the search direction.
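A hedged sketch of the lookup change; the types are illustrative: when
searching for a transaction to reuse in a direction, any tx already
marked complete in that direction is skipped.

    struct TxSketch {
        id: u64,
        request_done: bool,
        response_done: bool,
    }

    // A tx already marked complete toserver is never "found" and reused
    // for a new REQUEST; a fresh tx is created instead.
    fn find_open_request_tx(txs: &mut [TxSketch]) -> Option<&mut TxSketch> {
        txs.iter_mut().find(|tx| !tx.request_done)
    }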
Bug #7187.
The base64 crate is updated to the latest version, 0.22.1. This came
with several API changes which are applied to the code. The old calls
have been replaced with the newer calls.
This was done because the newer version offers better fns to directly
decode into slices/vectors as needed, and also because the previous
version was too old.
Along with this change, update the Cargo.lock.in to reflect all changes
in the package versions.
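A hedged sketch of the newer, engine-based calls; the exact call sites
in the code may differ:

    use base64::{engine::general_purpose::STANDARD, Engine};

    fn example() {
        // encoding via an engine instead of the removed free functions
        let encoded = STANDARD.encode(b"suricata");

        // decode directly into a caller-provided slice
        let mut out = [0u8; 16];
        let n = STANDARD.decode_slice(encoded.as_bytes(), &mut out).unwrap();
        assert_eq!(&out[..n], b"suricata");
    }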
Task 7219
PgsqlTransactionState has a variant named "Init" which is a little too
generic to export to C. Fortunately this method doesn't need to be
exposed to C; instead, remove it, as it was only called by
rs_pgsql_tx_get_alstate_progress, which also doesn't need to be public
or exposed to C.
Ticket: #7227
Following the same logic as for PGSQL, if there is a gap in an LDAP request or
response, the parser tries to sync up again by checking if the message can be
parsed and effectively parses it on the next call.
Ticket #7176
This introduces a new parser registration function for LDAP/UDP, and
updates the ldap configuration so that a single parser can be
enabled/disabled independently (as is done for dns).
Also, GAPs are accepted only for the TCP parser and not for UDP.
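A sketch of the per-parser configuration this enables, mirroring the
dns layout; the exact option names are an assumption:

    app-layer:
      protocols:
        ldap:
          tcp:
            enabled: yes
          udp:
            enabled: yes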
Ticket #7203
warning: can be more succinctly written as a byte str
--> src/mime/smtp.rs:762:37
|
762 | mime_smtp_find_url_strings(ctx, &[b'\n']);
| ^^^^^^^^ help: try: `b"\n"`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#byte_char_slices
= note: `#[warn(clippy::byte_char_slices)]` on by default
Ticket: 5734
Adds frames for SSH records that come after the banner and before
the data is encrypted.
These records may contain cipher lists, for instance.
By making tx parsing and creation more easily available,
without needing a DNS state.
The DNS event NotResponse is now set on the right tx, and not the one
before.
Also, the debug log for the Z-flag on a request now says "request"
instead of "response".
Also run rustfmt on dns.rs.
This implementation adds types and filters specified in the LDAP RFC to
work with the ldap_parser.
Although using the parser directly would be
best, strange behavior has been observed during transaction logging.
It appears that C pointers are being overwritten, leading to incorrect
output when LDAP fields are logged.
Ticket: 4863
On the way, convert some keywords to use the first-class integer
support.
And add helpers for pure-Rust support for multi-buffer.
Move the C unit tests for the keyword mqtt.protocol_version
to unit tests for generic integer parsing, and test version 5
instead of testing version 3 twice.
Also iterate over all of a tx's messages for the reason code, as is
done for other keywords.
And allow detection on empty topics.