Don't treat 'external' parsers as more experimental. All parsers
depend on crates to some extent, and all have C glue code, so the
distinction doesn't really make sense.
Add a new parser for Internet Key Exchange version 2 (IKEv2), defined in
RFC 7296.
The IKEv2 parser itself is external. The embedded code includes the
parser state and associated variables, the state machine, and the
detection code.
The parser looks at the first two messages of a connection and analyzes
the client and server proposals to check the cryptographic parameters.
When skipping records, the skip tracker could underflow if the record
parsing had more data than expected.
Enforce the calculation by moving it into a method and make the actual
fields private.
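A minimal sketch of the idea, with illustrative names: the counter is
only reachable through a method that saturates instead of underflowing.

    struct SkipTracker {
        skip_left: u32,          // private: only reachable through the methods below
    }

    impl SkipTracker {
        fn set(&mut self, n: u32) {
            self.skip_left = n;
        }

        /// Consume `n` bytes from the skip budget without ever underflowing,
        /// even if record parsing handed us more data than expected.
        fn consume(&mut self, n: u32) -> u32 {
            self.skip_left = self.skip_left.saturating_sub(n);
            self.skip_left
        }
    }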
parse_smb2_response_read()/parse_smb2_response_write() can be called on
incomplete data, so they didn't use the read/write length field to grab
the data field. Instead they just used rest(). However in some cases
SMB2 records have trailing data, which would be included in the
READ/WRITE data.
This patch addresses this by using the length field if enough data is
available.
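A hedged sketch of the fix, with illustrative names: prefer the length
field when the record is complete, and only fall back to taking the rest
of the input when it is not.

    /// `rlen` is the read/write length field from the SMB2 record.
    fn read_write_data<'a>(rest: &'a [u8], rlen: u32) -> &'a [u8] {
        let rlen = rlen as usize;
        if rest.len() >= rlen {
            &rest[..rlen]        // full data available: trailing bytes are excluded
        } else {
            rest                 // incomplete record: take what we have for now
        }
    }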
Improve ntlmssp version extraction and logging, make its data structures
optional. Extract native os/lm from smb1 ssn setup.
Move session setup handling into their own files.
Only log auth data for the session setup tx.
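An illustrative sketch of the optional data structures; the field names
are assumptions, not the exact code:

    pub struct NtlmsspVersion {
        pub major: u8,
        pub minor: u8,
        pub build: u16,
    }

    pub struct SessionSetupData {
        pub ntlmssp_version: Option<NtlmsspVersion>,  // only set when present
        pub native_os: Option<Vec<u8>>,               // from SMB1 session setup
        pub native_lm: Option<Vec<u8>>,
    }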
Implement SMB app-layer parser for SMB1/2/3. Features:
- file extraction
- eve logging
- existing dce keyword support
- smb_share/smb_named_pipe keyword support (stickybuffers)
- auth meta data extraction (ntlmssp, kerberos5)
The 'debug' feature is enabled if Suricata was configured with the
'--enable-debug' flag.
If enabled, the SCLogDebug macro formats its message and calls the
logging function as usual. Otherwise, it is a no-op (similar to the C
code).
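A minimal sketch of the macro, assuming a hypothetical log_debug helper;
the real macro carries more context than shown here.

    // Hypothetical helper, standing in for the real logging plumbing.
    fn log_debug(file: &str, line: u32, msg: &str) {
        eprintln!("{}:{} <Debug> {}", file, line, msg);
    }

    #[cfg(feature = "debug")]
    macro_rules! SCLogDebug {
        ($($arg:tt)*) => { log_debug(file!(), line!(), &format!($($arg)*)) };
    }

    #[cfg(not(feature = "debug"))]
    macro_rules! SCLogDebug {
        ($($arg:tt)*) => {};     // no-op when the 'debug' feature is off
    }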
Also remove the now useless 'state' argument from the SetTxDetectState
calls. For those app-layer parsers that use a state == tx approach,
the state pointer is passed as tx.
Update app-layer parsers to remove the unused call and update the
modified call.
Until now, the transaction space was assumed to be dense: transactions
are handled sequentially, so the difference between the lowest and
highest active tx ids is small. For this reason the logic of walking
every id between the 'minimum' and max id made sense. The space might
look like:
[..........TTTT]
Here the looping starts at the first T and loops 4 times.
This assumption isn't a great fit though. A protocol like NFS has two
types of transactions: long running file transfer transactions and
short lived request/reply pairs. Together these cause the id space to
be sparse, which leads to a lot of unnecessary looping in various parts
of the engine, most prominently: detection, tx housekeeping and tx
logging.
[.T..T...TTTT.T]
Here the looping starts at the first T and loops for every spot, even
those where no tx exists anymore.
Cases have been observed where the lowest tx id was 2 and the highest
was 50k. This led to a lot of unnecessary looping.
This patch adds an alternative approach. It allows a protocol to
register an iterator function that simply returns the next transaction
until all transactions are returned. To do this it uses a bit of state
that the caller must keep.
The registration is optional. If no iterator is registered the old
behaviour will be used.
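A hedged sketch of such an iterator, using simplified stand-in types
(the real registration goes through the app-layer API and works with
raw tx pointers):

    pub struct Transaction { pub id: u64 }
    pub struct State { pub transactions: Vec<Transaction> }   // kept sorted by id

    /// Return the next transaction with id >= min_tx_id, its id, and whether
    /// more transactions follow. `istate` is the bit of state the caller
    /// keeps between calls so the walk resumes where it left off instead of
    /// re-scanning every id in the sparse range.
    pub fn tx_iterator<'a>(state: &'a State, min_tx_id: u64, istate: &mut u64)
        -> Option<(&'a Transaction, u64, bool)>
    {
        let len = state.transactions.len();
        let mut index = *istate as usize;
        while index < len {
            let tx = &state.transactions[index];
            if tx.id < min_tx_id {
                index += 1;
                continue;
            }
            *istate = (index + 1) as u64;
            return Some((tx, tx.id, index + 1 < len));
        }
        None
    }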
TFTP parsing and logging written in Rust.
Log to eve.json the type of request (read or write), the name of the file and
the mode.
Example of output:
"tftp":{"packet":"read","file":"rfc1350.txt","mode":"octet"}
READ replies with large data chunks are processed partially to avoid
queuing too much data. When the final chunk was received however, the
start of the chunk would already tag the transaction as 'done'. The
more aggressive tx freeing that was recently merged would cause this
tx to be freed before the rest of the in-progress chunk was done.
This patch delays the tagging of the tx until the final data has been
received.
Avoid looping in transaction output.
Update the app-layer API to store the bits in one step
and retrieve them in a single step as well.
Update users of the API.
Add an option to build the Rust code in non-'--release' mode, preserving
debug symbols.
Until now Suricata would have to be compiled with --enable-debug for
this.
Use the expectation mechanism to identify connections that carry
FTP data. The parser handles the PASV response, the STOR message and
the RETR message to provide extraction of files.
An implementation of FTP message parsing in Rust is available.
This patch also renames some variables prefixed with ssh to ftp.
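As an illustration of the PASV handling (a simplified sketch, not the
actual parser code), the response carries the data-channel address as
six comma-separated numbers, with the port split over the last two:

    /// Parse "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)." into (ip, port).
    fn parse_pasv_response(line: &str) -> Option<(String, u16)> {
        let start = line.find('(')? + 1;
        let end = line.find(')')?;
        if end <= start {
            return None;
        }
        let nums: Vec<u8> = line[start..end]
            .split(',')
            .map(|s| s.trim().parse::<u8>().ok())
            .collect::<Option<Vec<u8>>>()?;
        if nums.len() != 6 {
            return None;
        }
        let ip = format!("{}.{}.{}.{}", nums[0], nums[1], nums[2], nums[3]);
        let port = (nums[4] as u16) * 256 + nums[5] as u16;
        Some((ip, port))
    }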
Converting the NTP parser to the new registration method is a simple,
3-step process:
- change the extern functions to use generic input parameters (functions
in all parsers must share common types to be generic) and cast them
- declare the Parser structure
- remove the C code and call the registration function
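A sketch of the first step, with a simplified signature and a stand-in
state type; the real entry points take more parameters:

    use std::os::raw::c_void;

    pub struct NTPState;                              // stand-in for the real state
    impl NTPState {
        fn parse(&mut self, _input: &[u8]) -> i8 { 1 }
    }

    // The generic entry point only sees opaque pointers and raw buffers,
    // and casts them back to the concrete NTP types before delegating.
    #[no_mangle]
    pub extern "C" fn rs_ntp_parse_request(state: *mut c_void,
                                           input: *const u8,
                                           input_len: u32) -> i8 {
        let state = unsafe { &mut *(state as *mut NTPState) };
        let buf = unsafe { std::slice::from_raw_parts(input, input_len as usize) };
        state.parse(buf)
    }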
Add Rust support for the common interface to declare and register all
parsers.
Add a common structure definition to contain all the elements
required for registering a parser, similar to the C interface.
This also reduces the risk of incorrectly registering a parser: the
compiler prevents omitting required functions from the structure, and
functions (even if external) are type-checked. Optional functions are
explicitly marked.
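A hedged sketch of what the structure can look like; field names and
types are illustrative, not the exact definition. Required callbacks are
plain function pointers, optional ones are wrapped in Option.

    use std::os::raw::{c_char, c_void};

    pub type ParseFn = extern "C" fn(state: *mut c_void,
                                     input: *const u8, input_len: u32) -> i8;
    pub type ProbeFn = extern "C" fn(input: *const u8, input_len: u32) -> u16;

    pub struct Parser {
        pub name: *const c_char,          // protocol name
        pub default_port: *const c_char,
        pub parse_ts: ParseFn,            // required: the compiler rejects a
        pub parse_tc: ParseFn,            //   registration that omits these
        pub probe_ts: Option<ProbeFn>,    // optional: explicitly marked as such
        pub probe_tc: Option<ProbeFn>,
    }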
As the DNS probe just uses the query portion of a response, don't
require there to be as many bytes as specified in the TCP DNS
header. This can occur in large responses where the probe is called
without all the data.
Fixes the cases where the app proto is recorded as failed.
Fixes issue:
https://redmine.openinfosecfoundation.org/issues/2169
Fix handling of TXT records when there are multiple strings
in a single TXT record. For now, conform to the C implementation
where an answer record is created for each string in a single
TXT record.
Also removes the data_len field from the answer entry. In Rust,
the length is available from the actual data, which after decoding
may be different from the encoded data length, so just
use the length of the actual data.
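For reference, TXT rdata is a sequence of character-strings, each a
one-byte length followed by that many bytes. A simplified sketch of the
split (not the actual parser code):

    /// Split TXT rdata into its individual character-strings. One answer
    /// record is then created per returned string, matching the C behaviour.
    fn split_txt_rdata(mut rdata: &[u8]) -> Vec<Vec<u8>> {
        let mut strings = Vec::new();
        while let Some((&len, rest)) = rdata.split_first() {
            let len = len as usize;
            if rest.len() < len {
                break;                    // truncated record: stop cleanly
            }
            strings.push(rest[..len].to_vec());
            rdata = &rest[len..];
        }
        strings
    }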
In logging (SCLog*), safely convert strings to CStrings instead
of blindly unwrapping them.
Also implement a simple Rust logger if the Suricata C context
is not available.
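A minimal sketch of both pieces; the function names are illustrative:

    use std::ffi::CString;

    /// Build a CString for the C logging function without blindly
    /// unwrapping: an interior NUL byte is stripped instead of panicking.
    fn to_safe_cstring(msg: &str) -> CString {
        CString::new(msg)
            .unwrap_or_else(|_| CString::new(msg.replace('\0', "")).unwrap())
    }

    /// Hypothetical fallback used when the Suricata C context is not set,
    /// e.g. when running pure Rust unit tests.
    fn rust_log_fallback(level: &str, msg: &str) {
        eprintln!("{} - {}", level, msg);
    }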
During configure, substitute the path of cargo, as well as the
value of CARGO_HOME as variables. This fixes the case where a
user might do:
make
sudo make install
Which will cause the cargo bits to be rebuilt, including
re-downloading external crates.
By saving these to variables we can be sure that the same
values are used during make install as were used during
make, which prevents the Rust artifacts from being rebuilt
during "sudo make install".
Rust strings are UTF-8 and we cannot yet rely on jansson
having json_stringn on all supported OS distributions,
so sanitize strings to ASCII before printing.
Also add set_string_from_bytes which is like set_string, but
accepts a byte array as input.
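An illustrative sketch of the sanitizing step; the function name and the
'?' placeholder are assumptions:

    /// Replace bytes outside printable ASCII so the result is always safe
    /// to hand to jansson's json_string.
    fn string_to_ascii(bytes: &[u8]) -> String {
        bytes.iter()
             .map(|&b| if b >= 0x20 && b < 0x7f { b as char } else { '?' })
             .collect()
    }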
Since the parser now also does nfs2, the name nfs3 became confusing.
As it's still in beta we can rename it, so this patch renames all 'nfs3'
logic to simply 'nfs'.
For normal records it will try to continue parsing.
GAP 'data' is passed to the file API as zeros. A new call is used
so that the file API knows it is dealing with a GAP. Such
files are flagged as truncated at the end of the file and no
checksums are calculated.
Allow distcheck to pass if cargo vendor is not found by not
failing out. It is not required to successfully build a dist
tarball, the Rust sources will just not be vendored in.
Also don't fail out of make dist if Python is not installed. A build
will still be successful if Python is available on the end
build system.
Update nom to ~3.0.
Prefix dependencies with ~, which will allow for newer patch
versions only. Minor version updates should get a test before
being used.
Remove Cargo.lock from the repo, but still generate as part
of the vendoring process for release builds. This will ensure
that all users of a particular distribution tarball will be
linking against the same Rust dependencies.
Initial version of a filetracker API that depends on the filecontainer
and wraps around the Suricata File API in C.
The API expects chunk based transfers where chunks can be out of order.
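A hedged sketch of the chunk handling idea, simplified and with the File
API call replaced by a plain buffer append: in-order chunks are flushed
immediately, out-of-order chunks are parked by offset until the gap in
front of them is filled.

    use std::collections::HashMap;

    struct FileChunks {
        tracked: u64,                     // bytes handed to the file API so far
        pending: HashMap<u64, Vec<u8>>,   // out-of-order chunks, keyed by offset
    }

    impl FileChunks {
        fn add_chunk(&mut self, offset: u64, data: &[u8], out: &mut Vec<u8>) {
            self.pending.insert(offset, data.to_vec());
            // Flush every chunk that is now contiguous with what we have.
            while let Some(chunk) = self.pending.remove(&self.tracked) {
                self.tracked += chunk.len() as u64;
                out.extend_from_slice(&chunk);   // stand-in for the File API append
            }
        }
    }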
Wrapper around Suricata's File and FileContainer API. Built around
the assumption that a Rust-owned structure will have a
'SuricataFileContainer' member that is managed by the C side of
things.
The context is a struct passed from C with pointers
to all the functions that may be called.
Instead of referencing C functions directly, wrap them
in function pointers so pure Rust unit tests can still run.
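A minimal sketch of the idea; the struct layout is illustrative:

    use std::os::raw::{c_char, c_int};

    #[repr(C)]
    pub struct SuricataContext {
        pub log: Option<extern "C" fn(level: c_int, msg: *const c_char)>,
        // ... more pointers to the C functions the Rust code may call
    }

    pub fn do_log(ctx: &SuricataContext, level: c_int, msg: *const c_char) {
        if let Some(log) = ctx.log {
            log(level, msg);              // real C function in production
        }
        // else: running a pure Rust unit test, nothing to call
    }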
Rust support is currently optional; use the --enable-rust configure
argument to enable it.
By default Rust will be built in release mode. If debug is enabled
then it will be built in debug mode.
On make dist, "cargo vendor" will be run to make a local copy
of Rust dependencies for the distribution archive file.
Add autoconf checks to test for the vendored source, and if it
exists setup the build to use the vendored code instead of
fetching it from the network.
Also, as Cargo requires semantic versioning, the Suricata version
had to change from 4.0dev to 4.0.0-dev.