After a gap in a file transaction, the file tracker is truncated. However,
this did not clear any stored out-of-order chunks from memory or stop more
chunks from being stored, leading to accumulation of a large number of chunks.
This patch fixes this by clearing the stored chunks on truncation. It also
makes sure no more chunks are stored in the tracker after the truncation.
Bug: #5781.
Issue: 2497
This changeset provides subsystem and module identifiers in the log when
the log format string contains "%S". By convention, the log format
surrounds "%S" with brackets.
The subsystem name is generally the same as the thread name. The module
name is derived from the source code module name and usually consists of
the first one or two segments of the name, using the dash character as the
segment delimiter.
Remove the distinction between the C template protocol "template" and
the Rust template protocol "template-rust", and make the Rust parser
simply "template", now that we no longer have support for generating a C
protocol template.
Remove the app-layer-PROTO stub for Rust-based parsers. It is no longer
needed, as Rust parsers now contain the registration function in Rust.
Ticket: 4939
Use the lzma-rs crate for decompressing swf/lzma files instead of
the LZMA decompressor in libhtp. This decouples Suricata from libhtp
except for actual HTTP parsing, and means libhtp no longer has to
export an LZMA decompression interface.
Ticket: #5638
Fuzzing highlighted an issue where a command sequence on the same file
id triggered a logging issue:
    file data for id N
    close id N
    file data for id N
If this happened in a single blob of data passed to the parser, the
existing file tx would be reused, the file "reopened", confusing the
file logging logic. This would trigger a debug assert.
This patch makes sure a new file tx is created for the file data
coming in after the first file tx is closed.
Bug: #5567.
It was not enough to set the Cursor position to 0;
its inner Vec should also be cleared.
This way, a new input gets written at the beginning of the
Cursor and its inner Vec...
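A minimal sketch of the reset, using a std::io::Cursor over a Vec<u8>:

```rust
use std::io::{Cursor, Write};

fn main() {
    let mut cursor = Cursor::new(Vec::new());
    cursor.write_all(b"first input").unwrap();

    // Resetting the position alone would leave the old bytes in place
    // and let new input merely overwrite the start of the old one;
    // the inner Vec must be cleared too so new input starts fresh.
    cursor.set_position(0);
    cursor.get_mut().clear();

    cursor.write_all(b"second").unwrap();
    assert_eq!(cursor.get_ref(), b"second");
}
```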
Ticket: #5691
Some fields that were previously strings are not always valid UTF-8
data; instead, the protocol specification refers to them as strings of
bytes, in other words byte arrays.
The fields currently converted are:
- client_version
- info_hash
- response.id
- request.id
- nodes
- token
These variants, in particular the Brotli one, can be large: over 2500
bytes, which is allocated no matter which decompressor is being used.
Gzip comes in at over 500 bytes. Box the deflate variant too, for consistency.
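A minimal sketch of the motivation, with stand-in array sizes in place of the real decoder types:

```rust
// An enum is as large as its largest variant, so every value pays for
// the biggest one unless that variant is boxed.
#[allow(dead_code)]
enum Unboxed {
    Plain,
    Gzip([u8; 512]),    // stand-in for the >500 byte gzip state
    Brotli([u8; 2560]), // stand-in for the >2500 byte brotli state
}

#[allow(dead_code)]
enum Boxed {
    Plain,
    Gzip(Box<[u8; 512]>),
    Brotli(Box<[u8; 2560]>), // boxed: only a pointer lives in the enum
}

fn main() {
    println!("unboxed: {} bytes", std::mem::size_of::<Unboxed>());
    println!("boxed:   {} bytes", std::mem::size_of::<Boxed>());
}
```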
Where possible, mark the relevant functions unsafe. Otherwise, suppress
the warning for now, as this pattern is supposed to be a safe API around
an unsafe one. This might need some further investigation, but in general the
"guarantee" here is provided from the C side.
Use .is_some() and .is_none() instead of comparing against None.
Comparing against None requires the value to impl PartialEq; is_none() and
is_some() do not, and are more idiomatic.
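A minimal illustration:

```rust
fn main() {
    let v: Option<u8> = None;
    // is_none()/is_some() work for any Option, unlike `v == None`,
    // which requires the inner type to implement PartialEq.
    assert!(v.is_none());
    if v.is_some() {
        println!("has a value");
    }
}
```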
This patch updates the NT status code definitions to use the status
definitions from the Microsoft documentation website. A first Python
script builds a JSON object with the code definitions.
```
import json
from bs4 import BeautifulSoup
import requests

ntstatus = requests.get('https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55')
ntstatus_parsed = BeautifulSoup(ntstatus.text, 'html.parser')
ntstatus_parsed = ntstatus_parsed.find('tbody')
ntstatus_dict = {}
for item in ntstatus_parsed.find_all('tr'):
    cell = item.find_all('td')
    if len(cell) == 0:
        continue
    code = cell[0].find_all('p')
    description_ps = cell[1].find_all('p')
    description_list = []
    if len(description_ps):
        for desc in description_ps:
            if desc.string is not None:
                description_list.append(desc.string.replace('\n ', ''))
    else:
        description_list = ['Description not available']
    if code[0].string.lower() not in ntstatus_dict:
        ntstatus_dict[code[0].string.lower()] = {"text": code[1].string, "desc": ' '.join(description_list)}
print(json.dumps(ntstatus_dict))
```
The second one generates the code that is ready to be inserted into the
source file:
```
import json

ntstatus_file = open('ntstatus.json', 'r')
ntstatus = json.loads(ntstatus_file.read())

declaration_format = 'pub const SMB_NT%s:%su32 = %s;\n'
resolution_format = '        SMB_NT%s%s=> "%s",\n'
declaration = ""
resolution = ""
text_max = len(max([ntstatus[x]['text'] for x in ntstatus.keys()], key=len))
for code in ntstatus.keys():
    text = ntstatus[code]['text']
    text_spaces = ' ' * (4 + text_max - len(text))
    declaration += declaration_format % (text, text_spaces, code)
    resolution += resolution_format % (text, text_spaces, text)
print(declaration)
print('\n')
print('''
pub fn smb_ntstatus_string(c: u32) -> String {
    match c {
''')
print(resolution)
print('''
        _ => { return (c).to_string(); },
    }.to_string()
}
''')
```
Bug #5412.
Introduce AppLayerTxData::file_tx as a direction(s) indicator for transactions.
When set to 0, it's not a file tx and it will not be considered for file
inspection, logging and housekeeping tasks.
Various tx loop optimizations in housekeeping and output.
Update the "file capable" app-layers to set the fields based on their
directional file support as well as on the traffic.
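A hypothetical sketch of the idea; the flag constants and field layout are stand-ins, not the real definitions:

```rust
// Hypothetical direction bits for the file_tx indicator.
const FILE_TX_TOSERVER: u8 = 0x01; // stand-in name
const FILE_TX_TOCLIENT: u8 = 0x02; // stand-in name

struct AppLayerTxData {
    // 0 means "not a file tx": skipped by file inspection,
    // logging and housekeeping.
    file_tx: u8,
}

fn new_upload_tx() -> AppLayerTxData {
    // a protocol that only carries files toward the server
    // sets just that direction bit
    AppLayerTxData { file_tx: FILE_TX_TOSERVER }
}

fn main() {
    let txd = new_upload_tx();
    assert!(txd.file_tx & FILE_TX_TOSERVER != 0);
    assert!(txd.file_tx & FILE_TX_TOCLIENT == 0);
}
```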
Update APIs to store files in transactions instead of the per flow state.
Goal is to avoid the overhead of matching up files and transactions in
cases where there are many of both.
Update all protocol implementations to support this.
Update file logging logic to account for having files in transactions. Instead
of it acting separately on file containers, it is now tied into the
transaction logging.
Update the filestore keyword to consider a match even if the filestore
output is not enabled.
This commit creates a module named "detect" for rule parsing logic. As
part of this commit, detect.rs is moved from its toplevel position into
the new module. Thus, use crate::detect::detect to refer to items within
detect.rs (instead of crate::detect).
Ticket: 5077
Issue: 5077
This commit:
- Converts the PCRE based parser to Rust.
- Adds unit tests to the new Rust modules
- Removes the PCRE parser from detect-bytemath.c
- Adjusts the C source modules to refer to the Rust definitions
- Includes the multiply operator (missing from the C parser)
When having many transactions in a single parsing call...
The fix has the overhead of one more field in the MQTT state.
Completes commit a8079dc978
Ticket: #5399
When doing a DCERPC request, we can use the context id to log the
interface that is used. Doing so, we can see in one single event
which DCERPC interface and opnum are used. This gives all the
information needed to resolve the request to a
function call.
Feature #5413.
An SMB2 async response does not have a tree id, even if
the request has one.
Per the spec, MessageId should be enough to identify a message request
and response uniquely across all messages that are sent on the same
SMB2 Protocol transport connection.
So, the tree id is redundant anyway.
Ticket: #5508
State fields can grow arbitrarily, and this can lead to DoS
by quadratic complexity (CPU time and disk space).
Adds a direction field to retain all the information in the
transaction.
Also checks that the vendor_ids array has at least one element before
logging it.
Ticket: #5455
Also logs whether the ticket encryption is weak.
It is different from the encryption used for the rest of the
packet, and this allows detection of Kerberoasting attacks.
Ticket: #5442
The kerberos parser crate is also used by other protocols: NFS and
SMB. These protocols use an older der_parser crate version.
Upgrading der_parser will simplify the code further.
- Implement the Display trait on Direction to print "toserver" or
  "toclient", which is used in format strings.
- Use the Direction struct inside Frame instead of a u32. This requires a
  helper method, as there are two representations in C for direction, and
  the C methods for frames don't use the internal representation of the
  Direction enum (some sweeping changes could help here).
On the Rust side, a Frame requires a StreamSlice to be created. We can
derive the direction from the StreamSlice removing the need for callers
to provide the direction when operating on the frame.
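A minimal sketch of the Display impl, reduced to just the two variants:

```rust
use std::fmt;

// Reduced stand-in for the real Direction type.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Direction {
    ToServer,
    ToClient,
}

impl fmt::Display for Direction {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Direction::ToServer => write!(f, "toserver"),
            Direction::ToClient => write!(f, "toclient"),
        }
    }
}

fn main() {
    // Display makes the enum usable directly in format strings.
    println!("frame direction: {}", Direction::ToServer);
}
```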
The format of the initial packet for QUIC IETF, i.e. QUIC v1,
is described in RFC 9000, section 17.2.2.
Parse more frames and log interesting extensions from the crypto frame.
Do not try to parse encrypted data, i.e. after we have seen
a crypto frame in each direction.
Use the SNI from the crypto frame with the TLS detection already implemented.
Ticket: #4967
Bug introduced by https://github.com/OISF/suricata/pull/7111
Nom's count combinator begins by allocating a Vector, which leads to
arbitrary allocation, since flavors_cnt comes from the network and is
not even checked against i.len().
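A hypothetical sketch of the guard, assuming one-byte flavor entries; names are stand-ins:

```rust
use nom::bytes::streaming::take;
use nom::error::{make_error, ErrorKind};
use nom::multi::count;
use nom::number::streaming::be_u8;
use nom::{Err, IResult};

// flavors_cnt comes straight off the wire, and count() pre-allocates a
// Vec of that size, so bound it by what can actually fit in the
// remaining input before parsing.
fn parse_flavors(i: &[u8]) -> IResult<&[u8], Vec<&[u8]>> {
    let (rest, flavors_cnt) = be_u8(i)?;
    // each flavor is at least one byte, so more flavors than bytes is bogus
    if flavors_cnt as usize > rest.len() {
        return Err(Err::Error(make_error(rest, ErrorKind::Count)));
    }
    count(take(1usize), flavors_cnt as usize)(rest)
}
```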
Ticket: #5237
Move it away from http2 to the generic core crate.
And use it for DCERPC (and SMB).
And remove the C version.
The main change in the API is that the free function is not free() itself,
but a Rust wrapper around unboxing.
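A minimal sketch of the unbox pattern, with hypothetical names:

```rust
pub struct Example {
    pub data: u64,
}

// The "free" is really an unbox: re-take ownership of the pointer with
// Box::from_raw and let Rust drop it, instead of calling C's free().
#[no_mangle]
pub unsafe extern "C" fn rs_example_free(ptr: *mut Example) {
    if !ptr.is_null() {
        drop(Box::from_raw(ptr));
    }
}
```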
Ticket: #4112
Remove the dangerous snprintf pattern identified by CodeQL.
Even if this pattern is not a problem in these precise cases,
it may easily get copy-pasted into a dangerous place, so better to
get rid of it and make CodeQL happy.
Adds a PDU frame to the DNS parser. For UDP this is the DNS payload
portion of the DNS packet; for TCP this is the payload minus the leading
length field.
Ticket: 4984
Wrap the calls behind frames to C code in a `cfg!(not(test))` check so they
don't get compiled when running Rust unit tests. Linkage to C functions
is not yet available for Rust unit tests, and this will keep the check
out of individual parsers.
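A minimal sketch of the pattern, with a hypothetical C binding:

```rust
extern "C" {
    // hypothetical C function; not linkable from Rust unit test builds
    fn AppLayerFrameExample(id: i64, len: i64);
}

pub fn set_frame(id: i64, len: i64) {
    // cfg!(not(test)) is false under cargo test, so the C call is
    // never reached when running Rust unit tests.
    if cfg!(not(test)) {
        unsafe { AppLayerFrameExample(id, len) };
    }
}
```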
Ticket: 4984
Instead of a method that is required to return a slice of transactions,
use two methods: one to return the number of transactions in the
collection, and another to get a transaction by its index in the
collection.
This allows for the transaction collection to not be a contiguous array
and instead can be a VecDeque, or possibly another collection type that
supports retrieval by index.
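A hypothetical sketch of the two-method shape (names are stand-ins), showing how a VecDeque satisfies it:

```rust
use std::collections::VecDeque;

pub struct Tx {
    pub id: u64,
}

pub struct State {
    transactions: VecDeque<Tx>,
}

impl State {
    // The two methods that replace "return a slice of transactions";
    // nothing here requires contiguous storage.
    pub fn tx_count(&self) -> usize {
        self.transactions.len()
    }

    pub fn tx_by_index(&self, index: usize) -> Option<&Tx> {
        self.transactions.get(index)
    }
}
```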
Ticket #5278
Fuzzers found a possible integer overflow bug when parsing response
messages. To fix that, remove the case where we incremented the parsed
field length, and create a new message type for situations where Suricata
parses an unknown message. This is good because it may happen that
a message unknown to Suricata is still valid, and in this case, we would
still be able to log it.
Philippe Antoine found the bug while fuzzing with Rust debug assertions.
Bug #5016
To recognize a protocol, Suricata first looks for
patterns, which can be confirmed by a probing parser.
If this does not work, Suricata can try to run
some probing parsers on some ports.
This is the case for SMB.
This commit handles the confirming and the probing
parsers differently, even if they share much code.
The confirmation parser knows that a pattern has been found.
So, it must not do the midstream case of looking for this
pattern in the whole buffer, but only check it at the beginning.
But it must still reverse direction if needed.
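A hypothetical sketch of the confirmation check, assuming NBSS-framed SMB magic; the real code handles more cases:

```rust
// The confirming parser already knows the pattern matched, so it checks
// only the start of the buffer (here: after a 4-byte NBSS header),
// instead of scanning the whole buffer as the midstream probe would.
fn confirm_smb(input: &[u8]) -> bool {
    if input.len() < 8 {
        return false;
    }
    let magic = &input[4..8];
    magic == b"\xffSMB" || magic == b"\xfeSMB"
}
```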
The DNS name parser would error out with a parse error even if the
failure was only incomplete data. Instead of manually generating errors,
use '?' to let the nom error ripple up the error handling chain.
The reason this wasn't done in the first place is that this code
predates the ? operator, or we were not aware of it at the time.
This prevents the case where probing fails when there is enough data to
parse the header, but not enough to complete the name parser. In such a case
a parse error was returned (instead of incomplete), resulting in the
payload not being detected as DNS.
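A minimal sketch of the idiom:

```rust
use nom::multi::length_data;
use nom::number::streaming::be_u8;
use nom::IResult;

// With streaming parsers and `?`, running out of input surfaces as
// Err::Incomplete and ripples up unchanged, instead of being manually
// converted into a hard parse error.
fn parse_label(i: &[u8]) -> IResult<&[u8], &[u8]> {
    let (i, label) = length_data(be_u8)(i)?;
    Ok((i, label))
}
```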
Ticket #5034
If there is more data than a header, but not enough for a complete DNS
message, the hostname parser could return an error, causing the probe to
fail on valid DNS messages.
So only parse the complete message if we have enough input data. This is
reliable for TCP, as DNS messages are length-prefixed, but for UDP it's just
going to be the size of the input buffer presented to the parser, so
incomplete could still happen.
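A hypothetical sketch of the TCP-side guard:

```rust
// DNS over TCP is length-prefixed, so only hand the parser a message
// that is fully buffered.
fn have_complete_message(input: &[u8]) -> bool {
    if input.len() < 2 {
        return false;
    }
    let msg_len = u16::from_be_bytes([input[0], input[1]]) as usize;
    input.len() >= 2 + msg_len
}
```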
Ticket #5034
Allow limiting in-flight out-of-order data chunks per size or count.
Implemented for reads and writes separately:

    app-layer.protocols.smb.max-write-queue-size
    app-layer.protocols.smb.max-write-queue-cnt
    app-layer.protocols.smb.max-read-queue-size
    app-layer.protocols.smb.max-read-queue-cnt
This addresses Redmine bug #5018 by ensuring that the parser
never requests additional data via the Incomplete error, but instead
raises an actual parse error, since it is supposed to have all
the data as specified by the message length in the header already.
Instead of closing files in both directions when receiving a close request,
close only toserver files for the request, and close toclient files on
receiving a response.