Ticket: 4948
This is not the perfect solution, but it prevents triggering
the assert, while keeping the assert in place.
A better solution would need to create a transaction from
the response parsing, in case a later command was buffered and
not answered. But this would not be enough, as NoNewTx prevents
the creation of a new transaction for RSET...
Ticket: 4530
As is done for HTTP2 and MQTT.
In the FTP case, transactions are pipelined, not identified by an id.
So, there is less chance of DoS by quadratic complexity.
Parse extract-urls-schemes from the mime config,
e.g. 'extract-urls-schemes: [http, https, ftp, mailto]'
Update the MimeDecConfig struct with the new URL extraction fields.
Change app-layer-smtp.c & util-decode-mime.c to initialize the new
struct fields for MimeDecConfig.
Sets the default value of extract-urls-schemes to
'extract-urls-schemes: [http]' when it is not found in the config,
for backwards compatibility.
Uses the schemes defined in the mime config value for
extract-urls-schemes to search for URLs starting with those scheme
names followed by "://".
Logs the URLs with the scheme + '://' at the start if
log-url-scheme is set in the mime config; otherwise the old behaviour
is kept and the URLs are logged with the schemes stripped.
Removed the unused constant URL_STR now that URLs are searched for
using the extract-urls-schemes mime config values instead of just URLs
starting with 'http://'.
Added commented-out new options for extract-urls-schemes and
log-url-scheme to suricata.yaml.in.
Update FindUrlStrings comments.
Remove old outdated comments/commented code from FindUrlStrings.
Update the mime test case, which now needs the schemes list to be set.
Add test cases for the FindUrlStrings() function.
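A minimal sketch of the scheme matching idea in C, assuming a hypothetical
SchemeList type for the configured schemes; the real logic lives in
FindUrlStrings() and differs in detail:

    #include <string.h>

    /* Hypothetical list of configured schemes, e.g. "http://", "ftp://". */
    typedef struct SchemeList {
        const char *scheme;            /* scheme name followed by "://" */
        struct SchemeList *next;
    } SchemeList;

    /* Return a pointer to the first occurrence of any configured scheme in
     * buf (checked in list order), or NULL if no scheme is present. */
    static const char *FindSchemeUrl(const char *buf, const SchemeList *schemes)
    {
        for (const SchemeList *s = schemes; s != NULL; s = s->next) {
            const char *hit = strstr(buf, s->scheme);
            if (hit != NULL)
                return hit;
        }
        return NULL;
    }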
Feature: #2054
We want to check that a rule beginning with alert http
can be valid, that is, if either HTTP1 or HTTP2 is enabled.
So, AppLayerProtoDetectGetProtoName will do a more complex
check for this ALPROTO_HTTP (any).
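A minimal sketch of the idea, assuming a hypothetical IsEnabled() lookup;
the real check is done in the app-layer protocol detection code:

    #include <stdbool.h>

    /* Simplified stand-ins; not Suricata's real AppProto definitions. */
    typedef enum { ALPROTO_HTTP1, ALPROTO_HTTP2 } AppProto;

    static bool IsEnabled(AppProto p) { (void)p; return true; }  /* stub */

    /* "alert http" (ALPROTO_HTTP, meaning "any HTTP") is acceptable as long
     * as at least one of the concrete HTTP protocols is enabled. */
    static bool HttpAnyRuleCanBeValid(void)
    {
        return IsEnabled(ALPROTO_HTTP1) || IsEnabled(ALPROTO_HTTP2);
    }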
Suricata invokes the stream reassembly logic only for the current packet
direction if the packet contains a FIN flag. However, this does not
handle the case in which the packet ACKs data from the opposing direction.
This patch forces the invocation of the stream reassembly logic
in both directions when Suricata sees a FIN packet.
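A conceptual sketch of the change, with hypothetical Reassemble()/direction
names rather than the actual stream engine API:

    typedef struct Packet Packet;                    /* opaque, sketch only */
    enum StreamDir { DIR_TOSERVER, DIR_TOCLIENT };

    void Reassemble(Packet *p, enum StreamDir dir);  /* hypothetical */

    /* A FIN packet may also ACK data from the opposing direction, so run
     * reassembly for both directions instead of only the packet's own. */
    void HandleFinPacket(Packet *p)
    {
        Reassemble(p, DIR_TOSERVER);
        Reassemble(p, DIR_TOCLIENT);
    }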
Completes commit e2370d6861
for all the fuzz targets processing pcaps
using a generic function.
FlowShutdown is not used because its loop destroys the
mutexes, which we want to reuse for fuzzing.
Indexing of Signature::init_data::smlists would fail for a rule that
used a frame w/o content, as the array would only be expanded when
adding a content. Adding a check to see if the list id is in bounds
is an implicit check for the "no content" case.
Bug #5011.
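A simplified sketch of the bounds check, with illustrative struct fields
mirroring Signature::init_data::smlists:

    #include <stddef.h>

    typedef struct SigMatch SigMatch;
    typedef struct {
        SigMatch **smlists;             /* only expanded when content is added */
        unsigned int smlists_array_size;
    } SignatureInitData;

    /* Return the list for list_id, or NULL if the array was never expanded
     * that far -- the implicit "rule has no content" case. */
    static SigMatch *GetSmList(const SignatureInitData *init_data, unsigned int list_id)
    {
        if (list_id >= init_data->smlists_array_size)
            return NULL;
        return init_data->smlists[list_id];
    }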
Pgsql was using bitwise operations to assign the password output config
to its context flags, but mixed that with logical negation of the default
value, resulting in expressions that always evaluated to a constant.
Bug: #5007
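An illustration of the bug class in C (the affected code is the pgsql output
config handling; the flag name here is hypothetical): logical negation of a
bit flag yields the constant 0 or 1, so mixing it into bitwise flag
assignment produces a constant result.

    #include <stdint.h>

    #define LOG_PASSWORDS (1 << 0)   /* hypothetical flag */

    static uint32_t BuildFlags(int enable_passwords)
    {
        uint32_t flags = 0;
        /* Buggy pattern: flags |= !LOG_PASSWORDS; -- !(1 << 0) is the
         * constant 0, so the flag is never set. Correct form: */
        if (enable_passwords)
            flags |= LOG_PASSWORDS;      /* set the bit */
        else
            flags &= ~LOG_PASSWORDS;     /* clear the bit */
        return flags;
    }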
When running with privilege dropping, the application log file
is opened before privileges are dropped, resulting in Suricata
failing to re-open the file for file rotation.
If needed, chown the application log file to the run-as user/group
after opening.
Ticket #4523
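A minimal sketch of the approach, assuming the run-as uid/gid are already
resolved; the real code also checks whether privilege dropping is configured
and handles errors:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    static FILE *OpenAppLogFile(const char *path, uid_t run_uid, gid_t run_gid)
    {
        FILE *fp = fopen(path, "a");
        if (fp == NULL)
            return NULL;
        /* Hand ownership to the run-as user so the file can be re-opened
         * for rotation after privileges are dropped. */
        (void)chown(path, run_uid, run_gid);
        return fp;
    }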
Initialize the run-as user info after loading the config, but
before setting up logging (previously it was done while initializing
signal handlers). This will allow the log file to be given the
correct permissions if Suricata is configured to run as a non-root
user.
- add nom parsers for decoding most messages from StartupPhase and
SimpleQuery subprotocols
- add unittests
- tests/fuzz: add pgsql to confyaml
Feature: #4241
As is done for other targets,
so that all app-layer protocols are enabled,
even the ones disabled by default such as enip.
Also resets protocol detection every time we try,
so that probing_parser_toserver_alproto_masks are fresh.
Implement a special sticky buffer to select frames for inspection.
This keyword takes an argument to specify the per protocol frame type:
alert <app proto name> ... frame:<specific frame name>
Or it can specify both in the keyword:
alert tcp ... frame:<app proto name>.<specific frame name>
The latter is useful in some cases like http, where "http" applies to
both HTTP and HTTP/2.
alert http ... frame:http1.request;
alert http1 ... frame:request;
Examples:
tls.pdu
smb.smb2.hdr
smb.smb3.data
Consider a rule like:
alert tcp ... flow:to_server; content:"|ff|SMB"; content:"some smb 1 issue";
this will scan all toserver TCP traffic, limited only by port,
depending on how rules are grouped.
With this work we'll be able to do:
alert smb ... flow:to_server; frame:smb1.data; content:"some smb 1 issue";
This rule will only inspect the data portion of SMB1 frames. It will not affect
any other protocol, and it won't need special patterns to "search" for the
SMB1 frame in the raw stream.
The idea of stream frames is that the applayer parsers can tag PDUs and
other arbitrary frames in the stream while parsing. These frames can then
be inspected from the rule language. This will allow rules that are more
precise and less costly.
The frames are stored per direction in the `AppLayerParserState` and will only
be initialized when actual frames are in use. The per direction storage has a
fixed size static portion and dynamic support for a larger number. This is done
for efficiency.
When the Stream Buffer slides, frames are updated as they use offsets relative
to the stream. A negative offset is used for frames that started before the
current window.
Frames have events to inspect/log parser errors that don't fit the TX model.
Frame ids start at 1, so implementations can track frame ids with 0
meaning "not set".
Frames affect TCP window sliding. The frames keep a "left edge" which
signifies how much data to keep for frames that are still in progress.
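A conceptual sketch of the per-direction frame storage; the field names and
sizes are illustrative, not the exact Suricata definitions:

    #include <stdint.h>

    typedef struct Frame_ {
        int64_t rel_offset;   /* offset relative to the stream; negative if
                               * the frame started before the current window */
        int64_t len;          /* -1 while the frame is still in progress */
        uint8_t type;         /* per-protocol frame type */
        uint32_t id;          /* starts at 1; 0 means "not set" */
    } Frame;

    #define FRAMES_STATIC_CNT 8           /* illustrative */

    typedef struct Frames_ {
        Frame sframes[FRAMES_STATIC_CNT]; /* fixed size static portion */
        Frame *dframes;                   /* dynamic storage for larger counts */
        uint16_t cnt;
        uint64_t left_edge;               /* how much stream data to keep for
                                           * frames still in progress */
    } Frames;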
Pruning of StreamBufferBlocks could remove blocks that fell entirely
after the target offset due to a logic error. This could lead to data
being evicted that was still meant to be processed in the app-layer
parsers.
Bug: #4953.
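A sketch of the corrected pruning decision; StreamBufferBlock here is
simplified to just an offset and length:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t offset;
        uint32_t len;
    } StreamBufferBlock;

    /* Only blocks that end at or before the target offset may be pruned;
     * blocks that fall entirely after it still hold data to be processed. */
    static bool BlockCanBePruned(const StreamBufferBlock *sbb, uint64_t target_offset)
    {
        return (sbb->offset + sbb->len) <= target_offset;
    }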
This commit makes a small optimization when comparing IPv4 and IPv6
addresses: the host order value is loop-invariant, so it is calculated
once, before entering the loop.
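A sketch of the pattern for the IPv4 case, with illustrative names: the
network-to-host conversion is hoisted out of the loop and done once.

    #include <arpa/inet.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool Ipv4AddrInList(uint32_t addr_net, const uint32_t *list_host, int cnt)
    {
        const uint32_t addr_host = ntohl(addr_net);   /* computed once */
        for (int i = 0; i < cnt; i++) {
            if (addr_host == list_host[i])
                return true;
        }
        return false;
    }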
Explicitly truncate file names to UINT16_MAX
Before, they got implicitly truncated, meaning a file name of length
UINT16_MAX + 1 ended up with length 0 (because of modulo 65536).
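A minimal sketch of the explicit truncation, using a hypothetical helper
name:

    #include <stddef.h>
    #include <stdint.h>

    static uint16_t TruncateFileNameLen(size_t name_len)
    {
        /* cap instead of letting a cast wrap modulo 65536 */
        if (name_len > UINT16_MAX)
            return UINT16_MAX;
        return (uint16_t)name_len;
    }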
This commit adds a signal handler for SIGSEGV when configured. The
signal handler emits a one-line stack trace using SCLogError. The intent
is to provide diagnostic information in deployments where core files are
not possible.
The diagnostic message is from the offending thread and includes the
stack trace; each frame includes the symbol + offset.
Ticket: 4920
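A minimal sketch of such a handler using glibc's backtrace()/
backtrace_symbols(); the actual implementation logs through SCLogError and
deals with signal-safety concerns differently:

    #include <execinfo.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void SigSegvHandler(int sig)
    {
        void *frames[32];
        int cnt = backtrace(frames, 32);
        char **syms = backtrace_symbols(frames, cnt);  /* symbol + offset per frame */
        if (syms != NULL) {
            for (int i = 0; i < cnt; i++)
                fprintf(stderr, "frame %d: %s\n", i, syms[i]);
            free(syms);
        }
        (void)sig;
        _exit(EXIT_FAILURE);
    }

    static void RegisterSigSegvHandler(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = SigSegvHandler;
        sigaction(SIGSEGV, &sa, NULL);
    }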
Completes commit c8dbe24fb6
which introduced AppProtoEquals to provide a generic
check so that http in a signature can mean HTTP1 or HTTP2 in
traffic.
That commit missed this case, as I only looked with
git grep "alproto ==", and here we deal with
alproto_tc and alproto_ts, not alproto by itself.
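A simplified sketch of the fix pattern; the struct and function signatures
are stand-ins for the real Flow fields and AppProtoEquals():

    #include <stdbool.h>

    typedef unsigned int AppProto;                             /* stand-in */
    bool AppProtoEquals(AppProto sigproto, AppProto alproto);  /* from c8dbe24fb6 */

    typedef struct { AppProto alproto_ts, alproto_tc; } DirProtos;  /* illustrative */

    /* Use AppProtoEquals() for the per-direction alprotos too, instead of a
     * plain ==, so ALPROTO_HTTP in a rule can match HTTP1 or HTTP2 traffic. */
    static bool SigMatchesEitherDirection(AppProto sig_alproto, const DirProtos *f)
    {
        return AppProtoEquals(sig_alproto, f->alproto_ts) ||
               AppProtoEquals(sig_alproto, f->alproto_tc);
    }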
It appears that DNS servers will still process a DNS request even if the
z-bit is set, but our parser would fail the transaction. So create the
transaction, but still set the event.
Ticket #4924
Set the RSS hash functions according to those available in the Intel ICE PMD.
Set the hash functions according to what the ICE PMD supports, so that no
warning regarding the RSS setting is issued.
Set the RSS hash functions according to those available in the Intel IXGBE PMD.
During configuration, a warning appeared stating that the RSS hash functions
had been changed from one value to another. This meant that the supported
hash functions did not cover all hash functions required by the
configuration. This commit resolves the warning.
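A sketch of the general approach (error handling omitted): mask the
requested RSS hash functions with what the PMD reports as supported via
rte_eth_dev_info_get(), so the PMD has nothing to adjust or warn about.

    #include <rte_ethdev.h>

    static uint64_t SupportedRssHf(uint16_t port_id, uint64_t requested_rss_hf)
    {
        struct rte_eth_dev_info dev_info;
        rte_eth_dev_info_get(port_id, &dev_info);
        /* keep only the hash functions the NIC/PMD (e.g. ixgbe, ice) supports */
        return requested_rss_hf & dev_info.flow_type_rss_offloads;
    }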
Due to the peculiar behavior of the i40e PMD driver, RSS is required to be set
via rte_flow rules or a hash filter, as opposed to other NICs where RSS is
configured through the port configuration structure.
rte_flow rules are created on 5-tuples (as opposed to the 3-tuple configured
on the other NICs). Fragmented traffic has been tested with this setup
and it has been verified that fragmented packets of the same flow are
received on the same queue. At the same time, setting a 3-tuple on rte_flow
rules has not yielded the expected results. (A sketch of such a 5-tuple
rte_flow rule follows the notes below.)
Notes from the experiments:
- Configuration of 5-tuple (as is in the commit):
fragmented and nonfragmented packets are received by the same workers
even when I applied seed to alter them via tcpreplay-edit (option --seed)
- Setting only ETH_RSS_FRAG_IPV4 and ETH_RSS_IPV4 (i.e. setting 3-tuple):
when setting ETH_RSS_IPV4, the PMD driver says that pctype is not
supported (generally this means that the "type" of traffic is not
a valid configuration for the i40e)
- Setting only ETH_RSS_FRAG_IPV4 and ETH_RSS_NONFRAG_IPV4_OTHER:
this doesn't work well, packets of the same flow are received on
different workers (my explanation is that the fragmented packets are
matched with ETH_RSS_FRAG_IPV4 but the other UDP packets are not matched
with the ETH_RSS_NONFRAG_IPV4_OTHER rte_flow rule; they would be matched
with ETH_RSS_NONFRAG_IPV4_UDP).
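A compact sketch of creating one such rte_flow RSS rule; the real code builds
the key/queue arrays from the configuration and adds full error handling.

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static int CreateIpv4RssFlow(uint16_t port_id, const uint16_t *queues,
                                 uint16_t nb_queues, const uint8_t *key, uint32_t key_len)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_rss rss = {
            .types = ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP |
                     ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV4_SCTP |
                     ETH_RSS_NONFRAG_IPV4_OTHER,
            .key = key,
            .key_len = key_len,
            .queue = queues,
            .queue_num = nb_queues,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error flow_error;
        struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern, actions, &flow_error);
        return (flow != NULL) ? 0 : -1;
    }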
Register a new runmode - DPDK. This enables a new flag on Suricata start
(--dpdk).
With the flag given, the DPDK runmode is enabled.
The runmode loads the configuration and then initializes the EAL.
If successful, it configures the physical NICs according to the configuration
file. After that, worker threads are initialized and then run in a continuous
receive loop.
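A minimal sketch of the EAL bootstrap step; in Suricata the EAL argument
vector is built from suricata.yaml rather than taken from the command line.

    #include <rte_eal.h>

    static int DpdkEalInit(int eal_argc, char **eal_argv)
    {
        int ret = rte_eal_init(eal_argc, eal_argv);  /* initialize the DPDK EAL */
        if (ret < 0)
            return -1;       /* EAL could not be initialized */
        return ret;          /* number of parsed EAL arguments */
    }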
Ticket: 4857
If a pattern such as GET is seen in the beginning of the
file transferred over ftp-data, this flow will get recognized
as HTTP, and an HTTP state will be created during parsing.
Thus, we cannot directly override the alproto values.
This solves the segfault, but not the logical bug that the flow
should be classified as FTP-DATA instead of HTTP.
Instead of storing the name and description as pointers in DetectBufferType,
store them in fixed size arrays. This is in preparation for runtime
registration of buffer types, where a constant name/desc is not available.
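A sketch of the layout change; the array sizes here are illustrative:

    /* before: const char *name; const char *description; */
    typedef struct DetectBufferType_ {
        char name[32];             /* fixed size copy instead of a pointer */
        char description[128];
        int id;
        /* ... remaining fields unchanged ... */
    } DetectBufferType;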
In preparation for more dynamic logic in rule loading that also does
some registration, allow buffers to be registered as fast_patterns
during rule parsing.
Leaves the registration-time registrations mostly as-is, but copies the
resulting list into the DetectEngineCtx and works with that from then on.
This list can then be extended.
Instead of a map that is constantly realloc'd, use 2 hash tables for
DetectBufferType entries: one by name (+transforms), the other by
id. Use these everywhere.
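A conceptual sketch of the dual-index setup; HashTable is a placeholder, not
the detect engine's actual hash table API:

    typedef struct HashTable HashTable;   /* opaque placeholder */

    typedef struct {
        HashTable *buffer_type_by_name;   /* key: name (+transforms) */
        HashTable *buffer_type_by_id;     /* key: numeric buffer id */
    } DetectBufferTypeIndex;

    /* Registration inserts the same DetectBufferType into both tables, so
     * lookups by name or by id need no realloc'd map to stay in sync. */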
There is a hack to know the type of an integer
and do an explicit cast in the Python script
generating the C file.
Also extends some bounds checks against negative values.