The rollover option is causing issues with the TCP streaming code because
it causes packets from the same flow to be treated out of order. As long as
this is not fixed in the streaming engine, it is a bad idea
to enable it by default.
This patch implements the rollover option in af_packet capture.
This should heavily reduce packet drops and raise the maximum
bandwidth that can be handled for a single flow.
The option is deactivated by default but activated in the
af_packet default section. This ensures there is no change for
old users using an existing YAML, while new users will benefit from
the change.
This option is available since Linux 3.10. An analysis of the af_packet
kernel code shows that setting the flag in all cases should not
cause any trouble for older kernels.
This patch implements the fanout load balancing modes available
in kernel 4.0. The most interesting is cluster_qm, which does the
load balancing based on the RSS queues. So if the network card
is doing flow-based load balancing, a given socket will
receive all packets of a flow independently of the CPU affinity.
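A sketch of how this mode could be selected in suricata.yaml, assuming it
is exposed as cluster-type: cluster_qm like the other fanout modes:

  af-packet:
    - interface: eth0
      cluster-id: 99
      # map packets to sockets based on the RSS queue they arrived on,
      # so a NIC doing flow hashing keeps a whole flow on one socket
      cluster-type: cluster_qm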
A design error was made when doing the TLS storage module, which
was made dependent on the TLS logging. At the time there was
only one TLS logging module, but there are now two different ones.
By putting the TLS store code in a separate module, we can now
use EVE output and TLS store at the same time.
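With the store logic split out, both outputs can be enabled side by side;
a rough sketch of the relevant outputs section, with names as they appear
in the shipped default configuration:

  outputs:
    - eve-log:
        enabled: yes
        types:
          - tls
    - tls-store:
        enabled: yes
        # certificates are written here in addition to the EVE records
        certs-log-dir: certs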
This patch adds the capability to log TLS information the
same way it is currently possible to do with HTTP. As it is
quite hard to read ASN.1 directly in the stream, this will help
people understand why Suricata is firing an alert related
to TLS.
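As an illustration, a minimal EVE configuration that enables the new TLS
records next to the existing HTTP ones (a sketch, not an exhaustive list
of options):

  outputs:
    - eve-log:
        enabled: yes
        types:
          - http
          # log subject, issuer and fingerprint instead of raw ASN.1
          - tls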
The option sets, in bytes, the value at which segment data is passed to
the app layer API directly. Data sizes equal to or higher than the
value set are passed on directly.
Default is 128.
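A sketch of where such an option would sit in suricata.yaml; the key name
zero-copy-size under stream.reassembly is an assumption based on the
description above, not a confirmed name:

  stream:
    reassembly:
      # segments of this size or larger are handed to the app layer
      # API directly (assumed key name)
      zero-copy-size: 128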
- Added the suricata.yaml configurations and updated the comments
- Renamed the field in the configuration structure to something generic
- Added two new constants and the warning codes
- Created app-layer-htp-xff.c and app-layer-htp-xff.h
- Added entries in the Makefile.am
- Added the necessary configuration options to EVE alert section (see the XFF sketch after this list)
- Updated Unified2 XFF configuration comments and removed unnecessary whitespace
- Created a generic function to parse the configuration
- Released the flow locks sooner and removed debug logging
- Added XFF support to EVE alert output
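The XFF block referenced above could look like the following sketch under
the EVE alert output; the mode and deployment values mirror the comments
added to suricata.yaml, so treat the exact key names as illustrative:

  outputs:
    - eve-log:
        enabled: yes
        types:
          - alert:
              xff:
                enabled: yes
                # "extra-data" adds the address as a separate field,
                # "overwrite" replaces the reported source/destination
                mode: extra-data
                # reverse: last IP in the header, forward: first IP
                deployment: reverse
                header: X-Forwarded-For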
Add a new default value for the 'threads:' setting in af-packet: "auto".
This will create as many capture threads as there are cores.
Set the default runmode of af-packet to workers.
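Put together, a minimal af-packet setup matching these defaults could look
like the following sketch:

  runmode: workers
  af-packet:
    - interface: eth0
      # one capture thread per core
      threads: auto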
As the stats API calls the loggers at a global interval, that
interval should be configured globally.
  # global stats configuration
  stats:
    enabled: yes
    # The interval field (in seconds) controls at what interval
    # the loggers are invoked.
    interval: 8
If this config isn't found, the old config will be supported.
Decodes Modbus request and response messages, and extracts the
MODBUS Application Protocol header and the function code.
In case of a read/write function, extracts the message contents
(read/write address, quantity, count, data to write).
Links request and response messages in a transaction according to the
Transaction Identifier (transaction management based on the DNS source code).
MODBUS Messaging on TCP/IP Implementation Guide V1.0b
(http://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf)
MODBUS Application Protocol Specification V1.1b3
(http://www.modbus.org/docs/Modbus_Application_Protocol_V1_1b3.pdf)
Based on DNS source code.
Signed-off-by: David DIALLO <diallo@et.esia.fr>
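A sketch of how the new parser can be enabled in the app-layer section of
suricata.yaml; port 502 is the standard Modbus/TCP port, and the key
layout follows the usual app-layer protocol entries:

  app-layer:
    protocols:
      modbus:
        enabled: yes
        detection-ports:
          dp: 502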
Add an option (disabled by default) to honor pass rules. This means that
when a pass rule matches in a flow, its packets are no longer stored
by the pcap-log module.
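A sketch of the pcap-log section with the new option turned on; the key is
assumed to be named honor-pass-rules, matching the description above:

  outputs:
    - pcap-log:
        enabled: yes
        filename: log.pcap
        # once a pass rule matches a flow, stop writing its packets
        honor-pass-rules: yes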
Use new management API to run the flow recycler.
Make number of threads configurable:
  flow:
    memcap: 64mb
    hash-size: 65536
    prealloc: 10000
    emergency-recovery: 30
    managers: 2
    recyclers: 2
This sets up 2 flow recyclers.
Use new management API to run the flow manager.
Support multiple flow managers, where each of them works with its
own part of the flow hash.
Make number of threads configurable:
  flow:
    memcap: 64mb
    hash-size: 65536
    prealloc: 10000
    emergency-recovery: 30
    managers: 2
This sets up 2 flow managers.
Handle misc tasks only in instance 1: defrag hash timeout
handling, host hash timeout handling and flow spare queue updating
are done only from the first instance.
Now that we use 'filetype' instead of 'type', we should also
use 'regular' instead of 'file'.
Added a fallback to make sure we stay compatible with old configs.
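For example, an output section written with the new keywords, which the
fallback still accepts in the old 'type: file' form:

  outputs:
    - fast:
        enabled: yes
        # was: type: file
        filetype: regular
        filename: fast.log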