Update pktacq loop to process flow timeouts in a running engine.
Add a new step to the shutdown phase of packet acquisition loop
threads (pktacqloop).
The shutdown code lets the pktacqloop break out of its packet
acquisition loop. The thread then enters a flow timeout loop, where
it processes packets from its tv->stream_pq queue until the queue is
empty _and_ the KILL flag is set.
Make sure receive threads are done before moving on to flow hash
cleanup (recycle all). Without this the flow recycler could start
its unconditional hash clean up while detect threads are still
running on the flows.
Update unix socket to match live modes.
Add function TmThreadsInjectPacketById that is to be used to inject flow
timeout packets into the thread's stream_pq queue.
TmThreadsInjectPacketById will also wake up listening threads if
applicable.
Packets are passed together in a NULL-terminated array
to reduce locking overhead.
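A sketch of what a caller could look like, assuming the prototype takes the
array and the target thread id (the NULL-terminated array layout is the
documented part; everything else here is illustrative):
  /* hand all timeout pseudo packets for thread `id` over in one call,
   * so locking and wakeup happen once per batch, not per packet */
  Packet *batch[n + 1];
  for (int i = 0; i < n; i++)
      batch[i] = timeout_pkts[i];
  batch[n] = NULL;                        /* terminator */
  TmThreadsInjectPacketById(batch, id);   /* wakes the thread if needed */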
Create thread registration and unregistration API for assigning unique
thread ids.
ThreadVars are static even if a thread restarts, so we can do the
registration before the threads start.
A thread is unregistered when the ThreadVars are freed.
- Added the suricata.yaml configurations and updated the comments
- Renamed the field in the configuration structure to something generic
- Added two new constants and the warning codes
- Created app-layer-htp-xff.c and app-layer-htp-xff.h
- Added entries in the Makefile.am
- Added the necessary configuration options to EVE alert section
- Updated Unified2 XFF configuration comments and removed unnecessary whitespace
- Created a generic function to parse the configuration
- Release the flow locks sooner and remove debug logging
- Added XFF support to EVE alert output
Incorrectly reallocing the goto table after it was freed by calling
SCACTileReallocState() when really only the output table needed to be
realloc'd.
This was causing a large goto table to be allocated and never used or
freed.
Free some memory at exit that was not getting freed.
Change pid_pat_list to store a copy of the case-strings in the same
block of memory as the array of pointers.
Due to a logic error in AppLayerProtoDetectGetProtoByName, invalid
protocols would not be detected as such. Instead of ALPROTO_UNKNOWN,
ALPROTO_MAX was returned.
Bug #1329
This patch fixes the following errors:
[src/unix-manager.c:306]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:313]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:323]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:334]: (error) Memory pointed to by 'client' is freed twice.
The unix manager kept processing the packet after closing the socket
if the message was too long.
MLD messages should have a hop limit of 1 only. All others are invalid.
Written during the MLD talk by Enno Rey, Antonios Atlasis & Jayson Salazar
at Deepsec 2014.
Have -T / --init-errors-fatal process all rules so that it's easier
to debug problems in a ruleset. Otherwise it can be a lengthy
fix-test-error cycle if multiple rules have issues.
Convert empty rulefile error into a warning.
Bug #977
If we manage to read the number of RSS queues from an interface,
the optimal number of capture threads is equal to the minimum of
this number and the number of cores on the system.
This patch implements this logic thanks to the newly introduced
function GetIfaceRSSQueuesNum.
Add a new default value for the 'threads:' setting in af-packet: "auto".
This will create as many capture threads as there are cores.
Default runmode of af-packet to workers.
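A sketch of the thread-count logic described above (GetIfaceRSSQueuesNum is
the new function named in the text; the core-count helper is an assumption):
  int rss_queues = GetIfaceRSSQueuesNum(iface);
  int cores = UtilCpuGetNumProcessorsOnline();
  /* 'auto': as many threads as cores, capped by the RSS queue count */
  int threads = (rss_queues > 0 && rss_queues < cores) ? rss_queues : cores;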
For some of the buffer users it's hard to predict how big the data
will be. In the stats.log case this depends on chosen runmode and
number of threads.
To deal with this case a 'MemBufferExpand' call is added. This realloc's
the buffer.
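Intended use is along these lines (a sketch; the exact MemBufferExpand
prototype and the size/offset macros are assumptions):
  if (MEMBUFFER_OFFSET(buffer) + len > MEMBUFFER_SIZE(buffer)) {
      /* grow the buffer by at least `len` bytes through realloc */
      if (MemBufferExpand(&buffer, len) < 0)
          return;  /* realloc failed, keep the old buffer */
  }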
Register with type 'stats':
function init (args)
local needs = {}
needs["type"] = "stats"
return needs
end
The stats are passed as an array of tables:
{ 1, { name=<name>, tmname=<tm_name>, value=<value>, pvalue=<pvalue>}}
{ 2, { name=<name>, tmname=<tm_name>, value=<value>, pvalue=<pvalue>}}
etc
Name is the counter name (e.g. decoder.invalid), tm_name is the thread name
(e.g. AFPacketeth05), value is the current value, and pvalue is the value
from the last time the script was invoked.
As the stats api calls the loggers at a global interval, the global
interval should be configured globally.
# global stats configuration
stats:
  enabled: yes
  # The interval field (in seconds) controls at what interval
  # the loggers are invoked.
  interval: 8
If this config isn't found, the old config will be supported.
Convert regular 'stats.log' output to this new API.
In addition to the current stats value, also give the last value. This
makes it easy to display the difference.
The SCStreamingBuffer call now also returns two booleans:
data, data_open, data_close = SCStreamingBuffer()
The first indicates this is the first data of this type for this
TCP session or HTTP transaction.
The second indicates this is the last data.
Ticket #1317.
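Illustrative use of the two new booleans:
function log(args)
    data, data_open, data_close = SCStreamingBuffer()
    if data_open then
        print("first chunk for this session/transaction")
    end
    if data_close then
        print("last chunk")
    end
end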
sfd->target.value was always being set, even if the targettype was
not FLOWINT_TARGET_VAL. This would cause the tvar to be overwritten
with garbage data.
Add the modbus.function and subfunction keywords for public function
matching in rules (Modbus layer).
Matching is based on function code and, if necessary, sub-function code,
or based on category (assigned, unassigned, public, user or reserved);
negation is permitted.
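Illustrative rules for the syntax described above:
alert modbus any any -> any any (msg:"Modbus diagnostic subfunction"; modbus: function 8, subfunction 4; sid:1;)
alert modbus any any -> any any (msg:"Modbus reserved function"; modbus: function reserved; sid:2;)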
Add the modbus.access keyword for read/write Modbus function matching in
rules (Modbus layer).
Matching is based on access type (read or write),
and/or function type (discretes, coils, input or holding),
and, if necessary, the read or write address,
and, if necessary, the value to write.
For address and value matching, "<", ">" and "<>" are permitted.
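An illustrative rule combining these options:
alert modbus any any -> any any (msg:"Modbus write holding register"; modbus: access write holding, address <500, value >200; sid:3;)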
Based on TLS source code and file size source code (address and value matching).
Signed-off-by: David DIALLO <diallo@et.esia.fr>
Decode Modbus request and response messages, and extract the
MODBUS Application Protocol header and the function code.
In case of read/write functions, extract message contents
(read/write address, quantity, count, data to write).
Link request and response messages in a transaction according to the
Transaction Identifier (transaction management based on the DNS source code).
MODBUS Messaging on TCP/IP Implementation Guide V1.0b
(http://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf)
MODBUS Application Protocol Specification V1.1b3
(http://www.modbus.org/docs/Modbus_Application_Protocol_V1_1b3.pdf)
Based on DNS source code.
Signed-off-by: David DIALLO <diallo@et.esia.fr>
A tx is considered complete after the data command completed. However,
this would lead to RSET and QUIT commands setting up a new tx.
This patch simply adds a check that refuses to setup a new tx when these
commands are encountered after the data portion is complete.
SigMatch would be added to list, then the alproto check failed, leading
to freeing of sm. But as it was still in the list, the list now contained
a dangling pointer.
When multiple email addresses were in the 'to' field, sometimes
they would be logged as "\r\n \"Name\" <email>".
The \r\n was added by GetFullValue in the mime decoder, for unknown
reasons. Disabling this seems to have no drawbacks.
Turn all buffers into uint8_t (from char) and no longer use the
string functions like strncpy/strncasecmp on them.
Store url and field names as lowercase, and also search/compare
them as lowercase. This allows us to use SCMemcmp.
The global variable suricata_ctl_flags needs to be volatile, otherwise
the compiler might not emit a read of the variable every time, because
it doesn't know other threads might write to it.
This was causing Suricata to not exit under some conditions.
For these 2 cases:
1. Missing SYN:
-> syn <= missing
<- syn/ack
-> ack
-> data
2. Missing SYN and 3whs ACK:
-> syn <= missing
<- syn/ack
-> ack <= missing
-> data
Fix session pickup. The next_win settings weren't correctly set, so that
packets were rejected.
Bug 1190.
If 3whs SYN/ACK and ACK are missing we can still pick up the session if
in async-oneside mode.
-> syn
<- syn/ack <= missing
-> ack <= missing
-> data
Bug 1190.
The entire body of these functions is protected by ifdef PROFILING.
If the functions are inlined, then this check removes the need for the
function entirely.
Previously, the empty function was still called, even when not built
for profiling. The functions showed as being 0.25% of total CPU time
without being built for profiling.
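The pattern looks roughly like this (names illustrative); inlining lets the
compiler drop the call completely in non-profiling builds:
  static inline void ProfilingAddTicks(uint64_t *sum, uint64_t ticks)
  {
  #ifdef PROFILING
      *sum += ticks;
  #endif
      /* without PROFILING the body is empty, so the inlined call
       * disappears entirely instead of costing a function call */
  }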
If vlan is disabled the cluster_flow mode will still take VLAN tags
into account due to using pf_ring's 6-tuple mode.
So this patch forces pf_ring's 5-tuple mode.
Bug #1292
Implements a new API to expand the IP reputation
to netblocks with CIDR notation.
A new object 'srepCIDRTree' is kept in the DetectionEngineCtx,
which contains two trees (one for IPv4 and one for IPv6)
where the reputation values are stored.
ACK packets completing a valid FIN shutdown could be flagged as
'bad window update' if they would shrink the window.
This patch detects this case before doing the bad window update
check.
If the 3whs ACK and some data after this is lost, we would get stuck
in the 'SYN_RECV' state, where from there each packet might be
considered invalid.
This patch improves the handling of this case.
The keyword would not allow matching on "OpenSSH_5.5p1 Debian-6+squeeze5"
as the + and space characters were not allowed.
This patch adds support for them.
Stream pseudo packets are taken from the packet pool, which can be empty.
In this case a pseudo packet will not be created and processed.
This patch adds a counter "tcp.pseudo_failed" to track this.
Inform the cuda module if the engine
doesn't need the gpu results and to release the packet for the next run.
Previously the inspection engine wouldn't inform the cuda module, if it
didn't need the results. As a consequence, when the packet is next taken
for re-use, and if the packet is still being processed by the cuda module,
the engine would wait till the cuda module frees the packet.
This commits updates this functionality to inform the cuda module to
release the packet for the afore-mentioned case.
Add option (disabled by default) to honor pass rules. This means that
when a pass rule matches in a flow, it's packets are no longer stored
by the pcap-log module.
SigMatchGetLastSMFromLists() finds the sm with the largest
index among all of the values returned from SigMatchGetLastSM() on
the set of (list and type) tuples passed as arguments.
The function was creating an array of the types, then creating an array
of the results of SigMatchGetLastSM(), sorting that list completely, then
only returning the first value from the list.
The new code gets one set of arguments at a time from the variable
arguments, calls SigMatchGetLastSM(), and if the returned sm has a larger
index, keeps that as the last sm.
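A sketch of the new single-pass loop (argument handling and field names are
illustrative, not the literal code):
  SigMatch *last = NULL;
  va_list ap;
  va_start(ap, s);
  for (int list = va_arg(ap, int); list != -1; list = va_arg(ap, int)) {
      int type = va_arg(ap, int);
      SigMatch *sm = SigMatchGetLastSM(s->sm_lists_tail[list], type);
      /* keep only the match with the largest index seen so far */
      if (sm != NULL && (last == NULL || sm->idx > last->idx))
          last = sm;
  }
  va_end(ap);
  return last;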
Allow a script to set the 'stream' buffer type. This will add the
script to the PMATCH list.
Example script:
alert tcp any any -> any any (content:"html"; lua:stream.lua; sid:1;)
function init (args)
local needs = {}
needs["stream"] = tostring(true)
return needs
end
-- return match via table
function match(args)
local result = {}
b = tostring(args["stream"])
o = tostring(args["offset"])
bo = string.sub(b, o);
print (bo)
return result
end
return 0
To prevent accidental writes into the original packet buffer, use
const pointers for the extension header pointers used by IPv6. This
will cause compiler warnings in case of writes.
A logic error in the IPv6 Routing header parsing caused accidental
updating of the original packet buffer. The calculated extension
header length was set to the length field of the routing header,
causing it to be wrong.
This has 2 consequences:
1. defrag failure. As the now modified payload was used in defrag,
the decoding of the reassembled packet now contained a broken length
field for the routing header. This would lead to decoding failure.
The potential here is evasion, although it would trigger:
[1:2200014:1] SURICATA IPv6 truncated extension header
2. in IPS mode, especially the AF_PACKET mode, the modified and now
broken packet would be transmitted on the wire. It's likely that
end hosts and/or routers would reject this packet.
NFQ based IPS mode would be less affected, as it 'verdicts' based on
the packet handle. In case of replacing the packet (replace keyword
or stream normalization) it could broadcast the bad packet.
Additionally, the RH Type 0 address parsing was also broken. It too
would modify the original packet. As the result of this code was not
used anywhere else in the engine, this code is now disabled.
Reported-By: Rafael Schaefer <rschaefer@ernw.de>
AF_PACKET was not setting the engine mode to IPS when only some
interfaces were peered and using IPS mode. This is due to the
fact that it is possible to peer 2 interfaces and run an IPS on
them while a third one runs in normal IDS mode.
In fact this choice is the bad one, as the unwanted side effect is
that there is no drop log and that stream inline is not used.
To fix that, this patch puts suricata in IPS mode as soon as
there are two interfaces in IPS mode. And it displays an error
message to warn the user that the accuracy of detection on IDS-only
interfaces will be low.
When using AMATCH, continue detection would fail if the tx part
had already run. This led to start detection rerunning, causing
multiple alerts for the same issue.
As there is no inspection engine for request_line, the sigmatch was
added to the AMATCH list. However, no AppLayerMatch function for
lua scripts was defined.
This patch defines an AppLayerMatch function.
Bug #1273.
I found three somewhat serious IPv6 address bugs within the Suricata 2.0.x source code. Two are in the source module "detect-engine-address.c", and the third is in "util-radix-tree.c".
The first bug occurs within the function DetectAddressParse2(). When parsing an address string and a negated block is encountered (such as when parsing !$HOME_NET, for example), any corresponding IPv6 addresses were not getting added to the Group Heads in the DetectAddressList. Only IPv4 addresses were being added.
I discovered another bug related to IPv6 address ranges in the Signature Match Address Array comparison code for IPv6 addresses. The function DetectAddressMatchIPv6() walks a signature's source or destination match address list comparing each to the current packet's corresponding address value. The match address list consists of value pairs representing a lower and upper IP address range. If the packet's address is within that range (including equal to either the lower or upper bound), then a signature match flag is returned.
The original test of each signature match address to the packet was performed using a set of four compounded AND comparisons looking at each of the four 32-bit blocks that comprise an IPv6 address. The problem with the old comparison is that if ANY of the four 32-bit blocks failed the test, then a "no-match" was returned. This is incorrect. If one or more of the more significant 32-bit blocks met the condition, then it is a match no matter if some of the less significant 32-bit blocks did not meet the condition. Consider this example where Packet represents the packet address being checked, and Target represents the upper bound of a match address pair. We are testing if Packet is less than Target.
Packet -- 2001:0470 : 1f07:00e2 : 1988:01f1 : d468:27ab
Target -- 2001:0470 : 1f07:00e2 : a48c:2e52 : d121:101e
In this example the Packet's address is less than the target and it should give a match. However, the old code would compare each 32-bit block (shown spaced out above for clarity) and logically AND the result with the next least significant block comparison. If any of the four blocks failed the comparison, that kicked out the whole address. The flaw is illustrated above. The first two blocks are 2001:0470 and 1f07:00e2 and yield TRUE; the next less significant block is 1988:01f1 and a48c:2e52, and also yields TRUE (that is, Packet is less than Target); but the last block compare is FALSE (d468:27ab is not less than d121:101e). That last block is the least significant block, though, so its FALSE determination should not invalidate a TRUE from any of the more significant blocks. However, in the previous code using the compound logical AND block, that last least significant block would invalidate the tests done with the more significant blocks.
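A correct less-than test walks the blocks from most to least significant and
stops at the first block that differs; a minimal sketch (byte-order handling
is an assumption):
  /* return 1 if IPv6 address a < b; both are four 32-bit blocks in
   * network byte order, most significant block first */
  static int IPv6AddressLt(const uint32_t *a, const uint32_t *b)
  {
      int i;
      for (i = 0; i < 4; i++) {
          if (ntohl(a[i]) < ntohl(b[i]))
              return 1;   /* decided by a more significant block */
          if (ntohl(a[i]) > ntohl(b[i]))
              return 0;
      }
      return 0;           /* equal */
  }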
The other bug I found for IPv6 occurs when trying to parse and insert an IPv6 address into a Radix Tree using the function SCRadixAddKeyIPV6String(). The test for min and max values for an IPv6 CIDR mask incorrectly tests the upper limit as 32 when it should be 128 for an IPv6 address. I think this perhaps is an old copy-paste error if the IPv6 version of this function was initially copied from the corresponding IPv4 version directly above it in the code. Without this patch, the function will return null when you attempt to add an IPv6 network whose CIDR mask is larger than 32 (for example, the popular /64 mask will cause the function to return the NULL error condition).
(amended by Victor Julien)
When using the inspection engines, track the current tx_id in the
thread storage the detect thread uses. As 0 is a valid tx_id, add
a simple bool that indicates if the tx_id field is set.
Allow use of the Flow Logging API through Lua scripts.
Minimal script:
function init (args)
local needs = {}
needs["type"] = "flow"
return needs
end
function setup (args)
end
function log(args)
startts = SCFlowTimeString()
ipver, srcip, dstip, proto, sp, dp = SCFlowTuple()
print ("Flow IPv" .. ipver .. " src " .. srcip .. " dst " .. dstip ..
" proto " .. proto .. " sp " .. sp .. " dp " .. dp)
end
function deinit (args)
end
Add SCStreamingBuffer lua function to retrieve the data passed
to the script per streaming API invocation.
Example:
function log(args)
data = SCStreamingBuffer()
hex_dump(data)
end
Make normalized body data available to the script through
HttpGetRequestBody and HttpGetResponseBody.
There are no guarantees that all of the body will be available.
Example:
function log(args)
a, o, e = HttpGetResponseBody();
--print("offset " .. o .. " end " .. e)
for n, v in ipairs(a) do
print(v)
end
end
Get the host from libhtp's tx->request_hostname, which can either be
the host portion of the url or the host portion of the Host header.
Example:
http_host = HttpGetRequestHost()
if http_host == nil then
http_host = "<hostname unknown>"
end
SCFlowAppLayerProto: get alproto as string from the flow. If alproto
is not (yet) known, it returns "unknown".
function log(args)
alproto = SCFlowAppLayerProto()
if alproto ~= nil then
print (alproto)
end
end
A new callback to give access to thread id, name and group name:
SCThreadInfo. It gives: tid (integer), tname (string), tgroup (string)
function log(args)
tid, tname, tgroup = SCThreadInfo()
end
SCFlowTimeString: returns string form of start time of a flow
Example:
function log(args)
startts = SCFlowTimeString()
ts = SCPacketTimeString()
if ts == startts then
print("new flow")
end
end
Add SCPacketTimeString to get the packets time string in the format:
11/24/2009-18:57:25.179869
Example use:
function log(args)
ts = SCPacketTimeString()
end
SCRuleIds(): returns sid, rev, gid:
function log(args)
sid, rev, gid = SCRuleIds()
end
SCRuleMsg(): returns msg
function log(args)
msg = SCRuleMsg()
end
SCRuleClass(): returns class msg and prio:
function log(args)
class, prio = SCRuleClass()
if class == nil then
class = "unknown"
end
end
Add flow store and retrieval wrappers for accessing the flow through
Lua's lightuserdata method.
The flow functions store/retrieve a lock hint as well.
If the script needing a packet doesn't specify a filter, it will
be run against all packets. This patch adds the support for this
mode. It is a packet logger with a condition function that always
returns true.
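The init function for this mode simply leaves out the "filter" key
(illustrative):
function init (args)
    local needs = {}
    needs["type"] = "packet"
    return needs
end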
Add a lua callback for getting Suricata's log path, so that lua scripts
can easily get the logging directory Suricata uses.
Update the Setup logic to register callbacks before the scripts 'setup'
is called.
Example:
name = "fast_lua.log"
function setup (args)
filename = SCLogPath() .. "/" .. name
file = assert(io.open(filename, "a"))
end
Add file logger support. The script uses:
function init (args)
local needs = {}
needs['type'] = 'file'
return needs
end
The type is set to file to make it a file logger.
Add utility functions for placing things on the stack for use
by the scripts. Functions for numbers, strings and byte arrays.
Add callback for returning IP header info: ip version, src ip,
dst ip, proto, sp, dp (or type and code for icmp and icmpv6):
SCPacketTuple
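Illustrative use, by analogy with SCFlowTuple:
function log(args)
    ipver, srcip, dstip, proto, sp, dp = SCPacketTuple()
    print("IPv" .. ipver .. " " .. srcip .. ":" .. sp ..
          " -> " .. dstip .. ":" .. dp)
end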
Through 'needs' the script init function can indicate it wants to
see packets and select a condition function. Currently only alerts
is an option:
function init (args)
local needs = {}
needs["type"] = "packet"
needs["filter"] = "alerts"
return needs
end
Add an argument to the registration to indicate which iterator
needs to be used: Stream or HttpBody
Add HttpBody Iterator, calling the logger(s) for each Http body chunk.
StreamIterator implementation for iterating over ACKed segments.
Flag each segment as logged when the log function has been called for it.
Set a 'OPEN' flag for the first segment in both directions.
Set a 'CLOSE' flag when the stream ends. If the last segment was already
logged, an empty CLOSE call is performed with NULL data.
This patch adds a new Log API for streaming data such as TCP reassembled
data and HTTP body data. It could also replace the Filedata API.
Each time a new chunk of data is available, the callback will be called.
- Removed unnecessary assignment of the data field
- Removed else condition (same function called for IPv4 and IPV6)
- Fixed constants to be a power of two (used in bitwise operations)
The field ext_pkt was cleaned before calling the release function.
The result was that IPS modes such as AF_PACKET's were not
working anymore, because they were not able to send the data that
was initially pointed to by ext_pkt.
This patch moves the ext_pkt cleaning to the cleaning macro. This
ensures that the cleaning is done for allocated and pool packets.
Call PACKET_RELEASE_REFS from PacketPoolGetPacket() so that
we only access the large packet structure just before actually
using it. Should give better cache behaviour.
The Source Routing Header had routing defined as a char* for a field
of variable size. Since that field was not being used in the code, I
removed the pointer and added a comment.
Structures that are used to cast packet data into fields need to be packed
so that the compiler doesn't add any padding to these fields. This also helps
Tile-Gx to avoid unaligned loads because the compiler will insert code to
handle the possible unaligned load.
Reported in bug 1238 is an issue where stream reassembly can be
disrupted.
A packet that was in-window, but otherwise unexpected, set the
window to a really low value, causing the next *expected* packet
to be considered out of window. This led to missing data in the
stream reassembly.
The packet was unexpected in various ways:
- it would ack unseen traffic
- its sequence number would not match the expected next_seq
- it set a really low window, while not being a proper window update
Detection, however, is greatly hampered by the fact that in case of
packet loss, quite similar packets come in. Alerting in this case
is unwanted, and so is ignoring/skipping packets.
The logic used in this patch is as follows. If:
- the packet is not a window update AND
- packet seq > next_seq AND
- packet ack > next_seq (packet acks unseen data) AND
- packet shrinks window more than its own data size
THEN set event and skip the packet in the stream engine.
So in case of a segment with no data, any window shrinking is rejected.
Bug #1238.
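In pseudo-C the check amounts to the following (names illustrative; SEQ_GT
stands for a wrap-safe sequence number compare):
  if (!pkt_is_window_update &&
      SEQ_GT(pkt_seq, stream->next_seq) &&
      SEQ_GT(pkt_ack, opposing_stream->next_seq) &&  /* acks unseen data */
      window_shrink > pkt_payload_len) {
      /* set the event and skip the packet in the stream engine */
  }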
Until now the timeout handling in defrag was done using a single
uint32_t that tracked seconds. This led to corner cases, where
defrag trackers could be timed out a little too early.
If a next header / protocol is encountered that we can't handle (yet),
set an event. The rule is disabled by default.
decode-event:ipv6.unknown_next_header;
Fix or rather implement handling of unfragmentable exthdrs in ipv6.
The exthdr(s) appearing before the frag header were copied into the
reassembled packet correctly, however the stripping of the frag header
did not work correctly.
Example:
The common case is a frag header directly after the ipv6 header:
[ipv6 header]->[frag header]->[icmpv6 (part1)]
[ipv6 header]->[frag header]->[icmpv6 (part2)]
This would result in:
[ipv6 header]->[icmpv6]
The ipv6 headers 'next header' setting would be updated to point to
whatever the frag header was pointing to.
This would also happen in this case:
[ipv6 header]->[hop header]->[frag header]->[icmpv6 (part1)]
[ipv6 header]->[hop header]->[frag header]->[icmpv6 (part2)]
The result would be:
[ipv6 header]->[hop header]->[icmpv6]
However, here too the ipv6 header would have been updated to point
to what the frag header pointed at. So it would consider the hop header
as if it was an ICMPv6 header, or whatever the frag header pointed at.
The result is that packets would not be correctly parsed, and thus this
issue can lead to evasion.
This patch implements handling of the unfragmentable part. In the first
segment that is stored in the list for reassembly, this patch detects
unfragmentable headers and updates it to have the last unfragmentable
header point to the layer after the frag header.
Also, the ipv6 headers 'next hdr' is only updated if no unfragmentable
headers are used. If they are used, the original value is correct.
Reported-By: Rafael Schaefer <rschaefer@ernw.de>
Bug #1244.
Some tests are already registered via the function
AppLayerParserRegisterProtocolUnittests. So we don't need to
register them during runmode initialization.
Exponentially increase the memory allocated for new states when adding
new states, then at the end resize down to the actual final size so that
no space is wasted.
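The usual grow-then-trim pattern, sketched (field names illustrative,
error handling elided):
  if (ctx->state_count == ctx->allocated_count) {
      ctx->allocated_count *= 2;           /* exponential growth */
      ctx->states = SCRealloc(ctx->states,
              ctx->allocated_count * sizeof(*ctx->states));
  }
  /* ... once all states have been added ... */
  ctx->states = SCRealloc(ctx->states,
          ctx->state_count * sizeof(*ctx->states));  /* trim to final size */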
On non-TLS systems, check each time the Thread Local Storage
is requested and, if it has not been initialized for this thread,
initialize it.
This prevents the worker threads in autofp run mode from being left
uninitialized.
Use new management API to run the flow recycler.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
  recyclers: 2
This sets up 2 flow recyclers.
Use new management API to run the flow manager.
Support multiple flow managers, where each of them works with its
own part of the flow hash.
Make number of threads configurable:
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2
This sets up 2 flow managers.
Handle misc tasks only in instance 1: defrag hash timeout
handling, host hash timeout handling and flow spare queue updating
are done only from the first instance.
Currently management threads do their own thread setup and handling. This
patch introduces a new way of handling management threads.
Functionality that needs to run as a management thread can now register
itself as a regular 'thread module' (TmModule), where the 'Management'
callback is registered.
The flow end flags field is filled by the flow manager or the flow
hash (in case of forced timeout of a flow) to record the timeout
conditions in the flow:
- emergency mode
- state
- reason (timed out or forced)
Add logging to the flow logger.
Thread was killed by the generic TmThreadKillThreads instead of
the FlowKillFlowRecyclerThread. The latter wakes the thread up, so
that shutdown is quite a bit faster.
For netflow logging track TCP flags per stream direction. As the struct
had no more space left without expanding it, the flags and wscale
fields are now compressed.
Most flows are marked for clean up by the flow manager, which then
passes them to the recycler. The recycler logs and cleans up. However,
under resource stress conditions, the packet threads can recycle an
existing flow directly. So here the recycler has no role to play, as
the flow is immediately reused.
For this reason, the packet threads need to be able to invoke the
flow logger directly.
The flow logging thread ctx will be stored in the DecodeThreadVars
structure. Therefore, this patch makes the DecodeThreadVars an argument
to FlowHandlePacket.
Now that we use 'filetype' instead of 'type', we should also
use 'regular' instead of 'file'.
Added a fallback to make sure we stay compatible with old configs.
At -O1+ in both Gcc and Clang, PacketPoolWait would optimize the
wait loop in the wrong way. Adding a compiler barrier to prevent
this optimization issue.
Fix pcap packet acquisition methods passing 0 to pcap_dispatch.
Previously they passed the packet pool size, but the packet_q_len
variable was now hardcoded at 0.
This patch sets packet_q_len to 64. If packet pool is empty, we fall
back to direct alloc. As the pcap_dispatch function is only called
when packet pool is not empty, we alloc at most 63 packets.
Better handle the autofp case where one thread allocates the majority
of the packets and other threads free those packets.
Add a list of locally pending packets. The first packet freed goes on the
pending list, then subsequent freed packets for the same Packet Pool are
added to this list until it hits a fixed number of packets, then the
entire list of packets is pushed onto the pool's return stack. If a freed
packet is not for the pending pool, it is freed immediately to its pool's
return stack, as before.
For the autofp case, since there is only one Packet Pool doing all the
allocation, every other thread will keep a list of pending packets for
that pool.
For the worker run mode, most packets are allocated and freed locally. For
the case where packets are being returned to a remote pool, a pending list
will be kept for one of those other threads, all others are returned as before.
Which remote pool gets a pending list is changed each time the
pending list is returned. Since the pending pool is cleared when it is
returned, the next packet to be freed chooses the new pending pool.
Using a stack for free Packet storage causes recently freed Packets to be
reused quickly, while there is more likelihood of the data still being in
cache.
The new structure has a per-thread private stack for allocating Packets
which does not need any locking. Since Packets can be freed by any thread,
there is a second stack (return stack) for freeing packets by other threads.
The return stack is protected by a mutex. Packets are moved from the return
stack to the private stack when the private stack is empty.
Returning packets back to their "home" stack keeps the stacks from getting out
of balance.
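A sketch of the data layout this describes (field names illustrative):
  typedef struct PktPool_ {
      /* private stack: only the owning thread touches it, no locking */
      Packet *head;
      /* return stack: other threads push freed packets here; when the
       * private stack runs empty, the owner takes the mutex once and
       * moves the whole return stack over */
      SCMutex return_mutex;
      Packet *return_head;
  } PktPool;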
The PacketPoolInit() function is now called by each thread that will be
allocating packets. Each thread allocates max_pending_packets, which is a
change from before, where that was the total number of packets across all
threads.
Whenever DETECT_CONTENT_NOCASE is set for a BoyerMoore matcher, the
function BoyerMooreCtxToNocase() must be called. This call was missing
in AppLayerProtoDetectPMRegisterPattern().
Also created BoyerMooreNocaseCtxInit() that calls BoyerMooreCtxToNocase()
to make some code cleaner and safer.
Rather than converting the search string to lower case while searching,
convert it to lowercase during initialization.
Changes the Boyer Moore search API to take a BmCtx.
Change the API for BoyerMoore to take a BmCtx rather than the two parts
that are stored in the context, which is how it is mostly used. This
enforces always calling BoyerMooreCtxToNocase() to convert to no-case.
Use CtxInit and CtxDeinit functions to create and destroy the context,
even in unit tests.
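After this change usage follows the ctx through the whole lifecycle; a
sketch (the prototypes are assumptions based on the description):
  BmCtx *ctx = BoyerMooreNocaseCtxInit(needle, needle_len); /* lowercases once */
  uint8_t *hit = BoyerMooreNocase(needle, needle_len, haystack, haystack_len, ctx);
  BoyerMooreCtxDeInit(ctx);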
EVE logging is a really good substitute for pcapinfo. Suriwire now
supports EVE output, so pcapinfo is no longer necessary in Suricata.
When using multi mode, the filename can use a few variables:
%n -- thread number, where the 1st thread has 1, and it increments
%i -- thread id (system thread id, similar to pid)
%t -- timestamp, where seconds or seconds+usecs depends on
the ts-format option.
Example:
filename: pcaps/%n/pcap.%t
This will translate to: pcaps/3/pcap.1256792217 for the 3rd thread.
Note that while it's possible to use directories, they won't be
created. So make sure they exist.
This patch adds a field 'is_private' to PcapLogData, so that the
using thread knows if it needs to lock access to it or not.
Reshuffle PcapLogData to roughly match order of access.
This patch implements a new mode in pcap-logging: 'multi'. It stores
a pcap file per logger thread, instead of just one file globally.
This removes lock contention, so it brings a lot more performance.
The trade off is that there are now multiple files where there would
be one before.
Files have a thread id added to their name: base_name.tid.ts, so
we have something like: "log.pcap.20057.1254500095".
PcapLog uses the global data structure PcapLogData as thread data
as well. This is possible because all operations on it are locked.
This patch introduces PcapLogThreadData. It contains a pointer to
the PcapLogData. Currently to the global instance, but in the future
it may hold a thread-local instance of PcapLogData.
Add profiling to a logfile. Default is $log_dir/pcaplog_stats.log
The counters for open, close, rotate, write and handles are written
to it, as well as:
- total bytes written
- cost per MiB
- cost per GiB
Option is disabled by default.
Tracks: file open, file close, file rotate (which includes open and
close), file write and open handles.
Open handles measures the cost of opening the libpcap handles.
When the config is missing, DefragPolicyGetHostTimeout will default
to returning -1. This will effectively set no timeout at all, leading
to defrag trackers being freed too early.
This patch is fixing an issue in defragmentation code. The
insertion of a fragment in the list of fragments is done with
respect to the offset of the fragment. But the code was using
the original offset of the fragment and not the one of the
new reconstructed fragment (which can be different in the
case of overlapping segment where the left part is trimmed).
This case could lead to some evasion techniques by causing
Suricata to analyse a different payload.
This patch fixes the following issue reported by valgrind:
31 errors in context 1 of 1:
Conditional jump or move depends on uninitialised value(s)
at 0x8AB2F8: UnixSocketPcapFilesCheck (runmode-unix-socket.c:279)
by 0x97725D: UnixCommandBackgroundTasks (unix-manager.c:368)
by 0x97BC52: UnixManagerThread (unix-manager.c:884)
by 0x6155F6D: start_thread (pthread_create.c:311)
by 0x6E3A9CC: clone (clone.S:113)
The running field in PcapCommand was not initialized.
This patch fixes an issue in unix socket handling. It is possible
that a socket disconnects while a command is being analysed; because
the data processing is done in a loop over the clients, this was
leading to an update of the client list during the loop. So we need
to use TAILQ_FOREACH_SAFE instead of TAILQ_FOREACH.
Reported-by: Luigi Sandon <luigi.sandon@gmail.com>
Fix-suggested-by: Luigi Sandon <luigi.sandon@gmail.com>
It is possible to have a non-contiguous CPU set, which was not being
handled correctly on the TILE architecture.
Added a "rank" field in the ThreadVar to store the worker's rank separately
from the cpu for this case.
When applying wildcard thresholds (with sid = 0 and/or gid = 0) it's
wrong to exit on the first signature already having an event filter.
Indeed, doing so results in the threshold not being applied to all
subsequent signatures. Change the code in order to skip signatures with
event filters instead of breaking out of the loop.
If a live reload signal was given before the engine was fully started
up (e.g. pcap file thread waiting for a disk to spin up), a segv could
occur.
This patch only enables live reloads after the threads have been
started up completely.
*** CID 1211009: Bad bit shift operation (BAD_SHIFT)
/src/output-json-http.c: 265 in JsonHttpLogJSON()
259 /* log custom fields if configured */
260 if (http_ctx->fields != 0)
261 {
262 HttpField f;
263 for (f = HTTP_FIELD_ACCEPT; f < HTTP_FIELD_SIZE; f++)
264 {
>>> CID 1211009: Bad bit shift operation (BAD_SHIFT)
>>> In expression "1 << f", left shifting by more than 31 bits has undefined behavior. The shift amount, "f", is as much as 46.
265 if ((http_ctx->fields & (1<<f)) != 0)
266 {
267 /* prevent logging a field twice if extended logging is
268 enabled */
269 if (((http_ctx->flags & LOG_HTTP_EXTENDED) == 0) ||
270 ((http_ctx->flags & LOG_HTTP_EXTENDED) !=
________________________________________________________________________________________________________
*** CID 1211010: Bad bit shift operation (BAD_SHIFT)
/src/output-json-http.c: 492 in OutputHttpLogInitSub()
486 {
487 if ((strcmp(http_fields[f].config_field,
488 field->val) == 0) ||
489 (strcasecmp(http_fields[f].htp_field,
490 field->val) == 0))
491 {
>>> CID 1211010: Bad bit shift operation (BAD_SHIFT)
>>> In expression "1 << f", left shifting by more than 31 bits has undefined behavior. The shift amount, "f", is as much as 46.
492 http_ctx->fields |= (1<<f);
493 break;
494 }
495 }
496 }
497 }
StreamTcpSetDisableRawReassemblyFlag() has the same effect as
AppLayerParserTriggerRawStreamReassembly in that it will force the
raw reassembly to flush out asap. So it is redundant to call both.
Implement StreamTcpSetDisableRawReassemblyFlag() which stops raw
reassembly for _NEW_ segments in a stream direction.
It is used only by TLS/SSL now, to flag the streams as encrypted.
Existing segments will still be reassembled and inspected, while
new segments won't be. This allows for pattern based inspection
of the TLS handshake.
Like is the case with completely disabled 'raw' reassembly, the
logic is that the segments are flagged as completed for 'raw' right
away. So they are not considered in raw reassembly anymore.
As no new segments will be considered, the chunk limit check will
return true on the next call.
Have a single function StreamTcpReturnSegmentCheck determine if a
segment is ready to be removed from the stream.
Handle FLOW_NOPAYLOAD_INSPECT in raw reassembly.
httplog_ctx->fields would not be initialized before setting flags in
it:
Scanbuild:
output-json-http.c:491:46: warning: The left expression of the compound assignment is an uninitialized value. The computed value will also be garbage
http_ctx->fields |= (1<<f);
~~~~~~~~~~~~~~~~ ^
1 warning generated.
Drmemory:
~~27874~~ Error #1: UNINITIALIZED READ: reading register eax
~~27874~~ # 0 JsonHttpLogJSON [/home/buildbot/qa/buildbot/donkey/drmemory/Suricata/src/output-json-http.c:260]
~~27874~~ # 1 JsonHttpLogger [/home/buildbot/qa/buildbot/donkey/drmemory/Suricata/src/output-json-http.c:375]
Just memset the whole structure right after initialization.
Fix issue detected by Coverity:
*** CID 1197756: Bad bit shift operation (BAD_SHIFT)
/src/util-rohash.c: 74 in ROHashInit()
68 }
69 if (hash_bits < 4 || hash_bits > 32) {
70 SCLogError(SC_ERR_HASH_TABLE_INIT, "invalid hash_bits setting, valid range is 4-32");
71 return NULL;
72 }
73
>>> CID 1197756: Bad bit shift operation (BAD_SHIFT)
>>> In expression "1U << hash_bits", left shifting by more than 31 bits has undefined behavior. The shift amount, "hash_bits", is as much as 32.
74 uint32_t size = hashsize(hash_bits) * sizeof(ROHashTableOffsets);
75
76 ROHashTable *table = SCMalloc(sizeof(ROHashTable) + size);
77 if (unlikely(table == NULL)) {
78 SCLogError(SC_ERR_HASH_TABLE_INIT, "failed to alloc memory");
79 return NULL;
This was only a potential issue as ROHashInit was only called with
hash_bits 16 in the code.
Bug #1170.
During socket creation, all error cases were leading suricata to
retry opening the capture. This patch updates this behavior to
distinguish fatal from recoverable error cases. In case of a fatal
error, suricata exits cleanly.
Change GAP detection logic. If we encounter missing data before
last_ack, we know we have missed data. The receiving host has ack'd
it already, so a retransmission of the missing data is highly
unlikely.
AppLayer reassembly correctly only flags a segment as done when it's
completely used in reassembly. Raw reassembly could flag a partially
used segment as complete as well. In this case the segment could be
discarded early. Further reassembly would miss data, leading to a
gap. Due to this, up to 'window size' bytes worth of segments could
remain in the session for a long time, leading to memory resource
pressure.
This patch sets the flag correctly.
Some traffic is very unbalanced. For example a 4 bytes request
followed by 12mb of response data. If the 4 bytes don't lead to
the protocol being detected, then app layer processing won't
start, but it will not give up either. It will just wait for more
data. This leads to piling up data on the opposite side.
Observed:
TS: 4 bytes. PP failed (bit set), PM has not given up (bit not set).
This makes sense as max_depth is not yet reached.
TC: 12mb. PP and PM failed (bits set).
As ts-PM never gives up, we never consider proto detect complete,
so all segments in either direction are still kept in the session.
This patch adds a cutoff for this case:
- IF for TS we have PP bit set, PM not set, AND
- we have TC both bits set, AND
- proto is unknown, AND
- TC data is 100k already, THEN
- give up on protocol detection.
The same for the opposite direction.
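As a condition sketch (the bit and byte-count names are illustrative):
  if (ts_pp_failed && !ts_pm_given_up &&
      tc_pp_failed && tc_pm_given_up &&
      f->alproto == ALPROTO_UNKNOWN &&
      tc_data_len >= 100000) {
      /* give up on protocol detection for this flow */
  }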
The reassembly gap detection makes use of the window. However, in
midstream mode the window size we use is unreliable, as we have to
assume window scaling is in place. This patch improves midstream
handling of those cases.
In midstream mode we may encounter a case where the data we see is
beyond the isn, but below last_ack. This means we're missing some data
that is already acked, so it won't be retransmitted. Therefore, we can
conclude it's a data gap.
If we're getting a lot of data in one direction and the proto for this
direction is unknown, proto detect will hold up segments in the segment
list in the stream. They are held so that if we detect the protocol on
the opposing stream, we can still parse this side of the stream as well.
However, some sessions are very unbalanced. FTP data channels, large
PUT/POST request and many others, can lead to cases where we would have
to store many megabytes worth of segments before we see the opposing
stream. This leads to risks of resource starvation.
In this patch, a cutoff point is enforced. If we've stored 100k in one
direction and we've seen no data in the other direction, we give up.
If we've given up, the applayer_proto_detection_skipped event is set.
app-layer-event: applayer_proto_detection_skipped;
If a TCP session is midstream, we may end up with a case where the
start of an HTTP request is missing. We won't detect HTTP based on
the request. However, the reply is fine, so we detect HTTP anyway.
This leads to passing the incomplete request to the htp parser.
This has been observed, where the http parser then saw many bogus
requests in the incomplete data. This is not limited to HTTP.
To counter this case, a midstream session MUST find its protocol
in the toserver direction. If not, we assume the start of the
request/toserver is incomplete and no reliable detection and parsing
is possible. So we give up.
If midstream is enabled and the first packet is the syn/ack packet from
the 3whs, initialize server.last_ack to the packet's seq.
This fixes tracking the session.
Extended data was freed before the release function was called.
The result was that, in AF_PACKET IPS mode, the release function
was only sending void data, because the content of the extended
data is the content of the packet.
This patch updates the code to have the freeing of extended data
done in the cleaning function for a packet which is called by the
release function. This improves consistency of the code and fixes
the bug.
The direction specific masks were not used correctly. The toserver ones
were only used for 'dp' registrations, the toclient ones only for 'sp'.
The patch merges them.
If a flow matches both an 'sp' based PP registration and a 'dp' based,
until now we would only check the 'dp' one. This patch changes that. It
will inspect both.
Instead of the notion of toserver and toclient protocol detection, use
destination port and source port.
Independent of the data direction, the flow's port settings will be used
to find the correct probing parser, where we first try the dest port,
and if that fails the source port.
Update the configuration file format, where toserver is replaced by 'dp'
and toclient by 'sp'. Toserver is interpreted as 'dp' and toclient as
'sp' for backwards compatibility.
Example for dns:
dns:
  # memcaps. Globally and per flow/state.
  #global-memcap: 16mb
  #state-memcap: 512kb
  # How many unreplied DNS requests are considered a flood.
  # If the limit is reached, app-layer-event:dns.flooded; will match.
  #request-flood: 500
  tcp:
    enabled: yes
    detection-ports:
      dp: 53
  udp:
    enabled: yes
    detection-ports:
      dp: 53
Like before, progress of protocol detection is tracked per flow direction.
Bug #1142.
Instead of error-prone externs with macros, use functions with a local
static enum var.
- EngineModeIsIPS(): in IPS mode
- EngineModeIsIDS(): in IDS mode
To set the modes:
- EngineModeSetIDS(): IDS mode (default)
- EngineModeSetIPS(): IPS mode
Bug #1177.
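A minimal sketch of the pattern:
  static enum { ENGINE_MODE_IDS, ENGINE_MODE_IPS } g_engine_mode = ENGINE_MODE_IDS;

  void EngineModeSetIDS(void) { g_engine_mode = ENGINE_MODE_IDS; }
  void EngineModeSetIPS(void) { g_engine_mode = ENGINE_MODE_IPS; }
  int EngineModeIsIDS(void)   { return g_engine_mode == ENGINE_MODE_IDS; }
  int EngineModeIsIPS(void)   { return g_engine_mode == ENGINE_MODE_IPS; }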
The SLOAD define using __insn_ld2s_L2 is used to provide a compiler
hint that the load will come from the L2 cache instead of the L1. It
also specifies that it is a 2 byte signed load. For the Tiny MPM, that
needs to be a 1-byte load, which is what is specified in util-ac-mpm-tile.c,
but the #undef was removing that definition.
Previously, the alstate use in the main detect loop was unsafe. The
alstate pointer would be set during a lock, but it would again be used
after one or more lock/unlock cycles. If the data pointed to would
disappear, a dangling pointer would be the result.
Due to the way flows are cleaned up using reference counting and
such, chances of this happening were very small. However, at least
one path can lead to this situation. So it had to be fixed.
Make DeStateDetectContinueDetection get its own alstate pointer instead
of using the one that was passed to it. We now get and use it only
inside a flow lock.
Make DeStateDetectStartDetection get its own alstate pointer instead
of using the one that was passed to it. We now get and use it only
inside a flow lock.
This is an intrusive change. This patch modifies the way AMATCH
inspection uses locking.
So far, each keyword did its own locking. This led to a situation
where an 'alstate' pointer was passed around that was not always
protected by a lock.
This patch moves the locking to the Stateful detection functions.
This patch fixes a warning message when suricata is started without
giving an interface name on the command line. The code was trying
to get the MTU even if pcap_dev was not set.
FreeBSD 10 32-bit with clang 3.3:
log-tlslog.c:172:14: error: format specifies type 'long' but the argument has type 'time_t' (aka 'int') [-Werror,-Wformat]
p->ts.tv_sec,
^~~~~~~~~~~~
1 error generated.
detect-engine-payload.c:508:27: warning: format specifies type 'long' but the argument has type 'time_t' (aka 'int') [-Wformat]
printf("%ld.%06ld\n", tv_diff.tv_sec, (long int)tv_diff.tv_usec);
~~~ ^~~~~~~~~~~~~~
%d
1 warning generated.
Add an output 'free list' that contains all the output ctx's that need
cleanup at shutdown. It differs from the runmode output list in that
it will also contain a 'parent' for the submodules that share the
context of their parent.
For the loggers that we allow only one instance for: tls, ssh, drop, we
track active loggers through Output*Enable functions. Add Disable
functions to mirror this. They are to be called from the shutdown funcs
those loggers use.
This is a beginning of an implementation for bug #1160:
https://redmine.openinfosecfoundation.org/issues/1160
This patch adds a cleaning function for each logger of new type
(packet, tx and file). These functions are called in RunModeShutDown().
The state of this patch is that it crashes suricata when sending
a pcap to analyse:
- at the first pcap, if the tx and file cleaning functions are called
- at the second pcap, if only the packet cleaning function is called
The cause in the first case is unknown. The second case is due to
the necessity of cleaning the list of loggers registered to a logging
type.
The OpenSSL implementation of RFC 6520 (Heartbeat extension) does not
check the payload length correctly, resulting in a copy of at most 64k
of memory from the server (ref: CVE-2014-0160).
This patch adds support for decoding heartbeat messages (if not
encrypted), and checking several parts (type, length and padding).
When an anomaly is detected, a TLS event is raised.
In case of multiple transactions, the stored AMATCH list would not have
been reset, but it would still be reconsidered. Even though none would
match, the engine would still conclude that the rule matched.
Pierre Chifflier pointed out that a rule like:
alert tls any any -> any any (msg:"TLS store"; tls.issuerdn:!"C=FR"; tls.store;)
was alerting but not storing the certificate. If the filter was
removed:
alert tls any any -> any any (msg:"TLS store"; tls.store;)
then tls.store is working as expected.
This was linked with the fact that logging is only done once for an SSL
state. So without the filter, once we have the info we can log and we
run the storage. But when there is a filter, we log, and then the filter
analysis and alerting happen. And as logging has already been done we
don't enter the logging function and there is no storage.
This patch forces entry into the log function when there is a
request for TLS storage. And it adds an exit in the logging function
to only do the storage part if the TLS state has already been logged.
Previously the sync code would depend on traffic to complete. This
patch adds poll support and can complete the setup if the poll timeout
is reached as well.
Part of bug #1130.
This patch updates af-packet to discard packets that have been
sent to a socket before all sockets in a fanout group have been set up.
Without this, there is no way to ensure that all packets for a single
flow will be treated by the same thread.
Tests have been done on a system with an ixgbe network card. When using
'cluster_flow' load balancing and deactivating receive hash on the iface:
ethtool -K IFACE rxhash off
then suricata is behaving as expected and all packets for a single flow
are treated by the same thread.
For some unknown reason, this is not the case when using cluster_cpu. It
seems that in that case the load balancing is not perfect on the card side.
The rxhash offloading has a direct impact on the cluster_flow load balancing
because load balancing is done by using a generic hash key attached to
each skb. This hash can be computed by the network card or can be
computed by the kernel. In the case of an ixgbe network card, it seems there
is some issue with the hash key for TCP. This explains why it is necessary to
remove the rxhash offloading to have a correct behavior. This could also
explain why cluster_cpu is currently failing because the card is using the
same hash key computation to do the RSS queues load balancing.
This allows for registering a keyword under another name while keeping
the old name active and supported.
Do this for 'luajit', which can now also be used as just 'lua'.
The HTTP module of Eve didn't register itself with the app-layer
for HTTP. This meant that if no other HTTP logger was active, the
HTTP logging in Eve wouldn't work.
This patch makes the HTTP Eve module register itself correctly.
Bug #1133.
To remain consistent with the other logs, set the event type to
the same name as the structure containing the details. In this
case fileinfo.
Part of bug #1127.
Due to what appears to be an issue in logstash, the 'file' part of
the file event types was masked by a field that logstash-forwarder
added itself.
Since logstash-forwarder is an important part of the logstash stack,
this patch works around the issue by renaming our 'file' structure
to 'fileinfo', thus resolving the naming conflict.
Bug #1127
Move pfring_enable_ring to the start of ReceivePfringLoop() so that
it's guaranteed to be called after all threads have called
pfring_set_cluster first.
This is necessary because pfring will already make packets available
to thread N, while thread N+1 is still registering itself. This leads
to cases where the first packet(s) of a flow are processed by a
different thread in Suricata than the later ones.
This is a race condition only at start up. New flows after the pfring
initialization is complete will not be influenced by this.
Bug #1129.
This patch updates the timestamp format used in eve logging.
It uses an ISO 8601 compatible string. This allows tools parsing
the output to easily detect and/or use the timestamp.
In the EVE JSON output, the value of the timestamp key has been
changed to 'timestamp' (instead of 'time'). This allows tools
like Splunk to detect the timestamp and use it without configuration.
Logstash configuration is simple:
input {
  file {
    path => [ "/usr/local/var/log/suricata/eve.json" ]
    codec => json
    type => "suricata-log"
  }
}
filter {
  if [type] == "suricata-log" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}
In splunk, auto detection of the file format is failing and it seems
you need to define a type to parse JSON in
$SPLUNK_DIR/etc/system/local/props.conf:
[suricata]
KV_MODE = json
NO_BINARY_CHECK = 1
TRUNCATE = 0
Then you can simply declare the log file in
$SPLUNK_DIR/etc/system/local/inputs.conf:
[monitor:///usr/local/var/log/suricata/eve.json]
sourcetype = suricata
In both cases the timestamp are correctly imported by
the tools.
PF_RING delivers the packet with the VLAN header stripped. This
patch updates the code to get the information from PF_RING's extended
header information.
This patch uses the new function SCKernelVersionIsAtLeast to detect
that we've got an old kernel that does not strip the VLAN header from
the message before sending it to userspace.
In case of 'alert ip' rules that have ports, the port checks would
be bypassed for non-port protocols, such as ICMP. This would lead to
a rule matching: a false positive.
This patch adds a check. If the rule has a port setting other than
'any' and the protocol is not TCP, UDP or SCTP, then the rule won't
match.
Rules with 'alert ip' and ports are rare, so the impact should be
minimal.
Bug #611.
Eve-log would call GET_VLAN_ID on the packet's vlan header if p->vlan_idx
was bigger than 0. GET_VLAN_ID would then unconditionally dereference
p->vlanh[0] or [1]. However, there are a number of cases in which these
pointers are not set. Defrag pseudo packets, AF_PACKET and in the future
PF_RING, do set the ids, but not the header pointers.
This patch adds 2 new macros which are wrappers around a function:
VLAN_GET_ID1 and VLAN_GET_ID2 get the ids by calling DecodeVLANGetId.
This function will return the correct id.
Bug #1120.
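A sketch of what the helper does (field names as used above; the stored-id
field is an assumption):
  uint16_t DecodeVLANGetId(const Packet *p, uint8_t layer)
  {
      if (p->vlan_idx <= layer)
          return 0;
      if (p->vlanh[layer] != NULL)
          return GET_VLAN_ID(p->vlanh[layer]);   /* header is present */
      return p->vlan_id[layer];                  /* id set without a header */
  }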
app-layer-ssh.c:165:5: warning: Value stored to 'input_len' is never read
input_len -= 1;
^ ~
1 warning generated.
app-layer-ssh.c:160:5: warning: Value stored to 'input_len' is never read
input_len -= 4;
^ ~
1 warning generated.
Previously the software version would only contain up to the first
space.
E.g. in SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu3
It would contain "OpenSSH_4.7p1".
This patch changes the behavior to:
"OpenSSH_4.7p1 Debian-8ubuntu3"
This patch adds a check that was missing when specifying BPF filter
from a file. Suricata behavior should have been the same as when
BPF filter is specified on command line.
Fix warning spotted by clang on FreeBSD:
source-ipfw.c:241:49: warning: use of logical '||' with constant operand [-Wconstant-logical-operand]
if (suricata_ctl_flags & (SURICATA_STOP || SURICATA_KILL)) {
^ ~~~~~~~~~~~~~
source-ipfw.c:241:49: note: use '|' for a bitwise operation
if (suricata_ctl_flags & (SURICATA_STOP || SURICATA_KILL)) {
^~
|
Use the same logic as the one used in other capture modes.
In the case of a running mode like NFQ there is no possibility
to compute the statistics as is done in LiveDevice (drop and
checksum counts are meaningless).
This patch adds a function that allows a running mode to disable the
display of the counters at exit.
Remove local copies from each MPM file and use include file instead.
Might be better to also add util-memcpy.c rather than inlining it each
time, to get smaller code, since it only seems to be used at
initialization.
This is required because SCMemcmpLowercase() expects its first argument
to already be lowercase for the comparison. This is done by using
memcpy_tolower() for NO_CASE patterns.
This addresses code review comments from Victor.
The case_state in MPMs was just to track when the same pid could have
both no-case and case-sensitive matches. Now that can't happen after
fixing bug 1110, so remove the code and storage for case_state.
This fixes bug 1110. When assigning PIDs, use the NO_CASE flag when comparing
for duplicates. The state of the flag must be the same, but also use the same
type of comparisons when checking for duplicates.
Previously, "foo":CS would match with "foo":CI when it should not.
and "foo":CI would not match "FoO":CI when it should. Both of those
cases are fixed with this change.
This then allows simplifying the use of pid in MPMs because now if they
pids match, then so do the flags, so checking the flags is not required.
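A standalone sketch of the corrected duplicate test (names are
hypothetical, the rules are the ones described above):
#include <stdbool.h>
#include <string.h>
#include <strings.h>   /* strncasecmp */
/* A stored pattern is only a duplicate of a new one if the NO_CASE
 * flags agree AND the bytes match under the comparison that flag
 * implies. */
static bool PatternIsDup(const char *a, bool a_nocase,
                         const char *b, bool b_nocase, size_t len)
{
    if (a_nocase != b_nocase)
        return false;                         /* "foo":CS vs "foo":CI */
    if (a_nocase)
        return strncasecmp(a, b, len) == 0;   /* "FoO":CI == "foo":CI */
    return memcmp(a, b, len) == 0;            /* exact bytes for :CS */
}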
In the stream engine, Init() can fail if the memcap is reached. In this
case the segment was not freed by PoolGet:
==8600== Thread 1:
==8600== 70,480 bytes in 1,762 blocks are definitely lost in loss record 611 of 612
==8600== at 0x4C2A2DB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==8600== by 0x914CC8: TcpSegmentPoolAlloc (stream-tcp-reassemble.c:166)
==8600== by 0xA0D315: PoolGet (util-pool.c:297)
==8600== by 0x9302CD: StreamTcpGetSegment (stream-tcp-reassemble.c:3768)
==8600== by 0x921FE8: StreamTcpReassembleHandleSegmentHandleData (stream-tcp-reassemble.c:1873)
==8600== by 0x92EEDA: StreamTcpReassembleHandleSegment (stream-tcp-reassemble.c:3584)
==8600== by 0x8D3BB1: HandleEstablishedPacketToServer (stream-tcp.c:1969)
==8600== by 0x8D7F98: StreamTcpPacketStateEstablished (stream-tcp.c:2323)
==8600== by 0x8F13B8: StreamTcpPacket (stream-tcp.c:4243)
==8600== by 0x8F2537: StreamTcp (stream-tcp.c:4485)
==8600== by 0x95DFBB: TmThreadsSlotVarRun (tm-threads.c:559)
==8600== by 0x8BE60D: TmThreadsSlotProcessPkt (tm-threads.h:142)
tcp.segment_memcap_drop | PcapFile | 1762
This patch fixes PoolGet to both Cleanup and Free the Alloc'd data in
case Init fails.
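A sketch of the corrected error path, with the Pool API reduced to
the relevant callbacks:
#include <stdlib.h>
typedef struct PoolSketch_ {
    void *(*Alloc)(void);                       /* required in this sketch */
    int   (*Init)(void *elt, void *initdata);   /* returns 1 on success */
    void  (*Cleanup)(void *elt);
    void  (*Free)(void *elt);
    void  *InitData;
} PoolSketch;
static void *PoolGetSketch(PoolSketch *p)
{
    void *elt = p->Alloc();
    if (elt == NULL)
        return NULL;
    if (p->Init != NULL && p->Init(elt, p->InitData) != 1) {
        /* the fix: Cleanup AND Free the element when Init fails,
         * e.g. because the memcap is reached */
        if (p->Cleanup != NULL)
            p->Cleanup(elt);
        p->Free(elt);
        return NULL;
    }
    return elt;
}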
==15745== 3 bytes in 1 blocks are definitely lost in loss record 5 of 615
==15745== at 0x4C2A2DB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15745== by 0x71858C1: strdup (strdup.c:42)
==15745== by 0xA20814: SCProtoNameInit (util-proto-name.c:75)
==15745== by 0x952D1B: PostConfLoadedSetup (suricata.c:1983)
==15745== by 0x9537CD: main (suricata.c:2112)
Also, clean up and add a check to make sure it's initialized only once.
The full path of each script name is stored in a buffer that wasn't
freed at exit.
==24195== 41 bytes in 1 blocks are definitely lost in loss record 300 of 613
==24195== at 0x4C2A2DB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==24195== by 0x565D06: DetectLoadCompleteSigPath (detect.c:251)
==24195== by 0x7CABE8: DetectLuajitParse (detect-luajit.c:595)
==24195== by 0x7CD2AE: DetectLuajitSetup (detect-luajit.c:827)
==24195== by 0x7DC273: SigParseOptions (detect-parse.c:547)
==24195== by 0x7DDC75: SigParse (detect-parse.c:856)
==24195== by 0x7E1C2B: SigInitHelper (detect-parse.c:1336)
==24195== by 0x7E2968: SigInit (detect-parse.c:1559)
==24195== by 0x7E37B1: DetectEngineAppendSig (detect-parse.c:1831)
==24195== by 0x566D17: DetectLoadSigFile (detect.c:335)
==24195== by 0x567636: SigLoadSignatures (detect.c:423)
==24195== by 0x951A97: LoadSignatures (suricata.c:1816)
This patch frees the buffer.
For packets that were freed, not recycled, profiling memory wasn't
freed:
==15745== 13,312 bytes in 8 blocks are definitely lost in loss record 611 of 615
==15745== at 0x4C2C494: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15745== by 0xA190D5: SCProfilePacketStart (util-profiling.c:963)
==15745== by 0x4E4345: PacketGetFromAlloc (decode.c:134)
==15745== by 0x83FE75: FlowForceReassemblyPseudoPacketGet (flow-timeout.c:276)
==15745== by 0x8413BF: FlowForceReassemblyForHash (flow-timeout.c:588)
==15745== by 0x841897: FlowForceReassembly (flow-timeout.c:716)
==15745== by 0x9540F6: main (suricata.c:2296)
==15745==
==15745== 14,976 bytes in 9 blocks are definitely lost in loss record 612 of 615
==15745== at 0x4C2C494: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15745== by 0xA190D5: SCProfilePacketStart (util-profiling.c:963)
==15745== by 0x4E4345: PacketGetFromAlloc (decode.c:134)
==15745== by 0x83FE75: FlowForceReassemblyPseudoPacketGet (flow-timeout.c:276)
==15745== by 0x841508: FlowForceReassemblyForHash (flow-timeout.c:620)
==15745== by 0x841897: FlowForceReassembly (flow-timeout.c:716)
==15745== by 0x9540F6: main (suricata.c:2296)
This patch addresses that.
If lock profiling was compiled in but disabled in the config, a
serious memory leak condition was triggered.
Valgrind output:
==11169== 9,091,248 bytes in 189,401 blocks are definitely lost in loss record 564 of 564
==11169== at 0x4C2A2DB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==11169== by 0xABC44C: LockRecordAdd (util-profiling-locks.c:112)
==11169== by 0xABC950: SCProfilingAddPacketLocks (util-profiling-locks.c:141)
==11169== by 0xA04CD5: TmThreadsSlotVarRun (tm-threads.c:562)
==11169== by 0x958793: TmThreadsSlotProcessPkt (tm-threads.h:142)
==11169== by 0x9599C3: PcapFileCallbackLoop (source-pcap-file.c:172)
==11169== by 0x56FC130: ??? (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.4.0)
==11169== by 0x959D24: ReceivePcapFileLoop (source-pcap-file.c:210)
==11169== by 0xA05B9E: TmThreadsSlotPktAcqLoop (tm-threads.c:703)
==11169== by 0x6155F6D: start_thread (pthread_create.c:311)
==11169== by 0x6E399CC: clone (clone.S:113)
Some of the packet counters were using a 32-bit integer. Given the
bandwidth that is often seen, this is not a good idea. This patch
switches to 64-bit counters.
The packets counter is incremented in AFPDumpCounters, and it was
also incremented during packet reading. The result was a value
twice the expected one.
Spotted-by: Victor Julien <victor@inliniac.net>
Until now a PoolInit failure for the segment pools would result in
an abort() through BUG_ON(). This patch adds a proper error message,
then exits.
Bug #1108.
When TcpSegmentPoolInit fails (e.g. because of a too low memcap),
it would free the segment. However, the segment memory is managed
by the Pool API, which would also free the same memory location.
This patch fixes that.
Also, memset the structure before any checks are done, as the segment
memory is passed to TcpSegmentPoolCleanup in case of error as well.
Bug #1108
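A sketch of the resulting init pattern, with the segment type and the
memcap check reduced to stand-ins:
#include <stdbool.h>
#include <string.h>
typedef struct SegmentSketch_ { unsigned char *payload; } SegmentSketch;
static bool MemcapAllows(size_t sz) { (void)sz; return true; } /* stand-in */
/* Zero the element before any check can fail, so Cleanup can safely
 * run on it later; on failure return 0 without freeing, because the
 * Pool owns (and will free) the memory. */
static int SegmentPoolInitSketch(void *data, void *initdata)
{
    (void)initdata;
    SegmentSketch *seg = data;
    memset(seg, 0, sizeof(*seg));        /* before the memcap check */
    if (!MemcapAllows(sizeof(*seg)))
        return 0;                        /* no free here: the Pool will */
    /* ... allocate the payload buffer, account the memory ... */
    return 1;
}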
Fixes:
** CID 1075221: Dereference after null check (FORWARD_NULL)
/src/suricata.c: 1344 in ParseCommandLine()
The reason it gave this warning is that in other paths using optarg
there was a check, so the checker assumed optarg can be NULL.
This patch fixes:
** CID 1187544: Missing break in switch (MISSING_BREAK)
/src/decode-icmpv6.c: 268 in DecodeICMPV6()
** CID 1187545: Missing break in switch (MISSING_BREAK)
/src/decode-icmpv6.c: 270 in DecodeICMPV6()
** CID 1187546: Missing break in switch (MISSING_BREAK)
/src/decode-icmpv6.c: 272 in DecodeICMPV6()
** CID 1187547: Missing break in switch (MISSING_BREAK)
/src/decode-icmpv6.c: 274 in DecodeICMPV6()
The fix duplicates the logic instead of adding 'fall through'
statements, as the debug statements were wrong and confusing: for
ND_REDIRECT all 5 ND_* types would have been printed.
By assuming that HTPCallbackRequestLine would always be run first,
a memory leak was introduced. The code would not check if user data
already existed in the tx, causing it to overwrite the user data
pointer if it already existed.
Bug #1092.
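A sketch of the check; the accessors are libhtp's, the user-data
struct name and error handling are simplified:
/* Each callback now checks for existing per-tx user data before
 * allocating, so whichever callback runs first wins and later ones
 * don't overwrite (and leak) the pointer. */
HtpTxUserData *ud = htp_tx_get_user_data(tx);
if (ud == NULL) {
    ud = SCCalloc(1, sizeof(*ud));
    if (ud == NULL)
        return HTP_OK;
    htp_tx_set_user_data(tx, ud);
}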
In really short Suricata runtimes the capture counters would not
be updated. This patch does a force update at the end of the
capture loops in pcap and af-packet.
The probing parser registration function
AppLayerProtoDetectPPParseConfPorts returned void, meaning it gave
no feedback to the registering protocol implementation. If a config
was missing, it would just give up.
This patch changes it to return a bool: 0 if no config was found,
1 if a config was found.
This allows the caller to set up a default case.
The probing parser detection ports yaml settings of the TCP part
of the DNS parser accidentally used udp as protocol string, causing
the wrong part of the YAML to be evaluated.
If the protocol is disabled, app-layer-event would print a cryptic
error message. This patch makes sure we inform the user the protocol
is in fact disabled.
This patch introduces a new counter "decoder.vlan_qinq". It counts
packets that have more than one stacked vlan layer.
Packets with 2 vlan layers will both increment "decoder.vlan" and
"decoder.vlan_qinq".
A node isn't known to be a sequence node until the YAML is parsed.
If a sequence node was set on the command line, promote it to a
sequence node when it is discovered by YAML to be one.
Fixes comment #18 in issue 921.
When creating a pseudo packet with the reassembled IP packet, the
parent's vlan id or ids are also needed. The defrag packet is run
through decode and the flow engine, where the vlan id is necessary
for connecting the packet to the correct flow.
Some old distributions don't ship recent enough Linux headers. This
results in TP_STATUS_VLAN_VALID being undefined. This patch defines
the constant and uses it in the backward compatible way already used
in the code: the flag is not set by the kernel, so a test on the vci
value is made instead.
This should fix https://redmine.openinfosecfoundation.org/issues/1106
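A sketch of the compatibility define and the test; the flag value
matches the kernel's, the surrounding read path is simplified:
#include <stdint.h>
#include <linux/if_packet.h>
#ifndef TP_STATUS_VLAN_VALID
#define TP_STATUS_VLAN_VALID (1 << 4)
#endif
/* On old kernels the flag is never set, so a non-zero tci is also
 * accepted as "VLAN tag present". */
static void ReadVlanSketch(struct tpacket2_hdr *h2, uint16_t *vlan_id)
{
    if ((h2->tp_status & TP_STATUS_VLAN_VALID) || h2->tp_vlan_tci)
        *vlan_id = h2->tp_vlan_tci & 0x0fff;
}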
Flow-timeout code injects pseudo packets into the decoders, leading
to various issues. For a full explanation, see:
https://redmine.openinfosecfoundation.org/issues/1107
This patch works around the issues with a hack. It adds a check to
each of the decoder entry points to bail out as soon as a pseudo
packet from the flow timeout is encountered.
Ticket #1107.
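A sketch of the guard at the top of a decoder entry point; the macro
and return-code names are from memory and may differ slightly:
int DecodeEthernetSketch(ThreadVars *tv, DecodeThreadVars *dtv, Packet *p,
                         uint8_t *pkt, uint16_t len, PacketQueue *pq)
{
    /* flow-timeout pseudo packets carry no raw data to decode */
    if (PKT_IS_PSEUDOPKT(p))
        return TM_ECODE_OK;
    /* ... normal Ethernet decoding ... */
    return TM_ECODE_OK;
}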