Reduce the hash table size, as regular classification files usually
contain fewer than 100 entries. It's not performance critical anyway.
Convert pcre_get_substring calls to pcre_copy_substring.
Added more WebDAV methods. A complete list of the HTTP methods libhtp
can handle can be found at:
https://github.com/OISF/libhtp/blob/0.5.x/htp/htp_core.h#L260.
The methods array now reflects these available methods.
The comments have also been updated to reflect the desired style.
1) Reworked pattern registration for HTTP methods and versions.
Instead of manually and verbosely adding HTTP methods one by one,
each with N prefix spacings, and doing the same for HTTP versions
(e.g. HTTP/1.1), I moved it all to loop-based registration that
reads the values from char arrays.
In the future all that is needed is to add new methods to the
arrays and they will be registered as patterns (see the sketch
after these items).
2) Modified pattern registration after feedback.
Changed the variable used in snprintf for HTTP method registration:
it should have been the size of the destination buffer, not another
variable (catsize) that I had created. Also removed that variable.
Fixed a typo in the comment for registering http versions.
TO_CIENT -> TO_CLIENT.
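A minimal sketch of the loop-based registration from 1), also using the
destination buffer size in snprintf per 2). The method list is abbreviated
and RegisterPattern() is a hypothetical stand-in for the real registration
call:

    #include <stdio.h>

    /* hypothetical stand-in for the real pattern registration call */
    void RegisterPattern(const char *pattern);

    static const char *http_methods[] = {
        "GET", "PUT", "POST", "HEAD", "OPTIONS", "PROPFIND", "MKCOL", NULL
    };

    static void RegisterMethodPatterns(void)
    {
        char pattern[32];
        for (int i = 0; http_methods[i] != NULL; i++) {
            /* size of the destination buffer, not a separate variable */
            snprintf(pattern, sizeof(pattern), "%s ", http_methods[i]);
            RegisterPattern(pattern);
        }
    }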
AppLayerTest09, AppLayerTest10 and AppLayerTest11 depended on a max
protocol detection pattern size of < 17. Update the tests to pass one
extra byte to the app layer. This makes the protocol detection code
flag the session as 'proto detection completed' again.
Consider the host's xbits expiry status when checking the host for
timeout. If a single active non-expired bit is found, the host won't
be timed out just yet.
A bad last_ack update where it would be set beyond next_seq could
lead to rejection of valid segments and thus stream gaps.
Update tests to reflect new last_ack/next_seq behaviour.
The post match list was called with an unlocked flow until now.
However, recent de_state handling updates changed this. The stateful
detection code can now call the post match functions while keeping
the flow locked. The normal detection code still calls it with an
unlocked flow.
This patch adds a hint to the DetectEngineThreadCtx called
'flow_locked' that is set to true if the caller has already locked
the flow.
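A simplified sketch of how the hint is meant to be used; the struct and
function below are illustrative, not the actual Suricata definitions:

    typedef struct DetCtxSketch {
        int flow_locked;   /* non-zero: caller already holds the flow lock */
    } DetCtxSketch;

    static void RunPostMatchListSketch(DetCtxSketch *det_ctx)
    {
        if (!det_ctx->flow_locked) {
            /* take the flow lock ourselves */
        }
        /* ... run the post match functions ... */
        if (!det_ctx->flow_locked) {
            /* release the flow lock again */
        }
    }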
The detection engine and log engines can walk the tx list indirectly,
by looping AppLayerParserGetTx. For DNS, each of those calls triggered
a new walk of the tx list, leading to bad performance.
This patch stores the last returned tx and uses it to check whether
the tx requested next is the one we need. If so, we can return it
w/o a list walk.
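A sketch of the caching idea with hypothetical types; the real DNS state
and tx structures differ:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct TxSketch {
        uint64_t tx_id;
        struct TxSketch *next;
    } TxSketch;

    typedef struct TxListSketch {
        TxSketch *head;
        TxSketch *last_returned;   /* cache of the last lookup result */
    } TxListSketch;

    static TxSketch *GetTxSketch(TxListSketch *s, uint64_t tx_id)
    {
        TxSketch *tx = s->last_returned;
        /* fast path: same tx as last time, or the one right after it */
        if (tx != NULL) {
            if (tx->tx_id == tx_id)
                return tx;
            if (tx->next != NULL && tx->next->tx_id == tx_id)
                return (s->last_returned = tx->next);
        }
        /* slow path: full list walk */
        for (tx = s->head; tx != NULL; tx = tx->next) {
            if (tx->tx_id == tx_id)
                return (s->last_returned = tx);
        }
        return NULL;
    }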
Load the YAML into a prefix "detect-engine-reloads.N" where N is the
reload counter. This way we can load the updated config w/o overwriting
the current one.
Initialize the detection engine by configuration prefix.
DetectEngineCtxInitWithPrefix(const char *prefix)
takes the detection engine configuration from:
<prefix>.<config>
If prefix is NULL the regular config is used.
Make sure that DetectLoadCompleteSigPath considers the prefix when
retrieving the configuration.
Add function to load a yaml file and insert it into the conf tree at
a specific prefix.
Example YAML:
somefile: myfile.txt
If loaded using ConfYamlLoadFileWithPrefix with prefix "myprefix", it
can be retrieved by the name of "myprefix.somefile".
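Tying the two changes together, a hedged usage sketch; the wrapper
function and its arguments (conf_filename, reload_cnt) are illustrative,
only ConfYamlLoadFileWithPrefix and DetectEngineCtxInitWithPrefix are
named above:

    #include <stdio.h>

    typedef struct DetectEngineCtx_ DetectEngineCtx;
    int ConfYamlLoadFileWithPrefix(const char *file, const char *prefix);
    DetectEngineCtx *DetectEngineCtxInitWithPrefix(const char *prefix);

    static DetectEngineCtx *LoadReloadConfig(const char *conf_filename,
                                             int reload_cnt)
    {
        char prefix[64];
        snprintf(prefix, sizeof(prefix),
                 "detect-engine-reloads.%d", reload_cnt);

        if (ConfYamlLoadFileWithPrefix(conf_filename, prefix) != 0)
            return NULL;   /* keep running on the current config */

        return DetectEngineCtxInitWithPrefix(prefix);
    }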
Rename the thread init function DetectEngineThreadCtxInitForLiveRuleSwap
to DetectEngineThreadCtxInitForReload and change its logic to take the
new detection engine as argument and return the DetectEngineThreadCtx,
or NULL on error.
The old approach used the thread init API format, but it wasn't used in
that way.
Add DetectEngineReference, which takes a reference to a detect engine,
and make DetectEngineThreadCtxInitForLiveRuleSwap use it. This way
reload will not depend on master staying the same. This allows master
to be updated in between w/o affecting the reload that is in progress.
Use the new DetectEngineReload() function. It's called from the main
loop instead of being spawned into its own temporary thread. This
greatly simplifies the signal handling.
An added advantage is that this seems to improve memory usage.
Related to bug #1358
The minimal detect engine has minimal memory use and setup time. It's
to be used for 'delayed' detect, where the first detection engine is
essentially empty.
The thread setup is also minimal.
Instead of threading logic with dummy slots and all, use the regular
reload logic for delayed detect.
This means we pass an empty detect engine to the threads and then
reload (live swap) it as soon as the engine is running.
Add code to allow unittests that don't follow the complete API.
Update the replace tests, as they don't use the unittests runmode that
powers the workaround based on RunmodeIsUnittests().
Update detect engine management to make it easier to reload the detect
engine.
Core of the new approach is a 'master' ctx, that keeps a list of one or
more detect engines. The detect engines will not be passed to any thread
directly, but instead will only be accessed through the detect engine
thread contexts. As we can replace those atomically, replacing a detect
engine becomes easier.
Each thread keeps a reference to its detect context. When a detect engine
is replaced or removed, it's added to a free list. Once its reference
count reaches 0, it is freed.
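A minimal sketch of the reference counting; field and function names are
illustrative and the real code guards the counter with the master lock:

    #include <stddef.h>

    typedef struct DetectEngineSketch {
        int ref_cnt;                            /* guarded by master lock */
        struct DetectEngineSketch *freelist_next;
    } DetectEngineSketch;

    static DetectEngineSketch *EngineReference(DetectEngineSketch *de)
    {
        if (de != NULL)
            de->ref_cnt++;                  /* thread takes a reference */
        return de;
    }

    static void EngineDereference(DetectEngineSketch **de)
    {
        (*de)->ref_cnt--;                   /* thread drops its reference */
        /* once the count hits 0 and the engine is on the free list,
           housekeeping can free it */
        *de = NULL;
    }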
On midstream SYN/ACK pickups, we would flip the direction of packets
after the first. This meant the first (pickup) packet's direction
was wrong.
This patch fixes that.
Nodes that are sequences weren't being recorded as such, causing
rules to fail to load.
Change the sequence test name to better reflect what it tests, and
test that the sequence node is detected as a sequence.
Use separate data structures for storing TX and FLOW (AMATCH) detect
state.
- move state storing into util funcs
- remove de_state_m
- simplify reset state logic on reload
PacketPoolWait in autofp can wait for considerable time. Until now
it was essentially spinning, keeping the CPU 100% busy.
This patch introduces a condition to wait in such cases.
Atomically flag pool that consumer is waiting, so that we can sync
the pending pool right away instead of waiting for the
MAX_PENDING_RETURN_PACKETS limit.
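A condensed pthread sketch of the waiting consumer; the real code uses
Suricata's atomics and packet pool layout:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct PoolSketch {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        int cond_enabled;            /* producers check this flag */
        struct PacketSketch *head;   /* available packets */
    } PoolSketch;

    static void PoolWaitSketch(PoolSketch *pool)
    {
        pthread_mutex_lock(&pool->mutex);
        pool->cond_enabled = 1;      /* ask producers to sync right away */
        while (pool->head == NULL)
            pthread_cond_wait(&pool->cond, &pool->mutex);
        pool->cond_enabled = 0;
        pthread_mutex_unlock(&pool->mutex);
    }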
Set actions that are set directly from Signatures using the new
utility function DetectSignatureApplyActions. This will apply
the actions and also store info about the rule that first made
the packet drop.
The code was not checking if we had enough room in the direct
data. In case default_packet_size was set really small, this was
resulting in data being written beyond the direct data and causing
a crash.
The patch fixes the issue by forcing an allocation if the direct
data size in the Packet is too small.
In flow timeout handling we need a function that allocates and blanks
a buffer that will be used to store constructed packet data. This new
function has no other purpose.
This will prevent log files that have not been rotated by some
external tool from being deleted, but log files that were
rotated (moved out of the way) will be re-opened.
This is a better default behaviour, especially when not all
log files are rotated at the same time.
Thanks to iro on IRC.
This patch adds the capability to log TLS information the same way
it is currently possible with HTTP. As it is quite hard to read ASN.1
directly in the stream, this will help people understand why Suricata
fires alerts related to TLS.
cc1: warnings being treated as errors
app-layer-smtp.c: In function ‘SMTPParseCommandBDAT’:
app-layer-smtp.c:908: warning: dereferencing type-punned pointer will break strict-aliasing rules
If ETHTOOL_GRXRINGS is undefined we will not be able to build the
RX rings code, so we make the build conditional on the definition
of ETHTOOL_GRXRINGS.
The HTTP tracking code would parse the content length and store it
in the TX user data. It didn't take the possibility of errors into
account though, leading to a possible negative int being cast to
unsigned int. Luckily, the result was unused.
This patch simply removes the offending code.
Reported-by: The Yahoo pentest team
Fix error handling of stub parsers. In case of SCRealloc error the
function would return a non-error code. This could possibly lead to
memory corruption.
Reported-By: The Yahoo pentest team
In case of autofp (or more general, when flow and stream engine
run in different threads) the flow engine should not trigger a flow
reuse as this can lead to race conditions between the flow and the
stream engine.
In such cases, the flow engine can be far ahead of the stream engine
as packets are in a queue between the threads.
Observed:
Flow engine tags packet 10 as start of new flow. Flow is tagged as
'reused'.
Stream engine evaluates packet 5 which belongs to the old flow. It
rejects the flow as it's tagged 'reused'. Attaches packet 5 to the
new flow which is wrong.
Solution:
This patch connects the flow engines handling of reuse cases to
the runmode. It hooks into the RunmodeSetFlowStreamAsync() call to
notify the flow engine that it shouldn't handle the reuse.
For the autofp case, handling TCP reuse in the flow engine didn't work.
The problem is the mismatch between the moment the flow engine looks at
packets and the stream, and the moment the stream engine runs. Flow engine
is invoked in the packet capture thread(s), while the stream engine runs
as part of the stream/detect thread(s). Because of the queues between
those threads the flow manager may already inspect a new SYN while the
stream engine still has to process the previous session.
Moving the flow engine to the stream/detect thread(s) wasn't an option
as the 'autofp' load balancing depends on the flow already being
available in the packet.
The solution here is to add a check for this condition to the stream
engine. At this point the TCP state is up to date. If a TCP reuse case
is encountered, this is the global logic:
- detach packet for old flow
- get a new flow and attach it to the packet
- flag the old flow that it is now obsolete
Additional logic makes sure that the packets already in the queue
between the flow thread(s) and the stream thread are reassigned to the
new flow.
Some special handling:
Apply previous 'reuse' before checking for a new reuse. Otherwise we're
tagging the wrong flow in some cases (multiple reuses in the same tuple).
When in a flow/ssn reuse condition, properly remove the packet from
the flow.
Don't 'reuse' if packet is a SYN retransmission.
The old flow is timed out normally by the flow manager.
This replaces the tcp.reused_ssn counter. The flow engine now
enforces the TCP flow reuse logic.
The counter is incremented only when the flow is timed out, so after
the "tcp closed" timeout has expired for the flow.
Until now, TCP session reuse was handled in the TCP stream engine.
If the state was TCP_CLOSED, a new SYN packet was received and a few
other conditions were met, the flow was 'reset' and reused for the
'new' TCP session.
There are a number of problems with this approach:
- it breaks the normal flow lifecycle wrt timeout, detection, logging
- new TCP sessions could come in on different threads due to mismatches
in timeouts between suricata and flow balancing hw/nic/drivers
- cleanup code was often causing problems
- it complicated locking because of the possible thread mismatch
This patch implements a different solution, where a new TCP session also
gets a new flow. To do this 2 main things needed to be done:
1. the flow engine needed to be aware of when the TCP reuse case was
happening
2. the flow engine needs to be able to 'skip' the old flow once it was
replaced by a new one
To handle (1), a new function TcpSessionPacketSsnReuse() is introduced
to check for the TCP reuse conditions. It's called from 'FlowCompare()'
for TCP packets / TCP flows that are candidates for reuse. FlowCompare
returns FALSE for the 'old' flow in the case of TCP reuse.
This in turn will lead to the flow engine not finding a flow for the TCP
SYN packet, resulting in the creation of a new flow.
To handle (2), FlowCompare flags the 'old' flow. This flag causes future
FlowCompare calls to always return FALSE on it. In other words, the flow
can't be found anymore. It can only be accessed by:
1. existing packets with a reference to it
2. flow timeout handling as this logic gets the flows from walking the
hash directly
3. flow timeout pseudo packets, as they are set up by (2)
The old flow will time out normally, as governed by the "tcp closed"
flow timeout setting. At timeout, the normal detection, logging and
cleanup code will process it.
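In rough pseudo-C the FlowCompare() behavior described above looks like
this; the flag name, types and the TcpSessionPacketSsnReuse signature are
approximations, not the actual definitions:

    typedef struct FlowSk { unsigned int flags; void *protoctx; } FlowSk;
    typedef struct PacketSk { unsigned char proto; } PacketSk;

    #define FLOW_TCP_REUSED_SK 0x01u   /* illustrative flag */

    int TuplesMatch(const FlowSk *f, const PacketSk *p);   /* stub */
    int TcpSessionPacketSsnReuse(const PacketSk *p, const FlowSk *f,
                                 const void *tcp_ssn);     /* stub */

    static int FlowCompareSketch(FlowSk *f, const PacketSk *p)
    {
        if (f->flags & FLOW_TCP_REUSED_SK)
            return 0;              /* obsolete flow can't be found anymore */
        if (!TuplesMatch(f, p))
            return 0;
        if (p->proto == 6 /* TCP */ &&
            TcpSessionPacketSsnReuse(p, f, f->protoctx)) {
            f->flags |= FLOW_TCP_REUSED_SK;   /* flag the old flow */
            return 0;              /* 'not found' forces a new flow */
        }
        return 1;
    }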
The flagging of a flow making it 'unfindable' in the flow hash is a bit
of a hack. The reason for this approach over for example putting the
old flow into a forced timeout queue where it could be timed out, is
that such a queue could easily become a contention point. The TCP
session reuse case can easily be created by an attacker. In case of
multiple packet handlers, this could lead to contention on such a flow
timeout queue.
The flow keyword used flag names that were shared with the
Packet::flowflags field. Some of the flags weren't used by the packet
though, which wasted some 'flag space'.
This patch defines dedicated flags for the flow keyword and removes
the otherwise unused flags from the FLOW_PKT_* space.
FilePrune would clear the files, but not free them and remove them
from the list. This led to ever-growing lists in some cases.
Especially in HTTP sessions with many transactions, this could slow
us down.
Until this point, the flow manager would check for timed out flows
by walking the flow hash, locking first the hash row and then each
individual flow to get its state and timestamp. To not be too
intrusive, trylocks were used so that a busy flow wouldn't cause the
flow manager to wait for a long time while holding the hash row lock.
Building on the changes in handling of the flow state and lastts
fields, this patch changes the flow managers behavior.
It can now get a flows state atomically and the lastts can be safely
read while holding just the flow hash row lock. This allows the flow
manager to do the basic time out check much more cheaply:
1. it doesn't have to wait to get a lock
2. it doesn't interrupt the packet path
As a consequence the trylock is now also gone. A flow that returns
'true' on timeout is pretty much certainly not going to be busy so
we can safely lock it unconditionally. This also means the flow
manager now walks the entire row unconditionally and is guaranteed
to inspect each flow in the row.
To make sure the functions called before the flow lock is taken don't
accidentally change the flow (which would require a lock), the flow
args to these functions are changed to const pointers.
The lastts timeval struct field in the flow records the timestamp of
the last packet that updated the flow. This allows for tracking the
timeout of the flow. So far, this value was both updated and read under
the flow lock.
This patch moves the updating of this field to the FlowGetFlowFromHash
function, where it is updated at the point where both the Flow and the
Flow Hash Row lock are held. This guarantees that the field is only
updated when both locks are held.
This makes reading the field safe when either lock is held, which is the
purpose of this patch.
The flow manager, while holding the flow hash row lock, can now safely
read the lastts value. This allows it to do the flow timeout check
without actually locking the flow.
A flow has 3 states: NEW, ESTABLISHED and CLOSED.
For all protocols except TCP, a flow is in state NEW as long as just one
side of the conversation has been seen. When both sides have been
observed the state is moved to ESTABLISHED.
TCP has a different logic, controlled by the stream engine. Here the TCP
state is leading.
Until now, when parts of the engine needed to know the flow state, it
would invoke a per protocol callback 'GetProtoState'. For TCP this would
return the state based on the TcpSession.
This patch changes this logic. It introduces an atomic variable in the
flow 'flow_state'. It defaults to NEW and is set to ESTABLISHED for non-
TCP protocols when we've seen both sides of the conversation.
For TCP, the state is updated from the TCP engine directly.
The goal is to allow access to the state without holding the Flow's
main mutex lock. This will later allow the Flow Manager(s) to evaluate
the Flow w/o interrupting it.
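Expressed with C11 atomics (the real patch uses Suricata's atomic macros;
types and names below are simplified):

    #include <stdatomic.h>

    enum FlowStateSketch { FS_NEW = 0, FS_ESTABLISHED, FS_CLOSED };

    typedef struct FlowSketch {
        atomic_int flow_state;   /* readable without the flow mutex */
    } FlowSketch;

    /* update path: flow engine or TCP stream engine */
    static void FlowSetState(FlowSketch *f, enum FlowStateSketch s)
    {
        atomic_store(&f->flow_state, s);
    }

    /* flow manager: timeout check without taking the flow lock */
    static int FlowGetState(FlowSketch *f)
    {
        return atomic_load(&f->flow_state);
    }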
The option sets in bytes the value at which segment data is passed to
the app layer API directly. Data sizes equal to and higher than the
value set are passed on directly.
Default is 128.
Create 2 'fast paths' for app layer reassembly. Both are about reducing
copying. In the cases described below, we pass the segment's data
directly to the app layer API, instead of first copying it into a buffer
that we then pass. This saves a copy.
The first is for the case when we have just one single segment that was
just ack'd. As we know that we won't use any other segment this round,
we can just use the segment data.
The second case is more aggressive. When the segment meets a certain
size limit (currently hardcoded at 128 bytes), we pass it to the
app-layer API directly, thus invoking the app-layer somewhat more often
to save some copies.
When reaching the end of either list, merging is no longer required,
simply walk down the other list.
If the non-MPM list can't have duplicates, it would be worth removing
the duplicate check for the non-MPM list when it is the only non-empty list
remaining.
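The merge itself is the classic two-cursor pattern over sorted inputs; a
generic sketch, not the actual engine code:

    #include <stdint.h>
    #include <stddef.h>

    /* both inputs sorted ascending by rule id; shared ids emitted once */
    static size_t MergeSortedIds(const uint32_t *a, size_t na,
                                 const uint32_t *b, size_t nb,
                                 uint32_t *out)
    {
        size_t i = 0, j = 0, n = 0;
        while (i < na && j < nb) {
            if (a[i] < b[j])      out[n++] = a[i++];
            else if (b[j] < a[i]) out[n++] = b[j++];
            else { out[n++] = a[i++]; j++; }   /* duplicate: take once */
        }
        /* end of one list reached: no merging needed, copy the rest */
        while (i < na) out[n++] = a[i++];
        while (j < nb) out[n++] = b[j++];
        return n;
    }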
Create a copy of the SigMatch data in the sig_lists linked lists and
store it in an array for faster access, without the next and previous
pointers. The array is then used when calling the Match() functions.
Gives a 7.7% speed up on one test.
The Match functions don't need a pointer to the SigMatch object, just the
context pointer contained inside, so pass the Context to the Match function
rather than the SigMatch object. This allows for further optimization.
Change SigMatch->ctx to have type SigMatchCtx* rather than void* for better
type checking. This requires adding type casts when using or assigning it.
The SigMatch context should not be changed by the Match() function, so
pass it as a const SigMatchCtx*.
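In outline, with placeholder forward declarations; this is close to, but
not exactly, the real definitions:

    #include <stdint.h>

    /* placeholders for the engine types */
    typedef struct ThreadVars_ ThreadVars;
    typedef struct DetectEngineThreadCtx_ DetectEngineThreadCtx;
    typedef struct Packet_ Packet;
    typedef struct Signature_ Signature;

    /* base type every keyword context is cast to */
    typedef struct SigMatchCtx_ { uint8_t pad; } SigMatchCtx;

    typedef struct SigMatch_ {
        uint16_t type;
        SigMatchCtx *ctx;                /* was: void *ctx */
        struct SigMatch_ *next, *prev;
    } SigMatch;

    /* Match() receives the const context, not the SigMatch wrapper */
    typedef int (*SigMatchFunc)(ThreadVars *, DetectEngineThreadCtx *,
                                Packet *, Signature *,
                                const SigMatchCtx *);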
Read one signature pointer ahead to prefetch the value.
Use a local variable, sflags, for s->flags, since it is used many times
and the compiler doesn't know that the signature structure doesn't
change, so it would otherwise reload s->flags.
Use an MPM-specific pattern index, which is simply an index starting
at zero and incremented for each pattern added to the MPM, rather than
the externally provided Pattern ID (pid), since that can be much
larger than the number of patterns. The Pattern ID is shared across all
MPMs. For example, an MPM with one pattern with pid=8000 would result
in a max_pid of 8000, so the pid_pat_list would have 8000 entries.
The pid_pat_list[] is replaced by an array of pattern indexes. The PID
is moved to the SCACTilePatternList as a single value. The PatternList
is also indexed by the Pattern Index.
max_pat_id is no longer needed and mpm_ctx->pattern_cnt is used instead.
The local bit array is then also indexed by pattern index instead of
PID, making it much smaller. The local bit array sets a bit for each
pattern found for this MPM. It is only kept during one MPM search
(stack allocated).
One note: the local bit array is checked first, and if the pattern has
already been found, it will stop checking, but still count a match.
This could result in over-counting matches of case-sensitive patterns,
since following case-insensitive matches will also be counted. For
example, finding "Foo" in "foo Foo foo" would report finding "Foo"
2 times, mis-counting the third word as "Foo".
In PmqMerge() use MpmAddSids() instead of blindly copying the src
rule list onto the end of the dst rule list, since there might not
be enough room in the dst list. MpmAddSids() will resize the dst array
if needed.
Also add code to MpmAddSids() and MpmAddPid() to better handle the case
where realloc fails to get more space. It first tries 2x the needed
space; if that fails, it tries for just 1x; if that also fails, resize
returns 0. For MpmAddPid(), if resize fails, the new pid is lost. For
MpmAddSids(), as many SIDs as will fit are added, but some may be
lost.
Rather than creating the array of size max_pat_id, dynamically resize
it as needed. This also handles the case where duplicate pids are added
to the array.
Also fix an error in the (local version of the) bitarray allocation to
always use bitarray_size.
Rather than statically allocate 64K entries in every rule_id_array,
increase the size only when needed. Created a new function MpmAddSids()
to check the size before adding the new sids. If the array is not large
enough, it calls MpmAddSidsResize() that calls realloc and does error
checking. If the realloc fails, it prints an error and drops the new sids
on the floor, which seems better than exiting Suricata.
The size is increased to (current_size + new_count) * 2. This handles the
case where new_count > current_size, which would not be handled by simply
using current_size * 2. It should also be faster than simply reallocing to
current_size + new_count, which would then require another realloc for each
new addition.
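A simplified sketch of the growth policy; the struct is hypothetical, the
real pmq carries more state:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct SidArray {
        uint32_t *ids;
        uint32_t cnt;    /* used entries */
        uint32_t size;   /* allocated entries */
    } SidArray;

    static int SidArrayResize(SidArray *a, uint32_t needed)
    {
        /* try (current + needed) * 2 first, then the exact fit */
        uint32_t want = (a->cnt + needed) * 2;
        uint32_t *p = realloc(a->ids, want * sizeof(*p));
        if (p == NULL) {
            want = a->cnt + needed;
            p = realloc(a->ids, want * sizeof(*p));
            if (p == NULL)
                return 0;            /* caller drops what won't fit */
        }
        a->ids = p;
        a->size = want;
        return 1;
    }

    static void SidArrayAdd(SidArray *a, const uint32_t *sids, uint32_t n)
    {
        if (a->cnt + n > a->size && SidArrayResize(a, n) == 0)
            n = a->size - a->cnt;    /* add as many as still fit */
        memcpy(a->ids + a->cnt, sids, n * sizeof(*sids));
        a->cnt += n;
    }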
Use a local pattern bit array to make sure we don't match more than
once, in addition to the pmq bitarray that is still used for results
validation higher up in the rule matching process.
Why: pmq->pattern_id_bitarray is currently sometimes used in a
'stateful' way, meaning that for a single packet we run multiple
MPMs on the same pmq w/o resetting it.
The new bit array is used to determine whether we need to append the
pattern's associated 'sids' list to the pmq rule_id_array.
It has been observed that MPM1 matches for PAT1, and MPM2 matches for
PAT1 as well. However, PAT1 doesn't have the same sids list in both
MPMs. Without the local bit array, MPM2 would not add its sids to the
list, leading to missed detection.
Use MPM and non-MPM lists to build our match array. Both lists are
sorted, and are merged and sorted into the match array.
This disables the old match array building code and thus also bypasses
the mask checking.
Treat negated MPM sigs as if non-MPM, so we always consider them.
As MPM results and non-MPM rule lists are now merged and considered
for further inspection, rules that need to be considered when a pattern
is absent would be caught in the middle.
As a HACK/workaround this patch adds them to the non-MPM list. This
causes them to be inspected each time.
Add an array of rule ids that are not using MPM prefiltering. These
will be merged with the MPM results array. Together these should lead
to a list of all the rules that can possibly match.
Pmq: add a rule list, an array of uint32_t's to store (internal) sids
from the MPM.
AC: store sids in the pattern list, append to Pmq::rule_id_array on
match.
Detect: sort rule_id_array after it was set up by the MPM. Rule ids
(Signature::num) are ordered, and the rules with the lowest id are to
be inspected first. As the MPM doesn't fill the array in order, but
'randomly', we need this sort step to ensure proper inspection order.
Create an inline wrapper that checks for flowvarlist == NULL before
calling DetectFlowvarProcessList(), to remove the checking overhead
since the list is usually empty.
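A sketch of the wrapper pattern; names and the list element type are
simplified:

    #include <stddef.h>

    typedef struct FlowVarEntrySketch {
        struct FlowVarEntrySketch *next;
    } FlowVarEntrySketch;

    void DetectFlowvarProcessListInternal(FlowVarEntrySketch *list);

    static inline void DetectFlowvarProcessListSketch(FlowVarEntrySketch *l)
    {
        /* the list is usually empty: skip the call overhead entirely */
        if (l != NULL)
            DetectFlowvarProcessListInternal(l);
    }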
** CID 1257764: Dereference after null check (FORWARD_NULL)
/src/defrag.c: 291 in Defrag4Reassemble()
** CID 1257763: Dereference after null check (FORWARD_NULL)
/src/defrag.c: 409 in Defrag6Reassemble()
In the error case 'rp' can be either NULL or non-NULL.
MemcmpLowercase would not compare the first byte of both input buffers,
leading to two non-identical buffers being considered the same.
Affects the SSE_4_1 and SSE_4_2 implementations of SCMemcmpLowercase, as
well as the non-SIMD implementation. The SSE_3 and Tile versions are not
affected.
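For reference, the intended scalar behavior, comparing from offset 0; a
sketch that, like the SC variant, assumes the first buffer is already
lowercase:

    #include <ctype.h>
    #include <stddef.h>

    static int MemcmpLowercaseSketch(const void *s1, const void *s2,
                                     size_t n)
    {
        const unsigned char *a = s1, *b = s2;
        for (size_t i = 0; i < n; i++) {   /* starts at 0, not 1 */
            if (a[i] != tolower(b[i]))
                return 1;                  /* mismatch */
        }
        return 0;                          /* equal */
    }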
Work around OS X 10.10 Yosemite returning EDEADLK on a rwlock wrlocked
then tested by wrtrylock. All other OS' (and versions of OS X that I
tested) seem to return EBUSY instead.
Fix an issue where SMTPStateGetTxCnt would return the actual number of
active transactions.
The 'GetCnt' API call is not named correctly. It should be 'GetMaxId',
as that is the actually expected behavior.
This patch fixes an issue in the OutputJSONBuffer function. It was
writing the content of the buffer to the file from the start of the
buffer up to the final offset. But as the write is done for each JSON
string, we duplicate the previous events when reusing the same buffer.
Duplication was for example triggered when multiple alerts are attached
to a packet. In the case of two alerts, the first one was logged twice,
one time more than the second one.
Update the pktacq loop to process flow timeouts in a running engine.
Add a new step to the shutdown phase of packet acquisition loop
threads (pktacqloop).
The shutdown code lets the pktacqloop break out of its packet
acquisition loop. The thread then enters a flow timeout loop, where
it processes packets from its tv->stream_pq queue until that queue is
empty _and_ the KILL flag is set.
Make sure receive threads are done before moving on to flow hash
cleanup (recycle all). Without this the flow recycler could start
its unconditional hash clean up while detect threads are still
running on the flows.
Update unix socket to match live modes.
Add the function TmThreadsInjectPacketById that is to be used to inject
flow timeout packets into a thread's stream_pq queue.
TmThreadsInjectPacketById will also wake up listening threads if
applicable.
The packets are passed together in a NULL-terminated array
to reduce locking overhead.
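A rough sketch of the batch hand-off; queue details are omitted and the
helper functions are stubs:

    #include <stddef.h>

    typedef struct PacketSketch PacketSketch;

    void EnqueueToStreamQ(int thread_id, PacketSketch *p);   /* stub */
    void WakeThread(int thread_id);                          /* stub */

    /* NULL-terminated array: one call hands off a whole batch */
    static void InjectPacketsSketch(int thread_id, PacketSketch **packets)
    {
        for (size_t i = 0; packets[i] != NULL; i++)
            EnqueueToStreamQ(thread_id, packets[i]);
        WakeThread(thread_id);   /* wake the thread if it's waiting */
    }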
Create thread registration and unregistration API for assigning unique
thread id's.
Threadvars is static even if a thread restarts, so we can do the
registration before the threads start.
A thread is unregistered when the ThreadVars are freed.
- Added the suricata.yaml configurations and updated the comments
- Renamed the field in the configuration structure to something generic
- Added two new constants and the warning codes
- Created app-layer-htp-xff.c and app-layer-htp-xff.h
- Added entries in the Makefile.am
- Added the necessary configuration options to EVE alert section
- Updated Unified2 XFF configuration comments and removed unnecessary whitespace
- Created a generic function to parse the configuration
- Release the flow locks sooner and remove debug logging
- Added XFF support to EVE alert output
We were incorrectly reallocing the goto table after it was freed, by
calling SCACTileReallocState() when we really only want to realloc the
output table. This was causing a large goto table to be allocated and
never used or freed.
Free some memory at exit that was not getting freed.
Change pid_pat_list to store a copy of the case-strings in the same
block of memory as the array of pointers.
Due to a logic error in AppLayerProtoDetectGetProtoByName invalid
protocols would not be detected as such. Instead of ALPROTO_UNKNOWN
ALPROTO_MAX was returned.
Bug #1329
This patch fixes the following errors:
[src/unix-manager.c:306]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:313]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:323]: (error) Memory pointed to by 'client' is freed twice.
[src/unix-manager.c:334]: (error) Memory pointed to by 'client' is freed twice.
The unix manager was still processing the message after closing the
socket if the message was too long.
MLD messages should have a hop limit of 1 only. All others are invalid.
Written during the MLD talk by Enno Rey, Antonios Atlasis & Jayson
Salazar at Deepsec 2014.
Have -T / --init-errors-fatal process all rules so that it's easier
to debug problems in a ruleset. Otherwise it can be a lengthy
fix/test/error cycle if multiple rules have issues.
Convert the empty-rulefile error into a warning.
Bug #977
If we manage to read the number of RSS queues from an interface,
the optimal number of capture threads is the minimum of this number
and the number of cores on the system.
This patch implements this logic thanks to the newly introduced
function GetIfaceRSSQueuesNum.
Add a new default value for the 'threads:' setting in af-packet: "auto".
This will create as many capture threads as there are cores.
Default runmode of af-packet to workers.
For some of the buffer users it's hard to predict how big the data
will be. In the stats.log case this depends on the chosen runmode and
the number of threads.
To deal with this, a 'MemBufferExpand' call is added. This realloc's
the buffer.
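A simplified sketch of what such an expand call does; the real MemBuffer
layout may differ:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct MemBufferSketch {
        uint8_t *buffer;
        uint32_t size;     /* allocated */
        uint32_t offset;   /* used */
    } MemBufferSketch;

    static int MemBufferExpandSketch(MemBufferSketch *b, uint32_t expand_by)
    {
        uint8_t *p = realloc(b->buffer, b->size + expand_by);
        if (p == NULL)
            return -1;     /* old buffer stays valid on failure */
        b->buffer = p;
        b->size += expand_by;
        return 0;
    }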
Register with type 'stats':
function init (args)
    local needs = {}
    needs["type"] = "stats"
    return needs
end
The stats are passed as an array of tables:
{ 1, { name=<name>, tmname=<tm_name>, value=<value>, pvalue=<pvalue>}}
{ 2, { name=<name>, tmname=<tm_name>, value=<value>, pvalue=<pvalue>}}
etc
name is the counter name (e.g. decoder.invalid), tmname is the thread
name (e.g. AFPacketeth05), value is the current value, and pvalue is
the value from the last time the script was invoked.
As the stats API calls the loggers at a global interval, that
interval should be configured globally.
# global stats configuration
stats:
  enabled: yes
  # The interval field (in seconds) controls at what interval
  # the loggers are invoked.
  interval: 8
If this config isn't found, the old config will be supported.
Convert regular 'stats.log' output to this new API.
In addition to the current stats value, also give the last value. This
makes it easy to display the difference.
The SCStreamingBuffer call now also returns two booleans:
data, data_open, data_close = SCStreamingBuffer()
The first indicates this is the first data of this type for this
TCP session or HTTP transaction.
The second indicates this is the last data.
Ticket #1317.