Commit Graph

5541 Commits (fbe6dac1ae013d4d118cbe3da2fe9351f19dd9d4)

Author SHA1 Message Date
Victor Julien fbe6dac1ae file: optimize file pruning
FilePrune would clear the files, but not free them or remove them
from the list. This led to ever-growing lists in some cases.
Especially in HTTP sessions with many transactions, this could slow
us down.
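
A minimal sketch of the idea (hypothetical names and a hypothetical
done flag, not the actual FilePrune code):

    #include <stdlib.h>

    typedef struct File_ {
        struct File_ *next;
        int done;                  /* hypothetical "can be pruned" flag */
        /* ... file tracking state ... */
    } File;

    /* Unlink finished files and free them, instead of clearing them
     * in place and leaving the node on an ever-growing list. */
    static void FilePruneList(File **head)
    {
        File *prev = NULL, *f = *head;
        while (f != NULL) {
            File *next = f->next;
            if (f->done) {
                if (prev == NULL)
                    *head = next;       /* unlink the list head */
                else
                    prev->next = next;  /* unlink a mid-list node */
                free(f);                /* actually release the memory */
            } else {
                prev = f;
            }
            f = next;
        }
    }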
10 years ago
Victor Julien 5251ea9ff5 flow: lockless flow manager checks
Until this point, the flow manager would check for timed-out flows
by walking the flow hash, locking first the hash row and then each
individual flow to get its state and timestamp. To not be too
intrusive, trylocks were used so that a busy flow wouldn't cause the
flow manager to wait for a long time while holding the hash row lock.

Building on the changes in the handling of the flow state and lastts
fields, this patch changes the flow manager's behavior.

It can now get a flow's state atomically, and the lastts can be safely
read while holding just the flow hash row lock. This allows the flow
manager to do the basic timeout check much more cheaply:

1. it doesn't have to wait to get a lock
2. it doesn't interrupt the packet path

As a consequence the trylock is now also gone. A flow that returns
'true' on timeout is almost certainly not busy, so we can safely
lock it unconditionally. This also means the flow manager now walks
the entire row unconditionally and is guaranteed to inspect each
flow in the row.

To make sure the functions called before the flow lock don't
accidentally change the flow (which would require a lock), the args
to these functions are changed to const pointers.
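
Roughly, the resulting flow manager loop looks like this
(SC_ATOMIC_GET, FBLOCK_LOCK and FLOWLOCK_WRLOCK are existing Suricata
macros; FlowIsTimedOut stands in for the timeout helper):

    FBLOCK_LOCK(fb);                    /* hash row lock only */
    for (Flow *f = fb->head; f != NULL; f = f->hnext) {
        /* state is atomic, lastts is readable under the row lock,
         * so no flow lock is needed for the basic check */
        const int state = SC_ATOMIC_GET(f->flow_state);
        if (!FlowIsTimedOut(f, state, now))   /* takes a const Flow * */
            continue;
        /* timed out: almost certainly idle, so lock unconditionally */
        FLOWLOCK_WRLOCK(f);
        /* ... evict the flow ... */
        FLOWLOCK_UNLOCK(f);
    }
    FBLOCK_UNLOCK(fb);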
10 years ago
Victor Julien 5587372ce1 flow: modify lastts update logic
The lastts timeval field in the flow records the timestamp of the
last packet that updated the flow. This allows for tracking the timeout
of the flow. So far, this value was both updated and read under the
flow lock.

This patch moves the updating of this field to the FlowGetFlowFromHash
function, where it is updated at the point where both the Flow and the
Flow Hash Row lock are held. This guarantees that the field is only
updated when both locks are held.

This makes reading the field safe when either lock is held, which is the
purpose of this patch.

The flow manager, while holding the flow hash row lock, can now safely
read the lastts value. This allows it to do the flow timeout check
without actually locking the flow.
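
In condensed form (COPY_TIMESTAMP is Suricata's timeval copy macro;
the surrounding code is omitted):

    /* packet path, in FlowGetFlowFromHash(): hash row lock AND flow
     * lock are both held here, the only place lastts is written */
    COPY_TIMESTAMP(&p->ts, &f->lastts);

    /* flow manager: holds only the hash row lock, which is now
     * enough to read lastts safely */
    if (f->lastts.tv_sec + timeout < now.tv_sec) {
        /* timeout candidate; the flow itself is still unlocked */
    }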
10 years ago
Victor Julien a0732d3db2 flow: change flow state logic
A flow has 3 states: NEW, ESTABLISHED and CLOSED.

For all protocols except TCP, a flow is in state NEW as long as just one
side of the conversation has been seen. When both sides have been
observed the state is moved to ESTABLISHED.

TCP has a different logic, controlled by the stream engine. Here the TCP
state is leading.

Until now, when parts of the engine needed to know the flow state, it
would invoke a per-protocol callback 'GetProtoState'. For TCP this would
return the state based on the TcpSession.

This patch changes this logic. It introduces an atomic variable in the
flow 'flow_state'. It defaults to NEW and is set to ESTABLISHED for non-
TCP protocols when we've seen both sides of the conversation.

For TCP, the state is updated from the TCP engine directly.

The goal is to allow for access to the state without holding the Flow's
main mutex lock. This will later allow the Flow Manager(s) to evaluate
the Flow w/o interrupting it.
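
A condensed sketch of both sides (SC_ATOMIC_GET/SET and the
FLOW_TO_*_SEEN flags exist in Suricata; the surrounding logic is
simplified):

    /* packet path, non-TCP: promote NEW -> ESTABLISHED once both
     * directions of the conversation have been seen */
    if (SC_ATOMIC_GET(f->flow_state) == FLOW_STATE_NEW &&
        (f->flags & FLOW_TO_DST_SEEN) && (f->flags & FLOW_TO_SRC_SEEN)) {
        SC_ATOMIC_SET(f->flow_state, FLOW_STATE_ESTABLISHED);
    }

    /* elsewhere, e.g. the flow manager: read without the flow's mutex */
    const int state = SC_ATOMIC_GET(f->flow_state);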
10 years ago
Victor Julien 9327b08ab1 tcp: add stream.reassembly.zero-copy-size option
The option sets in bytes the value at which segment data is passed to
the app layer API directly. Data sizes equal to or higher than the
value set are passed on directly.

Default is 128.
10 years ago
Victor Julien 37b56dca55 tcp: add debug stats about reassembly fast paths
Only shown if --enable-debug is passed to configure.
10 years ago
Victor Julien 2bba5eb704 tcp: zero copy fast path in app-layer reassembly
Create 2 'fast paths' for app layer reassembly. Both are about reducing
copying. In the cases described below, we pass the segment's data
directly to the app layer API, instead of first copying it into a buffer
that we then pass. This saves a copy.

The first is for the case when we have just one single segment that was
just ack'd. As we know that we won't use any other segment this round,
we can just use the segment data.

The second case is more aggressive. When the segment meets a certain
size limit (currently hardcoded at 128 bytes), we pass it to the
app-layer API directly, thus invoking the app layer somewhat more often
to save some copies.
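
Heavily condensed, the two fast paths have this shape (variable names
assumed; AppLayerHandleTCPData is the real entry point, its full
argument list elided):

    if (only_one_segment_this_round) {
        /* fast path 1: single freshly ack'd segment, no buffering */
        AppLayerHandleTCPData(..., seg->payload, seg->payload_len, flags);
    } else if (seg->payload_len >= 128) {
        /* fast path 2: big enough to hand over directly
         * (hardcoded here; made configurable in the commit above) */
        AppLayerHandleTCPData(..., seg->payload, seg->payload_len, flags);
    } else {
        /* slow path: copy into the reassembly buffer first */
        memcpy(buf + buf_offset, seg->payload, seg->payload_len);
        buf_offset += seg->payload_len;
    }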
10 years ago
Victor Julien 8c1bc7cfb6 stream: move raw stream gap handling into util func 10 years ago
Victor Julien 6ca9c8eb32 stream: move raw reassembly into util func 10 years ago
Victor Julien ff2fecf590 stream: remove StreamTcpReassembleInlineAppLayer
Function is now unused.
10 years ago
Victor Julien 97908bcd2d stream: unify inline and non-inline applayer assembly
Unify inline and non-inline app layer stream reassembly to aid
the maintainability of the code.
10 years ago
Victor Julien e1d134b027 stream: remove STREAM_SET_FLAGS
Use the unified StreamGetAppLayerFlags instead.
10 years ago
Victor Julien 29d2483efb stream: update inline tests
Make sure inline tests set the stream_inline flag.
10 years ago
Victor Julien e4cb8715de stream: replace STREAM_SET_INLINE_FLAGS macro
Replace it with a generic function, StreamGetAppLayerFlags, that can
be used by both inline and non-inline code.
10 years ago
Victor Julien ed791a562e stream: track data sent to app-layer 10 years ago
Victor Julien e494336453 stream: move reassembly loop into util funcs
Move IDS per segment reassembly and gap handling into utility functions.
10 years ago
Victor Julien a5641bc7c2 Update changelog for 2.1beta3 10 years ago
Victor Julien 5b6f8bda1d detect: fix small memory leaks
Fix small memory leaks in option parsing. Move away from
pcre_get_substring in favor of pcre_copy_substring.

Related to #1046.
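
The pattern of the fix, using the real PCRE1 API (the parse context
is simplified to rawstr/ov/ret):

    /* leak-prone: pcre_get_substring() allocates, and early error
     * returns skipped pcre_free_substring() */
    const char *str = NULL;
    pcre_get_substring(rawstr, ov, ret, 1, &str);
    /* ... an error return without pcre_free_substring(str) leaks ... */

    /* fix: pcre_copy_substring() fills a caller-owned buffer, so
     * there is nothing to free on any path */
    char copy[256];
    pcre_copy_substring(rawstr, ov, ret, 1, copy, sizeof(copy));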
10 years ago
Victor Julien 5a8094136c Clean up Conf API memory on shutdown. 10 years ago
Victor Julien 04e49cea89 Fix live reload detect counter setup
When profiling was compiled in, the detect counters were not set up
properly after a reload.
10 years ago
Victor Julien 844065bf58 conf api: use const pointers where possible
Use const pointers where possible in the Conf API.
10 years ago
Victor Julien ddce14360d Cosmetic fixes to main() 10 years ago
Victor Julien a3de4ecd97 Suppress debug statements 10 years ago
Victor Julien a8c16405fb detect: properly size det_ctx::non_mpm_id_array
Track which sgh has the highest non-mpm sig count and use that value
to size the det_ctx::non_mpm_id_array array.
10 years ago
Victor Julien 62751c8017 Fix live reload detect thread ctx setup
Code failed to set up non_mpm_id_array in case of a live reload.
10 years ago
Victor Julien 4e98a3e530 AC: fix memory leak 10 years ago
Victor Julien f88405c650 geoip: adapt to 'const' pointer passing 10 years ago
Victor Julien f1f5428faa detect: expand mask checking
Change mask to u16, and add checks for various protocol states
that need to be present for a rule to be considered.
10 years ago
Victor Julien ca59eabca3 detect: introduce DetectPrefilterBuildNonMpmList
Move the building of the non-mpm list into a separate function that is
inlined for performance reasons.
10 years ago
Victor Julien cc4f7a4b96 detect: add profiling for non-mpm list build & filter 10 years ago
Victor Julien 4c10635dc1 detect: optimize non-mpm mask checking
Store id and mask in a single array of type SignatureNonMpmStore so
that both are loaded into the same cache line.
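
The layout described, approximately (SignatureMask being the u16 mask
from the commit above, SigIntId the u16 ID type from the commit below):

    /* id and mask adjacent: the mask test and, on success, the id
     * store touch the same cache line */
    typedef struct SignatureNonMpmStore_ {
        SigIntId id;
        SignatureMask mask;
    } SignatureNonMpmStore;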
10 years ago
Victor Julien b5a3127151 detect: add mask check prefilter for non mpm list
Add mask array for non_mpm sigs, so that we can exclude many sigs before
we merge sort.

Shows 50% fewer non-mpm sigs inspected on average.
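
A sketch of the filter pass, with assumed array and counter names:

    /* build the non-mpm candidate list, skipping sigs whose required
     * flags aren't all present in the packet's mask */
    uint32_t x = 0;
    for (uint32_t i = 0; i < sgh->non_mpm_store_cnt; i++) {
        const SignatureNonMpmStore *store = &sgh->non_mpm_store_array[i];
        if ((store->mask & mask) == store->mask)   /* all bits set? */
            det_ctx->non_mpm_id_array[x++] = store->id;
    }
    /* only the survivors go into the merge sort */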
10 years ago
Ken Steele 904441327c Conditionalize SigMatch performance counters.
Only include the counters when PROFILING.
10 years ago
Victor Julien 30b7fdcb49 Detect perf counters 10 years ago
Victor Julien ef6875d583 detect: Disable unused SignatureHeader code 10 years ago
Ken Steele 65af1f1c5e Remove sgh->mask_array
Not needed by the new MPM opt.
10 years ago
Ken Steele 4bd280f196 Indentation clean up 10 years ago
Ken Steele 403b5a4645 Further optimize merging mpm and non-mpm rule ID lists.
When reaching the end of either list, merging is no longer required;
simply walk down the other list.

If the non-MPM list can't have duplicates, it would be worth removing
the duplicate check for the non-MPM list when it is the only non-empty list
remaining.
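
The fast path in sketch form (array names assumed; the real code
merges sorted rule ID lists and also deduplicates, which is elided
here):

    /* regular merge while both sorted lists have entries left */
    while (m < mpm_cnt && n < nonmpm_cnt) {
        if (mpm_ids[m] < nonmpm_ids[n])
            out[o++] = mpm_ids[m++];
        else
            out[o++] = nonmpm_ids[n++];
    }
    /* fast path: one list is exhausted, so stop comparing and simply
     * walk down whichever list remains */
    while (m < mpm_cnt)
        out[o++] = mpm_ids[m++];
    while (n < nonmpm_cnt)
        out[o++] = nonmpm_ids[n++];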
10 years ago
Ken Steele 86f4c6c47b Custom Quick Sort for Signature IDs
Use an in-place quick sort instead of qsort(), which does a merge sort
and calls memcpy().

Improves performance in my tests.
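
An in-place Hoare-partition quicksort over SigIntId values has this
classic shape (a generic sketch, not the committed code):

    #include <stdint.h>
    typedef uint16_t SigIntId;       /* per the commit below */

    static void SigIdQSort(SigIntId *a, int lo, int hi)
    {
        if (lo >= hi)
            return;
        const SigIntId p = a[lo + (hi - lo) / 2];      /* middle pivot */
        int i = lo - 1, j = hi + 1;
        for (;;) {
            do { i++; } while (a[i] < p);
            do { j--; } while (a[j] > p);
            if (i >= j)
                break;
            SigIntId t = a[i]; a[i] = a[j]; a[j] = t;  /* swap in place */
        }
        SigIdQSort(a, lo, j);     /* no compare callback, no memcpy() */
        SigIdQSort(a, j + 1, hi);
    }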
10 years ago
Ken Steele 736ac6a459 Use SigIntId as the type for storing signature IDs (Internal)
Previously this used uint32_t, but SigIntId is currently uint16_t, so
the arrays will take less memory.
10 years ago
Ken Steele d01d3324fc Increase max pattern ID allowed in MPM AC-tile to 28-bits 10 years ago
Victor Julien 6717c356e3 Clean up sm_array memory at SigFree 10 years ago
Ken Steele 1874784c10 Create optimized sig_arrays from sig_lists
Create a copy of the SigMatch data in the sig_lists linked lists and
store it in an array for faster access, without the next and previous
pointers. The array is then used when calling the Match() functions.

Gives a 7.7% speed up on one test.
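
The flattening step, sketched with a hypothetical element type (the
committed struct differs in detail):

    /* hypothetical flattened element: only what Match() consumes */
    typedef struct SigMatchArrayElmt_ {
        uint8_t type;            /* match keyword type */
        SigMatchCtx *ctx;        /* context handed to Match() */
    } SigMatchArrayElmt;

    /* setup time: count, allocate, copy the linked list into the
     * array (error handling omitted) */
    int cnt = 0;
    for (SigMatch *sm = list; sm != NULL; sm = sm->next)
        cnt++;
    SigMatchArrayElmt *arr = SCMalloc(cnt * sizeof(*arr));
    int i = 0;
    for (SigMatch *sm = list; sm != NULL; sm = sm->next, i++) {
        arr[i].type = sm->type;
        arr[i].ctx = sm->ctx;
    }
    /* match time: walk arr[0..cnt-1] linearly, no pointer chasing */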
10 years ago
Ken Steele 923a77e952 Change Match() function to take const SigMatchCtx*
The Match functions don't need a pointer to the SigMatch object, just
the context pointer contained inside, so pass the context to the Match
function rather than the SigMatch object. This allows for further
optimization.

Change SigMatch->ctx to have type SigMatchCtx* rather than void* for better
type checking. This requires adding type casts when using or assigning it.

The SigMatch context should not be changed by the Match() function, so
pass it as a const SigMatchCtx*.
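
Roughly, the prototype change looks like this (parameter lists
condensed):

    /* before: the whole SigMatch, mutable */
    int (*Match)(ThreadVars *, DetectEngineThreadCtx *, Packet *,
                 Signature *, SigMatch *);

    /* after: just the context, and const so Match() can't modify it */
    int (*Match)(ThreadVars *, DetectEngineThreadCtx *, Packet *,
                 Signature *, const SigMatchCtx *);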
10 years ago
Ken Steele 900def5caf Create Specialized SCMemcmpNZ() when the length can't be zero. 10 years ago
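
A plausible scalar shape for the SCMemcmpNZ() specialization above
(the real SCMemcmp family is SIMD-optimized; by its convention 0 means
match, 1 mismatch):

    #include <stddef.h>
    #include <stdint.h>

    static inline int SCMemcmpNZ(const void *s1, const void *s2, size_t len)
    {
        const uint8_t *a = s1, *b = s2;
        /* the caller guarantees len > 0, so the zero-length guard is
         * dropped and the body runs at least once */
        do {
            if (*a++ != *b++)
                return 1;
        } while (--len != 0);
        return 0;
    }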
Ken Steele 7835070385 Replace memcpy() in MpmAddSids with copy loop
Most sids lists are short, so a straight copy loop is faster.
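
In sketch form, with assumed queue and field names:

    /* before */
    memcpy(pmq->rule_id_array + pmq->rule_id_array_cnt,
           sids, cnt * sizeof(SigIntId));
    pmq->rule_id_array_cnt += cnt;

    /* after: for the few entries typical here, a plain loop avoids
     * memcpy()'s call and setup overhead */
    SigIntId *dst = pmq->rule_id_array + pmq->rule_id_array_cnt;
    for (uint32_t i = 0; i < cnt; i++)
        dst[i] = sids[i];
    pmq->rule_id_array_cnt += cnt;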
10 years ago
Ken Steele 83ed01a279 Fix compiler warnings in ac-tile.
Signed vs unsigned comparisons.
10 years ago
Ken Steele 1c76fa50b1 Prefetch the next signature pointer
Read one signature pointer ahead to prefetch the value.
Use a variable, sflags, for s->flags, since it is used many times and
the compiler doesn't know that the signature structure doesn't change,
so it would otherwise reload s->flags.
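
The loop shape this describes, sketched with GCC's prefetch builtin
and placeholder flag names:

    for (uint32_t i = 0; i < match_cnt; i++) {
        Signature *s = match_array[i];
        if (i + 1 < match_cnt)
            __builtin_prefetch(match_array[i + 1]); /* read one ahead */
        const uint32_t sflags = s->flags; /* load once: the compiler
                                             can't prove s->flags is
                                             unchanged and would reload
                                             it on every use */
        if (sflags & SIG_FLAG_X) {        /* placeholder flag tests */
            /* ... */
        }
    }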
10 years ago
Ken Steele fa51118dfe Move type first in SigMatch array since it is used more often. 10 years ago
Ken Steele 7a2095d851 In AC-Tile, convert from using pids for indexing to pattern index
Use an MPM-specific pattern index, which is simply an index starting
at zero and incremented for each pattern added to the MPM, rather than
the externally provided Pattern ID (pid), since that can be much
larger than the number of patterns. The Pattern ID is shared across all
MPMs. For example, an MPM with one pattern with pid=8000 would result
in a max_pid of 8000, so the pid_pat_list would have 8000 entries.

The pid_pat_list[] is replaced by an array of pattern indexes. The PID
is moved to the SCACTilePatternList as a single value. The PatternList
is also indexed by the Pattern Index.

max_pat_id is no longer needed and mpm_ctx->pattern_cnt is used instead.

The local bitarray is then also indexed by pattern index instead of PID, making
it much smaller. The local bit array sets a bit for each pattern found
for this MPM. It is only kept during one MPM search (stack allocated).

One note: the local bit array is checked first, and if the pattern has
already been found it will stop checking, but still count a match. This
could result in over-counting matches of case-sensitive patterns, since
following case-insensitive matches will also be counted. For example,
finding "Foo" in "foo Foo foo" would report finding "Foo" 2 times,
miscounting the third word as "Foo".
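
In sketch form, the per-search bit array sized by the dense pattern
index rather than by the sparse pid:

    /* stack allocated: one bit per pattern in THIS mpm context,
     * pattern_cnt bits instead of max_pid bits */
    uint8_t bitarray[(mpm_ctx->pattern_cnt + 7) / 8];
    memset(bitarray, 0, sizeof(bitarray));

    /* on an automaton hit for dense pattern index pix */
    if (bitarray[pix / 8] & (1 << (pix % 8))) {
        matches++;        /* already seen: count but skip re-checking
                             (the over-count case noted above) */
    } else {
        bitarray[pix / 8] |= (1 << (pix % 8));
        /* ... verify the match, map pix back to its pid via the
         * pattern list ... */
    }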
10 years ago