Suricata is a network Intrusion Detection System, Intrusion Prevention System and Network Security Monitoring engine developed by the OISF and the Suricata community.
 
Latest commit by Victor Julien (6f560144c1): time: improve offline time handling
When we run on live traffic, time handling is simple. Packets have a
timestamp set by the capture method. Management threads can simply
use 'gettimeofday' to know the current time. There should never be
any serious gap between the two or major differences between the
threads.
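For illustration, in live mode the notion of "current time" can come straight from the system clock. A minimal sketch (hypothetical helper, not Suricata's actual code):

```c
#include <stddef.h>
#include <sys/time.h>

/* Hypothetical helper: in live mode the packet timestamps set by the
 * capture method track wall-clock time closely, so a management thread
 * can simply read the system clock for its notion of "now". */
static void CurrentTimeLive(struct timeval *tv)
{
    gettimeofday(tv, NULL);
}
```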

In offline mode, things are dramatically different. Here we try to keep
the time from the pcap, which means that if the packets are recorded in
2011 the log output should also reflect this. Multiple issues:

 1. merged pcaps might have huge time jumps or time going backward
 2. slowly recorded pcaps may be processed much faster than their
    'realtime'
 3. management threads need a concept of what the 'current' time is for
    enforcing timeouts
 4. due to (1) individual threads may have very different views on what
    the current time is. E.g. T1 processed packet 1 with TS X, while T2
    at the very same time processes packet 2 with TS X+100000s.

The changes in flow handling make the problems worse. The capture thread
no longer handles the flow lookup, yet it was still the one setting the global
'time'.
This meant that a thread may be working on Packet 1 with TS 1, while the
capture thread already saw packet 2 with TS 10000. Management threads
would take TS 10000 as the 'current time', considering a flow created by
the first thread as timed out immediately.

This was less of a problem before the flow changes as the capture thread
would also create a flow reference for a packet, meaning the flow
couldn't time out as easily. Packets in the queues between capture
thread and workers would all hold such references.

The patch updates the time handling to be as follows.

In offline mode we keep the timestamp per thread. If a management thread
needs the current time, it will take the minimum of the threads' values. This
avoids the problem that T2's time value might already trigger a flow
timeout, as the flow's lastts + 100000s would almost certainly mean the
flow is considered timed out.
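A minimal sketch of this scheme (invented names and structures, not Suricata's actual internals; locking omitted for brevity):

```c
#include <sys/time.h>

#define MAX_THREADS 64

static struct timeval thread_ts[MAX_THREADS]; /* last packet timestamp seen per packet thread */
static int thread_cnt;                        /* number of registered packet threads */

/* Packet threads record the timestamp of each packet they process. */
static void ThreadSetPacketTime(int tid, const struct timeval *pkt_ts)
{
    thread_ts[tid] = *pkt_ts;
}

static int TimevalEarlier(const struct timeval *a, const struct timeval *b)
{
    return (a->tv_sec < b->tv_sec) ||
           (a->tv_sec == b->tv_sec && a->tv_usec < b->tv_usec);
}

/* Management threads take the minimum of the per-thread values, so a
 * thread that is far ahead in the pcap cannot cause flows handled by a
 * slower thread to be timed out prematurely. */
static void GetOfflineTime(struct timeval *tv)
{
    struct timeval min = thread_ts[0];
    for (int i = 1; i < thread_cnt; i++) {
        if (TimevalEarlier(&thread_ts[i], &min))
            min = thread_ts[i];
    }
    *tv = min;
}
```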
| Path | Last commit | Age |
| --- | --- | --- |
| benches | | |
| contrib | suri-graphite: add ouput to file option | 10 years ago |
| doc | Fix make distcheck on CentOS 5.11 | 11 years ago |
| lua | output-lua: add SCPacketTimeString | 11 years ago |
| m4 | Prelude plugin: add detection in configure script | 16 years ago |
| qa | qa: update drmemory suppressions for hyperscan spm matching | 9 years ago |
| rules | rules: add rules for TLS SNI app layer events | 10 years ago |
| scripts | app-layer setup scripts: enable new modules on copy | 10 years ago |
| src | time: improve offline time handling | 9 years ago |
| .gitignore | unittest: make check use a qa/log dir for logging | 12 years ago |
| .travis.yml | travis: set CFLAGS to error on cc warnings | 10 years ago |
| COPYING | GPL license sync with official gpl-2.0.txt | 10 years ago |
| ChangeLog | Update Changelog for 3.0.1 | 9 years ago |
| LICENSE | GPL license sync with official gpl-2.0.txt | 10 years ago |
| Makefile.am | build: install app-layer-events.rules | 10 years ago |
| Makefile.cvs | | |
| README.md | readme: initial readme for github | 9 years ago |
| acsite.m4 | Added C99 defs/macros to acsite.m4 for CentOS | 16 years ago |
| autogen.sh | OpenBSD 5.2 build fixes, Unit test fix. | 13 years ago |
| classification.config | Import of classification.config | 16 years ago |
| config.rpath | Add file needed for some autotools version. | 12 years ago |
| configure.ac | typos: surictsc -> suricatasc | 9 years ago |
| doxygen.cfg | doxygen: define UNITTESTS to generate test framework docs | 9 years ago |
| reference.config | Update reference.config | 11 years ago |
| suricata.yaml.in | spm: add "spm-algo: auto" setting | 9 years ago |
| threshold.config | threshold: improve comments of shipped threshold.config, add links to wiki. | 13 years ago |

README.md

Suricata
========

Introduction

Suricata is a network IDS, IPS and NSM engine.

Installation

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Installation

User Guide

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_User_Guide

Contributing

We happily accept patches and other contributions. Please see https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Contributing for how to get started.

Suricata is a complex piece of software dealing with mostly untrusted input. Mishandling this input will have serious consequences: in IPS mode a crash may knock a network offline; in passive mode a compromise of the IDS may lead to loss of critical and confidential data; missed detection may lead to undetected compromise of the network. In other words, we think the stakes are pretty high, especially since in many common cases the IDS/IPS will be directly reachable by an attacker.

For this reason, we have developed a QA process that is quite extensive. A consequence is that contributing to Suricata can be a somewhat lengthy process.

On a high level, the steps are:

  1. Travis-CI based build & unit testing. This runs automatically when a pull request is made.

  2. Review by devs from the team and community

  3. QA runs

Overview of Suricata's QA steps

Trusted devs and core team members are able to submit builds to our (semi-)public Buildbot instance. It will run a series of build tests and a regression suite to confirm that no existing features break.

The final QA run takes at least a few hours and is started by Victor. It currently runs:

  • extensive build tests on different OSes, compilers, optimization levels, and configure features
  • static code analysis using cppcheck, scan-build
  • runtime code analysis using valgrind, DrMemory, AddressSanitizer, LeakSanitizer
  • regression tests for past bugs
  • output validation of logging
  • unix socket testing
  • pcap based fuzz testing using ASAN and LSAN

In addition to these tests, further tests can be run manually depending on the type of code change:

  • traffic replay testing (multi-gigabit)
  • large pcap collection processing (multi-terabytes)
  • AFL based fuzz testing (might take multiple days or even weeks)
  • pcap based performance testing
  • live performance testing
  • various other manual tests based on evaluation of the proposed changes

It's important to realize that almost all of the tests above are used as acceptance tests. If something fails, it's up to you to address this in your code.

One step of the QA is currently run post-merge: we submit builds to the Coverity Scan program. Due to limitations of this (free) service, we can submit at most once a day. Of course it can also happen that the community finds issues after the merge. In both cases we ask that you help address the issues as they come up.

FAQ

Q: Will you accept my PR?

A: That depends on a number of things, including the code quality. With new features it also depends on whether the team and/or the community think the feature is useful, how much it affects other code and features, the risk of performance regressions, etc.

Q: When will my PR be merged?

A: It depends. If it's a major feature or is considered a high-risk change, it will probably go into the next major version.

Q: Why was my PR closed?

A: As documented in the Suricata GitHub workflow (https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Github_work_flow), we expect a new pull request for every change.

Normally, the team (or community) will give feedback on a pull request, after which it is expected to be replaced by an improved PR. So look at the comments. If you disagree with the comments, we can still discuss them in the closed PR.

If the PR was closed without comments, it's likely due to a QA failure. If the Travis-CI check failed, the PR should be fixed right away. There is no need for a discussion about it, unless you believe the QA failure is incorrect.

Q: The compiler/code analyser/tool is wrong, what now?

A: To keep the QA automated, we do not accept warnings or errors being left in place. In some cases this means adding a suppression if the tool supports it (e.g. valgrind, DrMemory). Some warnings can be disabled. In exceptional cases the only 'solution' is to refactor the code to work around a false positive caused by a limitation of a static code checker. While frustrating, we prefer this over leaving warnings in the output: warnings tend to get ignored, which increases the risk of other warnings being hidden.

Q: I think your QA test is wrong

A: If you really think it is, we can discuss how to improve it. But don't come to this conclusion too quickly; more often than not it's the code that turns out to be wrong.

Q: Do you require signing of a contributor license agreement?

A: Yes, we do this to keep the ownership of Suricata with a single entity: the Open Information Security Foundation. See http://suricata-ids.org/about/open-source/ and http://suricata-ids.org/about/contribution-agreement/