suricata/src/tm-threads.h

/* Copyright (C) 2007-2011 Open Information Security Foundation
 *
 * You can copy, redistribute or modify this Program under the terms of
 * the GNU General Public License version 2 as published by the Free
 * Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * version 2 along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 * 02110-1301, USA.
 */

/**
 * \file
 *
 * \author Victor Julien <victor@inliniac.net>
 * \author Anoop Saldanha <anoopsaldanha@gmail.com>
 */
#ifndef __TM_THREADS_H__
#define __TM_THREADS_H__
#include "tmqh-packetpool.h"
#include "tm-threads-common.h"
#include "tm-modules.h"
#ifdef OS_WIN32
static inline void SleepUsec(uint64_t usec)
{
    uint64_t msec = 1;
    if (usec > 1000) {
        msec = usec / 1000;
    }
    Sleep(msec);
}
#define SleepMsec(msec) Sleep((msec))
#else
#define SleepUsec(usec) usleep((usec))
#define SleepMsec(msec) usleep((msec) * 1000)
#endif
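
/* Usage sketch: both helpers take an integer duration. On Windows SleepUsec()
 * rounds any sub-millisecond value up to 1 ms, since Sleep() only offers
 * millisecond resolution; elsewhere both map directly onto usleep().
 *
 *   SleepUsec(100);   // wait ~100 usec (at least 1 ms on Windows)
 *   SleepMsec(10);    // wait ~10 msec
 */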
#define TM_QUEUE_NAME_MAX 16
#define TM_THREAD_NAME_MAX 16
typedef TmEcode (*TmSlotFunc)(ThreadVars *, Packet *, void *, PacketQueue *);
typedef struct TmSlot_ {
    /* function pointers */
    union {
        TmSlotFunc SlotFunc;
        TmEcode (*PktAcqLoop)(ThreadVars *, void *, void *);
        TmEcode (*Management)(ThreadVars *, void *);
    };

    /** linked list of slots, used when a pipeline has multiple slots
     *  in a single thread. */
    struct TmSlot_ *slot_next;

    TmEcode (*SlotThreadInit)(ThreadVars *, const void *, void **);
    void (*SlotThreadExitPrintStats)(ThreadVars *, void *);
    TmEcode (*SlotThreadDeinit)(ThreadVars *, void *);

    /* data storage */
    const void *slot_initdata;
    SC_ATOMIC_DECLARE(void *, slot_data);

    /* queue filled by the SlotFunc with packets that will
     * be processed further _before_ the current packet.
     * The locks in the queue are NOT used. */
    PacketQueue slot_pre_pq;

    /* store the thread module id */
    int tm_id;
} TmSlot;
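
/* Sketch of how slots chain into a pipeline (module names below are just
 * examples): each TmSlot holds the callbacks of one thread module, and
 * slot_next links the modules that run inside the same thread, in order.
 *
 *   thread -> [receive] -slot_next-> [decode] -slot_next-> [flowworker] -> NULL
 *
 * TmSlotSetFuncAppend() below is the call that appends a module's slot to a
 * thread's slot list.
 */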
extern ThreadVars *tv_root[TVT_MAX];
extern SCMutex tv_root_lock;
void TmSlotSetFuncAppend(ThreadVars *, TmModule *, const void *);
TmSlot *TmSlotGetSlotForTM(int);
ThreadVars *TmThreadCreate(const char *, const char *, const char *, const char *, const char *, const char *,
                           void *(fn_p)(void *), int);
ThreadVars *TmThreadCreatePacketHandler(const char *, const char *, const char *, const char *, const char *,
                                        const char *);
ThreadVars *TmThreadCreateMgmtThread(const char *name, void *(fn_p)(void *), int);
ThreadVars *TmThreadCreateMgmtThreadByName(const char *name, const char *module,
                                           int mucond);
ThreadVars *TmThreadCreateCmdThreadByName(const char *name, const char *module,
                                          int mucond);
TmEcode TmThreadSpawn(ThreadVars *);
void TmThreadSetFlags(ThreadVars *, uint8_t);
void TmThreadKillThreadsFamily(int family);
void TmThreadKillThreads(void);
void TmThreadClearThreadsFamily(int family);
void TmThreadAppend(ThreadVars *, int);
void TmThreadSetGroupName(ThreadVars *tv, const char *name);
void TmThreadDumpThreads(void);
TmEcode TmThreadSetCPUAffinity(ThreadVars *, uint16_t);
TmEcode TmThreadSetThreadPriority(ThreadVars *, int);
TmEcode TmThreadSetCPU(ThreadVars *, uint8_t);
TmEcode TmThreadSetupOptions(ThreadVars *);
void TmThreadSetPrio(ThreadVars *);
int TmThreadGetNbThreads(uint8_t type);
void TmThreadInitMC(ThreadVars *);
void TmThreadTestThreadUnPaused(ThreadVars *);
void TmThreadContinue(ThreadVars *);
void TmThreadContinueThreads(void);
void TmThreadPause(ThreadVars *);
void TmThreadPauseThreads(void);
void TmThreadCheckThreadState(void);
TmEcode TmThreadWaitOnThreadInit(void);
ThreadVars *TmThreadsGetCallingThread(void);
int TmThreadsCheckFlag(ThreadVars *, uint32_t);
void TmThreadsSetFlag(ThreadVars *, uint32_t);
void TmThreadsUnsetFlag(ThreadVars *, uint32_t);
void TmThreadWaitForFlag(ThreadVars *, uint32_t);
TmEcode TmThreadsSlotVarRun (ThreadVars *tv, Packet *p, TmSlot *slot);
ThreadVars *TmThreadsGetTVContainingSlot(TmSlot *);
void TmThreadDisablePacketThreads(void);
void TmThreadDisableReceiveThreads(void);
TmSlot *TmThreadGetFirstTmSlotForPartialPattern(const char *);
uint32_t TmThreadCountThreadsByTmmFlags(uint8_t flags);
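
/* Rough lifecycle sketch for a packet handler thread built with this API
 * (thread, queue and module names are placeholders; error handling omitted;
 * TmModuleGetByName() is assumed to come from tm-modules.h):
 *
 *   ThreadVars *tv = TmThreadCreatePacketHandler("W#01",
 *           "packetpool", "packetpool", "packetpool", "packetpool",
 *           "pktacqloop");
 *   TmModule *tm = TmModuleGetByName("FlowWorker");
 *   TmSlotSetFuncAppend(tv, tm, NULL);
 *   if (TmThreadSpawn(tv) != TM_ECODE_OK) {
 *       // thread could not be started, abort startup
 *   }
 */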
static inline void TmThreadsSlotProcessPktFail(ThreadVars *tv, TmSlot *s, Packet *p)
{
    if (p != NULL) {
        TmqhOutputPacketpool(tv, p);
    }
    TmqhReleasePacketsToPacketPool(&s->slot_pre_pq);
    if (tv->stream_pq_local) {
        SCMutexLock(&tv->stream_pq_local->mutex_q);
        TmqhReleasePacketsToPacketPool(tv->stream_pq_local);
        SCMutexUnlock(&tv->stream_pq_local->mutex_q);
    }
    TmThreadsSetFlag(tv, THV_FAILED);
}
/**
 *  \brief Handle timeout from the capture layer. Checks
 *         stream_pq which may have been filled by the flow
 *         manager.
 *  \param tv thread vars holding the pipeline to run on these packets.
 */
static inline void TmThreadsHandleInjectedPackets(ThreadVars *tv)
{
    PacketQueue *pq = tv->stream_pq_local;
    if (pq && pq->len > 0) {
        while (1) {
            SCMutexLock(&pq->mutex_q);
            Packet *extra_p = PacketDequeue(pq);
            SCMutexUnlock(&pq->mutex_q);
            if (extra_p == NULL)
                break;
            TmEcode r = TmThreadsSlotVarRun(tv, extra_p, tv->tm_flowworker);
            if (r == TM_ECODE_FAILED) {
                TmThreadsSlotProcessPktFail(tv, tv->tm_flowworker, extra_p);
                break;
            }
            tv->tmqh_out(tv, extra_p);
        }
    }
}
/**
 *  \brief Process the rest of the functions (if any) and queue.
 */
static inline TmEcode TmThreadsSlotProcessPkt(ThreadVars *tv, TmSlot *s, Packet *p)
{
    if (s == NULL) {
        tv->tmqh_out(tv, p);
        return TM_ECODE_OK;
    }

    TmEcode r = TmThreadsSlotVarRun(tv, p, s);
    if (unlikely(r == TM_ECODE_FAILED)) {
        TmThreadsSlotProcessPktFail(tv, s, p);
        return TM_ECODE_FAILED;
    }

    tv->tmqh_out(tv, p);

    TmThreadsHandleInjectedPackets(tv);
    return TM_ECODE_OK;
}
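
/* Sketch of the call pattern a capture thread uses for a freshly acquired
 * packet 'p' (the acquisition loop itself is assumed, not defined here; the
 * flowworker slot is used as in the inject helper below):
 *
 *   if (TmThreadsSlotProcessPkt(tv, tv->tm_flowworker, p) != TM_ECODE_OK) {
 *       // the thread is now flagged THV_FAILED and the packet has been
 *       // returned to the packet pool; stop acquiring packets.
 *       break;
 *   }
 */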
/** \brief Inject a packet if THV_CAPTURE_INJECT_PKT is set.
 *  Allows the caller to supply their own packet.
 *
 *  Meant for the detect reload process that interrupts a sleeping capture
 *  thread to force a packet through the engine to complete a reload. */
static inline void TmThreadsCaptureInjectPacket(ThreadVars *tv, Packet *p)
{
    TmThreadsUnsetFlag(tv, THV_CAPTURE_INJECT_PKT);
    if (p == NULL)
        p = PacketGetFromQueueOrAlloc();
    if (p != NULL) {
        p->flags |= PKT_PSEUDO_STREAM_END;
        PKT_SET_SRC(p, PKT_SRC_CAPTURE_TIMEOUT);
        if (TmThreadsSlotProcessPkt(tv, tv->tm_flowworker, p) != TM_ECODE_OK) {
            TmqhOutputPacketpool(tv, p);
        }
    }
}
static inline void TmThreadsCaptureHandleTimeout(ThreadVars *tv, Packet *p)
{
    if (TmThreadsCheckFlag(tv, THV_CAPTURE_INJECT_PKT)) {
        TmThreadsCaptureInjectPacket(tv, p);
    } else {
        TmThreadsHandleInjectedPackets(tv);

        /* a packet could have been passed to us that we won't use;
         * return it to the pool. */
        if (p != NULL)
            tv->tmqh_out(tv, p);
    }
}
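
/* Sketch of where the timeout handler fits in a capture loop (the poll/read
 * step is an assumption, not defined in this header): when the capture method
 * times out without delivering a packet, injected and flow-manager packets
 * still need to be serviced.
 *
 *   if (read_timed_out) {
 *       TmThreadsCaptureHandleTimeout(tv, NULL);   // p may be NULL here
 *   }
 */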
void TmThreadsListThreads(void);
int TmThreadsRegisterThread(ThreadVars *tv, const int type);
void TmThreadsUnregisterThread(const int id);
int TmThreadsInjectPacketsById(Packet **, int id);
void TmThreadsSetThreadTimestamp(const int id, const struct timeval *ts);
void TmreadsGetMinimalTimestamp(struct timeval *ts);
#endif /* __TM_THREADS_H__ */