To support the 'eve-log' idea, we need to be able to force all log
modules to be enabled by the master eve-log module, and need to be
able to make all logs go into a single file. This didn't fit the
existing API, so this patch adds the sub-module concept.
A sub-module is a regular module that registers itself as a
sub-module of another module:
  OutputRegisterTxSubModule("eve-log", "JsonHttpLog", "http",
      OutputHttpLogInitSub, ALPROTO_HTTP, JsonHttpLogger);
The first argument is the name of the parent. The 4th argument is
the OutputCtx init function. It differs slightly from the non-sub
one: in addition to its ConfNode, it gets the OutputCtx from the
parent. This way it can set the parent's LogFileCtx in its own
OutputCtx.
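A minimal sketch of what such a sub-module init function could look
like (a rough illustration; the OutputJsonCtx/file_ctx names and the
exact error handling are assumptions, not the actual implementation):

  static OutputCtx *OutputHttpLogInitSub(ConfNode *conf, OutputCtx *parent_ctx)
  {
      /* the parent's data is assumed to hold the shared LogFileCtx */
      OutputJsonCtx *json_ctx = parent_ctx->data;

      OutputCtx *output_ctx = SCCalloc(1, sizeof(OutputCtx));
      if (output_ctx == NULL)
          return NULL;

      /* reuse the parent's LogFileCtx instead of opening our own file */
      output_ctx->data = json_ctx->file_ctx;
      output_ctx->DeInit = NULL; /* parent owns and closes the file ctx */
      return output_ctx;
  }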
The runmode setup code will take care of all the extra setup. It's
possible to register a module both as a normal module and as a
sub-module, and both can operate at the same time.
Only the TxLogger API is handled in this patch; the rest will be
updated later.
- restore json output-file.[ch] as output-json-file.[ch] after rebase conflict
- fix Makefile.am after merge conflict
- some dev-log-api-v4.0 rebase json fallout cleanup
The stream chunk pool contains preallocated stream chunks (StreamMsg).
These are used for raw reassembly, which is used in raw content
inspection by the detection engine. The default setting so far has
been 250, which was hardcoded. This meant that in setups that needed
more, allocs and frees would happen constantly.
This patch introduces a yaml option to set the 'prealloc' value in the
pool. The default is still 250.
stream.reassembly.chunk-prealloc
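In the yaml this would sit under the existing stream.reassembly
section, for example:

  stream:
    reassembly:
      chunk-prealloc: 250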
Related to feature #1093.
The stream reassembly engine uses a set of pools in which preallocated
segments are stored. There are various pools, each with a different
packet size. The goal is to lower memory pressure. Until now, these
pools were hardcoded.
This patch introduces the ability to configure them fully from the yaml.
There can be at most 256 of these pools.
Yaml layout is as follows:
  stream:
    reassembly:
      segments:
        - size: 2048
          prealloc: 3000
        - size: 4
          prealloc: 1000
        - size: 1024
          prealloc: 2000
The size is the packet size. The prealloc value indicates how many
segments are set up at startup.
The pools have no limit with regard to how many segments of a certain
size can be used. If the engine needs more than the prealloc size,
segments are malloc'd and free'd. The only limit here is the
stream.reassembly.memcap.
If the yaml part is omitted, the default values are the same as before.
Feature #1093
A new logger API for registering file storage handlers. Where the
FileLog handler is called once per file, this handler will be called
for each data chunk so that storing the entire file is possible.
The logger call in the API is as follows:
  typedef int (*FiledataLogger)(ThreadVars *, void *thread_data,
          const Packet *, const File *, const FileData *, uint8_t flags);
All data is const and should be treated as read only. The final flags
field is used to indicate that the file is new, or that it's being
closed.
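A logger built on this API could look roughly as follows (the
OUTPUT_FILEDATA_FLAG_* names and the FileData members mentioned in the
comments are illustrative assumptions):

  static int ExampleFiledataLogger(ThreadVars *tv, void *thread_data,
          const Packet *p, const File *ff, const FileData *ffd, uint8_t flags)
  {
      if (flags & OUTPUT_FILEDATA_FLAG_OPEN) {
          /* file is new: open the target on disk, e.g. named after file_id */
      }

      /* write this data chunk (e.g. ffd->data / ffd->len); all input is
       * const and must be treated as read only */

      if (flags & OUTPUT_FILEDATA_FLAG_CLOSE) {
          /* file is being closed: finish and close the target */
      }
      return 0;
  }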
Files use an internal unique id 'file_id' which can be used by the
loggers to create unique file names. This id can use the 'waldo'
feature of the log-filestore module. This patch moves that waldo
loading and storing logic to this API's implementation. A new
configuration directive 'file-store-waldo: <filename>' is added,
but the existing waldo settings will also continue to work.
The new API call:
  int AppLayerParserProtocolHasLogger(uint8_t ipproto,
          AppProto alproto)
Returns TRUE if a logger is registered on the ip/alproto pair, and
FALSE otherwise.
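For example, a runmode or detect setup path could use it like this
(illustrative usage, not taken from the patch):

  /* only enable tx logging for HTTP over TCP if a logger is registered */
  if (AppLayerParserProtocolHasLogger(IPPROTO_TCP, ALPROTO_HTTP)) {
      /* set up transaction logging for this protocol */
  }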
This patch introduces a new logging API for logging extracted file
info. It allows for registration of a callback that is called once per
file, when it's considered 'closed'.
Users of this API register their Log Function through:
OutputRegisterFileModule()
The API uses the magic settings globally. This might be changed later.
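A registration could look roughly like this (the FileLogger signature
and the argument order of OutputRegisterFileModule are assumptions
based on the Tx variant shown earlier; OutputFileLogInit and the
"file-json-log" conf name are hypothetical):

  static int JsonFileLogger(ThreadVars *tv, void *thread_data,
          const Packet *p, const File *ff)
  {
      /* called once per file, when the file is considered 'closed' */
      return 0;
  }

  void JsonFileLogRegister(void)
  {
      OutputRegisterFileModule("JsonFileLog", "file-json-log",
          OutputFileLogInit, JsonFileLogger);
  }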