mirror of https://github.com/OISF/suricata
doc: fix sphinx warnings
This involved removing documents that were intentionally not referenced, as they are not good candidates for the user guide.
parent
3df7f97a33
commit
cd4c9e73f8
@ -1,239 +0,0 @@
Logstash Kibana and Suricata JSON output
========================================

With the release of Suricata 2.0rc1, Suricata introduces an all-JSON output capability.

What is JSON - http://en.wikipedia.org/wiki/JSON

One way to easily handle Suricata's JSON log outputs is through Kibana - http://kibana.org/ :

::

  Kibana is a highly scalable interface for Logstash (http://logstash.net/) and ElasticSearch (http://www.elasticsearch.org/) that allows you to efficiently search, graph, analyze and otherwise make sense of a mountain of logs.

The installation is a very simple/basic setup, with minor specifics for Ubuntu. You can be up and running, looking through the logs, in under 5 minutes.

The downloads can be found here - http://www.elasticsearch.org/overview/elkdownloads/

This is what you need to do.
Suricata
--------

Make sure your Suricata is compiled/installed with libjansson support enabled:

::

  $ suricata --build-info
  This is Suricata version 2.0 RELEASE
  Features: NFQ PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON
  ...
  libnss support: yes
  libnspr support: yes
  libjansson support: --> yes <--
  Prelude support: no
  PCRE jit: no
  libluajit: no
  libgeoip: yes
  Non-bundled htp: yes
  Old barnyard2 support: no
  CUDA enabled: no
  ...

If it isn't, check out the `Suricata Installation <https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_Installation>`_ page to install or compile Suricata for your distribution.

**NOTE:** You will need the **libjansson4** and **libjansson-dev** packages installed before compilation.
Configure suricata
------------------

In your suricata.yaml:

::

  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes  # force logging magic on all logged files
            force-md5: yes    # force logging of md5 checksums
        #- drop
        - ssh
        - smtp
        - flow
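Once Suricata is restarted with eve-log enabled, it is worth a quick sanity check that valid JSON is actually being written before wiring up the rest of the stack. A minimal sketch - the log path is an assumption, adjust it to your default-log-dir setting:

::

  # pretty-print the most recent event; clean output means valid JSON is flowing
  tail -n 1 /var/log/suricata/eve.json | python -m json.tool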
Install ELK (elasticsearch, logstash, kibana)
---------------------------------------------

First install the dependencies.

**NOTE:** ELK recommends running with Oracle Java - how to:

* http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-service.html#_installing_the_oracle_jdk

Otherwise you can install the openjdk:

::

  apt-get install apache2 openjdk-7-jdk openjdk-7-jre-headless

Then download and install the software.

Make sure you download the latest versions -

* http://www.elasticsearch.org/overview/elkdownloads/

The installation process is simple (for example):

::

  wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz
  wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.6.1.deb
  wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_1.5.3-1_all.deb

  tar -C /var/www/ -xzf kibana-3.0.0.tar.gz
  dpkg -i elasticsearch-1.6.1.deb
  dpkg -i logstash_1.5.3-1_all.deb
Logstash configuration
----------------------

Create and save a **logstash.conf** file in the /etc/logstash/conf.d/ directory:

::

  touch /etc/logstash/conf.d/logstash.conf

Insert the following (make sure the directory path is correct):

::

  input {
    file {
      path => ["/var/log/suricata/eve.json"]
      sincedb_path => ["/var/lib/logstash/"]
      codec => json
      type => "SuricataIDPS"
    }
  }

  filter {
    if [type] == "SuricataIDPS" {
      date {
        match => [ "timestamp", "ISO8601" ]
      }
      ruby {
        code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type']=event['fileinfo']['magic'].to_s.split(',')[0]; end;"
      }
    }

    if [src_ip] {
      geoip {
        source => "src_ip"
        target => "geoip"
        #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
      if ![geoip][ip] {
        if [dest_ip] {
          geoip {
            source => "dest_ip"
            target => "geoip"
            #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
          }
          mutate {
            convert => [ "[geoip][coordinates]", "float" ]
          }
        }
      }
    }
  }

  output {
    elasticsearch {
      host => localhost
      #protocol => http
    }
  }
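Before starting the services, it can save some head-scratching to let Logstash validate the file first. A hedged sketch, assuming the 1.5.x .deb install path of /opt/logstash and the --configtest flag of that release:

::

  # "Configuration OK" means the config parses cleanly
  /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf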
Configure the start-up services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

  update-rc.d elasticsearch defaults 95 10
  update-rc.d logstash defaults

  service apache2 restart
  service elasticsearch start
  service logstash start
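At this point Elasticsearch should answer on its default HTTP port and, once events arrive, Logstash will create daily indices. A quick check - 9200 is the default Elasticsearch port, and the _cat API exists in the Elasticsearch 1.x used above:

::

  # cluster status green/yellow means Elasticsearch is up
  curl 'http://localhost:9200/_cluster/health?pretty'
  # a logstash-YYYY.MM.DD index per day confirms events are being indexed
  curl 'http://localhost:9200/_cat/indices?v'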
Enjoy
-----

That's all. Now make sure Suricata is running and that you have logs written to your JSON log files, then point your browser towards:

::

  http://localhost/kibana-3.0.0
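If the page does not load, it is usually Apache rather than Kibana. A quick hedged check that the files are being served (the path matches the tar extraction into /var/www above):

::

  # expect an HTTP 200 if Apache is serving the Kibana tree
  curl -I http://localhost/kibana-3.0.0/index.html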
**NOTE:**
Some ready-to-use templates can be found here:

* https://github.com/pevma/Suricata-Logstash-Templates

From here on, if you would like to customize and familiarize yourself more with the interface, you should read the documentation about Kibana and Logstash.
Please keep in mind that this is a very quick (under 5 min) tutorial. You should customize it and review the proper way of running it as a service, and/or consider using an **HTTPS web interface and reverse proxy with some authentication**.

Some possible customizations of the output of Logstash and Kibana:
.. image:: elk/Logstash1.png

.. image:: elk/Logstash2.png

.. image:: elk/Logstash3.png

.. image:: elk/Logstash4.png

.. image:: elk/Logstash5.png

.. image:: elk/Logstash6.png

Peter Manev
@ -1,62 +0,0 @@
What to do with files-json.log output
=====================================

.. toctree::

   script-follow-json
   mysql
   postgresql
   useful-queries-for-mysql-and-postgresql
   mongodb
   elk

Suricata has the ability to produce the files-json.log output.
Basically this is a JSON style format output logfile, with entries like this:

::

  {
    "timestamp": "10\/01\/2012-16:52:59.217616",
    "ipver": 4,
    "srcip": "80.239.217.171",
    "dstip": "192.168.42.197",
    "protocol": 6,
    "sp": 80,
    "dp": 32982,
    "http_uri": "\/frameworks\/barlesque\/2.11.0\/desktop\/3.5\/style\/main.css",
    "http_host": "static.bbci.co.uk",
    "http_referer": "http:\/\/www.bbc.com\/",
    "filename": "\/frameworks\/barlesque\/2.11.0\/desktop\/3.5\/style\/main.css",
    "magic": "ASCII text, with very long lines, with no line terminators",
    "state": "CLOSED",
    "md5": "be7db5e9a4416a4123d556f389b7f4b8",
    "stored": false,
    "size": 29261
  }

for every single file that crossed your HTTP pipe.
This in general is very helpful and informative.
In this section we are going to explore and suggest approaches for putting it to actual use, since it can aggregate millions of entries in just a week.
There are a good few options in general, since the JSON style format is pretty common -
http://www.json.org/
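To get a feel for what has piled up before committing to a database, a quick one-off summary straight from the log can help. A minimal sketch in Python 2 (what Ubuntu 12.04 ships); the log path and the top-10 cut-off are just assumptions, and each log line is assumed to be one JSON object:

::

  python -c "
  import json, collections
  c = collections.Counter()
  for line in open('/var/log/suricata/files-json.log'):
      c[json.loads(line)['magic']] += 1
  # print the ten most common file types seen on the wire
  for magic, n in c.most_common(10):
      print '%8d  %s' % (n, magic)
  "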
This guide offers a couple of approaches -
use of a custom created script with MySQL or PostgreSQL import (bulk or continuous),
or importing it directly into MongoDB (native import of JSON files).

Please read all the pages before you jump into executing scripts and/or installing/configuring things.
The guide is written using Ubuntu LTS server 12.04.

There are 3 options in general that we suggest, and that we are going to explain here:

1. import JSON into MySQL
2. import JSON into PostgreSQL
3. import JSON into MongoDB

The suggested approach is:
configure suricata.yaml,
configure your database,
run the script (not applicable to MongoDB),
and then execute queries against the DB to get the big picture.

Peter Manev
@ -1,97 +0,0 @@
MongoDB
=======

If you do not have it installed, follow the instructions here:
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

Basically you do:

::

  sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
  echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
  sudo apt-get update && sudo apt-get install mongodb-10gen

The biggest benefit of MongoDB is that it can natively import json.log files:
if you have MongoDB installed, all you have to do is:

::

  mongoimport --db filejsondb --collection filejson --file files-json.log

where:

* --db filejsondb is the database,
* --collection filejson is the equivalent of an SQL "table",
* --file files-json.log is the JSON log created and logged into from Suricata.

Last but not least - it will automatically create the database and collection for you.

This would import a 5 GB (15 million entries) json log file in about 5-10 minutes - default configuration, without tuning MongoDB for high performance. (Your setup and HW will definitely have an effect on the import time.)
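A quick way to confirm the import worked is to count what landed in the collection. A small sketch using the mongo shell's --eval flag, with the database and collection names created above:

::

  # should report roughly as many documents as the log had lines
  mongo filejsondb --eval 'print(db.filejson.count())'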
MongoDB example queries (once you have imported the files-json.log as described above, just go ahead with these queries):

::

  db.filejson.group( { cond : {"magic":/.*PDF.*/ }, key: {"srcip":true,"http_host":true,"magic":true}, initial: {count: 0}, reduce: function(doc, out) { out.count += 1; } } );

::

  db.filejson.find({magic:/.*PDF.*/},{srcip:1,http_host:1,magic:1}).sort({srcip:1,http_host:1,magic:1}).limit(20)

Get a sorted table, biggest to smallest number of file downloads per host:

::

  > map = function () { emit({srcip:this.srcip,http_host:this.http_host,magic:this.magic}, {count:1}); }
  function () {
      emit({srcip:this.srcip, http_host:this.http_host, magic:this.magic}, {count:1});
  }
  > reduce = function(k, values) {var result = {count: 0}; values.forEach(function(value) { result.count += value.count; }); return result; }
  function (k, values) {
      var result = {count:0};
      values.forEach(function (value) {result.count += value.count;});
      return result;
  }
  > db.filejson.mapReduce(map,reduce,{out: "myoutput" });
  {
      "result" : "myoutput",
      "timeMillis" : 578806,
      "counts" : {
          "input" : 3110871,
          "emit" : 3110871,
          "reduce" : 673186,
          "output" : 219840
      },
      "ok" : 1
  }
  > db.myoutput.find().sort({'value.count':-1}).limit(10)
  { "_id" : { "srcip" : "184.107.x.x", "http_host" : "arexx.x", "magic" : "very short file (no magic)" }, "value" : { "count" : 42560 } }
  { "_id" : { "srcip" : "66.135.210.182", "http_host" : "www.ebay.co.uk", "magic" : "XML document text" }, "value" : { "count" : 30896 } }
  { "_id" : { "srcip" : "66.135.210.62", "http_host" : "www.ebay.co.uk", "magic" : "XML document text" }, "value" : { "count" : 27812 } }
  { "_id" : { "srcip" : "213.91.x.x", "http_host" : "www.focxxxx.x", "magic" : "HTML document, ISO-8859 text" }, "value" : { "count" : 26301 } }
  { "_id" : { "srcip" : "195.168.x.x", "http_host" : "search.etaxxx.x", "magic" : "JPEG image data, JFIF standard 1.01, comment: \"CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 100\"" }, "value" : { "count" : 16131 } }
  { "_id" : { "srcip" : "184.107.x.x", "http_host" : "p2p.arxx.x:2710", "magic" : "ASCII text, with no line terminators" }, "value" : { "count" : 15829 } }
  { "_id" : { "srcip" : "213.91.x.x", "http_host" : "www.focxx.x", "magic" : "HTML document, ISO-8859 text" }, "value" : { "count" : 14472 } }
  { "_id" : { "srcip" : "64.111.199.222", "http_host" : "syndication.exoclick.com", "magic" : "HTML document, ASCII text, with very long lines, with no line terminators" }, "value" : { "count" : 14009 } }
  { "_id" : { "srcip" : "69.171.242.70", "http_host" : "www.facebook.com", "magic" : "ASCII text, with no line terminators" }, "value" : { "count" : 13098 } }
  { "_id" : { "srcip" : "69.171.242.74", "http_host" : "www.facebook.com", "magic" : "ASCII text, with no line terminators" }, "value" : { "count" : 12801 } }
  >

Peter Manev
@ -1,36 +0,0 @@
MySQL
=====

If you do not have MySQL installed - go ahead and do so:

::

  sudo apt-get update && sudo apt-get upgrade
  sudo apt-get install mysql-server mysql-client

For MySQL, make sure you create a db and a table:

::

  mysql> create database filejsondb;
  mysql> create user 'filejson'@'localhost' IDENTIFIED BY 'PASSWORD123';
  Query OK, 0 rows affected (0.00 sec)
  mysql> grant all privileges on filejsondb.* to 'filejson'@'localhost' with grant option;
  mysql> flush privileges;
  mysql> use filejsondb;

  mysql> CREATE TABLE filejson( time_received VARCHAR(64), ipver VARCHAR(4), srcip VARCHAR(40), dstip VARCHAR(40), protocol SMALLINT UNSIGNED, sp SMALLINT UNSIGNED, dp SMALLINT UNSIGNED, http_uri TEXT, http_host TEXT, http_referer TEXT, filename TEXT, magic TEXT, state VARCHAR(32), md5 VARCHAR(32), stored VARCHAR(32), size BIGINT UNSIGNED);

  mysql> show columns from filejson;
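Since the example queries later on group and sort on srcip and http_host, a couple of indexes can make a noticeable difference once the table grows into the millions of rows. A hedged suggestion - the index names and prefix length are arbitrary, and a leading-wildcard LIKE on magic will still scan, but the GROUP BY columns benefit:

::

  mysql> CREATE INDEX idx_srcip ON filejson (srcip);
  mysql> CREATE INDEX idx_http_host ON filejson (http_host(64));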
OPTIONALLY - if you would like, you can add in the MD5 whitelist table and import the data as described here: :ref:`FileMD5 and white/black listing with md5 <filemd5-listing>`

Now you can go ahead and execute the script - :ref:`Script FollowJSON <script-follow-json>`

Peter Manev
@ -1,79 +0,0 @@
PostgreSQL
==========

If you do not have PostgreSQL installed:

::

  sudo apt-get update && sudo apt-get upgrade
  sudo apt-get install postgresql

::

  sudo vim /etc/postgresql/9.1/main/pg_hba.conf

change the line:

::

  local   all   all   trust

to

::

  local   all   all   md5
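Changes to pg_hba.conf only take effect once PostgreSQL re-reads its configuration, a step worth making explicit here (the service name matches the Ubuntu 12.04 packaging this guide assumes):

::

  sudo service postgresql reload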
Log in and change passwords:

::

  sudo -u postgres psql postgres
  \password postgres

Then -

::

  create database filejsondb;
  \c filejsondb;
  create user filejson with password 'PASSWORD123';
  CREATE TABLE filejson( time_received VARCHAR(64), ipver VARCHAR(4), srcip VARCHAR(40), dstip VARCHAR(40), protocol INTEGER, sp INTEGER, dp INTEGER, http_uri TEXT, http_host TEXT, http_referer TEXT, filename TEXT, magic TEXT, state VARCHAR(32), md5 VARCHAR(32), stored VARCHAR(32), size BIGINT);
  grant all privileges on database filejsondb to filejson;

Log out and log in again (with the "filejson" user) to test if everything is OK:

::

  psql -d filejsondb -U filejson

Optionally you could create and import the MD5 white list data if you wish - generally the same guidance as described in :ref:`FileMD5 and white/black listing with md5 <filemd5-listing>`

Some more general info and basic commands/queries:
http://jazstudios.blogspot.se/2010/06/postgresql-login-commands.html
http://www.thegeekstuff.com/2009/05/15-advanced-postgresql-commands-with-examples/

Now you can go ahead and execute the script - :ref:`Script FollowJSON <script-follow-json>`

Peter Manev
@ -1,100 +0,0 @@
.. _script-follow-json:

Script FollowJSON
=================

BEFORE you run the script, make sure you have set up suricata.yaml and your database correctly!

Suricata.yaml:

1. make sure the file-log output (files-json.log) is enabled
2. and append is set to yes
3. optionally - you have compiled Suricata with MD5 support enabled

MD5s are enabled and forced in the suricata.yaml config ( :ref:`MD5 <md5>` ) -
see the bottom of that page, "Log all MD5s without any rules".

::

  - file-log:
      enabled: yes
      filename: files-json.log
      append: yes
      #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
      force-magic: yes # force logging magic on all logged files
      force-md5: yes   # force logging of md5 checksums
**Append is set to yes** - this is very important if you "follow" the json.log, i.e. if you use the tool to continuously parse and insert logs from files-json.log as they are being written to the log file.

There is a Python script (in BETA now) available here:

* https://redmine.openinfosecfoundation.org/attachments/download/843/FollowJSON.tar.gz

that you can use to help import files-json.log entries into a MySQL or PostgreSQL database.

The tarball contains:

* one Python executable
* one yaml config file
* one LICENSE (GPLv2)

This is what the script does:

1. Multi-threaded - spawns multiple processes of itself
2. uses yaml for configuration
3. Can:

   3.1. read the files-json.log file

        3.1.1. continuously - as logs are being written to the log file
        3.1.2. or mass import a standalone files-json.log into a database

   3.2. into (your choice):

        3.2.1. a MySQL DB (locally/remotely, by IP)
        3.2.2. a PostgreSQL DB (locally/remotely, by IP)

4. Customizable number of processes (the default is the number of cores - if you have more than 16, the suggested value is NumCores/2)
5. Customizable "chunk" of lines to read at once by every process - the suggested (default) value is 10 (16 cores = 16 processes * 10 = 160 entries per second)

**Please look into the configuration yaml file** for more information.

The script is in BETA state - it has been tested and it works - but still, you should test it, adjust the configuration accordingly, and run it in your test environment before you put it in production.

After you have made:

#. your choice of database type (MySQL or PostgreSQL, and installed/configured tables for it),
#. created the appropriate database structure and tables (explained in the next tutorial(s)),
#. adjusted the yaml configuration accordingly,
#. started Suricata,

you would need:

::

  sudo apt-get install python-yaml python-mysqldb python-psycopg2
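Before launching, a quick sanity check that all three Python bindings are importable can save a failed start (the module names below are the ones provided by the packages just installed):

::

  python -c 'import yaml, MySQLdb, psycopg2; print "all bindings OK"'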
Then you just run the script, after you have started Suricata:

::

  sudo python Follow_JSON_Multi.py

If you would like to execute the script in the background:

::

  sudo python Follow_JSON_Multi.py &
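Once the script has been running for a bit, a count against the database confirms that rows are actually flowing in. A minimal check using the database and table created in the MySQL tutorial (the PostgreSQL equivalent is the same query from psql):

::

  mysql -u filejson -p filejsondb -e 'select count(*) from filejson;'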
Peter Manev
@ -1,138 +0,0 @@
Useful queries - for MySQL and PostgreSQL
==========================================

General purpose and useful queries (MySQL - 99% the same for PostgreSQL) for the files-json.log databases and tables:
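One of the 1% differences is worth calling out up front: the MySQL examples below quote string literals with double quotes, but PostgreSQL reserves double quotes for identifiers, so there you would write the same patterns with single quotes. A hedged PostgreSQL rendering of the first query:

::

  filejsondb=> select srcip,http_host,count(*) as total from filejson where magic like '%PDF document%' group by srcip,http_host order by total desc limit 10;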
::

  mysql> select srcip,http_host,count(*) as total from filejson where magic like "%PDF document%" group by srcip,http_host order by total DESC limit 10;

The above: the top 10 source IPs from which PDFs were downloaded;
change srcip to dstip to get the top 10 IPs downloading PDFs.

::

  mysql> select srcip,http_host,count(*) as total from filejson where magic like "%executable%" group by srcip,http_host order by total DESC limit 10;

The above: the top 10 source IPs from which executables were downloaded;
change srcip to dstip to get the top 10 IPs downloading executables.

::

  mysql> SELECT srcip,http_host,count(*) AS Total , (COUNT(*) / (SELECT COUNT(*) FROM filejson where magic like "%executable%")) * 100 AS 'Percentage to all items' FROM filejson WHERE magic like "%executable%" GROUP BY srcip,http_host order by total DESC limit 10;

::

  +----------------+----------------------+-------+-------------------------+
  | srcip          | http_host            | Total | Percentage to all items |
  +----------------+----------------------+-------+-------------------------+
  | 149.5.130.7    | ws.livepcsupport.com |   225 |                  9.1167 |
  ..............................
  .............................

This would give you a sorted table depicting source IP and host name, the number of executable downloads from that host/source IP, and what percentage that is of the total executable downloads.
Note: the term executable means dll, exe, com, msi, java ... and so on - NOT just .exe files.

::

  mysql> select count(magic) as totalPDF from filejson where magic like "%PDF%";

This will give you the total number of PDFs out of all files.

::

  mysql> SELECT ( select count(magic) from filejson where magic like "%PDF%" ) as "PDF Total" , (select count(magic) from filejson where magic like "%executable%") as "Executables Total" , (select count(magic) from filejson where filename like "%.xls") as "Excel Total";

This will give you:

::

  +-----------+-------------------+-------------+
  | PDF Total | Executables Total | Excel Total |
  +-----------+-------------------+-------------+
  |       391 |              2468 |           7 |
  +-----------+-------------------+-------------+

::

  mysql> SELECT ( select count(magic) from filejson where magic like "%PDF%" ) as "PDF Total" , (select count(magic) from filejson where magic like "%executable%") as "Executables Total" , (select count(magic) from filejson where filename like "%.xls") as "Excel Total", (select count(magic) from filejson) as "TOTAL NUMBER OF FILES";

::

  +-----------+-------------------+-------------+-----------------------+
  | PDF Total | Executables Total | Excel Total | TOTAL NUMBER OF FILES |
  +-----------+-------------------+-------------+-----------------------+
  |       391 |              2468 |           7 |               3743925 |
  +-----------+-------------------+-------------+-----------------------+

The above query: a breakdown for PDFs, executables, and files that have the extension .xls.

::

  mysql> select srcip,filename,http_host,count(*) as total from filejson where filename like "%.xls" group by srcip,filename,http_host order by total DESC limit 10;

The above will select the top 10 source IPs and document names from which Excel files (files with the extension .xls) were downloaded.

::

  mysql> select srcip,http_host,count(*) as total from filejson where filename like "%.exe" group by srcip,http_host order by total DESC limit 10;

The above will select the top 10 source IPs from which ".exe" files were downloaded.

::

  mysql> select srcip,http_host,count(*) as total from filejson where filename like "%.doc" group by srcip,http_host order by total DESC limit 10;

The above, for ".doc" files.

::

  mysql> select magic,http_host,count(*) as count from filejson group by magic,http_host order by count DESC limit 20;

The above selects the top 20 magic/host combinations, grouped and ordered by count.

::

  mysql> select dstip,size,count(*) as total from filejson group by dstip,size order by total DESC limit 10;

The above query will show you the top 10 downloading IPs by size of downloads.

::

  mysql> select dstip,http_host,count(*) as total from filejson where filename like "%.exe" group by dstip order by total DESC limit 5;

The above query will show you the top 5 downloading IPs (and the hosts they downloaded from) that downloaded files with .exe extensions.

Peter Manev