af-packet: remove use-mmap option

This option is obsolete and was not used in 7.0 as tpacket-v1 support
was removed (see ticket #4796).
pull/12815/head
Jason Ish 4 months ago committed by Victor Julien
parent 51f7b5924d
commit 374762d202
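With this commit, mmap-based capture is always on and `use-mmap` is no longer read, so an af-packet entry in ``suricata.yaml`` simply drops the key. A minimal sketch of a post-change entry (interface name and values are illustrative, not taken from this commit):

```yaml
af-packet:
  - interface: eth0        # illustrative interface name
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    tpacket-v3: yes        # no longer gated on use-mmap
    ring-size: 100000
```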

@@ -188,7 +188,6 @@ Then setup the `ebpf-filter-file` variable in af-packet section in ``suricata.ya
 # eBPF file containing a 'filter' function that will be inserted into the
 # kernel and used as load balancing function
 ebpf-filter-file: /usr/libexec/suricata/ebpf/vlan_filter.bpf
-use-mmap: yes
 ring-size: 200000

 You can then run Suricata normally ::
@@ -209,7 +208,6 @@ update af-packet configuration in ``suricata.yaml`` to set bypass to `yes` ::
 # kernel and used as packet filter function
 ebpf-filter-file: /usr/libexec/suricata/ebpf/bypass_filter.bpf
 bypass: yes
-use-mmap: yes
 ring-size: 200000

 Constraints on eBPF code to have a bypass compliant code are stronger than for regular filters. The
@@ -246,7 +244,6 @@ and point the ``ebpf-lb-file`` variable to the ``lb.bpf`` file ::
 # eBPF file containing a 'loadbalancer' function that will be inserted into the
 # kernel and used as load balancing function
 ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
-use-mmap: yes
 ring-size: 200000

 Setup XDP bypass
@@ -281,7 +278,6 @@ also use the ``/usr/libexec/suricata/ebpf/xdp_filter.bpf`` (in our example TCP o
 # if the ebpf filter implements a bypass function, you can set 'bypass' to
 # yes and benefit from these feature
 bypass: yes
-use-mmap: yes
 ring-size: 200000

 # Uncomment the following if you are using hardware XDP with
 # a card like Netronome (default value is yes)
@@ -384,7 +380,6 @@ A sample configuration for pure XDP load balancing could look like ::
 xdp-mode: driver
 xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_lb.bpf
 xdp-cpu-redirect: ["1-17"] # or ["all"] to load balance on all CPUs
-use-mmap: yes
 ring-size: 200000

 It is possible to use `xdp_monitor` to have information about the behavior of CPU redirect. This
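For pure XDP load balancing, a complete af-packet entry after this change could look like the sketch below. The interface name and thread count are illustrative assumptions; `cluster_cpu` is the cluster type commonly paired with `xdp-cpu-redirect` in the Suricata eBPF/XDP guide:

```yaml
af-packet:
  - interface: eth3              # illustrative interface name
    threads: 16                  # illustrative; sized to the redirected CPUs
    cluster-id: 97
    cluster-type: cluster_cpu
    xdp-mode: driver
    xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_lb.bpf
    xdp-cpu-redirect: ["1-17"]   # or ["all"]
    ring-size: 200000
```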

@@ -225,7 +225,6 @@ In the af-packet section of suricata.yaml config :
 cluster-id: 99
 cluster-type: cluster_qm
 defrag: no
-use-mmap: yes
 mmap-locked: yes
 tpacket-v3: yes
 ring-size: 100000
@@ -236,7 +235,6 @@ In the af-packet section of suricata.yaml config :
 cluster-id: 99
 cluster-type: cluster_qm
 defrag: no
-use-mmap: yes
 mmap-locked: yes
 tpacket-v3: yes
 ring-size: 100000
@@ -347,7 +345,6 @@ In the af-packet section of suricata.yaml config:
 cluster-id: 99
 cluster-type: cluster_flow
 defrag: no
-use-mmap: yes
 mmap-locked: yes
 tpacket-v3: yes
 ring-size: 100000

@@ -77,7 +77,6 @@ sure af-packet v3 is used it can specifically be enforced it in the
 ....
 ....
 ....
-use-mmap: yes
 tpacket-v3: yes
 ring-size

@@ -67,7 +67,6 @@ Capture settings::
 cluster-id: 99
 cluster-type: cluster_flow
 defrag: yes
-use-mmap: yes
 tpacket-v3: yes

 This configuration uses the most recent recommended settings for the IDS

@@ -203,7 +203,6 @@ between interface ``eth0`` and ``eth1``: ::
 copy-mode: ips
 copy-iface: eth1
 buffer-size: 64535
-use-mmap: yes
 - interface: eth1
 threads: 1
 cluster-id: 97
@@ -212,7 +211,6 @@ between interface ``eth0`` and ``eth1``: ::
 copy-mode: ips
 copy-iface: eth0
 buffer-size: 64535
-use-mmap: yes

 This is a basic af-packet configuration using two interfaces. Interface
 ``eth0`` will copy all received packets to ``eth1`` because of the `copy-*`
@@ -228,8 +226,6 @@ The configuration on ``eth1`` is symmetric ::
 There are some important points to consider when setting up this mode:

-- The implementation of this mode is dependent of the zero copy mode of
-  AF_PACKET. Thus you need to set `use-mmap` to `yes` on both interface.
 - MTU on both interfaces have to be equal: the copy from one interface to
   the other is direct and packets bigger then the MTU will be dropped by kernel.
 - Set different values of `cluster-id` on both interfaces to avoid conflict.
@@ -264,7 +260,6 @@ and eBPF load balancing looks like the following: ::
 copy-mode: ips
 copy-iface: eth1
 buffer-size: 64535
-use-mmap: yes
 - interface: eth1
 threads: 16
 cluster-id: 97
@@ -274,7 +269,6 @@ and eBPF load balancing looks like the following: ::
 copy-mode: ips
 copy-iface: eth0
 buffer-size: 64535
-use-mmap: yes

 The eBPF file ``/usr/libexec/suricata/ebpf/lb.bpf`` may not be present on disk.
 See :ref:`ebpf-xdp` for more information.
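With `use-mmap` gone, the two-interface IPS pairing described in this file reduces to the following sketch. Interface names, threads, and buffer sizes mirror the hunks above; the first interface's `cluster-id` of 98 is an assumption (it falls outside these hunks), chosen only to satisfy the documented rule that the two ids must differ:

```yaml
af-packet:
  - interface: eth0
    threads: 1
    cluster-id: 98         # assumed; must differ from eth1's id
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: eth1
    buffer-size: 64535
  - interface: eth1
    threads: 1
    cluster-id: 97
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: eth0
    buffer-size: 64535
```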

@@ -278,13 +278,6 @@ static void *ParseAFPConfig(const char *iface)
         }
     }

-    if (ConfGetChildValueBoolWithDefault(if_root, if_default, "use-mmap", &boolval) == 1) {
-        if (!boolval) {
-            SCLogWarning(
-                    "%s: \"use-mmap\" option is obsolete: mmap is always enabled", aconf->iface);
-        }
-    }
     (void)ConfGetChildValueBoolWithDefault(if_root, if_default, "mmap-locked", &boolval);
     if (boolval) {
         SCLogConfig("%s: enabling locked memory for mmap", aconf->iface);

@@ -663,12 +663,10 @@ af-packet:
 # In some fragmentation cases, the hash can not be computed. If "defrag" is set
 # to yes, the kernel will do the needed defragmentation before sending the packets.
 defrag: yes
-# To use the ring feature of AF_PACKET, set 'use-mmap' to yes
-#use-mmap: yes
 # Lock memory map to avoid it being swapped. Be careful that over
 # subscribing could lock your system
 #mmap-locked: yes
-# Use tpacket_v3 capture mode, only active if use-mmap is true
+# Use tpacket_v3 capture mode.
 # Don't use it in IPS or TAP mode as it causes severe latency
 #tpacket-v3: yes
 # Ring size will be computed with respect to "max-pending-packets" and number
@@ -722,7 +720,6 @@ af-packet:
 # in the list above.
 - interface: default
 #threads: auto
-#use-mmap: no
 #tpacket-v3: yes

 # Linux high speed af-xdp capture support
