Linux. Network Tuning

    Tuning the network options helps prevent data loss and achieve maximum bandwidth.

    System buffer

    How to configure

    The buffer size options should be defined in the /etc/sysctl.conf file. It is recommended to use the following values for 1G Ethernet adapters:

    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.udp_mem = 8388608 12582912 16777216
    net.ipv4.tcp_rmem = 4096 87380 8388608
    net.ipv4.tcp_wmem = 4096 65536 8388608
    net.core.wmem_default = 16777216
    net.core.rmem_default = 16777216
    net.ipv4.tcp_tw_recycle = 0
    

    For 10G Ethernet adapters:

    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.udp_mem = 8388608 16777216 33554432
    net.ipv4.tcp_rmem = 4096 87380 33554432
    net.ipv4.tcp_wmem = 4096 65536 33554432
    net.core.wmem_default = 33554432
    net.core.rmem_default = 33554432
    net.ipv4.tcp_congestion_control=htcp
    net.ipv4.tcp_tw_recycle = 0
    

    For 40G Ethernet adapters:

    net.core.rmem_max = 134217728
    net.core.wmem_max = 134217728
    net.ipv4.udp_mem = 8388608 33554432 67108864
    net.ipv4.tcp_rmem = 4096 87380 67108864
    net.ipv4.tcp_wmem = 4096 65536 67108864
    net.core.wmem_default = 67108864
    net.core.rmem_default = 67108864
    net.ipv4.tcp_congestion_control=htcp
    net.ipv4.tcp_tw_recycle = 0
    

    To apply the changes, restart the system or run:

    sysctl -p
    

    You can verify the current values with the following command:

    sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max net.ipv4.udp_mem net.ipv4.tcp_wmem
    

    Network card buffer size

    How to configure

    Check the current and maximum ring buffer sizes:
    [root@astra ~]# ethtool -g eth1
    Ring parameters for eth1:
    Pre-set maximums:
    RX:		4096
    RX Mini:	0
    RX Jumbo:	0
    TX:		4096
    Current hardware settings:
    RX:		4096
    RX Mini:	0
    RX Jumbo:	0
    TX:		256
    

    Here we can see that the RX buffer is already increased to its maximum (4096). It is usually quite difficult to find the right value: the optimum is usually some "average" value, but with a high-frequency, multi-core processor (>3 GHz) you can set it closer to the maximum. Example of a command to increase the buffer:

    ethtool -G eth1 rx 2048
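
    Note that ethtool ring settings are not persistent across reboots. One hedged way to reapply them at boot, assuming Debian-style ifupdown networking is in use, is a post-up hook in the existing interface stanza:

    # /etc/network/interfaces: add to the existing stanza for eth1
    # (assumes Debian-style ifupdown networking)
    iface eth1 inet static
        # ... existing address/netmask lines ...
        # reapply the RX ring size every time the interface comes up
        post-up /sbin/ethtool -G eth1 rx 2048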
    

    RP filter

    rp_filter (reverse path filtering) is a technique for ensuring loop-free forwarding of multicast packets in multicast routing.

    If your server has several network interfaces, it is recommended to set routes for the multicast groups. If this is not applicable, or the interface is specified explicitly in the source or destination address, you should relax rp_filter by switching it to loose mode, as configured below.
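
    A hedged example of the first option, pinning the administratively scoped multicast range to the interface that actually receives the streams (the 239.0.0.0/8 range and eth1 are placeholders):

    # Route the multicast range via the receiving interface so the
    # reverse-path check matches the interface the packets arrive on
    ip route add 239.0.0.0/8 dev eth1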

    How to configure

    Append the following lines to the /etc/sysctl.conf file:

    net.ipv4.conf.all.rp_filter = 2
    net.ipv4.conf.default.rp_filter = 2
    net.ipv4.conf.eth0.rp_filter = 2
    

    The first two lines are identical for any server. The third line depends on the interface name: add a line like the third one for each interface, replacing eth0 with the interface name.

    To apply the changes, restart the system or run:

    sysctl -p
    

    IGMP Version

    Many operating systems send subscription requests to a multicast group in IGMPv3 format.
    If the network switch cannot work with this protocol, or the protocol is not configured, the attempt to subscribe to the multicast group will fail. IGMPv2 is supported by most switches and other network equipment.

    How to configure

    The IGMP version can be defined in the /etc/sysctl.conf file. For example, to force IGMPv2 on the eth1 interface:
    net.ipv4.conf.eth1.force_igmp_version=2

    To apply the changes, restart the system or run:

    sysctl -p
    

    You can verify the IGMP version with tcpdump. Run:

    tcpdump -i eth1 igmp
    

    Then try to subscribe to the multicast stream. For example:
    astra --analyze udp://eth1@239.255.1.1:1234
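
    You can also check the current group memberships and the IGMP version actually in use through procfs (a quick sanity check):

    # Lists multicast group memberships per interface; the V1/V2/V3 value
    # in the querier column shows which IGMP version is active
    cat /proc/net/igmp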

    How to check information about packet loss

    Keywords: missed, dropped, fifo, error, rx.

    ip -s -s link show eth1
    

    You need to look at the RX error counters. Some network cards provide more detailed information about the nature of the losses:

    ethtool -S eth1
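
    Counter names vary between drivers, so one convenient (hedged) way to watch only the loss-related ones is to filter by the keywords listed above:

    # Refresh every second and show only counters whose names suggest loss
    watch -n 1 "ethtool -S eth1 | grep -E 'miss|drop|fifo|err'"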
    

    Losses can occur not only on your server's network cards but also on the port of the network equipment. How to check this is described in the documentation of the network equipment manufacturer.

    Setting up and diagnosing the network subsystem with netutils

    This set of utilities allows you to diagnose network losses, configure the network subsystem and perform some other diagnostics.


    Install

    apt install python-pip
    pip install netutils-linux
    

    network-top


    This utility helps evaluate the applied settings: it displays how evenly the load (interrupts, softirqs, the number of packets per second per processor core) is distributed across the server resources, along with all kinds of packet processing errors. Values that exceed the thresholds are highlighted.

    rss-ladder

    # rss-ladder eth1 0
    - distributing interrupts of eth1 (-TxRx) on socket 0:
      - eth1: irq 67 eth1-TxRx-0 -> 0
      - eth1: irq 68 eth1-TxRx-1 -> 1
      - eth1: irq 69 eth1-TxRx-2 -> 2
      - eth1: irq 70 eth1-TxRx-3 -> 3
      - eth1: irq 71 eth1-TxRx-4 -> 8
      - eth1: irq 72 eth1-TxRx-5 -> 9
      - eth1: irq 73 eth1-TxRx-6 -> 10
      - eth1: irq 74 eth1-TxRx-7 -> 11
    

    This utility distributes network card interrupts to the cores of the selected physical processor (default is 0).
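
    What rss-ladder automates can also be done by hand by writing CPU masks into the IRQ affinity files; a minimal sketch for the first interrupt from the example output above (irq 67 -> CPU 0):

    # Pin IRQ 67 (eth1-TxRx-0 in the example above) to CPU 0
    echo 1 > /proc/irq/67/smp_affinity
    # The same via a CPU list instead of a hex bitmask
    echo 0 > /proc/irq/67/smp_affinity_list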

    server-info

    # server-info --rate
    cpu:
      BogoMIPS: 7
      CPU MHz: 7
      CPU(s): 1
      Core(s) per socket: 1
      L3 cache: 1
      Socket(s): 10
      Thread(s) per core: 10
      Vendor ID: 10
    disk:
      vda:
        size: 1
        type: 1
    memory:
      MemTotal: 1
      SwapTotal: 10
    net:
      eth1:
        buffers:
          cur: 5
          max: 10
        driver: 1
        queues: 1
    system:
      Hypervisor vendor: 1
      Virtualization type: 1
    

    This utility allows you to do two things:

    server-info --show: see what hardware is installed on the server. In general, it is similar to lshw, but with an emphasis on the parameters of interest to us.

    server-info --rate: find bottlenecks in the server hardware. In general, it is similar to the Windows performance index, but with an emphasis on the parameters of interest to us. The assessment is made on a scale from 1 to 10.

    Other utilities

    rx-buffers-increase eth1 
    

    automatically increases the buffer of the selected network card to the optimal value.

    maximize-cpu-freq
    

    disables dynamic ("floating") CPU frequency scaling.
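
    A rough manual equivalent, assuming the cpupower tool is available, is to pin the governor to performance:

    # Keep all cores at their highest frequency instead of scaling down when idle
    cpupower frequency-set -g performance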

    Examples of use:

    Example 1. As simple as possible.

    Task:

    one processor with 4 cores
    one 1 Gbps network card (eth0) with 4 combined queues
    incoming traffic of 600 Mbit/s, no outgoing traffic
    all queues hang on CPU0: a total of 55,000 interrupts and 350,000 packets per second, of which about 200 packets/sec are lost by the network card; the remaining 3 cores are idle
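
    A quick way to confirm that all queues are serviced by CPU0 is to look at the per-CPU interrupt counters (queue naming varies by driver):

    # If only the CPU0 column grows for the eth0-* lines,
    # a single core is handling all queues
    grep eth0 /proc/interrupts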

    Solution:

    distribute the queues between the cores with the command rss-ladder eth0
    increase the buffer with the command rx-buffers-increase eth0

    Example 2

    Task:

    two processors with 8 cores each
    two NUMA nodes
    two dual-port 10 Gbps network cards (eth0, eth1, eth2, eth3); each port has 16 queues, all tied to NUMA node 0; incoming traffic: 3 Gbit/s per port
    one 1 Gbps network card, 4 queues, tied to NUMA node 0, outgoing traffic: 100 Mbit/s
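
    To see which NUMA node a card is currently attached to (useful before and after step 1 of the solution below), read its sysfs attribute:

    # Prints the NUMA node of the PCI device behind eth0 (-1 means none reported)
    cat /sys/class/net/eth0/device/numa_node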

    Solution:

    1. Move one of the 10 Gbit/s network cards to another PCI slot, bound to NUMA node 1.
    2. Reduce the number of combined queues for the 10 Gigabit ports to the number of cores per physical processor:

    for dev in eth0 eth1 eth2 eth3; do
      ethtool -L $dev combined 8
    done
    

    3. Distribute the interrupts of ports eth0 and eth1 across the processor cores of NUMA node 0, and of ports eth2 and eth3 across the processor cores of NUMA node 1:

    rss-ladder eth0 0
    rss-ladder eth1 0
    rss-ladder eth2 1
    rss-ladder eth3 1
    

    4. Increase the RX buffers of eth0, eth1, eth2 and eth3:

    for dev in eth0 eth1 eth2 eth3; do
      rx-buffers-increase $dev
    done
    

    Reminder:

    In the case of network cards with a single queue, you can use RPS to distribute the load between the cores, but this does not eliminate the losses that occur while copying packets into memory.
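
    A minimal sketch of enabling RPS for such a card (assuming eth0 has a single RX queue and the machine has 4 cores; the hex mask selects the CPUs allowed to process its packets):

    # Spread receive processing of eth0's only RX queue across CPUs 0-3 (mask 0xf)
    echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus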

    The distribution of interrupts is based on a hash function (the remainder of a division) calculated over the protocol, the source and destination IP addresses, and the source and destination ports. The technology is called Receive Side Scaling (RSS).
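
    The hashed fields and the indirection table that maps hash buckets to queues can be inspected with ethtool on most multi-queue drivers (a quick check; support and output vary by driver):

    # Which header fields are hashed for UDP over IPv4
    ethtool -n eth1 rx-flow-hash udp4
    # The RSS indirection table: hash bucket -> RX queue
    ethtool -x eth1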