Slow single TCP connection

Hey, I have a problem with 4 dedis I got from Hetzner.
All of them max out at 8-10Mbps on a single TCP connection.

iperf, HTTP/1.1, HTTP/2, SFTP - it doesn't matter, I get 8-10Mbps maximum if it's just a single TCP connection.
There is no problem when I start multiple connections with iperf, or HTTP via IDM (which can open multiple TCP connections) - transfers then easily reach hundreds of Mbps.
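
For reference, this is roughly how both cases can be reproduced with iperf3 (the server hostname is just a placeholder, not one of the machines mentioned here):

    iperf3 -s                                  # on the receiving server
    iperf3 -c iperf.example.net -t 30          # single TCP stream - the slow case
    iperf3 -c iperf.example.net -t 30 -P 8     # eight parallel streams - easily saturates the link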

I have this problem on both Debian 11 and Ubuntu 22.04

Zero impact from:

  • Turning up MTU from 1500 to 9000
  • Turning on BBR
  • Disabling UFW
  • Compiling Nginx from source with AIO (so I tried directio, aio threads and sendfile, no difference, it is always 9-10Mbps when sending big files)
  • Using different "client" machine on different network
  • Increasing buffer sizes as suggested here: https://www.cyberciti.biz/faq/linux-tcp-tuning/ (see the sketch after this list)
  • Rebooting
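
The buffer changes from that guide were along these lines - a minimal sketch with illustrative values, not a recommendation:

    # Raise the maximum TCP socket buffers (illustrative values, not a recommendation)
    sudo sysctl -w net.core.rmem_max=67108864
    sudo sysctl -w net.core.wmem_max=67108864
    sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
    # persist them by putting the same lines (without "sysctl -w") into /etc/sysctl.conf, then: sudo sysctl -p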

For comparison, a shitty Oracle Free Tier x86 1/8 OCPU instance gets 25Mbps, with MTU at 9000. It's on CentOS 7, maybe that's the cause?
There's no way a dedi can't reach these speeds... I'm clearly missing something :(
Or does Hetzner limit single TCP connection speed somehow and I'm just wasting time?

Server specs
2x AX41 - Ryzen 5 3600, 64GB ram, 2x 512GB NVMe etc.
2x SX162 - E5 1650V3, 128GB ram, 10x 10GB HDD etc.
CPU usage is like 1%.

Comments

  • If you do a speedtest downloading a file to one of these servers, does it reach the desired speed, or does the same thing happen?

  • MikeA Member, Patron Provider

    Is it only over a single network route or multiple?

  • My point is, if you wget a speedtest file and your speed sucks the same, then it would be wise to take that up with Hetzner; maybe you have some limits... forced?

  • Hey, you're stating that you have the following servers:

    2x AX41 - Ryzen 5 3600, 64GB ram, 2x 512GB NVMe etc.
    2x SX162 - E5 1650V3, 128GB ram, 10x 10GB HDD etc.

    Is there something like a storage share (SMB/NFS/etc) active between those storage SX162's and ax41's? That could be a big bottleneck.

    Otherwise, please give us some more information on the entire stack, maybe print the output of an entire ps auxf here?

  • jmgcaguicla Member
    edited September 2022

    @FoxelVox said:
    Is there something like a storage share (SMB/NFS/etc) active between those storage SX162's and ax41's? That could be a big bottleneck.

    Irrelevant, OP mentioned that testing with iperf gives the same result.

  • MTU 9000, as in jumbo frames? Stop that, it's not gonna play well on WAN.
    What's the dest server? I'm sure some of us can provide you an iperf on 10G, but you should try speedtest-cli. Since it's Hetzner, there's no way they are throttling 'some' of your dests in EU.

  • Daniel15 Veteran
    edited September 2022

    Is this happening even with connections to other Hetzner servers?

    What model of network card/chipset do you have? A low-end Realtek network adapter won't perform as well as a higher-end Intel one for example.

    How high is kernel CPU usage while you're performing these tests?

    Compiling Nginx from source with AIO (so I tried directio, aio threads and sendfile, no difference, it is always 9-10Mbps when sending big files)

    I'd recommend just using iPerf, so that you can rule out disk I/O performance.

    @AXYZE said: Turning up MTU from 1500 to 9000

    On the internet, this is going to make things worse. Internet routers don't support jumbo frames, so you'll end up with fragmented packets (which just adds overhead) and/or the connection speed quickly dropping to 0 after a few seconds.
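
    One quick way to see this for yourself (hedged example; the destination is a placeholder): send pings with the don't-fragment flag set. 8972 is 9000 minus 28 bytes of IP+ICMP headers, and 1472 is the same for a 1500 MTU.

    ping -M do -s 8972 example.net   # will fail with "message too long" on virtually any internet path
    ping -M do -s 1472 example.net   # standard 1500-byte frames normally get through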

  • AXYZE Member
    edited September 2022

    @FoxelVox said:
    Hey, you're stating that you have the following servers:

    2x AX41 - Ryzen 5 3600, 64GB ram, 2x 512GB NVMe etc.
    2x SX162 - E5 1650V3, 128GB ram, 10x 10GB HDD etc.

    Is there something like a storage share (SMB/NFS/etc) active between those storage SX162's and ax41's? That could be a big bottleneck.

    No, they don't have shared storage active.
    AX41 are EXT4 RAID1
    SX162 are XFS RAID-Z2

    So all of them are plenty fast.
    Storage isn't the bottleneck, and IIRC iperf3 isn't affected by storage speed.

    Otherwise, please give us some more information on the entire stack, maybe print the output of an entire ps auxf here?

    Output will be in my next comment, because I reached max char limit. :D

    @AXYZE said: Turning up MTU from 1500 to 9000

    On the internet, this is going to make things worse. Internet routers don't support jumbo frames, so you'll end up with fragmented packets (which just adds overhead) and/or the connection speed quickly dropping to 0 after a few seconds.

    @luckypenguin said:
    MTU 9000, as in jumbo frames? stop that, it's not gonna play well on WAN.

    Ok, I only tried it because the Oracle free instance has it set like this by default and I thought that maybe that was the issue. There's zero difference in performance though, so I just rebooted (I made a non-persistent change :) )
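
    (For context, a temporary MTU change like that is typically just the following, with the interface name as an example; it takes effect immediately and is lost on reboot:)

    sudo ip link set dev enp7s0 mtu 9000   # try jumbo frames
    sudo ip link set dev enp7s0 mtu 1500   # or put it back without rebooting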

    What's the dest server? I'm sure some of us can provide you an iperf on 10g

    Poland, two different connections in different cities. Hetzner machines are in Finland.

    @Daniel15 said:
    Is this happening even with connections to other Hetzner servers?

    That is a great question!
    I tried the exact same iperf3 benchmark, but now between two Hetzner servers (AX41 and SX162):
    Hetzner -> Hetzner 950Mbps.

    So no problem here.

    What model of network card/chipset do you have? A low-end Realtek network adapter won't perform as well as a higher-end Intel one for example.

    All of them have Intel I210 Gigabit chips.
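
    (For anyone who wants to check the same thing on their own box, the NIC model and driver can usually be read like this - the interface name is just an example:)

    lspci | grep -i ethernet    # PCI device / chipset model
    ethtool -i enp7s0           # driver, firmware and bus info for one interface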

    How high is kernel CPU usage while you're performing these tests?

    Actually, I don't know how to check kernel CPU usage - could you please tell me where I can find it?

    In top I only see that iperf3 and top are eating some small percentage, nothing more. st, wa, hi, and si are 0.0; us and sy (user and system?) are around 1.0.
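
    One common way to watch kernel and softirq CPU per core while a transfer is running (a sketch; mpstat comes from the sysstat package):

    sudo apt install sysstat
    mpstat -P ALL 1    # watch the %sys and %soft columns during the iperf3 run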

    @MikeA said:
    Is it only over a single network route or multiple?

    I checked against the Serverius iperf server:

    hetzner -> serverius is ~744Mbps
    serverius -> hetzner is ~454Mbps
    hetzner -> hetzner is ~950Mbps
    hetzner -> my two connections in different cities and different ISPs is ~10Mbps

    ping hetzner -> hetzner is 0.5ms
    ping hetzner -> serverius is 25ms
    ping hetzner -> my connection is 59ms

    And remember, there is no problem over multiple connections - I can easily get 1Gbps on "hetzner -> my connection" if I establish multiple ones, but each one is limited to 9-10Mbps.
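
    For a back-of-the-envelope check: a single TCP stream is roughly limited to window / RTT, so assuming an effective window of around 64 KiB (an assumed figure, not measured), the numbers above line up with a per-connection window limit rather than a bandwidth cap:

    # ~64 KiB of in-flight data at 59 ms RTT caps one stream near the 8-10 Mbps seen here;
    # the same window at 0.5 ms (Hetzner -> Hetzner) allows roughly 1 Gbps.
    echo "scale=1; 64*1024*8 / 0.059 / 1000000" | bc    # ≈ 8.8 Mbit/s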

    Here's a tracert from my connection to Hetzner:
    1 <1 ms <1 ms <1 ms 192.168.1.1
    2 1 ms <1 ms <1 ms 192.168.0.1
    3 6 ms 6 ms 7 ms 10.20.0.1
    4 8 ms 6 ms 7 ms c97-241.icpnet.pl [62.21.97.241]
    5 9 ms 6 ms 6 ms e91-118.icpnet.pl [46.238.91.118]
    6 19 ms 8 ms 9 ms e91-109.icpnet.pl [46.238.91.109]
    7 7 ms 6 ms 10 ms e123-10.icpnet.pl [46.238.123.10]
    8 18 ms 21 ms 17 ms 212.162.29.165
    9 56 ms 60 ms 60 ms ae1.16.edge1.Helsinki1.level3.net [4.69.161.102]
    10 59 ms 60 ms 59 ms 212.133.6.2
    11 61 ms 61 ms 55 ms core32.hel1.hetzner.com [213.239.224.26]
    12 58 ms 58 ms 58 ms ex9k2.dc4.hel1.hetzner.com [213.239.252.214]
    13 57 ms 57 ms 59 ms static.XXtargetXX.clients.your-server.de [TARGET IP]

    So it goes via my ISP (ICPNET/INEA) directly to Helsinki via Level3.
    Hmm...

  • AXYZEAXYZE Member
    edited September 2022

    ps auxf as requested by @FoxelVox

    ps auxf

    USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root           2  0.0  0.0      0     0 ?        S    Aug23   0:00 [kthreadd]
    root           3  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [rcu_gp]
    root           4  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [rcu_par_gp]
    root           5  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [netns]
    root           7  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/0:0H-events_highpri]
    root          10  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [mm_percpu_wq]
    root          11  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [rcu_tasks_rude_]
    root          12  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [rcu_tasks_trace]
    root          13  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/0]
    root          14  0.0  0.0      0     0 ?        I    Aug23   0:35  \_ [rcu_sched]
    root          15  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/0]
    root          16  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/0]
    root          17  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/0]
    root          18  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/1]
    root          19  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/1]
    root          20  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/1]
    root          21  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/1]
    root          23  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/1:0H-events_highpri]
    root          24  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/2]
    root          25  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/2]
    root          26  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/2]
    root          27  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/2]
    root          29  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/2:0H-events_highpri]
    root          30  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/3]
    root          31  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/3]
    root          32  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/3]
    root          33  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/3]
    root          35  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/3:0H-events_highpri]
    root          36  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/4]
    root          37  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/4]
    root          38  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/4]
    root          39  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/4]
    root          41  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/4:0H-events_highpri]
    root          42  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/5]
    root          43  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/5]
    root          44  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/5]
    root          45  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/5]
    root          47  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/5:0H-events_highpri]
    root          48  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/6]
    root          49  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/6]
    root          50  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/6]
    root          51  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/6]
    root          53  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/6:0H-events_highpri]
    root          54  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/7]
    root          55  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/7]
    root          56  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/7]
    root          57  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/7]
    root          59  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/7:0H-events_highpri]
    root          60  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/8]
    root          61  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/8]
    root          62  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/8]
    root          63  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/8]
    root          65  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/8:0H-events_highpri]
    root          66  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/9]
    root          67  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/9]
    root          68  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/9]
    root          69  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/9]
    root          71  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/9:0H-events_highpri]
    root          72  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/10]
    root          73  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/10]
    root          74  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/10]
    root          75  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/10]
    root          77  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/10:0H-events_highpri]
    root          78  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [cpuhp/11]
    root          79  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [idle_inject/11]
    root          80  0.0  0.0      0     0 ?        S    Aug23   0:02  \_ [migration/11]
    root          81  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ksoftirqd/11]
    root          83  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/11:0H-events_highpri]
    root          84  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [kdevtmpfs]
    root          85  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [inet_frag_wq]
    root          86  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [kauditd]
    root          90  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [khungtaskd]
    root          91  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [oom_reaper]
    root          92  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [writeback]
    root          93  0.0  0.0      0     0 ?        S    Aug23   0:19  \_ [kcompactd0]
    root          94  0.0  0.0      0     0 ?        SN   Aug23   0:00  \_ [ksmd]
    root          95  0.0  0.0      0     0 ?        SN   Aug23   0:00  \_ [khugepaged]
    root         142  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kintegrityd]
    root         143  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kblockd]
    root         144  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [blkcg_punt_bio]
    root         145  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [tpm_dev_wq]
    root         146  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [ata_sff]
    root         147  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [md]
    root         148  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [edac-poller]
    root         149  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [devfreq_wq]
    root         150  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [watchdogd]
    root         152  0.0  0.0      0     0 ?        I<   Aug23   0:03  \_ [kworker/0:1H-kblockd]
    root         153  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/25-AMD-Vi]
    root         155  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [kswapd0]
    root         156  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [ecryptfs-kthrea]
    root         158  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kthrotld]
    root         159  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/27-aerdrv]
    root         160  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/28-aerdrv]
    root         161  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/29-aerdrv]
    root         162  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/31-aerdrv]
    root         163  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/32-aerdrv]
    root         173  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [acpi_thermal_pm]
    root         175  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [vfio-irqfd-clea]
    root         176  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [mld]
    root         177  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [ipv6_addrconf]
    root         186  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kstrp]
    root         189  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [zswap-shrink]
    root         200  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kworker/u65:0]
    root         205  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [charger_manager]
    root         207  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [irq/26-ACPI:Eve]
    root         240  0.0  0.0      0     0 ?        I<   Aug23   0:05  \_ [kworker/11:1H-kblockd]
    root         245  0.0  0.0      0     0 ?        I<   Aug23   0:03  \_ [kworker/1:1H-kblockd]
    root         267  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [cryptd]
    root         268  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [nvme-wq]
    root         271  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [nvme-reset-wq]
    root         272  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [nvme-delete-wq]
    root         275  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_0]
    root         289  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_0]
    root         295  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_1]
    root         303  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_1]
    root         304  0.0  0.0      0     0 ?        I<   Aug23   0:04  \_ [kworker/3:1H-kblockd]
    root         309  0.0  0.0      0     0 ?        I<   Aug23   0:05  \_ [kworker/5:1H-kblockd]
    root         311  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_2]
    root         313  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_2]
    root         314  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_3]
    root         315  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_3]
    root         316  0.0  0.0      0     0 ?        I<   Aug23   0:03  \_ [kworker/7:1H-kblockd]
    root         317  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_4]
    root         318  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_4]
    root         319  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [scsi_eh_5]
    root         320  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [scsi_tmf_5]
    root         321  0.0  0.0      0     0 ?        I<   Aug23   0:04  \_ [kworker/2:1H-kblockd]
    root         322  0.0  0.0      0     0 ?        I<   Aug23   0:04  \_ [kworker/9:1H-kblockd]
    root         323  0.0  0.0      0     0 ?        I<   Aug23   0:03  \_ [kworker/6:1H-kblockd]
    root         355  0.0  0.0      0     0 ?        I<   Aug23   0:05  \_ [kworker/4:1H-kblockd]
    root         356  0.0  0.0      0     0 ?        I<   Aug23   0:03  \_ [kworker/8:1H-kblockd]
    root         357  0.0  0.0      0     0 ?        I<   Aug23   0:05  \_ [kworker/10:1H-kblockd]
    root         366  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [md1_raid1]
    root         367  0.0  0.0      0     0 ?        S    Aug23   0:49  \_ [md2_raid1]
    root         369  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [md0_raid1]
    root         413  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [raid5wq]
    root         468  0.0  0.0      0     0 ?        S    Aug23   0:17  \_ [jbd2/md2-8]
    root         469  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [ext4-rsv-conver]
    root         558  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [ipmi-msghandler]
    root         565  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kaluad]
    root         568  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kmpath_rdacd]
    root         570  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kmpathd]
    root         571  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [kmpath_handlerd]
    root         716  0.0  0.0      0     0 ?        S    Aug23   0:00  \_ [jbd2/md1-8]
    root         717  0.0  0.0      0     0 ?        I<   Aug23   0:00  \_ [ext4-rsv-conver]
    root       71427  0.0  0.0      0     0 ?        I<   Aug29   0:00  \_ [dio/md2]
    root      246568  0.0  0.0      0     0 ?        I    Sep09   0:12  \_ [kworker/1:0-rcu_par_gp]
    root      247330  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/6:2-mm_percpu_wq]
    root      253612  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/9:1-mm_percpu_wq]
    root      319924  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/9:0-mm_percpu_wq]
    root      322871  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/6:0-events]
    root      322887  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/3:2-events]
    root      322893  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/7:1-mm_percpu_wq]
    root      323220  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/11:0-mm_percpu_wq]
    root      323226  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/7:0-events]
    root      323602  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/2:1-rcu_gp]
    root      323604  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/0:0-rcu_gp]
    root      323694  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/8:0-events]
    root      323697  0.0  0.0      0     0 ?        I    Sep09   0:00  \_ [kworker/3:0]
    root      324706  0.0  0.0      0     0 ?        I    01:54   0:00  \_ [kworker/10:1-mm_percpu_wq]
    root      325009  0.0  0.0      0     0 ?        I    03:06   0:00  \_ [kworker/5:1-events]
    root      325309  0.0  0.0      0     0 ?        I    03:42   0:00  \_ [kworker/10:2]
    root      325751  0.0  0.0      0     0 ?        I    05:18   0:00  \_ [kworker/5:2]
    root      326335  0.0  0.0      0     0 ?        I    08:59   0:00  \_ [kworker/8:2-mm_percpu_wq]
    root      326336  0.0  0.0      0     0 ?        I    08:59   0:00  \_ [kworker/0:1-events]
    root      326338  0.0  0.0      0     0 ?        I    08:59   0:00  \_ [kworker/1:2-events]
    root      326339  0.0  0.0      0     0 ?        I    08:59   0:00  \_ [kworker/2:2-events]
    root      326341  0.0  0.0      0     0 ?        I    08:59   0:00  \_ [kworker/4:0-events]
    root      326579  0.0  0.0      0     0 ?        I    09:48   0:00  \_ [kworker/4:1-events]
    root      326641  0.0  0.0      0     0 ?        I    09:57   0:00  \_ [kworker/u64:2-flush-9:2]
    root      326659  0.0  0.0      0     0 ?        I    09:58   0:00  \_ [kworker/11:3-events]
    root      327708  0.0  0.0      0     0 ?        I    10:12   0:00  \_ [kworker/u64:0-events_unbound]
    root      328113  0.0  0.0      0     0 ?        I    10:19   0:00  \_ [kworker/u64:1-events_power_efficient]
    root           1  0.0  0.0 167444 12692 ?        Ss   Aug23   0:26 /sbin/init
    root         530  0.0  0.3 277096 214004 ?       S<s  Aug23   1:44 /lib/systemd/systemd-journald
    root         572  0.0  0.0 289316 27104 ?        SLsl Aug23   1:00 /sbin/multipathd -d -s
    root         576  0.0  0.0  25732  6692 ?        Ss   Aug23   0:00 /lib/systemd/systemd-udevd
    root         715  0.0  0.0   3924  2644 ?        Ss   Aug23   0:12 /sbin/mdadm --monitor --scan
    systemd+     737  0.0  0.0  16116  8092 ?        Ss   Aug23   0:02 /lib/systemd/systemd-networkd
    systemd+     744  0.0  0.0  89352  6624 ?        Ssl  Aug23   0:01 /lib/systemd/systemd-timesyncd
    systemd+     749  0.0  0.0  25396 13660 ?        Ss   Aug23   0:01 /lib/systemd/systemd-resolved
    message+     750  0.0  0.0   8868  5024 ?        Ss   Aug23   0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --system
    root         755  0.0  0.0  82836  3788 ?        Ssl  Aug23   1:23 /usr/sbin/irqbalance --foreground
    root         756  0.0  0.0  32784 18956 ?        Ss   Aug23   0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
    root         758  0.0  0.0 234484  6828 ?        Ssl  Aug23   0:00 /usr/libexec/polkitd --no-debug
    syslog       759  0.0  0.0 222404  6076 ?        Ssl  Aug23   0:30 /usr/sbin/rsyslogd -n -iNONE
    root         760  0.0  0.0  31896  7420 ?        Ss   Aug23   0:01 /lib/systemd/systemd-logind
    root         764  0.0  0.0 392672 13068 ?        Ssl  Aug23   0:00 /usr/libexec/udisks2/udisksd
    root         773  0.0  0.0 317008 11968 ?        Ssl  Aug23   0:00 /usr/sbin/ModemManager
    root         791  0.0  0.0 109744 21676 ?        Ssl  Aug23   0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shu
    root         837  0.0  0.0   6892  2880 ?        Ss   Aug23   0:00 /usr/sbin/cron -f -P
    daemon       841  0.0  0.0   3860  1256 ?        Ss   Aug23   0:00 /usr/sbin/atd -f
    root         844  0.0  0.0  15420  9504 ?        Ss   Aug23   0:45 sshd: /usr/sbin/sshd -D [listener] 1 of 10-100 startups
    root      326648  0.0  0.0  17452 11612 ?        Ss   09:58   0:00  \_ sshd: root@pts/0
    root      326758  0.0  0.0   8784  5488 pts/0    Ss   09:58   0:00  |   \_ -bash
    root      330369  0.0  0.0  10404  3752 pts/0    R+   10:25   0:00  |   |   \_ ps auxf
    root      329948  0.1  0.0   7368  3560 ?        Ss   10:24   0:00  |   \_ bash -c while true; do sleep 1;head -v -n 8 /proc/meminfo; hea
    root      330368  0.0  0.0   5768  1016 ?        S    10:25   0:00  |       \_ sleep 1
    root      326676  0.0  0.0  17144 11248 ?        Ss   09:58   0:00  \_ sshd: root@notty
    root      326816  0.0  0.0   7764  5444 ?        Ss   09:58   0:00  |   \_ /usr/lib/openssh/sftp-server
    root      330351  0.5  0.0  16432 10088 ?        Ss   10:25   0:00  \_ sshd: root [priv]
    sshd      330352  0.0  0.0  15424  5468 ?        S    10:25   0:00      \_ sshd: root [net]
    root         852  0.0  0.0   6172  1160 tty1     Ss+  Aug23   0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
    root      318581  0.0  0.0 295632 20724 ?        Ssl  Sep09   0:00 /usr/libexec/packagekitd
    root      322054  0.0  0.0 195936  7296 ?        Ss   Sep09   0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
    www-data  322055  0.0  0.0 197132 15176 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322056  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322057  0.0  0.0 197000 14632 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322058  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322059  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322060  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322061  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322062  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322063  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322064  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322065  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    www-data  322066  0.0  0.0 196608 11132 ?        S    Sep09   0:00  \_ nginx: worker process
    root      326651  0.0  0.0  17032  9836 ?        Ss   09:58   0:00 /lib/systemd/systemd --user
    root      326652  0.0  0.0 170356  4940 ?        S    09:58   0:00  \_ (sd-pam)
    
  • rm_ IPv6 Advocate, Veteran
    edited September 2022

    @AXYZE said: hetzner -> serverius is ~744Mbps

    So there is no problem from Hetzner to other DCs either? Only two (home broadband?) connections in Poland?

    I was going to say "try disabling Flow Control on the NICs", but from that it sounds more like your Poland ISPs are to blame, not Hetzner. Try upload tests from Hetzner to a diverse set of locations worldwide, not just one country. Running a YABS can do that.
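
    For reference, the flow-control check and a YABS run look roughly like this (the interface name is just an example):

    ethtool -a enp7s0                      # show current pause/flow-control settings
    sudo ethtool -A enp7s0 rx off tx off   # disable flow control on that NIC
    curl -sL yabs.sh | bash                # YABS includes iperf3 tests to several locations worldwide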

    @AXYZE said: Here's tracert from my connection to Hetzner

    If you face a problem with Hetzner->PL, then traceroute the same direction (from Hetzner), as it might take a different route.

  • AXYZE Member
    edited September 2022

    @rm_ said:

    @AXYZE said: hetzner -> serverius is ~744Mbps

    So there is no problem from Hetzner to other DCs either? Only two (home broadband?) connections in Poland?

    I was going to say "try disabling Flow Control on the NICs", but from that it sounds more like your Poland ISP(s) are to blame, not Hetzner. Try upload tests from Hetzner to a diverse set of locations worldwide, not just one country.

    That can indeed be the case, because now I checked from VPSes I have in all locations (Sofia, Amsterdam, Texas etc.) and even shitty boomer.host with 100ms+ ping gets like 100Mbps!

    I created a Nuremberg Cloud instance and the iperf result (my connection -> Hetzner Nuremberg) is 15Mbps.

    So either Hetzner or my ISP is to blame here. I will ask friends with different ISPs in Poland to test it out and I'll come back :)

    Edit: Ok most unexpected shit ever - my phone gets 30Mbps to Hetzner, WiFi connection to same router.
    I'll check if Windows is causing this shit lol

  • rm_ IPv6 Advocate, Veteran

    @AXYZE said: I checked from VPSes I have in all locations (Sofia, Amsterdam, Texas etc.) and even shitty boomer.host with 100ms+ ping gets like 100Mbps

    If you have so many VPSes, just check iperf3 upload from Hetzner to them all. As for why it varies with PL ISPs, both of those could be using the same peering link or same exchange (check with traceroute from Hetzner to PL), and that one might be overloaded.

  • AXYZE Member
    edited September 2022

    @rm_ said:

    @AXYZE said: I checked from VPSes I have in all locations (Sofia, Amsterdam, Texas etc.) and even shitty boomer.host with 100ms+ ping gets like 100Mbps

    If you have so many VPSes, just check iperf3 upload from Hetzner to them all. As for why it varies with PL ISPs, both of those could be using the same peering link or same exchange (check with traceroute from Hetzner to PL), and that one might be overloaded.

    Traceroute from Hetzner shows the same route (Level3 -> PL Inea).

    I messaged one friend who has a different ISP. He got 70Mbps. Still Poland.
    He has half the ping but his tracert looks like a mess (it routes via Germany)... Who is to blame here and who can fix it? 10Mbps is unusable, can't stream video etc.

    Tracing route to static.XXtargetXX.clients.your-server.de [*TARGET IP*]
    over a maximum of 30 hops:
    
      1     1 ms     1 ms     6 ms  192.168.0.1
      2     6 ms     2 ms     2 ms  194-116-192-1.komster.pl [194.116.192.1]
      3     5 ms     3 ms     3 ms  193-22-83-97.komster.pl [193.22.83.97]
      4     4 ms     4 ms     4 ms  gi0-0-0-1.nr01.b033898-0.poz01.atlas.cogentco.com [149.6.28.9]
      5     6 ms     3 ms     4 ms  be2824.rcr21.poz01.atlas.cogentco.com [154.25.10.94]
      6    15 ms    14 ms    11 ms  be3040.ccr41.ham01.atlas.cogentco.com [130.117.3.29]
      7    15 ms    12 ms    15 ms  be2771.rcr01.b015763-1.ham01.atlas.cogentco.com [154.54.63.242]
      8    15 ms    14 ms    15 ms  telia.ham01.atlas.cogentco.com [130.117.14.246]
      9    15 ms    14 ms    15 ms  hbg-bb3-link.ip.twelve99.net [62.115.120.70]
     10    28 ms    28 ms    29 ms  s-bb1-link.ip.twelve99.net [62.115.134.95]
     11    37 ms    35 ms    34 ms  hls-b3-link.ip.twelve99.net [62.115.122.33]
     12    30 ms    31 ms    31 ms  hetzner-svc076536-ic365572.ip.twelve99-cust.net [62.115.52.255]
     13    35 ms    35 ms    37 ms  core32.hel1.hetzner.com [213.239.203.209]
     14    33 ms    36 ms    32 ms  ex9k2.dc4.hel1.hetzner.com [213.239.252.214]
     15    32 ms    31 ms    31 ms  static.XXtargetXX.clients.your-server.de [*TARGET IP*]
    
    Trace complete.
    
  • darkimmortal Member
    edited September 2022

    @AXYZE said:
    Edit: Ok most unexpected shit ever - my phone gets 30Mbps to Hetzner, WiFi connection to same router.
    I'll check if Windows is causing this shit lol

    Try netsh int tcp set global autotuninglevel=normal

    (Normal is the default, was chasing similar TCP weirdness on one windows machine, not sure how it ever got turned off)

  • rm_ IPv6 Advocate, Veteran
    edited September 2022

    @AXYZE said: his tracert looks like a mess (it routes via Germany)...

    Once again, it is next to useless to look at the upload direction traceroute to Hetzner, if the issue is in download speeds from Hetzner. Up/down routes can and will route differently, even if broadly the same set of ISPs (can be different peering locations or routers).

    Ask your friend for his home IPv4 and then traceroute that from your server. But if he gets 70, then perhaps he is not affected, and just keep the investigation to the prior two locations.

    Better yet, instead of traceroute run mtr, and leave it running for 1000 packets, to see if there is any packet loss to the problem destinations.
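
    Something like the following, run from the Hetzner box towards the problem IP (report mode, 1000 probes, hostnames and IPs shown; the destination is a placeholder):

    sudo apt install mtr-tiny
    mtr -rwbc 1000 203.0.113.10    # check the Loss% column on every hop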

  • AXYZE Member
    edited September 2022

    @rm_ said:

    @AXYZE said: his tracert looks like a mess (it routes via Germany)...

    Once again, it is next to useless to look at the upload direction traceroute to Hetzner, if the issue is in download speeds from Hetzner. Up/down routes can and will route differently, even if broadly the same ISPs (can be different peering locations or routers).
    Ask your friend for his home IPv4 and then traceroute that from your server.

    Better yet, instead of traceroute run mtr, and leave it running for 1000 packets, to see if there is any packet loss to the problem destinations.

    OK, I understand.

    Here's a traceroute from Hetzner to Inea (bad speed):

    traceroute to MYINEAIP (MYINEAIP), 30 hops max, 60 byte packets
     1  100.81.66.1 (100.81.66.1)  0.494 ms  0.501 ms  0.484 ms
     2  core32.hel1.hetzner.com (213.239.252.213)  1.362 ms core31.hel1.hetzner.com (213.239.252.209)  0.398 ms  0.445 ms
     3  juniper4.dc1.hel1.hetzner.com (213.239.224.37)  0.522 ms  0.508 ms  0.522 ms
     4  ae53.edge1.Helsinki1.Level3.net (212.133.6.1)  0.634 ms  0.619 ms  0.649 ms
     5  4.69.167.126 (4.69.167.126)  49.814 ms  49.800 ms  48.043 ms
     6  213.242.118.14 (213.242.118.14)  54.241 ms  56.065 ms  56.012 ms
     7  e123-2.icpnet.pl (46.238.123.2)  42.378 ms e123-6.icpnet.pl (46.238.123.6)  50.189 ms e123-2.icpnet.pl (46.238.123.2)  40.523 ms
     8  e123-9.icpnet.pl (46.238.123.9)  50.324 ms e123-14.icpnet.pl (46.238.123.14)  55.613 ms e123-9.icpnet.pl (46.238.123.9)  52.306 ms
     9  e91-110.icpnet.pl (46.238.91.110)  48.378 ms  50.175 ms  50.188 ms
    

    and then * * * 21x

    Here's a traceroute from Hetzner to Komster (good speed):

    traceroute to KOMSTERIP (KOMSTERIP), 30 hops max, 60 byte packets
     1  100.81.66.1 (100.81.66.1)  5.114 ms  5.081 ms  5.066 ms
     2  core31.hel1.hetzner.com (213.239.252.209)  1.469 ms core32.hel1.hetzner.com (213.239.252.213)  0.358 ms  1.151 ms
     3  * juniper4.dc1.hel1.hetzner.com (213.239.224.37)  0.400 ms  0.385 ms
     4  et400.RT.RAD.HKI.FI.retn.net (87.245.248.130)  0.747 ms  0.732 ms  0.752 ms
     5  et001-6.RT.NIA.POZ.PL.retn.net (87.245.233.90)  20.114 ms  20.350 ms  20.336 ms
     6  GW-HTI.retn.net (87.245.248.91)  24.331 ms  24.117 ms  24.198 ms
     7  250.240.40.164-rev.hti.pl (164.40.240.250)  24.447 ms  24.325 ms  24.631 ms
     8  149.6.28.10 (149.6.28.10)  23.662 ms  23.523 ms  23.618 ms
     9  193-22-83-65.komster.pl (193.22.83.65)  24.172 ms  24.141 ms  24.126 ms
    

    and then * * * 21x

    Both of these connections are behind NAT, btw.
    As we can see, the traceroute to Inea is kinda the same, but to Komster it's completely different.

    Should I test MTR from Hetzner to local Inea connection or other way around?

    So routes are here to blame?

    @darkimmortal said:

    @AXYZE said:
    Edit: Ok most unexpected shit ever - my phone gets 30Mbps to Hetzner, WiFi connection to same router.
    I'll check if Windows is causing this shit lol

    Try netsh int tcp set global autotuninglevel=normal

    (Normal is the default, was chasing similar TCP weirdness on one windows machine, not sure how it ever got turned off)

    That made my results very inconsistent - now I get 5-22Mbps, but the average improved from 8-10Mbps to 17Mbps. It's still not good enough; 30-50Mbps is what I'm chasing...

    Still, thank you very much. I'll wait until my friend with the other 8-10Mbps-limited connection is online so we can test if this helps him too.

  • rm_ IPv6 Advocate, Veteran
    edited September 2022

    @AXYZE said: Should I test MTR from Hetzner to local Inea connection or other way around?

    Yes, better from Hetzner. But in general what you can mostly achieve here is to exonerate Hetzner (as I said above with YABS or upload tests from Hetzner to all of your VPSes), and narrow the issue down to particular ISPs.

    You can open a ticket with Hetzner about the bad speeds to your ISPs, and maybe they can try making a route change. But it is not guaranteed that they will be able to (or will want to).

    Or complain to your ISPs (good luck with that).

    If the servers are otherwise a good deal for you, it might be an idea to explore proxying traffic from Hetzner (rerouting via VPN) through some VPS from which you do have good speeds, such as a VPS directly in Poland.

  • PM me the address/IP and I'll run iperf/wget from Orange PL to mess even more with your results, lmao.

  • @JabJab said:
    PM me the address/IP and I'll run iperf/wget from Orange PL to mess even more with your results, lmao.

    PM'ed. Let's see ;)

  • Why edit MTU default? It is perfect as it is. Never touch MTU.

  • @LTniger said:
    Why edit MTU default? It is perfect as it is. Never touch MTU.

    Like I said - the Oracle instance has MTU 9000 as default and performance was way better, so I tried to copy the settings (incl. MTU) to my Hetzner dedi to see if something changes.
    It was a temporary, non-persistent change, so I rebooted my dedi and it's back to default :)

  • @AXYZE Some NICs cause problems. Try disabling all hardware offload, or one by one. For instance: ethtool -K eth0 tso off gso off
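
    A sketch of checking and toggling those offloads one at a time (the interface name is just an example):

    ethtool -k enp7s0 | grep -E 'tcp-segmentation|generic-segmentation|generic-receive|large-receive'
    sudo ethtool -K enp7s0 tso off   # TCP segmentation offload
    sudo ethtool -K enp7s0 gso off   # generic segmentation offload
    sudo ethtool -K enp7s0 gro off   # generic receive offload
    # re-run the single-stream iperf3 test after each change, and turn a setting back on if it makes no difference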

  • I got a nice 100Mbit/s from Orange PL on his iperf on a single connection, so I assume it's not really "server" limited - it's somewhere along the network.
    Unfortunately the RPi only has a 100Mbit/s card, and when I run iperf3 on Windows 10 (which has a gigabit NIC) the results are even worse ^.-

  • AXYZE Member
    edited September 2022

    I have no fuckin' idea what happened, but the problem is fixed. I did nothing, and the route on tracert is still the same. Now I get 300Mbps (the bandwidth of my home internet) on a single HTTP connection.

    I think the route was somehow broken/overloaded and they just fixed it.
    Damn...
    Ping is still 2x higher (60ms) with my provider than with different ones in the same city (30ms); I'll contact Hetzner to see if they can reroute it.

    @MikeA , @LTniger (nice name), @JabJab , @rm_ , @darkimmortal , @Hxxx , @jmgcaguicla , @FoxelVox , @Daniel15 , @luckypenguin
    Thank you all for your input and help!!!

  • DP Administrator, The Domain Guy

    At work, when things "self heal" (at least from our view), we'll just take it as one of those "weird Internet things" moments :smiley:

    If we think it's necessary, or if it was highly noticeable, then before the customer asks what happened we'll probably request the RCA from the relevant parties/stakeholders.

  • V_O Member
    edited October 2022

    @AXYZE I had the same issue - horrible performance to different networks - and I found no help at all until I changed some lines in my NGINX conf.
    Normally I would assume your "listen" config looks something like this:

    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    

    What most users of NGINX never come across are the "sndbuf" and "rcvbuf" parameters of the listen directive, which are quite important if you're sending out lots of data.
    Please try changing your config to the following:

    listen 443 http2 ssl default_server reuseport backlog=131072 so_keepalive=off rcvbuf=4m sndbuf=4m fastopen=10000;
    listen [::]:443 http2 ssl default_server reuseport backlog=131072 so_keepalive=off rcvbuf=4m sndbuf=4m fastopen=10000;
    

    "Fastopen" is optional and must be supported by the kernel. If you want to use if, please also make sure you have the following set at /etc/sysctl.conf:

    net.ipv4.tcp_fastopen = 3
    

    I have fully tested this against many ISPs; it performs well with Hetzner. If I do an HTTPS download between NetCup (downloading) and Hetzner (uploading), I get 2.5 Gbit/s on a single HTTPS connection.
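
    For completeness, applying and verifying those changes is roughly the following (standard commands, adjust to your own setup):

    sudo sysctl -p                                  # reload /etc/sysctl.conf
    cat /proc/sys/net/ipv4/tcp_fastopen             # should print 3
    sudo nginx -t && sudo systemctl reload nginx    # validate the new listen directives, then reload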

    If you are interested in the rest of my sysctl.conf, have a look here:

    fs.file-max = 4000000
    fs.nr_open = 4000000
    kernel.core_uses_pid = 1
    kernel.msgmax = 65536
    kernel.msgmnb = 65536
    kernel.pid_max = 65536
    kernel.randomize_va_space=2
    kernel.sched_autogroup_enabled = 0
    kernel.sysrq = 0
    net.core.default_qdisc = fq
    net.core.netdev_max_backlog = 64000
    net.core.optmem_max = 25165824
    net.core.rmem_default = 67108864
    net.core.rmem_max = 67108864
    net.core.somaxconn = 65535
    net.core.wmem_default = 67108864
    net.core.wmem_max = 33554432
    net.core.wmem_max = 67108864
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv4.conf.all.log_martians = 0
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.conf.default.accept_source_route = 0
    net.ipv4.conf.default.secure_redirects = 0
    net.ipv4.ip_forward = 1
    net.ipv4.ip_local_port_range = 10000 65535
    net.ipv4.neigh.default.gc_stale_time = 7200
    net.ipv4.neigh.default.gc_thresh1 = 225280
    net.ipv4.neigh.default.gc_thresh2 = 450560
    net.ipv4.neigh.default.gc_thresh3 = 450560
    net.ipv4.tcp_adv_win_scale = 1
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.tcp_dsack = 0
    net.ipv4.tcp_fack = 0
    net.ipv4.tcp_fastopen = 3
    net.ipv4.tcp_fin_timeout = 10
    net.ipv4.tcp_keepalive_intvl = 30
    net.ipv4.tcp_keepalive_probes = 5
    net.ipv4.tcp_keepalive_time = 300
    net.ipv4.tcp_low_latency = 1
    net.ipv4.tcp_max_orphans = 262144
    net.ipv4.tcp_max_syn_backlog = 3240000
    net.ipv4.tcp_max_tw_buckets = 5880000
    net.ipv4.tcp_mem = 576 64768 98152
    net.ipv4.tcp_mtu_probing = 1
    net.ipv4.tcp_rfc1337 = 1
    net.ipv4.tcp_rmem = 4096 65536 33554432
    net.ipv4.tcp_sack = 0
    net.ipv4.tcp_slow_start_after_idle = 0
    net.ipv4.tcp_syn_retries = 3
    net.ipv4.tcp_synack_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_timestamps = 0
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_wmem = 4096 65536 33554432
    net.ipv6.conf.all.accept_ra = 0
    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.all.disable_ipv6 = 0
    net.ipv6.conf.default.accept_ra = 0
    net.ipv6.conf.default.autoconf = 0
    net.ipv6.conf.default.disable_ipv6 = 0
    vm.dirty_background_ratio = 10
    vm.dirty_expire_centisecs = 1500
    vm.dirty_ratio = 15
    vm.dirty_writeback_centisecs = 500
    vm.max_map_count = 262144
    vm.overcommit_memory = 1
    vm.swappiness = 10
    vm.vfs_cache_pressure = 50
    

    This config is intended to keep the server from hitting limits by any means.
    Currently, I'm running this on 8-16 cores and 16-32GB of RAM.

    Cheers.

  • @V_O said: If you are interested in the rest of my sysctl.conf, have a look here:
    This config is intended to keep the server from hitting limits by any means.
    Currently, I'm running this on 8-16 cores and 16-32GB of RAM.

    Did you do any kind of testing to verify that these settings are really useful? I know if you Google these settings you get lots of recommendations for different values, but I feel like it's always anecdotal and rarely tested. The ones you have for vm.max_map_count, vm.swappiness and vm.vfs_cache_pressure are very commonly recommended if you Google, but having those settings just means you don't trust Linux to manage the memory.

    If you're running a DB or something, sure, set the swappiness to 1 or even 0. That will definitely help. It also makes sense to configure things like net.core.somaxconn or the open-files limit depending on your use case. I feel like most other configs just blindly increase values because it seems like they might improve performance (but in reality they might not).
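
    A minimal way to A/B a single tunable would be something like this (the server address and values are placeholders, not recommendations):

    sysctl net.ipv4.tcp_rmem                                    # note the current value
    iperf3 -c iperf.example.net -t 30                           # baseline, single stream
    sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 33554432"     # change exactly one thing
    iperf3 -c iperf.example.net -t 30                           # repeat, compare, then revert if it didn't help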

  • V_O Member

    @NoComment
    Well, yes, I did a lot of testing, as this issue almost drove me mad. I can definitely confirm that these settings do the job.

  • TimboJones Member
    edited October 2022

    @AXYZE said:

    @LTniger said:
    Why edit MTU default? It is perfect as it is. Never touch MTU.

    Like I said - the Oracle instance has MTU 9000 as default and performance was way better

    I highly doubt that. 9000 MTU is only on internal, private NICs, not the public-facing one.

  • @TimboJones said:

    @AXYZE said:

    @LTniger said:
    Why edit MTU default? It is perfect as it is. Never touch MTU.

    Like I said - the Oracle instance has MTU 9000 as default and performance was way better

    I highly doubt that. 9000 MTU is only on internal, private NICs, not the public-facing one.

    But still, it was set to 9000. I tried to duplicate every setting from the machine that works nicely, that's all.

    It's a necro btw - the problem was "fixed" 1.5 months ago. It was a routing problem from Finland, and now I know that it sadly happens from time to time at this Hetzner location :/ seems like the network there is much worse...
