PureVoltage.com - am I the crazy one?

mwmw Member
edited February 12 in Reviews

So we're currently evaluating different hosts, and @PureVoltage was a pleasure during pre-sales, but now that we have our first delivery I need a sanity check from the community.

Below is a snippet from my ticket with them regarding the network performance.

#

Performance seems low:

https://www.speedtest.net/result/c/dde424f7-e4e5-4f6e-aef8-8c7dcc2dbf5e

https://www.speedtest.net/result/c/a9527d7d-e959-4dd4-a95e-ddac53dd3d71

https://www.speedtest.net/result/c/c0b8a8a9-52d6-486d-b98b-2c62f49e0e53

#

45.140.189.x

root@x:~# iperf3 -c 169.197.86.x -P 4
Connecting to host 169.197.86.x, port 5201


[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-10.00 sec 208 MBytes 174 Mbits/sec 1370 sender
[ 6] 0.00-10.08 sec 206 MBytes 171 Mbits/sec receiver
[ 8] 0.00-10.00 sec 272 MBytes 228 Mbits/sec 291 sender
[ 8] 0.00-10.08 sec 270 MBytes 225 Mbits/sec receiver
[ 10] 0.00-10.00 sec 225 MBytes 188 Mbits/sec 539 sender
[ 10] 0.00-10.08 sec 222 MBytes 185 Mbits/sec receiver
[ 12] 0.00-10.00 sec 170 MBytes 143 Mbits/sec 158 sender
[ 12] 0.00-10.08 sec 167 MBytes 139 Mbits/sec receiver
[SUM] 0.00-10.00 sec 875 MBytes 734 Mbits/sec 2358 sender
[SUM] 0.00-10.08 sec 866 MBytes 721 Mbits/sec receiver
iperf Done.

#

root@x:~# iperf3 -c 169.197.86.x -P 4 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending


[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-10.07 sec 8.27 MBytes 6.88 Mbits/sec 108 sender
[ 6] 0.00-10.00 sec 7.36 MBytes 6.18 Mbits/sec receiver
[ 8] 0.00-10.07 sec 4.36 MBytes 3.63 Mbits/sec 50 sender
[ 8] 0.00-10.00 sec 4.08 MBytes 3.42 Mbits/sec receiver
[ 10] 0.00-10.07 sec 5.76 MBytes 4.79 Mbits/sec 61 sender
[ 10] 0.00-10.00 sec 5.25 MBytes 4.40 Mbits/sec receiver
[ 12] 0.00-10.07 sec 26.3 MBytes 21.9 Mbits/sec 56 sender
[ 12] 0.00-10.00 sec 24.7 MBytes 20.7 Mbits/sec receiver
[SUM] 0.00-10.07 sec 44.6 MBytes 37.2 Mbits/sec 275 sender
[SUM] 0.00-10.00 sec 41.4 MBytes 34.7 Mbits/sec receiver
iperf Done.

#
#

45.157.234.x

root@x:~# iperf3 -c 169.197.86.x -P 4
Connecting to host 169.197.86.x, port 5201


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 134 MBytes 112 Mbits/sec 575 sender
[ 5] 0.00-10.09 sec 132 MBytes 109 Mbits/sec receiver
[ 7] 0.00-10.00 sec 127 MBytes 107 Mbits/sec 501 sender
[ 7] 0.00-10.09 sec 125 MBytes 104 Mbits/sec receiver
[ 9] 0.00-10.00 sec 233 MBytes 195 Mbits/sec 384 sender
[ 9] 0.00-10.09 sec 231 MBytes 192 Mbits/sec receiver
[ 11] 0.00-10.00 sec 126 MBytes 106 Mbits/sec 699 sender
[ 11] 0.00-10.09 sec 124 MBytes 103 Mbits/sec receiver
[SUM] 0.00-10.00 sec 620 MBytes 520 Mbits/sec 2159 sender
[SUM] 0.00-10.09 sec 612 MBytes 509 Mbits/sec receiver
iperf Done.

#

root@x:~# iperf3 -c 169.197.86.x -P 4 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.09 sec 14.2 MBytes 11.8 Mbits/sec 157 sender
[ 5] 0.00-10.00 sec 12.6 MBytes 10.5 Mbits/sec receiver
[ 7] 0.00-10.09 sec 128 MBytes 107 Mbits/sec 2010 sender
[ 7] 0.00-10.00 sec 126 MBytes 106 Mbits/sec receiver
[ 9] 0.00-10.09 sec 123 MBytes 102 Mbits/sec 919 sender
[ 9] 0.00-10.00 sec 121 MBytes 101 Mbits/sec receiver
[ 11] 0.00-10.09 sec 125 MBytes 104 Mbits/sec 1374 sender
[ 11] 0.00-10.00 sec 121 MBytes 102 Mbits/sec receiver
[SUM] 0.00-10.09 sec 391 MBytes 325 Mbits/sec 4460 sender
[SUM] 0.00-10.00 sec 380 MBytes 319 Mbits/sec receiver
iperf Done.

#

Hi there,
Please try that test with a higher number -P 16

[SUM] 0.00-10.00 sec 8.18 GBytes 7.02 Gbits/sec 20265 sender
[SUM] 0.00-10.09 sec 7.74 GBytes 6.59 Gbits/sec receiver

Is what we seen back to 45.157.234.x

Sincerely,

Liam S.
Data Center Technician
PureVoltage Hosting, Inc.

#

That won't address the very clear and obvious issue with traffic outbound.

See attached for further evidence.

#

Hi,

You will not reach 10 GbE speeds in India.

Sincerely,

Miguel
Senior Cloud Engineer
PureVoltage Hosting, Inc.

#

Tests anyway

#

root@x:~# iperf3 -c 169.197.86.x -P 16 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending


[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-10.08 sec 2.25 GBytes 1.92 Gbits/sec 12528 sender
[SUM] 0.00-10.00 sec 2.21 GBytes 1.90 Gbits/sec receiver
iperf Done.

#

root@x:~# iperf3 -c 169.197.86.x -P 16 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending
[SUM] 0.00-10.09 sec 1.58 GBytes 1.35 Gbits/sec 10371 sender
[SUM] 0.00-10.00 sec 1.53 GBytes 1.32 Gbits/sec receiver
iperf Done.

#

Are you seriously ignoring all the test evidence I am providing because the first set of tests in a screenshot is to India - did you even read the test?

#

Hi there,
If you are having issues with specific routes outbound please provide a MTR both directions so we can see what routes you are having issues with.

Sincerely,

Liam S.
Data Center Technician
PureVoltage Hosting, Inc.

#

Keep in mind we are paying for 40Gbit and were told the switch we are on was "freshly deployed today"

Am I wrong to think that the above is sufficient evidence that something is wrong, and that asking for an MTR to every route sounds like they just want to close the ticket?
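
For what it's worth, a bidirectional MTR is quick to produce; something like the sketch below, run once from the server and once from the far end back toward it, is usually what a NOC wants (the target here is just one of the public endpoints tested further down, swap in whichever route looks bad):

# 100 probes, wide report, hostnames + IPs, with AS numbers
mtr -rwbz -c 100 la.speedtest.clouvider.net
# ...and the mirror-image run from the remote end back toward the server's IP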

How is this the attitude of three separate support agents?

And ofc... yabs

root@x:~# curl -sL https://yabs.sh | bash
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2025-01-01                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Wed Feb 12 05:28:10 PM EST 2025

Basic System Information:
---------------------------------
Uptime     : 0 days, 1 hours, 22 minutes
Processor  : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
CPU cores  : 72 @ 3700.000 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 1006.5 GiB
Swap       : 0.0 KiB
Disk       :
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.8.12-4-pve
VM Type    : NONE
IPv4/IPv6  : ✔ Online / ❌ Offline

IPv4 Network Information:
---------------------------------
ISP        : PureVoltage Hosting Inc.
ASN        : AS26548 PureVoltage Hosting Inc.
Host       : PureVoltage Hosting Inc
Location   : New York, New York (NY)
Country    : United States

fio Disk Speed Tests (Mixed R/W 50/50) (Partition rpool/ROOT/pve-1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 163.43 MB/s  (40.8k) | 2.36 GB/s    (36.8k)
Write      | 163.86 MB/s  (40.9k) | 2.37 GB/s    (37.0k)
Total      | 327.29 MB/s  (81.8k) | 4.73 GB/s    (73.9k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 2.22 GB/s     (4.3k) | 1.95 GB/s     (1.9k)
Write      | 2.34 GB/s     (4.5k) | 2.08 GB/s     (2.0k)
Total      | 4.56 GB/s     (8.9k) | 4.03 GB/s     (3.9k)

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----           | -----                     | ----            | ----            | ----
Clouvider       | London, UK (10G)          | 1.07 Gbits/sec  | 2.07 Gbits/sec  | 70.5 ms
Eranium         | Amsterdam, NL (100G)      | 1.06 Gbits/sec  | 2.59 Gbits/sec  | 74.0 ms
Uztelecom       | Tashkent, UZ (10G)        | 269 Mbits/sec   | 673 Mbits/sec   | 229 ms
Leaseweb        | Singapore, SG (10G)       | 336 Mbits/sec   | 760 Mbits/sec   | 238 ms
Clouvider       | Los Angeles, CA, US (10G) | busy            | busy            | 64.7 ms
Leaseweb        | NYC, NY, US (10G)         | 1.74 Gbits/sec  | 2.40 Gbits/sec  | 2.26 ms
Edgoo           | Sao Paulo, BR (1G)        | 954 Mbits/sec   | 1.21 Gbits/sec  | 111 ms

Running GB6 benchmark test... *cue elevator music*


Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 1205
Multi Core      | 11649
Full Test       | https://browser.geekbench.com/v6/cpu/10504316

YABS completed in 10 min 38 sec
Thanked by 2wamy hobofl

Comments

  • crunchbitscrunchbits Member, Patron Provider, Top Host

    No personal knowledge with them but they are definitely in good facilities.

    Two POVs I'd give:
    1. Providers generally ask for specific network stuff, even if it seems redundant to you as the customer, because 98/100 times it's not being tested thoroughly or sufficiently enough and because when you bring it up with your NOC (or upstream NOC) there is a lot of box checking to be done--for good reason. Most often there is 1 specific route that end-point customer's home ISP has loss on originating somewhere well outside of provider's network and the only way to try and prove that is a handful of MTR's.
    2. In this case, you have provided (imo) a good amount of evidence to suggest something may be off. Enough that if it were a ticket brought up here, it would already be pinging internally to NOC. We may also simultaneously ask for some additional tests, but I think it would be enough to not chalk it up to "customer's problem" and ignore, though it could still be an OS/firmware/kernel-related issue. Local speedtest servers are great for sanity checks on exactly these types of issues.

    I'm not trying to make excuses for providers, but most times it really is a specific software stack or OS configuration, or 1 specific route that a customer has issues with and just blames provider (without knowledge of how the internet works). I recall a ticket going for ~12 months being told how wrong we were about speeds (no clue why they kept the service as it was not a yearly), until eventually finding out their special uber-custom kernel tuning parameters were messed up and crapping all over their specific VM's network setup. I do appreciate that the customer told us eventually :D

    Providers have issues all the time. Just today we had a freshly deployed server that is a 2x10G LACP go out and hit a customer's hands. Tests were showing it to be light on initial speed tests (not 1% of port speed or anything--but light for our normal network performance and specific to customer's setup). Had them run a few more things and pretty quickly identified there is definitely an issue on our side and we have a misbehaving optic that is forcing one link down to 1G (instead of 10G) and crapping the whole aggregation down to 2x1G. Customer informed, resolution planned, and they get to rake me over the coals for a week :#

    How is this the attitude of three separate support agents?

    Specifically here: I wouldn't say bad attitude, but definitely could be improved. Could just be overworked/busy time (things are insane, and have been at this elevated level for quite some time now for most hosts I think). I will also say it's a challenge. Of all the tasks on our plates, it's hard to carve out time to review support ticket replies and bring it up as a "yes great job here, everyone look at this example" or "no, that reply was too short/incorrect or just not how I want us represented". We still do it at least a few times per month, because mistakes honestly do happen and if it was done from a point of working and caring but rushing? I think that is potentially fixable. There's a fine line between "Hey! SLOW DOWN. Respond to them like a human--imagine you are the customer submitting this, what do you think their next question will immediately be and just answer it now." and "You have to get more than 3 tickets in a full day worked on."

    All in all, hopefully can be resolved amicably.

    Thanked by 4hobofl admax host_c FAT32
  • mwmw Member

    @crunchbits said:
    No personal knowledge with them but they are definitely in good facilities.

    Two POVs I'd give:
    1.

    I sent MTRs, and when they told me the poor speeds were because the test endpoints we used were slow, I responded by sending them a test to Scaleway's 100G iperf server.

    They responded by saying their own test managed 900Mbps+ so the issue was us.

    2.

    We're running the same tune we run on all of our boxes, and on those networks we have never had an issue smashing cross continent. I did run a test to a local speedtest server above. 1.2Gbps upload lol

    How is this the attitude of three separate support agents?

    Could just be overworked/busy time (things are insane, and have been at this elevated level for quite some time now for most hosts I think).

    They reply within 2-5 minutes FWIW

    I seem to have overlooked (due to volume of PMs) multiple different users on LET that seem to have had the same experience... yikes.

    As always I appreciate your grounded insight @crunchbits

    Thanked by 1hobofl
  • mwmw Member

    If I boot a live Linux ISO just to run some speedtests and they're the same as above, would that serve as proof enough that the issue is simply not on my end and that their support needs to investigate?

    I'm more skeptical than most because before this we had an issue with the networking not working which is when they mentioned the switch being brand new.

    It just seems like their support simply does not care enough to properly read* what I am sending them and instead responds with the bare minimum to keep response times low.

    *I mean how can you respond to a wall of evidence with "yeh speeds to india arent gonna be fast my man" then get swapped out with another agent

    Thanked by 2DeusVult ThinVps
  • crunchbitscrunchbits Member, Patron Provider, Top Host

    @mw said:

    @crunchbits said:
    No personal knowledge with them but they are definitely in good facilities.

    Two POVs I'd give:
    1.

    I sent MTRs, and when they told me the poor speeds were because the test endpoints we used were slow, I responded by sending them a test to Scaleway's 100G iperf server.

    They responded by saying their own test managed 900Mbps+ so the issue was us.

    2.

    We're running the same tune we run on all of our boxes, and on those networks we have never had an issue smashing cross continent. I did run a test to a local speedtest server above. 1.2Gbps upload lol

    How is this the attitude of three separate support agents?

    Could just be overworked/busy time (things are insane, and have been at this elevated level for quite some time now for most hosts I think).

    They reply within 2-5 minutes FWIW

    I seem to have overlooked (due to volume of PMs) multiple different users on LET that seem to have had the same experience... yikes.

    As always I appreciate your grounded insight @crunchbits

    Yeah--your tests and replies are why I said this is something that likely would be starting to get actioned, at least everyone double checking if we left port-shaping on somewhere on accident from a prior request.

    I definitely feel the PM volume issue :D It's basically no-go land right now.

    They reply within 2-5 minutes FWIW

    Speed might be quick, but generally a real dive into something network-related isn't a 2m response for me. I'm too dumb.

    Thanked by 1mw
  • jayjayjayjay Member, Patron Provider

    Happy to run the same from other locations and share results if it helps :)

  • Their network is insanely oversold and leans heavily on Hurricane, the absolute worst "T1". They'll call me a hater for this though.

  • emghemgh Member, Megathread Squad

    @fluffernutter said:
    Their network is insanely oversold and leans heavily on Hurricane, the absolute worst "T1". They'll call me a hater for this though.

    Hater

  • mwmw Member
    root@x:~# iperf3 -c speedtest.nyc1.us.leaseweb.net -p 5210
    Connecting to host speedtest.nyc1.us.leaseweb.net, port 5210
    [  5] local 169.197.86.25 port 41582 connected to 108.62.52.129 port 5210
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   339 MBytes  2.84 Gbits/sec  18164   2.67 MBytes
    [  5]   1.00-2.00   sec   439 MBytes  3.68 Gbits/sec  17234   2.93 MBytes
    [  5]   2.00-3.00   sec   489 MBytes  4.10 Gbits/sec  9941   2.82 MBytes
    [  5]   3.00-4.00   sec   500 MBytes  4.19 Gbits/sec  7745   2.67 MBytes
    [  5]   4.00-5.00   sec   466 MBytes  3.91 Gbits/sec  17171   3.07 MBytes
    [  5]   5.00-6.00   sec   504 MBytes  4.23 Gbits/sec  14751   2.73 MBytes
    [  5]   6.00-7.00   sec   489 MBytes  4.10 Gbits/sec  11118   2.92 MBytes
    [  5]   7.00-8.00   sec   508 MBytes  4.26 Gbits/sec  10548   2.90 MBytes
    [  5]   8.00-9.00   sec   485 MBytes  4.07 Gbits/sec  12133   2.91 MBytes
    [  5]   9.00-10.00  sec   481 MBytes  4.04 Gbits/sec  8191   3.10 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  4.59 GBytes  3.94 Gbits/sec  126996             sender
    [  5]   0.00-10.02  sec  4.59 GBytes  3.93 Gbits/sec                  receiver
    
    iperf Done.
    root@x:~# iperf3 -c speedtest.nyc1.us.leaseweb.net -p 5210 -R
    Connecting to host speedtest.nyc1.us.leaseweb.net, port 5210
    Reverse mode, remote host speedtest.nyc1.us.leaseweb.net is sending
    [  5] local 169.197.86.25 port 39220 connected to 108.62.52.129 port 5210
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec   450 MBytes  3.78 Gbits/sec
    [  5]   1.00-2.00   sec   470 MBytes  3.95 Gbits/sec
    [  5]   2.00-3.00   sec   485 MBytes  4.07 Gbits/sec
    [  5]   3.00-4.00   sec   465 MBytes  3.90 Gbits/sec
    [  5]   4.00-5.00   sec   423 MBytes  3.55 Gbits/sec
    [  5]   5.00-6.00   sec   343 MBytes  2.87 Gbits/sec
    [  5]   6.00-7.00   sec   479 MBytes  4.02 Gbits/sec
    [  5]   7.00-8.00   sec   540 MBytes  4.53 Gbits/sec
    [  5]   8.00-9.00   sec   443 MBytes  3.71 Gbits/sec
    [  5]   9.00-10.00  sec   310 MBytes  2.60 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.04  sec  4.33 GBytes  3.70 Gbits/sec    1             sender
    [  5]   0.00-10.00  sec  4.30 GBytes  3.70 Gbits/sec                  receiver
    
    iperf Done.
    root@x:~# ping speedtest.nyc1.us.leaseweb.net
    PING speedtest.nyc1.us.leaseweb.net (108.62.52.129) 56(84) bytes of data.
    64 bytes from speedtest.nyc1.us.leaseweb.net (108.62.52.129): icmp_seq=1 ttl=55 time=2.09 ms
    64 bytes from speedtest.nyc1.us.leaseweb.net (108.62.52.129): icmp_seq=2 ttl=55 time=2.27 ms
    ^C
    --- speedtest.nyc1.us.leaseweb.net ping statistics ---
    3 packets transmitted, 2 received, 33.3333% packet loss, time 2001ms
    rtt min/avg/max/mdev = 2.090/2.178/2.267/0.088 ms
    

    it's clear what the problem is. tbh i don't feel like wasting more time with their support

  • mwmw Member

    chat i'm cooked. <500Mbps NY <-> LA

    1744 retransmissions with variance 128-294Mbps

    root@x:~# iperf3 -c la.speedtest.clouvider.net -p 5209
    Connecting to host la.speedtest.clouvider.net, port 5209
    [  5] local 169.197.86.25 port 42878 connected to 77.247.126.223 port 5209
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  15.2 MBytes   128 Mbits/sec   56   8.59 MBytes
    [  5]   1.00-2.00   sec  33.8 MBytes   283 Mbits/sec  396   7.97 MBytes
    [  5]   2.00-3.00   sec  32.5 MBytes   273 Mbits/sec   98   5.20 MBytes
    [  5]   3.00-4.00   sec  31.2 MBytes   262 Mbits/sec   91   6.68 MBytes
    [  5]   4.00-5.00   sec  27.5 MBytes   231 Mbits/sec  189   6.43 MBytes
    [  5]   5.00-6.00   sec  35.0 MBytes   294 Mbits/sec   98   6.33 MBytes
    [  5]   6.00-7.00   sec  23.8 MBytes   199 Mbits/sec  752   1.47 MBytes
    [  5]   7.00-8.00   sec  31.2 MBytes   262 Mbits/sec   16   6.33 MBytes
    [  5]   8.00-9.00   sec  35.0 MBytes   294 Mbits/sec    9   6.35 MBytes
    [  5]   9.00-10.00  sec  26.2 MBytes   220 Mbits/sec   39   6.39 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec   292 MBytes   245 Mbits/sec  1744             sender
    [  5]   0.00-10.07  sec   291 MBytes   243 Mbits/sec                  receiver
    
    iperf Done.
    root@x:~# iperf3 -c la.speedtest.clouvider.net -p 5209 -R
    Connecting to host la.speedtest.clouvider.net, port 5209
    Reverse mode, remote host la.speedtest.clouvider.net is sending
    [  5] local 169.197.86.25 port 56136 connected to 77.247.126.223 port 5209
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec  28.7 MBytes   241 Mbits/sec
    [  5]   1.00-2.00   sec  46.8 MBytes   393 Mbits/sec
    [  5]   2.00-3.00   sec  47.0 MBytes   394 Mbits/sec
    [  5]   3.00-4.00   sec  50.0 MBytes   419 Mbits/sec
    [  5]   4.00-5.00   sec  46.8 MBytes   392 Mbits/sec
    [  5]   5.00-6.00   sec  47.0 MBytes   394 Mbits/sec
    [  5]   6.00-7.00   sec  48.1 MBytes   404 Mbits/sec
    [  5]   7.00-8.00   sec  48.7 MBytes   408 Mbits/sec
    [  5]   8.00-9.00   sec  47.0 MBytes   394 Mbits/sec
    [  5]   9.00-10.00  sec  47.5 MBytes   398 Mbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.06  sec   460 MBytes   384 Mbits/sec    0             sender
    [  5]   0.00-10.00  sec   457 MBytes   384 Mbits/sec                  receiver
    
    iperf Done.
    
  • PureVoltagePureVoltage Member, Patron Provider

    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    I'm sure that the issue is related to not having tweaks done to their OS. However, I could be wrong.

    But looking quickly at our team's chat, and at multiple tests to other servers and to Scaleway's going the exact same paths, there is quite a difference between their test and one of ours that has a few minor OS tweaks.

    Their test to Scaleway

    To Scaleway's 100G iperf3 server:

    root@stor-ny-1:~# iperf3 -c ping.online.net -p 5208 -R
    Connecting to host ping.online.net, port 5208
    Reverse mode, remote host ping.online.net is sending
    [ 5] local 169.197.86.25 port 58632 connected to 51.158.1.21 port 5208
    [ ID] Interval Transfer Bitrate
    [ 5] 0.00-1.00 sec 20.0 MBytes 168 Mbits/sec
    [ 5] 1.00-2.00 sec 43.4 MBytes 364 Mbits/sec
    [ 5] 2.00-3.00 sec 43.3 MBytes 363 Mbits/sec
    [ 5] 3.00-4.00 sec 43.5 MBytes 365 Mbits/sec
    [ 5] 4.00-5.00 sec 43.3 MBytes 363 Mbits/sec
    [ 5] 5.00-6.00 sec 43.4 MBytes 364 Mbits/sec
    [ 5] 6.00-7.00 sec 43.4 MBytes 364 Mbits/sec
    [ 5] 7.00-8.00 sec 43.4 MBytes 364 Mbits/sec
    [ 5] 8.00-9.00 sec 43.3 MBytes 363 Mbits/sec
    [ 5] 9.00-10.00 sec 43.4 MBytes 364 Mbits/sec


    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.07 sec 467 MBytes 389 Mbits/sec 0 sender
    [ 5] 0.00-10.00 sec 410 MBytes 344 Mbits/sec receiver

    iperf Done.

    root@stor-ny-1:~# traceroute ping.online.net
    traceroute to ping.online.net (51.158.1.21), 30 hops max, 60 byte packets
    1 169.197.86.24 (169.197.86.24) 0.391 ms 0.300 ms 0.292 ms
    2 e0-25.core1.nyc9.he.net (216.66.2.105) 0.907 ms 1.187 ms 1.472 ms
    3 100ge0-53.core2.nyc5.he.net (184.104.194.249) 1.156 ms 100ge0-34.core1.ewr4.he.net (72.52.92.229) 1.611 ms *
    4 100ge0-39.core2.ewr5.he.net (184.104.188.21) 1.746 ms 1.691 ms *
    5 port-channel13.core2.par2.he.net (184.104.197.10) 71.448 ms * *
    6 * scaleway-dc2.par.franceix.net (37.49.237.111) 72.323 ms port-channel13.core2.par2.he.net (184.104.197.10) 72.801 ms
    7 51.158.8.67 (51.158.8.67) 72.384 ms scaleway-dc2.par.franceix.net (37.49.237.111) 73.885 ms 51.158.8.65 (51.158.8.65) 72.209 ms
    8 51.158.8.65 (51.158.8.65) 73.057 ms 73.305 ms 73.092 ms
    9 51.158.1.21 (51.158.1.21) 72.230 ms 72.023 ms 72.178 ms

    We will not run 16 threads. If your network is only capable of 364Mbps on a single connection, make this known so we're not wasting our time.

    And our teams test to the same server.

    Single threaded performance depends on multiple factors.
    Have you done any tweaks to your kernel or anything else to help improve these?

    iperf3 -c ping.online.net -p 5208 -R
    Connecting to host ping.online.net, port 5208
    Reverse mode, remote host ping.online.net is sending
    [ 5] local 169.197.80.162 port 48596 connected to 51.158.1.21 port 5208
    [ ID] Interval Transfer Bitrate
    [ 5] 0.00-1.00 sec 15.2 MBytes 127 Mbits/sec
    [ 5] 1.00-2.00 sec 125 MBytes 1.05 Gbits/sec
    [ 5] 2.00-3.00 sec 139 MBytes 1.16 Gbits/sec
    [ 5] 3.00-4.00 sec 140 MBytes 1.17 Gbits/sec
    [ 5] 4.00-5.00 sec 111 MBytes 935 Mbits/sec
    [ 5] 5.00-6.00 sec 87.1 MBytes 731 Mbits/sec
    [ 5] 6.00-7.00 sec 87.1 MBytes 731 Mbits/sec
    [ 5] 7.00-8.00 sec 124 MBytes 1.04 Gbits/sec
    [ 5] 8.00-9.00 sec 122 MBytes 1.03 Gbits/sec
    [ 5] 9.00-10.00 sec 141 MBytes 1.18 Gbits/sec


    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.07 sec 1.15 GBytes 979 Mbits/sec 16905 sender
    [ 5] 0.00-10.00 sec 1.07 GBytes 915 Mbits/sec receiver

    iperf3 -c ping.online.net -p 5208 -R -P 16
    [SUM] 0.00-10.07 sec 8.79 GBytes 7.50 Gbits/sec 891156 sender
    [SUM] 0.00-10.00 sec 7.53 GBytes 6.47 Gbits/sec receiver

    traceroute ping.online.net
    traceroute to ping.online.net (51.158.1.21), 30 hops max, 60 byte packets
    1 _gateway (169.197.80.161) 1.322 ms 0.270 ms 0.258 ms
    2 * e0-25.core1.nyc9.he.net (216.66.2.105) 0.769 ms *
    3 100ge0-34.core1.ewr4.he.net (72.52.92.229) 1.428 ms * 1.728 ms
    4 100ge0-39.core2.ewr5.he.net (184.104.188.21) 1.885 ms 1.811 ms *
    5 * * port-channel13.core2.par2.he.net (184.104.197.10) 74.296 ms
    6 port-channel13.core2.par2.he.net (184.104.197.10) 74.784 ms 74.442 ms scaleway-dc2.par.franceix.net (37.49.237.111) 72.111 ms
    7 scaleway-dc3.par.franceix.net (37.49.237.27) 73.437 ms 51.158.8.67 (51.158.8.67) 72.221 ms 51.158.8.65 (51.158.8.65) 72.239 ms
    8 51.158.8.65 (51.158.8.65) 73.556 ms 51.158.1.21 (51.158.1.21) 72.023 ms 51.158.8.65 (51.158.8.65) 73.198 ms

    It looks like you will want to do some tweaks to your OS and then rerun some of the tests. In the tests above we got almost 3x the single-thread speed along the same route.

    Our team asking for an MTR and being met with this: "If you have no idea, nor know how to fix this, we recommend getting someone who does." doesn't really help. We are asking for it so that we can see the path your traffic is taking and whether something is going wrong, so we can provide help.

    Your YABS and the one our team ran are quite different in results.

    iperf3 Network Speed Tests (IPv4):

    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
    Clouvider       | London, UK (10G)          | 1.97 Gbits/sec  | 1.18 Gbits/sec  | 70.3 ms
    Eranium         | Amsterdam, NL (100G)      | 8.55 Gbits/sec  | 5.15 Gbits/sec  | 76.6 ms
    Uztelecom       | Tashkent, UZ (10G)        | 1.24 Gbits/sec  | 799 Mbits/sec   | 224 ms
    Leaseweb        | Singapore, SG (10G)       | 3.19 Gbits/sec  | 2.27 Gbits/sec  | 237 ms
    Clouvider       | Los Angeles, CA, US (10G) | 2.59 Gbits/sec  | 2.61 Gbits/sec  | 64.7 ms
    Leaseweb        | NYC, NY, US (10G)         | 8.78 Gbits/sec  | 8.88 Gbits/sec  | 1.99 ms
    Edgoo           | Sao Paulo, BR (1G)        | 5.15 Gbits/sec  | 2.16 Gbits/sec  | 110 ms

    A few other tests our team had provided in the ticket from another server.

    iperf3 -c nyc.speedtest.clouvider.net -p 5202 -P 16 -R
    [SUM] 0.00-10.00 sec 9.61 GBytes 8.25 Gbits/sec 12562 sender
    [SUM] 0.00-10.00 sec 9.57 GBytes 8.22 Gbits/sec receiver

    iperf3 -c speedtest.chi11.us.leaseweb.net -p 5201 -P 16 -R
    [SUM] 0.00-10.02 sec 7.86 GBytes 6.74 Gbits/sec 7925 sender
    [SUM] 0.00-10.00 sec 7.58 GBytes 6.52 Gbits/sec receiver

    iperf3 -c speedtest.lax12.us.leaseweb.net -p 5201 -P 16 -R
    [SUM] 0.00-10.07 sec 6.76 GBytes 5.77 Gbits/sec 6195 sender
    [SUM] 0.00-10.00 sec 6.49 GBytes 5.57 Gbits/sec receiver

    These are just a few tests we have run on our side after you complained about speed issues. These tests are from one of our 10G test servers.
    Seeing a MTR will help us check to see why you are having issues and if there is anything we can adjust on there for you.

    I mean, looking over the ticket, our team could have answered the one about the speeds to India better.

    However, calling our team useless when they ask you for MTRs so we can troubleshoot isn't great; they still answered your tickets, provided help, and asked if you had done any tweaks to your server, as our tests are clearly quite different.

    Thanked by 1hobofl
  • wamywamy Member

    @fluffernutter said:
    Their network is insanely oversold and leans heavily on Hurricane, the absolute worst "T1". They'll call me a hater for this though.

    Seems like it based on results here.

    Thanked by 1fluffernutter
  • mwmw Member

    @PureVoltage said:
    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    Let's continue this tomorrow - and I apologise for the snarky comment.

    In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.

    Thanked by 2hobofl DeusVult
  • PureVoltagePureVoltage Member, Patron Provider

    @mw said:

    @PureVoltage said:
    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    Let's continue this tomorrow - and I apologise for the snarky comment.

    In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.

    No worries. I'll send some over shortly to you

    Thanked by 1hobofl
  • @PureVoltage said:

    @mw said:

    @PureVoltage said:
    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    Let's continue this tomorrow - and I apologise for the snarky comment.

    In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.

    No worries. I'll send some over shortly to you

    Would you mind sharing with the class? Would be super curious to learn more about how to juice networking performance, if you don't mind

  • FWIW, I picked up one of their promotional VPS offerings over the holidays, and I've been experiencing intermittent events of 100% packet loss lasting a few seconds to minutes at a time to both Netdata and a self-hosted Uptime Kuma instance with another provider.

    I haven't deployed anything to this server yet as it's still in my stability "burn-in" period, and I haven't had an opportunity to follow up with support. Addressing intermittent issues, especially with networking problems, is always a chore.

    Thanked by 1fluffernutter
  • I will wait and see first if their speed is consistent or not and come back later

  • Weird results indeed. I've had 5-10 dedicated servers with them in NY on 10G and have always had 7-9Gbit inbound/outbound; your speeds seem very low. Definitely an issue somewhere.

  • MannDudeMannDude Patron Provider, Veteran
    edited February 13

    @MeltedMembrane said:

    @PureVoltage said:

    @mw said:

    @PureVoltage said:
    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    Let's continue this tomorrow - and I apologise for the snarky comment.

    In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.

    No worries. I'll send some over shortly to you

    Would you mind sharing with the class? Would be super curious to learn more about how to juice networking performance, if you don't mind

    You can modify sysctl.conf which can have a big effect. I usually replace my kernel with xanmod on things that require high performance as well.

    On mobile, but you can Google "cloudflare tcp sysctl.conf" and other related phrases. In a pinch, be specific with ChatGPT and ask it to pump out a config. Tell it what hardware and kernel you have, and to tell you what each value means and why it chose that particular numerical value for each setting. Can start tweaking from there.
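
    A minimal sketch of the mechanics, assuming a Debian-style /etc/sysctl.d layout (the two values are just the fq/bbr pair that shows up later in this thread, not a tuning recommendation):

    # persist a couple of TCP settings and apply them
    printf '%s\n' \
        'net.core.default_qdisc = fq' \
        'net.ipv4.tcp_congestion_control = bbr' |
        sudo tee /etc/sysctl.d/99-net-tuning.conf
    sudo sysctl --system                    # reload every sysctl config file
    sysctl net.ipv4.tcp_congestion_control  # confirm the value actually took
    # (modprobe tcp_bbr first if the kernel does not auto-load it)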

  • I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What happened was that the packet loss on the specific dedicated server killed tcp performance, especially on higher latency links. Packet loss indicates congested links, hardware problem, or wrongly configured networking equipment, which is certainly the provider's fault, and has nothing to do with OS tweaking.

    They refused to investigate further claiming it's all normal so I canceled the server.

    I bought a VPS from Purevoltage last BF and it's running great. I hope they can solve this for you, although I'm already a bit disappointed about them reading this post.
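
    A back-of-envelope way to see why loss at latency hurts so much (a sketch using the old Mathis/Reno loss formula; the MSS, RTT and loss values here are illustrative, not measurements from this thread):

    # Mathis ceiling: rate <= (MSS / RTT) * 1.22 / sqrt(loss)
    # 1460-byte MSS, 70 ms RTT, 0.1% loss -> roughly 6.4 Mbit/s per stream,
    # the same ballpark as the single-digit Mbit/s reverse results above.
    # (CUBIC/BBR recover better than Reno, but loss at that latency still hurts.)
    awk -v mss=1460 -v rtt=0.070 -v loss=0.001 \
        'BEGIN { printf "~%.1f Mbit/s\n", (mss * 8 / rtt) * 1.22 / sqrt(loss) / 1e6 }'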

  • mwmw Member

    I'll be spending some time on this tonight trying everything everyone has suggested :)

    Thanked by 1emgh
  • @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What provider was this? Just curious.

  • mwmw Member
    edited February 13

    @MannDude said:

    @MeltedMembrane said:

    @PureVoltage said:

    @mw said:

    @PureVoltage said:
    I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.

    Let's continue this tomorrow - and I apologise for the snarky comment.

    In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.

    No worries. I'll send some over shortly to you

    Would you mind sharing with the class? Would be super curious to learn more about how to juice networking performance, if you don't mind

    You can modify sysctl.conf which can have a big effect. I usually replace my kernel with xanmod on things that require high performance as well.

    On mobile, but you can Google "cloudflare tcp sysctl.conf" and other related phrases. In a pinch, be specific with ChatGPT and ask it to pump out a config. Tell it what hardware and kernel you have, and to tell you what each value means and why it chose that particular numerical value for each setting. Can start tweaking from there.

    For anyone interested, below are the sysctl.conf changes provided to us by PureVoltage, followed by what we already use on all our 40/20G boxes:

    Provided by PureVoltage

    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 33554432
    net.ipv4.tcp_wmem = 4096 65536 33554432
    

    Using our standard 40G tune applied via tuned to all our 40G servers
    https://tuned-project.org

    # Network buffers for high-speed networks
    net.core.rmem_max=268435456
    net.core.wmem_max=268435456
    net.core.rmem_default=268435456
    net.core.wmem_default=268435456
    net.core.optmem_max=134217728
    net.ipv4.tcp_rmem=4096 87380 134217728
    net.ipv4.tcp_wmem=4096 87380 134217728
    net.ipv4.udp_mem=8388608 12582912 268435456
    net.ipv4.udp_rmem_min=16384
    net.ipv4.udp_wmem_min=16384
    
    # TCP optimizations
    net.ipv4.tcp_fastopen=3
    net.core.default_qdisc=fq
    net.ipv4.tcp_congestion_control=bbr
    net.ipv4.tcp_mtu_probing=1
    net.ipv4.tcp_slow_start_after_idle=0
    net.ipv4.tcp_tw_reuse=1
    net.ipv4.tcp_notsent_lowat=131072
    net.ipv4.tcp_window_scaling=1
    net.ipv4.tcp_low_latency=1
    net.ipv4.tcp_timestamps=1
    net.ipv4.tcp_sack=1
    net.ipv4.tcp_no_metrics_save=1
    net.ipv4.tcp_synack_retries=2
    net.ipv4.tcp_syn_retries=2
    net.ipv4.tcp_max_syn_backlog=65536
    net.ipv4.tcp_max_tw_buckets=2000000
    net.ipv4.ip_local_port_range=1024 65535
    net.ipv4.tcp_fin_timeout=10
    
    # Network processing tuning
    net.core.netdev_budget=1200
    net.core.netdev_budget_usecs=14000
    net.core.dev_weight=600
    net.core.somaxconn=65535
    net.core.netdev_max_backlog=250000
    net.core.flow_limit_table_len=8192
    net.core.bpf_jit_enable=1
    net.core.bpf_jit_harden=0
    net.core.rps_sock_flow_entries=32768
    net.core.warnings=0
    

    We did use Xanmod at a point in time, but we've since moved to using tuned + our own testing/trial-and-error and have had tremendous results. We have a tuned profile for each hardware spec at the hypervisor level and individual profiles for VMs.
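
    For anyone unfamiliar with tuned, a profile is just a directory under /etc/tuned containing a tuned.conf; a stripped-down sketch of what a custom network profile might look like (the profile name is made up, and the sysctl values are lifted from the block above):

    # /etc/tuned/net-40g/tuned.conf  (hypothetical profile name)
    [main]
    summary=Custom tune for 40G boxes
    include=network-throughput

    [sysctl]
    net.core.rmem_max=268435456
    net.core.wmem_max=268435456
    net.core.default_qdisc=fq
    net.ipv4.tcp_congestion_control=bbr

    It gets switched on with "tuned-adm profile net-40g" and checked with "tuned-adm active".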

  • @fluffernutter said:

    @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What provider was this? Just curious.

    streamline-servers. It's the first review on trustpilot. Apparently he's very bitter about it.

  • kaitkait Member

    Yes you are crazy.

    Thanked by 1mw
  • mwmw Member
    edited February 13

    @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What happened was that the packet loss on the specific dedicated server killed tcp performance, especially on higher latency links. Packet loss indicates congested links, hardware problem, or wrongly configured networking equipment, which is certainly the provider's fault, and has nothing to do with OS tweaking.

    They refused to investigate further claiming it's all normal so I canceled the server.

    I bought a VPS from Purevoltage last BF and it's running great. I hope they can solve this for you, although I'm already a bit disappointed about them reading this post.

    I have been throwing more tests at this by testing to multiple different iperf3 endpoints with 8 connections each, 1 connection each, and the results all seem to have the same thing in common.
    
    Explained by Perplexity:
    
    A total of **9,891,988 retransmissions** occurred during the test. This high number of retransmissions suggests that there might be some network issues or congestion affecting the connection quality.
    Retransmission distribution:
    • Highest: Stream 13 with 3,221,748 retransmissions
    • Lowest: Stream 7 with 248,522 retransmissions
    The wide range in retransmission counts across streams indicates inconsistent network conditions or possible issues with specific network paths.
    Summary
    The network demonstrated a substantial aggregate throughput of 12.5 Gbits/sec, which is impressive for many applications. However, the high number of retransmissions suggests that while the network can achieve high speeds, it may be doing so at the cost of reliability. This could lead to increased latency and potential data delivery issues in real-world applications.
    

    it does appear the retransmissions over TCP cause a significant penalty which is most apparent in single connection tests

    the speeds also yoyo like crazy, dropping from several hundred Mbps to tens of Mbps and this is consistent throughout testing
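
    Roughly the kind of loop that makes the pattern easy to see (a sketch; the endpoints are the public iperf3 servers already used in this thread, and jq pulls the throughput and sender retransmit counters out of iperf3's JSON report):

    # single-stream iperf3 to each endpoint, print Mbit/s and sender retransmits
    for ep in speedtest.nyc1.us.leaseweb.net:5210 \
              la.speedtest.clouvider.net:5209 \
              ping.online.net:5208; do
        host=${ep%:*}; port=${ep#*:}
        printf '%-35s ' "$host"
        iperf3 -c "$host" -p "$port" -J |
            jq -r '"\(.end.sum_sent.bits_per_second / 1e6 | floor) Mbit/s, \(.end.sum_sent.retransmits) retransmits"'
    done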

  • @artxs said:

    @fluffernutter said:

    @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What provider was this? Just curious.

    streamline-servers. It's the first review on trustpilot. Apparently he's very bitter about it.

    I don’t know how you made that stupid assumption out of nowhere but apparently you are wrong.

  • @mw said:

    @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What happened was that the packet loss on the specific dedicated server killed tcp performance, especially on higher latency links. Packet loss indicates congested links, hardware problem, or wrongly configured networking equipment, which is certainly the provider's fault, and has nothing to do with OS tweaking.

    They refused to investigate further claiming it's all normal so I canceled the server.

    I bought a VPS from Purevoltage last BF and it's running great. I hope they can solve this for you, although I'm already a bit disappointed about them reading this post.

    I have been throwing more tests at this by testing to multiple different iperf3 endpoints with 8 connections each, 1 connection each, and the results all seem to have the same thing in common.
    
    Explained by Perplexity:
    
    A total of **9,891,988 retransmissions** occurred during the test. This high number of retransmissions suggests that there might be some network issues or congestion affecting the connection quality.
    Retransmission distribution:
    • Highest: Stream 13 with 3,221,748 retransmissions
    • Lowest: Stream 7 with 248,522 retransmissions
    The wide range in retransmission counts across streams indicates inconsistent network conditions or possible issues with specific network paths.
    Summary
    The network demonstrated a substantial aggregate throughput of 12.5 Gbits/sec, which is impressive for many applications. However, the high number of retransmissions suggests that while the network can achieve high speeds, it may be doing so at the cost of reliability. This could lead to increased latency and potential data delivery issues in real-world applications.
    

    it does appear the retransmissions over TCP cause a significant penalty which is most apparent in single connection tests

    the speeds also yoyo like crazy, dropping from several hundred Mbps to tens of Mbps and this is consistent throughout testing

    The retransmission is probably the root cause. Has Purevoltage started looking into that?

  • SwiftnodeSwiftnode Member, Host Rep, LIR

    @mw said:

    it does appear the retransmissions over TCP cause a significant penalty which is most apparent in single connection tests

    MTU or MSS issue perhaps?

  • mwmw Member

    @Swiftnode said:

    @mw said:

    it does appear the retransmissions over TCP cause a significant penalty which is most apparent in single connection tests

    MTU or MSS issue perhaps?

    After checking I see the maximum MTU I can use is 1472
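
    One way to pin that number down, bearing in mind that ping's -s sets only the ICMP payload and 28 bytes of IP+ICMP headers ride on top of it (the target is just the Leaseweb host used earlier and is interchangeable):

    # with DF set, a 1472-byte payload only fits through a full 1500-byte path
    ping -M do -s 1472 -c 3 speedtest.nyc1.us.leaseweb.net
    # if that is rejected, step the payload down; largest passing payload + 28 = path MTU
    ping -M do -s 1444 -c 3 speedtest.nyc1.us.leaseweb.net
    # tracepath reports the discovered PMTU directly
    tracepath -n speedtest.nyc1.us.leaseweb.net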

    Perplexity says:

    An MTU (Maximum Transmission Unit) of 1472 bytes as the highest possible value is significant and typically indicates that your connection is going through some form of encapsulation or tunneling. Here’s why:
    Explanation
    1. Standard Ethernet MTU: The standard Ethernet MTU is 1500 bytes.
    2. IP Header Size: An IPv4 header is typically 20 bytes.
    3. ICMP Header Size: The ICMP header used by ping is 8 bytes.
    4. Calculation: 1500 - 20 - 8 = 1472 bytes
    This 1472-byte limit suggests that there’s exactly 28 bytes of overhead, which is precisely the space needed for an additional IPv4 header (20 bytes) plus 8 bytes for something else.
    Common Scenarios
    1. PPPoE (Point-to-Point Protocol over Ethernet):
    • PPPoE adds 8 bytes of overhead.
    • This is common in many DSL internet connections.
    2. GRE (Generic Routing Encapsulation) Tunneling:
    • GRE adds a 24-byte header plus a 4-byte key field.
    • Often used in VPN setups or for creating network overlays.
    3. VXLAN (Virtual Extensible LAN):
    • VXLAN encapsulation reduces the effective MTU by 50 bytes.
    • Commonly used in data center networks for network virtualization.
    4. IPsec in Tunnel Mode:
    • IPsec can add varying amounts of overhead depending on the configuration.

    I have asked them for some clarity here.

    Also, I noticed my assigned IP is a /31

    Perplexity says:

    This configuration represents a /31 subnet, which is a special case used primarily for point-to-point links between network devices.
    Subnet Mask: 255.255.255.254
    Gateway Address: 169.197.86.x
    Usable IPs: 169.197.86.x

    Does this stand out to anyone?

  • mwmw Member

    @zakkuuno said:

    @mw said:

    @zakkuuno said:
    I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.

    What happened was that the packet loss on the specific dedicated server killed tcp performance, especially on higher latency links. Packet loss indicates congested links, hardware problem, or wrongly configured networking equipment, which is certainly the provider's fault, and has nothing to do with OS tweaking.

    They refused to investigate further claiming it's all normal so I canceled the server.

    I bought a VPS from Purevoltage last BF and it's running great. I hope they can solve this for you, although I'm already a bit disappointed about them reading this post.

    I have been throwing more tests at this by testing to multiple different iperf3 endpoints with 8 connections each, 1 connection each, and the results all seem to have the same thing in common.
    
    Explained by Perplexity:
    
    A total of **9,891,988 retransmissions** occurred during the test. This high number of retransmissions suggests that there might be some network issues or congestion affecting the connection quality.
    Retransmission distribution:
    • Highest: Stream 13 with 3,221,748 retransmissions
    • Lowest: Stream 7 with 248,522 retransmissions
    The wide range in retransmission counts across streams indicates inconsistent network conditions or possible issues with specific network paths.
    Summary
    The network demonstrated a substantial aggregate throughput of 12.5 Gbits/sec, which is impressive for many applications. However, the high number of retransmissions suggests that while the network can achieve high speeds, it may be doing so at the cost of reliability. This could lead to increased latency and potential data delivery issues in real-world applications.
    

    it does appear the retransmissions over TCP cause a significant penalty which is most apparent in single connection tests

    the speeds also yoyo like crazy, dropping from several hundred Mbps to tens of Mbps and this is consistent throughout testing

    The retransmission is probably the root cause. Has Purevoltage started looking into that?

    They said they checked everything on their end and could not find a direct cause but will continue checking more comprehensively next week - I hope to hear back about the MTU concern I posted about above tho
