PureVoltage.com - am I the crazy one?
So we're currently evaluating different hosts. @PureVoltage was a pleasure during pre-sales, but now that we have our first delivery I need a sanity check from the community.
Below is a snippet from my ticket with them regarding the network performance.
#
Performance seems low:
https://www.speedtest.net/result/c/dde424f7-e4e5-4f6e-aef8-8c7dcc2dbf5e
https://www.speedtest.net/result/c/a9527d7d-e959-4dd4-a95e-ddac53dd3d71
https://www.speedtest.net/result/c/c0b8a8a9-52d6-486d-b98b-2c62f49e0e53
#
45.140.189.x
root@x:~# iperf3 -c 169.197.86.x -P 4
Connecting to host 169.197.86.x, port 5201
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-10.00 sec 208 MBytes 174 Mbits/sec 1370 sender
[ 6] 0.00-10.08 sec 206 MBytes 171 Mbits/sec receiver
[ 8] 0.00-10.00 sec 272 MBytes 228 Mbits/sec 291 sender
[ 8] 0.00-10.08 sec 270 MBytes 225 Mbits/sec receiver
[ 10] 0.00-10.00 sec 225 MBytes 188 Mbits/sec 539 sender
[ 10] 0.00-10.08 sec 222 MBytes 185 Mbits/sec receiver
[ 12] 0.00-10.00 sec 170 MBytes 143 Mbits/sec 158 sender
[ 12] 0.00-10.08 sec 167 MBytes 139 Mbits/sec receiver
[SUM] 0.00-10.00 sec 875 MBytes 734 Mbits/sec 2358 sender
[SUM] 0.00-10.08 sec 866 MBytes 721 Mbits/sec receiver
iperf Done.
#
root@x:~# iperf3 -c 169.197.86.x -P 4 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-10.07 sec 8.27 MBytes 6.88 Mbits/sec 108 sender
[ 6] 0.00-10.00 sec 7.36 MBytes 6.18 Mbits/sec receiver
[ 8] 0.00-10.07 sec 4.36 MBytes 3.63 Mbits/sec 50 sender
[ 8] 0.00-10.00 sec 4.08 MBytes 3.42 Mbits/sec receiver
[ 10] 0.00-10.07 sec 5.76 MBytes 4.79 Mbits/sec 61 sender
[ 10] 0.00-10.00 sec 5.25 MBytes 4.40 Mbits/sec receiver
[ 12] 0.00-10.07 sec 26.3 MBytes 21.9 Mbits/sec 56 sender
[ 12] 0.00-10.00 sec 24.7 MBytes 20.7 Mbits/sec receiver
[SUM] 0.00-10.07 sec 44.6 MBytes 37.2 Mbits/sec 275 sender
[SUM] 0.00-10.00 sec 41.4 MBytes 34.7 Mbits/sec receiver
iperf Done.
#
#
45.157.234.x
root@x:~# iperf3 -c 169.197.86.x -P 4
Connecting to host 169.197.86.x, port 5201
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 134 MBytes 112 Mbits/sec 575 sender
[ 5] 0.00-10.09 sec 132 MBytes 109 Mbits/sec receiver
[ 7] 0.00-10.00 sec 127 MBytes 107 Mbits/sec 501 sender
[ 7] 0.00-10.09 sec 125 MBytes 104 Mbits/sec receiver
[ 9] 0.00-10.00 sec 233 MBytes 195 Mbits/sec 384 sender
[ 9] 0.00-10.09 sec 231 MBytes 192 Mbits/sec receiver
[ 11] 0.00-10.00 sec 126 MBytes 106 Mbits/sec 699 sender
[ 11] 0.00-10.09 sec 124 MBytes 103 Mbits/sec receiver
[SUM] 0.00-10.00 sec 620 MBytes 520 Mbits/sec 2159 sender
[SUM] 0.00-10.09 sec 612 MBytes 509 Mbits/sec receiver
iperf Done.
#
root@x:~# iperf3 -c 169.197.86.x -P 4 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.09 sec 14.2 MBytes 11.8 Mbits/sec 157 sender
[ 5] 0.00-10.00 sec 12.6 MBytes 10.5 Mbits/sec receiver
[ 7] 0.00-10.09 sec 128 MBytes 107 Mbits/sec 2010 sender
[ 7] 0.00-10.00 sec 126 MBytes 106 Mbits/sec receiver
[ 9] 0.00-10.09 sec 123 MBytes 102 Mbits/sec 919 sender
[ 9] 0.00-10.00 sec 121 MBytes 101 Mbits/sec receiver
[ 11] 0.00-10.09 sec 125 MBytes 104 Mbits/sec 1374 sender
[ 11] 0.00-10.00 sec 121 MBytes 102 Mbits/sec receiver
[SUM] 0.00-10.09 sec 391 MBytes 325 Mbits/sec 4460 sender
[SUM] 0.00-10.00 sec 380 MBytes 319 Mbits/sec receiver
iperf Done.
#
Hi there,
Please try that test with a higher number, -P 16.
[SUM] 0.00-10.00 sec 8.18 GBytes 7.02 Gbits/sec 20265 sender
[SUM] 0.00-10.09 sec 7.74 GBytes 6.59 Gbits/sec receiver
is what we see back to 45.157.234.x.
Sincerely,
Liam S.
Data Center Technician
PureVoltage Hosting, Inc.
#
That won't address the very clear and obvious issue with traffic outbound.
See attached for further evidence.
#
Hi,
You will not reach 10 GbE speeds in India.
Sincerely,
Miguel
Senior Cloud Engineer
PureVoltage Hosting, Inc.
#
Ran the tests anyway.
#
root@x:~# iperf3 -c 169.197.86.x -P 16 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending
[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-10.08 sec 2.25 GBytes 1.92 Gbits/sec 12528 sender
[SUM] 0.00-10.00 sec 2.21 GBytes 1.90 Gbits/sec receiver
iperf Done.
#
root@x:~# iperf3 -c 169.197.86.x -P 16 -R
Connecting to host 169.197.86.x, port 5201
Reverse mode, remote host 169.197.86.x is sending
[SUM] 0.00-10.09 sec 1.58 GBytes 1.35 Gbits/sec 10371 sender
[SUM] 0.00-10.00 sec 1.53 GBytes 1.32 Gbits/sec receiver
iperf Done.
#
Are you seriously ignoring all the test evidence I am providing because the first set of tests in a screenshot is to India - did you even read the test?
#
Hi there,
If you are having issues with specific routes outbound, please provide an MTR in both directions so we can see which routes you are having issues with.
Sincerely,
Liam S.
Data Center Technician
PureVoltage Hosting, Inc.
#
Keep in mind we are paying for 40Gbit, and we were told the switch we are on was "freshly deployed today".
Am I wrong to think that the above is sufficient evidence that something is wrong, and that asking for an MTR to every route sounds like they just want to close the ticket?
How is this the attitude of three separate support agents?
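For reference, the bidirectional MTR they keep asking for is only a couple of commands -- report mode, wide output, AS numbers, 100 cycles -- with the reverse run having to come from their side (IPs masked like the rest of this post):
root@x:~# mtr -rwzbc 100 169.197.86.x
and from their end back towards us:
mtr -rwzbc 100 45.140.189.x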
And ofc... yabs
root@x:~# curl -sL https://yabs.sh | bash
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
# Yet-Another-Bench-Script #
# v2025-01-01 #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
Wed Feb 12 05:28:10 PM EST 2025
Basic System Information:
---------------------------------
Uptime : 0 days, 1 hours, 22 minutes
Processor : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
CPU cores : 72 @ 3700.000 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 1006.5 GiB
Swap : 0.0 KiB
Disk :
Distro : Debian GNU/Linux 12 (bookworm)
Kernel : 6.8.12-4-pve
VM Type : NONE
IPv4/IPv6 : ✔ Online / ❌ Offline
IPv4 Network Information:
---------------------------------
ISP : PureVoltage Hosting Inc.
ASN : AS26548 PureVoltage Hosting Inc.
Host : PureVoltage Hosting Inc
Location : New York, New York (NY)
Country : United States
fio Disk Speed Tests (Mixed R/W 50/50) (Partition rpool/ROOT/pve-1):
---------------------------------
Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 163.43 MB/s (40.8k) | 2.36 GB/s (36.8k)
Write | 163.86 MB/s (40.9k) | 2.37 GB/s (37.0k)
Total | 327.29 MB/s (81.8k) | 4.73 GB/s (73.9k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 2.22 GB/s (4.3k) | 1.95 GB/s (1.9k)
Write | 2.34 GB/s (4.5k) | 2.08 GB/s (2.0k)
Total | 4.56 GB/s (8.9k) | 4.03 GB/s (3.9k)
iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider | Location (Link) | Send Speed | Recv Speed | Ping
----- | ----- | ---- | ---- | ----
Clouvider | London, UK (10G) | 1.07 Gbits/sec | 2.07 Gbits/sec | 70.5 ms
Eranium | Amsterdam, NL (100G) | 1.06 Gbits/sec | 2.59 Gbits/sec | 74.0 ms
Uztelecom | Tashkent, UZ (10G) | 269 Mbits/sec | 673 Mbits/sec | 229 ms
Leaseweb | Singapore, SG (10G) | 336 Mbits/sec | 760 Mbits/sec | 238 ms
Clouvider | Los Angeles, CA, US (10G) | busy | busy | 64.7 ms
Leaseweb | NYC, NY, US (10G) | 1.74 Gbits/sec | 2.40 Gbits/sec | 2.26 ms
Edgoo | Sao Paulo, BR (1G) | 954 Mbits/sec | 1.21 Gbits/sec | 111 ms
Running GB6 benchmark test... *cue elevator music*
Geekbench 6 Benchmark Test:
---------------------------------
Test | Value
|
Single Core | 1205
Multi Core | 11649
Full Test | https://browser.geekbench.com/v6/cpu/10504316
YABS completed in 10 min 38 sec
Comments
No personal knowledge with them but they are definitely in good facilities.
Two POVs I'd give:
1. Providers generally ask for specific network stuff, even if it seems redundant to you as the customer, because 98/100 times it's not being tested thoroughly enough, and because when you bring it up with your NOC (or an upstream NOC) there is a lot of box-checking to be done--for good reason. Most often there is one specific route the end customer's home ISP has loss on, originating somewhere well outside the provider's network, and the only way to try to prove that is a handful of MTRs.
2. In this case, you have provided (imo) a good amount of evidence to suggest something may be off. Enough that if it were a ticket brought up here, it would already be pinging internally to NOC. We may also simultaneously ask for some additional tests, but I think it would be enough to not chalk it up to "customer's problem" and ignore, though it could still be an OS/firmware/kernel-related issue. Local speedtest servers are great for sanity checks on exactly these types of issues.
I'm not trying to make excuses for providers, but most times it really is a specific software stack or OS configuration, or 1 specific route that a customer has issues with and just blames provider (without knowledge of how the internet works). I recall a ticket going for ~12 months being told how wrong we were about speeds (no clue why they kept the service as it was not a yearly), until eventually finding out their special uber-custom kernel tuning parameters were messed up and crapping all over their specific VM's network setup. I do appreciate that the customer told us eventually
Providers have issues all the time. Just today we had a freshly deployed server on a 2x10G LACP go out and hit a customer's hands. Tests were showing it to be light on initial speed tests (not 1% of port speed or anything--but light for our normal network performance and specific to the customer's setup). Had them run a few more things and pretty quickly identified there is definitely an issue on our side: we have a misbehaving optic that is forcing one link down to 1G (instead of 10G) and crapping the whole aggregation down to 2x1G. Customer informed, resolution planned, and they get to rake me over the coals for a week.
Specifically here: I wouldn't say bad attitude, but definitely could be improved. Could just be overworked/busy time (things are insane, and have been at this elevated level for quite some time now for most hosts I think). I will also say it's a challenge. Of all the tasks on our plates, it's hard to carve out time to review support ticket replies and bring it up as a "yes great job here, everyone look at this example" or "no, that reply was too short/incorrect or just not how I want us represented". We still do it at least a few times per month, because mistakes honestly do happen and if it was done from a point of working and caring but rushing? I think that is potentially fixable. There's a fine line between "Hey! SLOW DOWN. Respond to them like a human--imagine you are the customer submitting this, what do you think their next question will immediately be and just answer it now." and "You have to get more than 3 tickets in a full day worked on."
All in all, hopefully can be resolved amicably.
I sent MTRs, and when they told me the poor speeds were because the test endpoints we used were slow, I responded with a test to Scaleway's 100G iperf server.
They responded by saying their own test managed 900Mbps+ so the issue was us.
We're running the same tune we run on all of our boxes, and on those networks we have never had an issue smashing cross continent. I did run a test to a local speedtest server above. 1.2Gbps upload lol
They reply within 2-5 minutes FWIW
I seem to have overlooked (due to the volume of PMs) multiple users on LET who appear to have had the same experience... yikes.
As always I appreciate your grounded insight @crunchbits
If I boot a live Linux ISO just to run some speedtests and they're the same as above, would that serve as proof enough that the issue is simply not on my end and that their support needs to investigate?
I'm more skeptical than most because before this we had an issue with the networking not working which is when they mentioned the switch being brand new.
It just seems like their support simply does not care enough to properly read* what I am sending them and instead responds with the bare minimum to keep response times low.
*I mean how can you respond to a wall of evidence with "yeh speeds to india arent gonna be fast my man" then get swapped out with another agent
Yeah--your tests and replies are why I said this is something that likely would be starting to get actioned, at least everyone double-checking whether we left port-shaping on somewhere by accident from a prior request.
I definitely feel the PM volume issue
It's basically no-go land right now.
Speed might be quick, but generally a real dive into something network-related isn't a 2m response for me. I'm too dumb.
Happy to run the same from other locations and share results if it helps
Their network is insanely oversold and leans heavily on Hurricane, the absolute worst "T1". They'll call me a hater for this though.
Hater
it's clear what the problem is. tbh i don't feel like wasting more time with their support
chat i'm cooked. <500Mbps NY <-> LA
1744 retransmissions with variance 128-294Mbps
I'd be happy to have our team spin up another server on the same switch to run some more tests tomorrow.
I'm sure that the issue is related to not having tweaks done to their OS. However, I could be wrong.
But looking quickly at our team's chat, with multiple tests to other servers and to Scaleway going the exact same paths, there is quite a difference between their test and one of ours that has a few minor OS tweaks.
Their test to Scaleway
To Scaleway's 100G iperf3 server:
root@stor-ny-1:~# iperf3 -c ping.online.net -p 5208 -R
Connecting to host ping.online.net, port 5208
Reverse mode, remote host ping.online.net is sending
[ 5] local 169.197.86.25 port 58632 connected to 51.158.1.21 port 5208
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 20.0 MBytes 168 Mbits/sec
[ 5] 1.00-2.00 sec 43.4 MBytes 364 Mbits/sec
[ 5] 2.00-3.00 sec 43.3 MBytes 363 Mbits/sec
[ 5] 3.00-4.00 sec 43.5 MBytes 365 Mbits/sec
[ 5] 4.00-5.00 sec 43.3 MBytes 363 Mbits/sec
[ 5] 5.00-6.00 sec 43.4 MBytes 364 Mbits/sec
[ 5] 6.00-7.00 sec 43.4 MBytes 364 Mbits/sec
[ 5] 7.00-8.00 sec 43.4 MBytes 364 Mbits/sec
[ 5] 8.00-9.00 sec 43.3 MBytes 363 Mbits/sec
[ 5] 9.00-10.00 sec 43.4 MBytes 364 Mbits/sec
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.07 sec 467 MBytes 389 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 410 MBytes 344 Mbits/sec receiver
iperf Done.
root@stor-ny-1:~# traceroute ping.online.net
traceroute to ping.online.net (51.158.1.21), 30 hops max, 60 byte packets
1 169.197.86.24 (169.197.86.24) 0.391 ms 0.300 ms 0.292 ms
2 e0-25.core1.nyc9.he.net (216.66.2.105) 0.907 ms 1.187 ms 1.472 ms
3 100ge0-53.core2.nyc5.he.net (184.104.194.249) 1.156 ms 100ge0-34.core1.ewr4.he.net (72.52.92.229) 1.611 ms *
4 100ge0-39.core2.ewr5.he.net (184.104.188.21) 1.746 ms 1.691 ms *
5 port-channel13.core2.par2.he.net (184.104.197.10) 71.448 ms * *
6 * scaleway-dc2.par.franceix.net (37.49.237.111) 72.323 ms port-channel13.core2.par2.he.net (184.104.197.10) 72.801 ms
7 51.158.8.67 (51.158.8.67) 72.384 ms scaleway-dc2.par.franceix.net (37.49.237.111) 73.885 ms 51.158.8.65 (51.158.8.65) 72.209 ms
8 51.158.8.65 (51.158.8.65) 73.057 ms 73.305 ms 73.092 ms
9 51.158.1.21 (51.158.1.21) 72.230 ms 72.023 ms 72.178 ms
We will not run 16 threads. If your network is only capable of 364Mbps on a single connection, make this known so we're not wasting our time.
And our teams test to the same server.
Single threaded performance depends on multiple factors.
Have you done any tweaks to your kernel or anything else to help improve these?
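To put rough, purely illustrative numbers on that: a single TCP stream can only move one window of data per round trip, so with (for example) a 4 MiB window on a 72 ms path the ceiling is about 466 Mbit/s no matter how fast the ports are:
# back-of-the-envelope bandwidth-delay ceiling; the 4 MiB window and 72 ms RTT are example values, not measurements from this ticket
awk 'BEGIN { win=4*1024*1024; rtt=0.072; printf "%.0f Mbit/s\n", win*8/rtt/1e6 }'
Bigger tcp_rmem/tcp_wmem or more parallel streams raise that ceiling, which is part of why the -P 16 runs look so different.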
iperf3 -c ping.online.net -p 5208 -R
Connecting to host ping.online.net, port 5208
Reverse mode, remote host ping.online.net is sending
[ 5] local 169.197.80.162 port 48596 connected to 51.158.1.21 port 5208
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 15.2 MBytes 127 Mbits/sec
[ 5] 1.00-2.00 sec 125 MBytes 1.05 Gbits/sec
[ 5] 2.00-3.00 sec 139 MBytes 1.16 Gbits/sec
[ 5] 3.00-4.00 sec 140 MBytes 1.17 Gbits/sec
[ 5] 4.00-5.00 sec 111 MBytes 935 Mbits/sec
[ 5] 5.00-6.00 sec 87.1 MBytes 731 Mbits/sec
[ 5] 6.00-7.00 sec 87.1 MBytes 731 Mbits/sec
[ 5] 7.00-8.00 sec 124 MBytes 1.04 Gbits/sec
[ 5] 8.00-9.00 sec 122 MBytes 1.03 Gbits/sec
[ 5] 9.00-10.00 sec 141 MBytes 1.18 Gbits/sec
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.07 sec 1.15 GBytes 979 Mbits/sec 16905 sender
[ 5] 0.00-10.00 sec 1.07 GBytes 915 Mbits/sec receiver
iperf3 -c ping.online.net -p 5208 -R -P 16
[SUM] 0.00-10.07 sec 8.79 GBytes 7.50 Gbits/sec 891156 sender
[SUM] 0.00-10.00 sec 7.53 GBytes 6.47 Gbits/sec receiver
traceroute ping.online.net
traceroute to ping.online.net (51.158.1.21), 30 hops max, 60 byte packets
1 _gateway (169.197.80.161) 1.322 ms 0.270 ms 0.258 ms
2 * e0-25.core1.nyc9.he.net (216.66.2.105) 0.769 ms *
3 100ge0-34.core1.ewr4.he.net (72.52.92.229) 1.428 ms * 1.728 ms
4 100ge0-39.core2.ewr5.he.net (184.104.188.21) 1.885 ms 1.811 ms *
5 * * port-channel13.core2.par2.he.net (184.104.197.10) 74.296 ms
6 port-channel13.core2.par2.he.net (184.104.197.10) 74.784 ms 74.442 ms scaleway-dc2.par.franceix.net (37.49.237.111) 72.111 ms
7 scaleway-dc3.par.franceix.net (37.49.237.27) 73.437 ms 51.158.8.67 (51.158.8.67) 72.221 ms 51.158.8.65 (51.158.8.65) 72.239 ms
8 51.158.8.65 (51.158.8.65) 73.556 ms 51.158.1.21 (51.158.1.21) 72.023 ms 51.158.8.65 (51.158.8.65) 73.198 ms
It looks like you will want to do some tweaks to your OS and then run some of the tests again. In the tests above we got almost 3x the single-thread speed along the same route.
Our team asking for an MTR and being met with "If you have no idea, nor know how to fix this, we recommend getting someone who does." doesn't really help. We ask for it so we can see the path traffic is taking for you and whether something is wrong along it, so we can provide help.
Your Yabs and the one our team ran is quite different in results.
iperf3 Network Speed Tests (IPv4):
A few other tests our team had provided in the ticket from another server.
iperf3 -c nyc.speedtest.clouvider.net -p 5202 -P 16 -R
[SUM] 0.00-10.00 sec 9.61 GBytes 8.25 Gbits/sec 12562 sender
[SUM] 0.00-10.00 sec 9.57 GBytes 8.22 Gbits/sec receiver
iperf3 -c speedtest.chi11.us.leaseweb.net -p 5201 -P 16 -R
[SUM] 0.00-10.02 sec 7.86 GBytes 6.74 Gbits/sec 7925 sender
[SUM] 0.00-10.00 sec 7.58 GBytes 6.52 Gbits/sec receiver
iperf3 -c speedtest.lax12.us.leaseweb.net -p 5201 -P 16 -R
[SUM] 0.00-10.07 sec 6.76 GBytes 5.77 Gbits/sec 6195 sender
[SUM] 0.00-10.00 sec 6.49 GBytes 5.57 Gbits/sec receiver
These are just a few tests we have run on our side after you complained about speed issues. These tests are from one of our 10G test servers.
Seeing a MTR will help us check to see why you are having issues and if there is anything we can adjust on there for you.
I mean, looking over the ticket, our team could have answered the reply about the speeds to India better.
However, calling our team useless when they ask you for MTRs so we can troubleshoot isn't great; they still answered your tickets and provided help, and asked whether you had done any tweaks to your server, as our tests are clearly quite different.
Seems like it based on results here.
Let's continue this tomorrow - and I apologise for the snarky comment.
In the meantime, what tweaks do you recommend I try? It's just that I have never had an issue even on pure stock Debian when test endpoints are a couple of ms away.
No worries. I'll send some over shortly to you
Would you mind sharing with the class? Would be super curious to learn more about how to juice networking performance, if you don't mind
FWIW, I picked up one of their promotional VPS offerings over the holidays, and I've been experiencing intermittent events of 100% packet loss lasting a few seconds to minutes at a time to both Netdata and a self-hosted Uptime Kuma instance with another provider.
I haven't deployed anything to this server yet as it's still in my stability "burn-in" period, and I haven't had an opportunity to follow up with support. Addressing intermittent issues, especially with networking problems, is always a chore.
I will wait and see first whether their speed is consistent or not and come back later.
Weird results indeed. I've had 5-10 dedicated servers with them in NY on 10G and have always had 7-9Gbit inbound/outbound; your speeds seem very low. Definitely an issue somewhere.
You can modify sysctl.conf which can have a big effect. I usually replace my kernel with xanmod on things that require high performance as well.
On mobile, but you can Google "Cloudflare tcp sysctl.conf" and other related phrases. In a pinch, be specific with ChatGPT and ask it to pump out a config. Tell it what hardware and kernel you have, and to tell you what each value means and why it chose that particular numerical value for each setting. You can start tweaking from there.
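If it helps, a common starting point for 10G+ paths with real latency looks something like the below. These values are illustrative, not tuned to your exact hardware, and definitely not PureVoltage's config:
# /etc/sysctl.d/90-net-tuning.conf -- illustrative long-fat-network TCP starting values
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.default_qdisc = fq
# bbr needs the tcp_bbr module (kernel 4.9+); cubic is the usual default
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_mtu_probing = 1
Apply with sysctl --system, then re-run the single-stream iperf3 before changing anything else so you know which knob actually moved the needle.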
I had a similar problem with another provider here. They showed me that in their tests the server can reach 10G, but only to a local speedtest server with many threads.
What happened was that the packet loss on the specific dedicated server killed tcp performance, especially on higher latency links. Packet loss indicates congested links, hardware problem, or wrongly configured networking equipment, which is certainly the provider's fault, and has nothing to do with OS tweaking.
They refused to investigate further claiming it's all normal so I canceled the server.
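A back-of-the-envelope with the Mathis approximation (single-stream TCP throughput is roughly MSS x 1.22 / (RTT x sqrt(loss))) shows why: even 0.1% loss on a ~72 ms path caps one stream at a handful of Mbit/s, the same ballpark as some of the ugly reverse numbers earlier in the thread. The inputs below are illustrative, not measured values:
# Mathis single-stream ceiling with a 1460-byte MSS, 72 ms RTT and 0.1% loss -- prints roughly 6.3 Mbit/s
awk 'BEGIN { mss=1460; rtt=0.072; p=0.001; printf "%.1f Mbit/s\n", mss*1.22*8/(rtt*sqrt(p))/1e6 }'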
I bought a VPS from Purevoltage last BF and it's running great. I hope they can solve this for you, although I'm already a bit disappointed about them reading this post.
I'll be spending some time on this tonight trying everything everyone has suggested
What provider was this? Just curious.
For anyone interested, below are the sysctl.conf changes PureVoltage provided to us, followed by what we already use on all our 40/20G boxes:
Provided by PureVoltage
Using our standard 40G tune applied via tuned to all our 40G servers
https://tuned-project.org
We did use Xanmod at a point in time, but we've since moved to using tuned + our own testing/trial-and-error and have had tremendous results. We have a tuned profile for each hardware spec at the hypervisor level and individual profiles for VMs.
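Not our actual profiles, but the mechanism is simple enough to sketch -- a custom profile just layers extra sysctls on top of a stock one (values here are placeholders):
mkdir -p /etc/tuned/custom-40g
cat > /etc/tuned/custom-40g/tuned.conf <<'EOF'
[main]
summary=network-throughput plus larger TCP buffers
include=network-throughput

[sysctl]
net.core.rmem_max=67108864
net.core.wmem_max=67108864
EOF
tuned-adm profile custom-40g
tuned-adm active
Keeping one of these per hardware spec and swapping them with tuned-adm is a lot easier to keep consistent across a fleet than hand-edited sysctl files.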
streamline-servers. It's the first review on trustpilot. Apparently he's very bitter about it.
Yes you are crazy.
It does appear the retransmissions over TCP cause a significant penalty, which is most apparent in single-connection tests.
The speeds also yo-yo like crazy, dropping from several hundred Mbps to tens of Mbps, and this is consistent throughout testing.
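For anyone wanting to watch the same thing live, this is roughly what I keep open while a test runs (Scaleway's iperf IP as the example target):
root@x:~# watch -n1 "ss -ti dst 51.158.1.21"
root@x:~# nstat -az TcpRetransSegs
ss -ti shows per-connection cwnd, rtt and retrans counters; nstat gives the system-wide retransmitted-segment count.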
I don’t know how you made that stupid assumption out of nowhere but apparently you are wrong.
The retransmission is probably the root cause. Has Purevoltage started looking into that?
MTU or MSS issue perhaps?
After checking I see the maximum MTU I can use is 1472
Perplexity says:
I have asked them for some clarity here.
Also, I noticed my assigned IP is a /31
Perplexity says:
Does this stand out to anyone?
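On the MTU point: if that 1472 is the largest ping payload that goes through unfragmented, it's exactly what a standard 1500-byte MTU gives you (1500 minus 20 bytes of IPv4 and 8 bytes of ICMP headers), so on its own it may not be a smoking gun. A quick way to double-check the path, reusing the Scaleway target from earlier:
root@x:~# ping -M do -s 1472 -c 3 ping.online.net    # 1472 + 28 bytes of headers = 1500, should pass
root@x:~# ping -M do -s 1473 -c 3 ping.online.net    # should fail with "message too long" if the path MTU is 1500
root@x:~# ip link show | grep mtu                    # confirm the NIC itself is at 1500 (or 9000 if jumbo frames are expected)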
They said they checked everything on their end and could not find a direct cause but will continue checking more comprehensively next week - I hope to hear back about the MTU concern I posted about above tho