Contabo new raw NVMe speed product line benchmark/review

jsg Member, Resident Benchmarker

Contabo / @contabo_m have created a new product line, NVMe Epyc VPS - but not with just any NVMes, nope. The NVMes in those VPSs are terrific speed daemons, about 3 times the speed of anything I've seen so far.

Before I present the results I need to mention two "BUTs": (a) the benchmark results presented here are based on pre-launch tests, that is, they were largely made under optimal, not real-world, conditions.
And (b) - and here a BIG THANK YOU goes to Contabo: those ideal results are not what you get to see, because who cares about wet dreams? As 'NVMe' is the headline feature of that new product line, I wanted to get and see realistic results, so I asked Contabo for 5 additional VMs on the same node my German test VM was on - and they were friendly enough to comply! Imagine that ... provider: "et voilà, here's your free test VM" (plus one in each of their other locations) ... me: "thanks, but I'd like 5 more and I want them to torture the living sh_t out of your product" ... them: "torture our test VM??? Sure, here you go, 5 additional VMs have been provided".

That's chutzpah! That's a provider who has confidence in their product! So again, Thanks a lot Contabo and kudos for accepting my torture challenge! (grin).

Originally my idea was to have the 5 - beefy, mind you - extra VMs create the kind of load that is realistic on a node. And yes, it worked fine, but then ... a voice in my head said "boring! My benchmark program has all those nice parameters and everything needed to really torture the disk(s). Let's run amok!" ... and (blush) so I did. While the main benchmark ran, the 5 other VMs - again, all on the same node - were pushing (mumble) millions of single-sector (4k) writes, unbuffered, sync/direct of course (after all, this was about NOT playing nice) ... then more millions of 1k writes ... then tens of millions of 512-byte writes ... and then - because that damn NVMe, while getting slower of course, still refused to be brought to its knees - a few (mumble, mumble) tens of millions of 256-byte writes.
Well, I can report success. I finally did manage to push the results down to the level of the 2 or 3 fastest NVMes benchmarked before, into the 200 MB/s range. The worst I could do was even lower, about 175, but that was just a glitch; I must have mistyped a parameter and used 256-byte writes on the main VM too ...

The problem with that though is that it's like proudly declaring victory after breaking a family sedan by dropping not 2 but 10 thousand pounds of concrete slabs on its roof - while it's driving.

Running the main benchmark in its normal mode - while 5 other VMs on the node were hammering out unbuffered writes - that is, simply simulating an occupied node in normal use ... BANG, I was back in the 600 to 650 MB/s region. Polite reminder: the bloody fastest NVMes tested before these beasts only occasionally (and rather rarely) crossed the 200 MB/s boundary. And trust me, those already were awfully fast NVMes.
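
For the curious, the torture pattern itself is nothing exotic: open the target with sync/direct flags so the page cache can't help, then issue huge numbers of tiny aligned writes. A minimal sketch of that idea - to be clear, this is not my actual benchmark program, and the file path, block size, and write count are placeholders:

#define _GNU_SOURCE   /* needed for O_DIRECT on Linux; FreeBSD has it natively */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const size_t blk = 512;        /* tiny writes, well below the 4k sector */
    const long   n   = 1000000L;   /* "(mumble) millions" of them           */
    void *buf;

    /* O_DIRECT requires an aligned buffer; 4096 covers typical devices */
    if (posix_memalign(&buf, 4096, blk) != 0) return 1;
    memset(buf, 0xA5, blk);

    /* O_SYNC | O_DIRECT: bypass the cache, force every write to the medium */
    int fd = open("/tmp/torture.bin",
                  O_WRONLY | O_CREAT | O_SYNC | O_DIRECT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    for (long i = 0; i < n; i++) {
        /* sequential here; the offsets could just as well be randomized */
        if (pwrite(fd, buf, blk, (off_t)i * blk) != (ssize_t)blk) {
            perror("pwrite");
            break;
        }
    }
    close(fd);
    free(buf);
    return 0;
}

Run five copies of something like that on five neighbour VMs and you have, roughly, the background load described above.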

Here's the processor and system info, plus the processor and memory results:

Machine: amd64, Arch.: amd64, Model: AMD EPYC 7282 16-Core Processor                
OS, version: FreeBSD 12.2, Mem.: 15.982 GB
CPU - Cores: 6, Family/Model/Stepping: 23/49/0
Cache: 64K/64K L1d/L1i, 512K L2, 16M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov  
          pat pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 fma
          cx16 sse4_1 sse4_2 popcnt aes xsave osxsave avx f16c rdrnd
          hypervisor
Ext. Flags: syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm lahf_lm cmp_legacy
          cr8_legacy lzcnt sse4a misalignsse 3dnowprefetch osvw perfctr_core

ProcMem SC [MB/s]: avg 343.5 - min 326.0 (94.9 %), max 347.9 (101.3 %)
ProcMem MA [MB/s]: avg 957.8 - min 902.2 (94.2 %), max 1070.0 (111.7 %)
ProcMem MB [MB/s]: avg 1013.9 - min 972.5 (95.9 %), max 1090.0 (107.5 %)

Obviously the meeting in which they decided on which flags to pass through was very short and went like this: "Nothing to decide. Just pass all of them. Have a nice day everyone". AES? popcnt? hypervisor? Yes, all available.

Except for the network results, which by their very nature differ per location, I show the processor/memory and disk results for one location only. Simple reason: give or take a percent or two, all nodes in all locations are the same.

The results are decent and typical for the Epyc range, and I like that 6-vCore performance is about 3 times that of a single core - decent too. The spreads are good as well, but for a final verdict I'll wait until I have gathered about 200 runs after the official product launch.

Now to the main attraction, the NVMe disk results. I'll be frank: what you get to see here aren't the original benchmark numbers but an artificial - yet realistic - picture created by calculations, based on having both the 5 "background" VMs and the test candidate hammer the disks way harder than what I consider realistic in everyday use. Call me mistrusting, but that's exactly how I look at pre-launch benchmarking. After all, we are all interested in what we can expect in full production, not in "I have the whole node to myself" mode. Chances are good that what we'll see in real use after launch will be better than what I present here.

--- Disk - Buffered --- (best case)
Write seq. [MB/s]: avg 3158.01 - min 1759.32 (55.7%), max 3888.48 (123.1%)
Write rnd. [MB/s]: avg 6280.50 - min 5051.66 (80.4%), max 7677.57 (122.2%)
Read seq. [MB/s]:  avg 4180.45 - min 2720.73 (65.1%), max 4997.83 (119.6%)
Read rnd. [MB/s]:  avg 8104.85 - min 5702.65 (70.4%), max 9407.96 (116.1%)

(worst case, probably with some other tests running)
Write seq. [MB/s]: avg 1965.07 - min 501.16 (25.5%), max 3766.08 (191.7%)
Write rnd. [MB/s]: avg 6015.16 - min 1269.46 (21.1%), max 6946.28 (115.5%)
Read seq. [MB/s]:  avg 3710.79 - min 1591.32 (42.9%), max 4728.05 (127.4%)
Read rnd. [MB/s]:  avg 7451.19 - min 5755.20 (77.2%), max 9326.81 (125.2%)

Frankly, I put those results here mainly for completeness. Boring - of bloody course NVMes in buffered mode on a modern Unix OS show nice results even when driven hard, in particular on a VM with plenty of memory.

Here's the really interesting stuff:

First, to underline my last point, the buffered-mode results - but with the 5 "background" VMs hammering the NVMe while the main test is run:

[D] Total size per test = 2048.00 MB, Mode: Buf'd
[D] Wr Seq:  2856.96 MB/s
[D] Wr Rnd:  3705.80 MB/s
[D] Rd Seq:  3067.82 MB/s
[D] Rd Rnd:  6724.96 MB/s

Now, on to the unbuffered direct/sync results, because that's where the truth comes to light.

[D] Total size per test 4096.00 MB, Mode: Sync
[D] Wr Seq:   600 MB/s
[D] Wr Rnd:   600 MB/s
[D] Rd Seq:  2400 MB/s
[D] Rd Rnd:  5500 MB/s

First, note that the size is 4 GB (instead of the usual 2 GB). Plus, the writes are 512 bytes each (instead of the usual 4096 bytes, i.e., a "sector" on modern devices). The 5 "background" VMs also hammer the disk with 512-byte writes, but with a lot more slices, so as to make sure that the drive is really hammered hard while being tested.
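
To put numbers on why the 512-byte size is the nastier test: at a fixed throughput, every halving of the write size doubles the operations per second the drive must sustain. A trivial back-of-the-envelope sketch (the 600 MB/s figure is simply taken from the sync table above):

#include <stdio.h>

int main(void) {
    const double mbps    = 600.0;                  /* Wr Seq from above */
    const int    sizes[] = { 256, 512, 1024, 4096 };

    for (int i = 0; i < 4; i++) {
        /* ops/s = bytes/s divided by bytes per write */
        double iops = mbps * 1e6 / sizes[i];
        printf("%5d B writes at %.0f MB/s -> %9.0f writes/s\n",
               sizes[i], mbps, iops);
    }
    return 0;
}

At 4096 bytes that is about 146k writes/s; at 512 bytes the same 600 MB/s already means about 1.17 million writes/s.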

Using the parameters I normally use when benchmarking a drive, the sequential write result goes to over 900 MB/s!

So, unless Contabo heavily oversells their nodes (which I assume they do not), you should experience about 3 times the speed of what you get from other providers (some of which use NVMes that are not slow at all either). And an AMD Zen 2 server processor helps too, of course, e.g. by providing plenty of fast PCIe lanes.

Drive TL;DR: This is the kind of VM you definitely want for databases, heavily dynamic web sites, and the like!

Now, on to the network results, Seattle location first:

US LAX lax.download.datapacket.com [F: 0]
  DL [Mb/s]:      avg 177.8 - min 162.8 (91.5%), max 187.7 (105.5%)
  Ping [ms]:      avg 33.4 - min 32.9 (98.4%), max 34.2 (102.2%)
  Web ping [ms]:  avg 34.3 - min 33.0 (96.2%), max 99.3 (289.4%)

NO OSL speedtest.osl01.softlayer.com [F: 8]
  DL [Mb/s]:      avg 31.2 - min 0.0 (0.0%), max 34.6 (110.9%)
  Ping [ms]:      avg 179.4 - min 4.2 (2.3%), max 184.2 (102.7%)
  Web ping [ms]:  avg 184.1 - min 4.2 (2.3%), max 192.4 (104.5%)

US SJC speedtest.sjc01.softlayer.com [F: 0]
  DL [Mb/s]:      avg 219.7 - min 209.1 (95.2%), max 232.1 (105.7%)
  Ping [ms]:      avg 23.4 - min 23.3 (99.4%), max 25.9 (110.5%)
  Web ping [ms]:  avg 42.3 - min 23.3 (55.1%), max 1382.4 (3267.9%)

AU MEL speedtest.c1.mel1.dediserve.com [F: 0]
  DL [Mb/s]:      avg 29.6 - min 27.4 (92.5%), max 34.1 (115.2%)
  Ping [ms]:      avg 200.6 - min 181.2 (90.3%), max 204.2 (101.8%)
  Web ping [ms]:  avg 203.4 - min 183.3 (90.1%), max 208.0 (102.3%)

JP TOK speedtest.tokyo2.linode.com [F: 0]
  DL [Mb/s]:      avg 54.0 - min 51.3 (95.0%), max 57.8 (107.0%)
  Ping [ms]:      avg 118.8 - min 117.4 (98.8%), max 147.9 (124.5%)
  Web ping [ms]:  avg 119.0 - min 117.4 (98.6%), max 150.9 (126.8%)

IT MIL speedtest.mil01.softlayer.com [F: 0]
  DL [Mb/s]:      avg 38.0 - min 35.5 (93.5%), max 40.6 (107.0%)
  Ping [ms]:      avg 158.5 - min 158.4 (99.9%), max 159.6 (100.7%)
  Web ping [ms]:  avg 171.0 - min 158.4 (92.6%), max 1196.6 (699.9%)

TR UNK 185.65.204.169 [F: 0]
  DL [Mb/s]:      avg 32.6 - min 28.7 (88.0%), max 34.4 (105.5%)
  Ping [ms]:      avg 199.1 - min 196.9 (98.9%), max 221.8 (111.4%)
  Web ping [ms]:  avg 199.5 - min 196.9 (98.7%), max 222.3 (111.4%)

FR PAR speedtest.par01.softlayer.com [F: 18]
  DL [Mb/s]:      avg 37.1 - min 0.0 (0.0%), max 43.4 (117.0%)
  Ping [ms]:      avg 142.7 - min 142.5 (99.9%), max 148.6 (104.2%)
  Web ping [ms]:  avg 170.5 - min 142.5 (83.6%), max 1455.7 (853.9%)

SG SGP mirror.sg.leaseweb.net [F: 0]
  DL [Mb/s]:      avg 32.5 - min 30.7 (94.6%), max 34.4 (106.0%)
  Ping [ms]:      avg 193.7 - min 193.5 (99.9%), max 211.2 (109.0%)
  Web ping [ms]:  avg 206.8 - min 193.5 (93.6%), max 1033.7 (499.9%)

BR SAO speedtest.sao01.softlayer.com [F: 0]
  DL [Mb/s]:      avg 32.8 - min 31.5 (96.0%), max 34.7 (105.7%)
  Ping [ms]:      avg 184.2 - min 183.9 (99.9%), max 184.5 (100.2%)
  Web ping [ms]:  avg 191.6 - min 184.0 (96.0%), max 947.5 (494.4%)

IN CHN speedtest.che01.softlayer.com [F: 26]
  DL [Mb/s]:      avg 21.9 - min 0.0 (0.0%), max 26.7 (121.8%)
  Ping [ms]:      avg 240.9 - min 240.1 (99.7%), max 257.9 (107.1%)
  Web ping [ms]:  avg 253.1 - min 240.1 (94.9%), max 1333.5 (526.9%)

GR UNK speedtest.ftp.otenet.gr [F: 56]
  DL [Mb/s]:      avg 24.3 - min 0.0 (0.0%), max 36.5 (149.8%)
  Ping [ms]:      avg 123.1 - min 0.0 (0.0%), max 188.2 (152.9%)
  Web ping [ms]:  avg 135.9 - min 0.0 (0.0%), max 1339.7 (986.0%)

US WDC mirror.wdc1.us.leaseweb.net [F: 0]
  DL [Mb/s]:      avg 79.0 - min 73.9 (93.5%), max 84.3 (106.6%)
  Ping [ms]:      avg 75.6 - min 75.4 (99.7%), max 87.0 (115.1%)
  Web ping [ms]:  avg 75.8 - min 75.4 (99.5%), max 87.0 (114.8%)

RU MOS speedtest.hostkey.ru [F: 0]
  DL [Mb/s]:      avg 31.6 - min 28.8 (91.1%), max 33.5 (106.1%)
  Ping [ms]:      avg 196.2 - min 192.9 (98.3%), max 243.0 (123.8%)
  Web ping [ms]:  avg 196.9 - min 193.1 (98.1%), max 244.8 (124.3%)

US DAL speedtest.dal05.softlayer.com [F: 2]
  DL [Mb/s]:      avg 95.4 - min 0.0 (0.0%), max 105.4 (110.5%)
  Ping [ms]:      avg 58.7 - min 57.3 (97.6%), max 147.1 (250.6%)
  Web ping [ms]:  avg 71.9 - min 57.4 (79.8%), max 1319.2 (1835.0%)

UK LON speedtest.lon02.softlayer.com [F: 4]
  DL [Mb/s]:      avg 42.1 - min 0.0 (0.0%), max 46.5 (110.6%)
  Ping [ms]:      avg 138.1 - min 137.9 (99.9%), max 138.4 (100.2%)
  Web ping [ms]:  avg 155.0 - min 137.9 (88.9%), max 1389.3 (896.1%)

US NYC nyc.download.datapacket.com [F: 0]
  DL [Mb/s]:      avg 84.7 - min 73.2 (86.5%), max 89.8 (106.0%)
  Ping [ms]:      avg 74.0 - min 73.0 (98.6%), max 80.8 (109.2%)
  Web ping [ms]:  avg 74.8 - min 73.1 (97.7%), max 99.8 (133.4%)

RO BUC 185.183.99.8 [F: 0]
  DL [Mb/s]:      avg 32.5 - min 30.1 (92.4%), max 35.4 (108.7%)
  Ping [ms]:      avg 189.9 - min 186.2 (98.0%), max 197.6 (104.0%)
  Web ping [ms]:  avg 192.1 - min 186.2 (97.0%), max 248.3 (129.3%)

NL AMS mirror.nl.leaseweb.net [F: 0]
  DL [Mb/s]:      avg 41.0 - min 38.1 (92.9%), max 42.4 (103.4%)
  Ping [ms]:      avg 153.6 - min 153.5 (99.9%), max 154.2 (100.4%)
  Web ping [ms]:  avg 154.1 - min 153.5 (99.6%), max 157.5 (102.2%)

CN HK mirror.hk.leaseweb.net [F: 0]
  DL [Mb/s]:      avg 43.5 - min 41.3 (94.9%), max 45.3 (104.1%)
  Ping [ms]:      avg 141.1 - min 140.8 (99.8%), max 180.5 (128.0%)
  Web ping [ms]:  avg 141.2 - min 140.8 (99.7%), max 180.8 (128.0%)

DE FRA fra.lg.core-backbone.com [F: 0]
  DL [Mb/s]:      avg 42.7 - min 41.5 (97.2%), max 44.2 (103.6%)
  Ping [ms]:      avg 145.4 - min 145.2 (99.9%), max 146.5 (100.8%)
  Web ping [ms]:  avg 147.1 - min 145.2 (98.7%), max 150.8 (102.5%)

170 - 220 Mb/s to California, nice. About 95 Mb/s to Dallas, OK. 75 - 85 Mb/s across the whole country to the East Coast, decent I guess. About 30 to slightly above 40 Mb/s to Europe, meh ... If at least cross-Pacific were great, but alas it isn't; 32 Mb/s to Singapore is nothing to write home about, but almost 55 Mb/s to Tokyo seems decent.


Comments

  • jsg Member, Resident Benchmarker

    -- part 2 --
    Now the St. Louis location

    US LAX lax.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 117.9 - min 96.7 (82.0%), max 130.4 (110.6%)
      Ping [ms]:      avg 50.2 - min 50.0 (99.7%), max 50.8 (101.3%)
      Web ping [ms]:  avg 50.5 - min 50.1 (99.2%), max 54.4 (107.7%)
    
    NO OSL speedtest.osl01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 42.9 - min 40.7 (94.9%), max 45.6 (106.4%)
      Ping [ms]:      avg 139.2 - min 23.9 (17.2%), max 143.2 (102.8%)
      Web ping [ms]:  avg 143.0 - min 139.0 (97.2%), max 351.7 (245.9%)
    
    US SJC speedtest.sjc01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 123.0 - min 113.2 (92.0%), max 134.8 (109.6%)
      Ping [ms]:      avg 44.0 - min 43.8 (99.6%), max 45.4 (103.2%)
      Web ping [ms]:  avg 53.4 - min 43.9 (82.2%), max 884.2 (1655.5%)
    
    AU MEL speedtest.c1.mel1.dediserve.com [F: 0]
      DL [Mb/s]:      avg 28.3 - min 26.6 (94.2%), max 32.1 (113.4%)
      Ping [ms]:      avg 215.5 - min 196.0 (91.0%), max 262.6 (121.9%)
      Web ping [ms]:  avg 218.9 - min 196.4 (89.7%), max 265.3 (121.2%)
    
    JP TOK speedtest.tokyo2.linode.com [F: 0]
      DL [Mb/s]:      avg 45.4 - min 42.7 (94.2%), max 48.6 (107.1%)
      Ping [ms]:      avg 138.5 - min 136.5 (98.6%), max 148.1 (107.0%)
      Web ping [ms]:  avg 140.6 - min 136.5 (97.1%), max 148.1 (105.4%)
    
    IT MIL speedtest.mil01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 51.4 - min 49.2 (95.7%), max 54.6 (106.3%)
      Ping [ms]:      avg 115.4 - min 114.8 (99.5%), max 115.6 (100.2%)
      Web ping [ms]:  avg 118.1 - min 114.8 (97.2%), max 555.4 (470.3%)
    
    TR UNK 185.65.204.169 [F: 2]
      DL [Mb/s]:      avg 36.1 - min 0.0 (0.0%), max 45.7 (126.6%)
      Ping [ms]:      avg 149.3 - min 0.0 (0.0%), max 184.0 (123.2%)
      Web ping [ms]:  avg 150.2 - min 0.0 (0.0%), max 196.7 (131.0%)
    
    FR PAR speedtest.par01.softlayer.com [F: 12]
      DL [Mb/s]:      avg 55.5 - min 0.0 (0.0%), max 63.5 (114.3%)
      Ping [ms]:      avg 99.2 - min 99.0 (99.8%), max 100.6 (101.4%)
      Web ping [ms]:  avg 136.2 - min 99.0 (72.7%), max 1423.5 (1045.0%)
    
    SG SGP mirror.sg.leaseweb.net [F: 7]
      DL [Mb/s]:      avg 25.7 - min 0.0 (0.0%), max 29.0 (112.9%)
      Ping [ms]:      avg 222.1 - min 0.0 (0.0%), max 237.8 (107.1%)
      Web ping [ms]:  avg 223.1 - min 0.0 (0.0%), max 240.3 (107.7%)
    
    BR SAO speedtest.sao01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 41.4 - min 30.2 (73.0%), max 44.6 (107.8%)
      Ping [ms]:      avg 142.4 - min 142.0 (99.7%), max 142.8 (100.3%)
      Web ping [ms]:  avg 159.3 - min 142.2 (89.3%), max 1428.2 (896.5%)
    
    IN CHN speedtest.che01.softlayer.com [F: 27]
      DL [Mb/s]:      avg 20.7 - min 0.0 (0.0%), max 26.1 (126.3%)
      Ping [ms]:      avg 257.3 - min 256.4 (99.6%), max 298.7 (116.1%)
      Web ping [ms]:  avg 281.9 - min 256.4 (90.9%), max 1483.4 (526.2%)
    
    GR UNK speedtest.ftp.otenet.gr [F: 72]
      DL [Mb/s]:      avg 27.2 - min 0.0 (0.0%), max 44.9 (164.6%)
      Ping [ms]:      avg 89.2 - min 0.0 (0.0%), max 144.1 (161.6%)
      Web ping [ms]:  avg 92.7 - min 0.0 (0.0%), max 871.6 (940.2%)
    
    US WDC mirror.wdc1.us.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 186.6 - min 57.8 (30.9%), max 222.9 (119.4%)
      Ping [ms]:      avg 27.2 - min 26.8 (98.6%), max 44.2 (162.6%)
      Web ping [ms]:  avg 27.6 - min 26.8 (97.1%), max 44.2 (160.2%)
    
    RU MOS speedtest.hostkey.ru [F: 0]
      DL [Mb/s]:      avg 41.3 - min 33.1 (80.1%), max 44.4 (107.5%)
      Ping [ms]:      avg 147.3 - min 145.6 (98.8%), max 193.9 (131.6%)
      Web ping [ms]:  avg 148.3 - min 145.7 (98.3%), max 193.9 (130.8%)
    
    US DAL speedtest.dal05.softlayer.com [F: 0]
      DL [Mb/s]:      avg 185.6 - min 43.7 (23.6%), max 222.3 (119.7%)
      Ping [ms]:      avg 24.9 - min 24.7 (99.2%), max 27.9 (112.0%)
      Web ping [ms]:  avg 51.1 - min 24.8 (48.5%), max 1482.9 (2899.3%)
    
    UK LON speedtest.lon02.softlayer.com [F: 0]
      DL [Mb/s]:      avg 52.7 - min 48.7 (92.5%), max 60.9 (115.5%)
      Ping [ms]:      avg 116.6 - min 108.0 (92.6%), max 117.8 (101.0%)
      Web ping [ms]:  avg 151.9 - min 108.0 (71.1%), max 1470.0 (967.7%)
    
    US NYC nyc.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 210.4 - min 180.8 (85.9%), max 233.9 (111.2%)
      Ping [ms]:      avg 26.0 - min 25.4 (97.6%), max 35.3 (135.6%)
      Web ping [ms]:  avg 26.9 - min 25.5 (94.7%), max 78.4 (291.1%)
    
    RO BUC 185.183.99.8 [F: 17]
      DL [Mb/s]:      avg 37.9 - min 0.0 (0.0%), max 44.8 (118.2%)
      Ping [ms]:      avg 143.0 - min 0.0 (0.0%), max 153.3 (107.2%)
      Web ping [ms]:  avg 145.1 - min 0.0 (0.0%), max 227.4 (156.7%)
    
    NL AMS mirror.nl.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 54.9 - min 30.1 (54.8%), max 61.4 (111.8%)
      Ping [ms]:      avg 104.0 - min 0.0 (0.0%), max 106.0 (101.9%)
      Web ping [ms]:  avg 105.0 - min 0.0 (0.0%), max 109.0 (103.8%)
    
    CN HK mirror.hk.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 31.9 - min 20.3 (63.6%), max 34.7 (108.7%)
      Ping [ms]:      avg 185.6 - min 0.0 (0.0%), max 196.7 (106.0%)
      Web ping [ms]:  avg 185.8 - min 0.0 (0.0%), max 196.7 (105.9%)
    
    DE FRA fra.lg.core-backbone.com [F: 0]
      DL [Mb/s]:      avg 58.2 - min 56.2 (96.4%), max 60.9 (104.5%)
      Ping [ms]:      avg 105.6 - min 105.4 (99.8%), max 106.7 (101.0%)
      Web ping [ms]:  avg 106.2 - min 105.4 (99.2%), max 109.0 (102.6%)
    

    About 120 Mb/s to California, 185 Mb/s to Dallas, and 185 to 215 Mb/s to the East Coast - yay, that's decent. Obviously about 10 Mb/s less to Asia, but slightly over 40 Mb/s to Brazil (home to one of the largest IXs worldwide!) is acceptable. And the major European targets at about or even over 50 Mb/s, nice.

    If I lived in the USA or wanted a site or service with decent connectivity to the whole country, this is the location I'd choose.

    Now on to NYC (which, btw, for some reason had slightly better disk results than the other DCs):

    US LAX lax.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 94.8 - min 87.5 (92.3%), max 101.5 (107.1%)
      Ping [ms]:      avg 66.4 - min 66.3 (99.8%), max 66.8 (100.6%)
      Web ping [ms]:  avg 66.7 - min 66.4 (99.5%), max 74.4 (111.5%)
    
    NO OSL speedtest.osl01.softlayer.com [F: 5]
      DL [Mb/s]:      avg 42.7 - min 0.0 (0.0%), max 46.0 (107.7%)
      Ping [ms]:      avg 124.9 - min 4.3 (3.4%), max 150.3 (120.3%)
      Web ping [ms]:  avg 146.3 - min 15.5 (10.6%), max 1189.0 (812.5%)
    
    US SJC speedtest.sjc01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 80.1 - min 76.2 (95.1%), max 88.8 (110.8%)
      Ping [ms]:      avg 73.1 - min 72.7 (99.5%), max 78.3 (107.1%)
      Web ping [ms]:  avg 95.7 - min 72.7 (75.9%), max 1195.0 (1248.4%)
    
    AU MEL speedtest.c1.mel1.dediserve.com [F: 0]
      DL [Mb/s]:      avg 25.6 - min 23.7 (92.8%), max 28.7 (112.2%)
      Ping [ms]:      avg 233.8 - min 214.1 (91.6%), max 279.3 (119.5%)
      Web ping [ms]:  avg 236.6 - min 216.5 (91.5%), max 282.2 (119.3%)
    
    JP TOK speedtest.tokyo2.linode.com [F: 0]
      DL [Mb/s]:      avg 36.0 - min 35.1 (97.4%), max 37.3 (103.4%)
      Ping [ms]:      avg 178.6 - min 173.4 (97.1%), max 224.1 (125.5%)
      Web ping [ms]:  avg 178.9 - min 173.8 (97.1%), max 225.2 (125.9%)
    
    IT MIL speedtest.mil01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 65.0 - min 57.4 (88.3%), max 70.7 (108.7%)
      Ping [ms]:      avg 91.9 - min 91.9 (100.0%), max 92.3 (100.4%)
      Web ping [ms]:  avg 99.8 - min 91.9 (92.1%), max 1356.9 (1359.8%)
    
    TR UNK 185.65.204.169 [F: 0]
      DL [Mb/s]:      avg 51.9 - min 39.6 (76.3%), max 55.9 (107.7%)
      Ping [ms]:      avg 119.2 - min 117.8 (98.9%), max 144.2 (121.0%)
      Web ping [ms]:  avg 119.6 - min 118.0 (98.6%), max 153.8 (128.5%)
    
    FR PAR speedtest.par01.softlayer.com [F: 21]
      DL [Mb/s]:      avg 66.5 - min 0.0 (0.0%), max 81.3 (122.3%)
      Ping [ms]:      avg 89.9 - min 89.7 (99.8%), max 95.7 (106.5%)
      Web ping [ms]:  avg 120.2 - min 89.7 (74.6%), max 1157.3 (962.6%)
    
    SG SGP mirror.sg.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 26.0 - min 24.8 (95.4%), max 27.1 (104.1%)
      Ping [ms]:      avg 241.7 - min 241.0 (99.7%), max 247.8 (102.5%)
      Web ping [ms]:  avg 241.7 - min 241.1 (99.7%), max 247.8 (102.5%)
    
    BR SAO speedtest.sao01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 47.8 - min 45.1 (94.3%), max 50.5 (105.6%)
      Ping [ms]:      avg 124.9 - min 124.6 (99.8%), max 127.3 (101.9%)
      Web ping [ms]:  avg 136.6 - min 124.6 (91.2%), max 1159.2 (848.4%)
    
    IN CHN speedtest.che01.softlayer.com [F: 31]
      DL [Mb/s]:      avg 21.1 - min 0.0 (0.0%), max 27.1 (128.7%)
      Ping [ms]:      avg 245.5 - min 204.3 (83.2%), max 254.6 (103.7%)
      Web ping [ms]:  avg 261.4 - min 204.3 (78.2%), max 1457.3 (557.5%)
    
    GR UNK speedtest.ftp.otenet.gr [F: 63]
      DL [Mb/s]:      avg 37.6 - min 0.0 (0.0%), max 56.2 (149.5%)
      Ping [ms]:      avg 74.3 - min 0.0 (0.0%), max 126.5 (170.3%)
      Web ping [ms]:  avg 77.7 - min 0.0 (0.0%), max 651.6 (838.5%)
    
    US WDC mirror.wdc1.us.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 336.8 - min 287.4 (85.4%), max 358.3 (106.4%)
      Ping [ms]:      avg 7.9 - min 7.9 (99.4%), max 8.2 (103.2%)
      Web ping [ms]:  avg 8.0 - min 7.9 (98.5%), max 8.6 (107.2%)
    
    RU MOS speedtest.hostkey.ru [F: 0]
      DL [Mb/s]:      avg 51.5 - min 46.7 (90.6%), max 55.0 (106.9%)
      Ping [ms]:      avg 116.7 - min 0.0 (0.0%), max 161.2 (138.2%)
      Web ping [ms]:  avg 118.3 - min 0.0 (0.0%), max 161.6 (136.6%)
    
    US DAL speedtest.dal05.softlayer.com [F: 2]
      DL [Mb/s]:      avg 129.7 - min 0.0 (0.0%), max 143.8 (110.8%)
      Ping [ms]:      avg 39.7 - min 0.0 (0.0%), max 42.3 (106.5%)
      Web ping [ms]:  avg 62.6 - min 0.0 (0.0%), max 1386.8 (2215.6%)
    
    UK LON speedtest.lon02.softlayer.com [F: 0]
      DL [Mb/s]:      avg 78.8 - min 66.4 (84.3%), max 89.0 (112.9%)
      Ping [ms]:      avg 73.3 - min 73.2 (99.9%), max 73.5 (100.3%)
      Web ping [ms]:  avg 89.0 - min 73.2 (82.2%), max 882.0 (990.8%)
    
    US NYC nyc.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 377.9 - min 352.9 (93.4%), max 386.3 (102.2%)
      Ping [ms]:      avg 1.0 - min 0.9 (87.5%), max 2.4 (233.2%)
      Web ping [ms]:  avg 1.4 - min 0.9 (65.5%), max 18.2 (1323.9%)
    
    RO BUC 185.183.99.8 [F: 0]
      DL [Mb/s]:      avg 54.6 - min 50.6 (92.6%), max 61.0 (111.7%)
      Ping [ms]:      avg 109.9 - min 109.3 (99.5%), max 119.3 (108.6%)
      Web ping [ms]:  avg 111.6 - min 109.4 (98.0%), max 185.6 (166.3%)
    
    NL AMS mirror.nl.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 78.3 - min 72.4 (92.5%), max 84.3 (107.7%)
      Ping [ms]:      avg 75.5 - min 74.4 (98.5%), max 77.3 (102.4%)
      Web ping [ms]:  avg 76.5 - min 74.4 (97.3%), max 79.0 (103.3%)
    
    CN HK mirror.hk.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 29.3 - min 27.6 (94.3%), max 30.6 (104.3%)
      Ping [ms]:      avg 210.2 - min 0.0 (0.0%), max 222.7 (105.9%)
      Web ping [ms]:  avg 215.1 - min 0.0 (0.0%), max 1125.3 (523.1%)
    
    DE FRA fra.lg.core-backbone.com [F: 0]
      DL [Mb/s]:      avg 76.1 - min 73.6 (96.7%), max 82.5 (108.5%)
      Ping [ms]:      avg 78.4 - min 78.3 (99.9%), max 78.8 (100.5%)
      Web ping [ms]:  avg 79.1 - min 78.4 (99.1%), max 79.8 (100.9%)
    

    If I lived in the USA and that was my primary focus, I'd pick the St. Louis DC - but I do not. I look with European eyes, and my focus, besides Europe itself, is on "in which of the Contabo DCs do I get the best balance?", meaning min. 50 Mb/s and preferably more like 70+ Mb/s to both the major US targets and the major European targets. Contabo's NYC DC would be my choice.
    80 to 95 Mb/s to California, ca. 130 Mb/s to Dallas, (not surprisingly) 300+ Mb/s to Washington DC - but also 75 to 80 Mb/s to the big 3 in Europe (AMS, FRA, LON) - looks quite attractive. Nice, I like those results.

  • jsg Member, Resident Benchmarker

    -- part 3 --
    Finally, on to the German DC

    US LAX lax.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 39.8 - min 36.9 (92.6%), max 43.6 (109.6%)
      Ping [ms]:      avg 155.5 - min 152.9 (98.3%), max 158.1 (101.7%)
      Web ping [ms]:  avg 156.2 - min 153.8 (98.4%), max 159.0 (101.8%)
    
    NO OSL speedtest.osl01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 191.3 - min 182.5 (95.4%), max 207.0 (108.2%)
      Ping [ms]:      avg 27.9 - min 27.8 (99.6%), max 28.4 (101.7%)
      Web ping [ms]:  avg 34.7 - min 27.8 (80.1%), max 514.7 (1483.4%)
    
    US SJC speedtest.sjc01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 38.4 - min 34.6 (90.3%), max 42.8 (111.6%)
      Ping [ms]:      avg 155.6 - min 155.4 (99.9%), max 156.9 (100.8%)
      Web ping [ms]:  avg 173.7 - min 155.4 (89.5%), max 1364.6 (785.5%)
    
    AU MEL speedtest.c1.mel1.dediserve.com [F: 12]
      DL [Mb/s]:      avg 17.8 - min 0.0 (0.0%), max 25.1 (141.1%)
      Ping [ms]:      avg 294.2 - min 0.0 (0.0%), max 309.4 (105.2%)
      Web ping [ms]:  avg 296.2 - min 0.0 (0.0%), max 309.4 (104.5%)
    
    JP TOK speedtest.tokyo2.linode.com [F: 0]
      DL [Mb/s]:      avg 25.5 - min 23.2 (91.2%), max 27.0 (106.0%)
      Ping [ms]:      avg 251.5 - min 242.6 (96.4%), max 259.9 (103.3%)
      Web ping [ms]:  avg 256.7 - min 242.7 (94.6%), max 274.7 (107.0%)
    
    IT MIL speedtest.mil01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 293.2 - min 277.5 (94.6%), max 305.7 (104.2%)
      Ping [ms]:      avg 13.8 - min 13.7 (99.0%), max 14.2 (102.7%)
      Web ping [ms]:  avg 29.0 - min 13.7 (47.2%), max 1093.3 (3767.5%)
    
    TR UNK 185.65.204.169 [F: 0]
      DL [Mb/s]:      avg 80.3 - min 58.8 (73.2%), max 82.1 (102.2%)
      Ping [ms]:      avg 42.2 - min 40.0 (94.7%), max 63.8 (151.1%)
      Web ping [ms]:  avg 43.1 - min 40.3 (93.5%), max 64.7 (150.1%)
    
    FR PAR speedtest.par01.softlayer.com [F: 4]
      DL [Mb/s]:      avg 276.3 - min 0.0 (0.0%), max 302.4 (109.4%)
      Ping [ms]:      avg 13.9 - min 13.8 (99.2%), max 15.1 (108.5%)
      Web ping [ms]:  avg 27.6 - min 13.8 (50.0%), max 990.3 (3588.0%)
    
    SG SGP mirror.sg.leaseweb.net [F: 7]
      DL [Mb/s]:      avg 21.4 - min 0.0 (0.0%), max 27.4 (128.3%)
      Ping [ms]:      avg 315.8 - min 315.3 (99.8%), max 333.4 (105.6%)
      Web ping [ms]:  avg 316.0 - min 315.4 (99.8%), max 333.4 (105.5%)
    
    BR SAO speedtest.sao01.softlayer.com [F: 0]
      DL [Mb/s]:      avg 31.6 - min 30.4 (96.2%), max 33.0 (104.5%)
      Ping [ms]:      avg 193.1 - min 192.8 (99.8%), max 193.9 (100.4%)
      Web ping [ms]:  avg 193.8 - min 192.9 (99.5%), max 236.7 (122.1%)
    
    IN CHN speedtest.che01.softlayer.com [F: 2]
      DL [Mb/s]:      avg 41.5 - min 0.0 (0.0%), max 46.0 (110.8%)
      Ping [ms]:      avg 143.5 - min 141.0 (98.2%), max 146.6 (102.1%)
      Web ping [ms]:  avg 174.7 - min 141.1 (80.8%), max 1093.8 (626.1%)
    
    GR UNK speedtest.ftp.otenet.gr [F: 27]
      DL [Mb/s]:      avg 82.0 - min 0.0 (0.0%), max 141.4 (172.5%)
      Ping [ms]:      avg 23.8 - min 0.0 (0.0%), max 41.0 (172.0%)
      Web ping [ms]:  avg 44.0 - min 0.0 (0.0%), max 1442.0 (3275.1%)
    
    US WDC mirror.wdc1.us.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 61.4 - min 58.0 (94.5%), max 64.7 (105.4%)
      Ping [ms]:      avg 100.0 - min 99.9 (99.9%), max 100.2 (100.2%)
      Web ping [ms]:  avg 100.3 - min 99.9 (99.6%), max 101.5 (101.2%)
    
    RU MOS speedtest.hostkey.ru [F: 0]
      DL [Mb/s]:      avg 134.1 - min 115.0 (85.8%), max 159.9 (119.3%)
      Ping [ms]:      avg 38.8 - min 38.6 (99.6%), max 39.2 (101.1%)
      Web ping [ms]:  avg 42.3 - min 38.7 (91.6%), max 58.9 (139.4%)
    
    US DAL speedtest.dal05.softlayer.com [F: 0]
      DL [Mb/s]:      avg 46.4 - min 43.8 (94.3%), max 49.8 (107.2%)
      Ping [ms]:      avg 124.0 - min 122.9 (99.1%), max 125.1 (100.9%)
      Web ping [ms]:  avg 165.4 - min 124.0 (75.0%), max 1017.0 (615.0%)
    
    UK LON speedtest.lon02.softlayer.com [F: 2]
      DL [Mb/s]:      avg 284.8 - min 0.0 (0.0%), max 308.1 (108.2%)
      Ping [ms]:      avg 13.9 - min 13.6 (98.0%), max 14.0 (100.8%)
      Web ping [ms]:  avg 41.1 - min 13.7 (33.3%), max 1085.7 (2639.4%)
    
    US NYC nyc.download.datapacket.com [F: 0]
      DL [Mb/s]:      avg 77.8 - min 69.0 (88.7%), max 82.8 (106.3%)
      Ping [ms]:      avg 81.2 - min 78.7 (96.9%), max 127.1 (156.5%)
      Web ping [ms]:  avg 82.4 - min 79.0 (95.9%), max 127.1 (154.3%)
    
    RO BUC 185.183.99.8 [F: 0]
      DL [Mb/s]:      avg 148.0 - min 131.1 (88.6%), max 165.3 (111.7%)
      Ping [ms]:      avg 36.9 - min 32.2 (87.3%), max 43.1 (116.9%)
      Web ping [ms]:  avg 38.8 - min 33.2 (85.6%), max 79.6 (205.3%)
    
    NL AMS mirror.nl.leaseweb.net [F: 0]
      DL [Mb/s]:      avg 347.6 - min 302.6 (87.0%), max 371.5 (106.9%)
      Ping [ms]:      avg 5.2 - min 5.2 (99.7%), max 5.4 (103.5%)
      Web ping [ms]:  avg 5.8 - min 5.2 (90.3%), max 9.9 (172.0%)
    
    CN HK mirror.hk.leaseweb.net [F: 4]
      DL [Mb/s]:      avg 22.1 - min 0.0 (0.0%), max 27.7 (125.2%)
      Ping [ms]:      avg 239.5 - min 232.5 (97.1%), max 285.1 (119.1%)
      Web ping [ms]:  avg 271.0 - min 237.4 (87.6%), max 311.1 (114.8%)
    
    DE FRA fra.lg.core-backbone.com [F: 0]
      DL [Mb/s]:      avg 359.8 - min 353.6 (98.3%), max 372.4 (103.5%)
      Ping [ms]:      avg 5.2 - min 4.8 (92.8%), max 5.5 (106.4%)
      Web ping [ms]:  avg 5.4 - min 5.0 (92.0%), max 6.1 (112.2%)
    

    Obviously, this DC would be my personal choice; 160 ms round-trip latency can be unnerving in an SSH connection. And yes, of course I love to see all major European targets offering 250 to even 350 Mb/s! The East Coast targets are still quite decent (70 - 80 Mb/s), but Asia is a weak point. Maybe Contabo should talk to Tata to at least get decent connectivity to Singapore, from where the rest of the region can be reached decently.

    But, well, this isn't really a network review; most of us know Contabo's network and its strong and not-so-strong points. Nor is it a processor performance review; the Epycs, unlike the Ryzens, aren't meant for best desktop performance but for really decent server performance, and for at least 95% of servers in actual use one would have a very hard time convincing me that a ca. 350 ProcMem score isn't good enough and that a Ryzen is needed.

    Nope, this is about Contabo's new NVMe product line. I might be wrong, but the way I see it is "it's Contabo, with its pros and a few cons, but now with frighteningly fast drives" - and that they did achieve. Oh boy, did they!
    Very well done, Contabo, and thanks again for your patience with my hardcore torture-testing of your drives.

    Final note: I also have a benchmark running in the Singapore location, but as that one started a bit later I'll append it later, along with the final real-world results, in about a week or two.

  • DP Administrator, The Domain Guy

    :%s/Epyc/EPYC/g

  • Levi Member
    edited August 2021

    Test results can be tainted as you specifically asked Contabo for test servers. They could provide you servers on almost-empty nodes, or lift IOPS limits. Rather, spend your money, complete rigorous testing, and cancel with a refund request and the reasoning behind it. And by the way, this would be a good indication of the provider's billing practices.

    You don't see a Michelin inspector come to a restaurant and introduce himself :)

    P.S. Yes, because of your test results I'm getting a Contabo VPS.

  • aqua Member, Patron Provider

    The health inspector of Contabo VPS's!

  • jsg Member, Resident Benchmarker
    edited August 2021

    @LTniger said:
    Test results can be tainted as you specifically asked Contabo for test servers. They could provide you servers on almost-empty nodes, or lift IOPS limits. Rather, spend your money, complete rigorous testing, and cancel with a refund request and the reasoning behind it. And by the way, this would be a good indication of the provider's billing practices.

    You don't see a Michelin inspector come to a restaurant and introduce himself :)

    P.S. Yes, because of your test results I'm getting a Contabo VPS.

    Not "could provide you servers in almost empty nodes" - they did provide servers on almost empty nodes. Simple reason: as I clearly said, this review is based on pre-launch testing (another review round, based on production testing, will follow in one or two weeks).

    As for the network, I don't expect significant changes between pre-launch and production; after all, we're talking about DCs that are already filled with thousands of nodes. Wrt the processor I do expect a slight decrease in production, that is, on fully occupied nodes.
    But for the major factor, the NVMe drives, I don't expect negative surprises, because in that regard my benchmark absolutely did not run on an empty node but on one where 5 beefy VMs hammered the NVMe really hard.

    If you (not you specifically) want anonymous benchmarks, then you'll have to sponsor me, because I can't be expected to do all the work and then even pay for the VMs.
    That said, I'm quite confident that Contabo didn't cheat - for different reasons, one of them being that in the long run they'd f_ck themselves, another and more important one being that (a) they are not stupid, and (b) their offering test VMs to me is not, or only to a very limited degree, for marketing, but rather for their own purposes, roughly comparable to a "pre-flight test".

  • It's probably a bad thing that there aren't per-VM limits to curb abuse and keep things fairer. If there were limits, doing whatever on 5 other VMs wouldn't impact the server under test, and you'd have your upper performance limit. Instead, it'll be affected by the biggest abusers.

  • @jsg said: So, unless Contabo heavily oversells their nodes (which I assume they do not), you should experience about 3 times the speed of what you get from other providers (some of which use NVMes that are not slow at all either)

    Contabo is the champion of overselling.

    @jsg said: Drive TL;DR: This is the kind of VM you definitely want for databases, heavily dynamic web sites, and the like!

    Your NVMe won't help much when you have 50% to 75% steal and the CPU is the bottleneck.

    I am not saying Contabo is bad - I used them for many months without any downtime, their network was alright, and SSD speeds were acceptable - but I have always had > 50% steal, whether it was Germany, St. Louis, or the new Seattle location. Even moving my VPS didn't help. If you care about CPU performance, look elsewhere, or at their VDS line.

  • Saahib Host Rep, Veteran

    Checking out Contabo in another tab.

    @jsg .. is this the review that caused you to curse that Cloudf*** shit?

  • jsg Member, Resident Benchmarker

    @TimboJones said:
    It's probably a bad thing that there aren't per-VM limits to curb abuse and keep things fairer. If there were limits, doing whatever on 5 other VMs wouldn't impact the server under test, and you'd have your upper performance limit. Instead, it'll be affected by the biggest abusers.

    You are probably right, but as a benchmarker it's not my job to discuss provider politics (nor could I do so successfully).

    @itsnotv said:
    Contabo is the champion of overselling.

    Any tangible, credible evidence? Don't get me wrong, but @itsnotv saying so isn't good enough, especially when I do not see that on my (private, paid-for) VPSs.

    @Saahib said:
    @jsg .. is this the review that caused you to curse that Cloudf*** shit?

    Yes. CF made it basically impossible for me to put this review up. I finally succeeded only thanks to @jbiloh helping me.

  • Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

  • jsg Member, Resident Benchmarker

    @laoban said:
    Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

    Thank you, but it's not a script - it's a compiled program (freeware, but not open source).

  • jbiloh Administrator, Veteran

    Great contribution to the community, thanks @jsg

  • Yeah, contabo. Totally not overselling the CPU.

  • jsg Member, Resident Benchmarker
    edited August 2021

    @JabJab said:
    Yeah, contabo. Totally not overselling the CPU.

    I suggest you learn about the difference between a VPS and a VDS.

    Quick and short explanation: 'steal' reflects the OS's view, which assumes an OS vCPU is a full HWT (hardware thread). With a VDS that is what you get: a full vCore as seen by the OS.
    A VPS's vCore, however, is a shared HWT, typically 25% or, if you are lucky, 33% of a HWT (and even less than 25% with some products/providers).

    top and similar utilities report from the OS's perspective, not from the VPS perspective, or, to be more precise, they assume that a vCore is a full vCore/HWT. VPSs, however, are "fair share", that is, a provider packs e.g. 3 VPS vCores onto one HWT and then talks about something like "2 vCPU, fair share 33%" - and if on such a VPS 'steal' shows even 60%, that is normal and to be expected.

    You don't like that, you want a full HWT for yourself? No problem, just buy a VDS.
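
    To make 'steal' concrete: on Linux it is simply the 8th cpu counter in /proc/stat, and the percentage top shows is that counter's share of total CPU time. A minimal illustrative sketch (Linux-only, field layout per proc(5); the 5-second window is arbitrary):

    #include <stdio.h>
    #include <unistd.h>

    /* read the aggregate cpu line: user nice system idle iowait irq softirq steal */
    static int read_cpu(unsigned long long v[8]) {
        FILE *f = fopen("/proc/stat", "r");
        if (!f) return -1;
        int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                       &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
        fclose(f);
        return n == 8 ? 0 : -1;
    }

    int main(void) {
        unsigned long long a[8], b[8], total = 0;
        if (read_cpu(a)) return 1;
        sleep(5);                          /* sample window */
        if (read_cpu(b)) return 1;
        for (int i = 0; i < 8; i++) total += b[i] - a[i];
        /* steal is index 7; on a "fair share" vCore a high value is expected */
        printf("steal: %.1f%% of CPU time\n",
               total ? 100.0 * (b[7] - a[7]) / total : 0.0);
        return 0;
    }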

  • cybertech Member
    edited August 2021

    @jsg said:

    @JabJab said:
    Yeah, contabo. Totally not overselling the CPU.

    I suggest you learn about the difference between a VPS and a VDS.

    Quick and short explanation: 'steal' reflects the OS's view, which assumes an OS vCPU is a full HWT (hardware thread). With a VDS that is what you get: a full vCore as seen by the OS.
    A VPS's vCore, however, is a shared HWT, typically 25% or, if you are lucky, 33% of a HWT (and even less than 25% with some products/providers).

    top and similar utilities report from the OS's perspective, not from the VPS perspective, or, to be more precise, they assume that a vCore is a full vCore/HWT. VPSs, however, are "fair share", that is, a provider packs e.g. 3 VPS vCores onto one HWT and then talks about something like "2 vCPU, fair share 33%" - and if on such a VPS 'steal' shows even 60%, that is normal and to be expected.

    You don't like that, you want a full HWT for yourself? No problem, just buy a VDS.

    I suggest you read what "st" in top means. Can't understand what "fair share" means? No problem. He can.

    edit: oops, sorry for shitting in your thread. Will take it outside next time.

  • Contabo has been amazing since their expansion to the STL region (<15 ms latency to my home). What I like most is their fast and free snapshots for the whole VPS - extremely handy.

  • TimboJones Member
    edited August 2021

    @jsg said:

    @JabJab said:
    Yeah, contabo. Totally not overselling the CPU.

    I suggest you learn about the difference between a VPS and a VDS.

    Quick and short explanation: 'steal' reflects the OS's view, which assumes an OS vCPU is a full HWT (hardware thread). With a VDS that is what you get: a full vCore as seen by the OS.
    A VPS's vCore, however, is a shared HWT, typically 25% or, if you are lucky, 33% of a HWT (and even less than 25% with some products/providers).

    top and similar utilities report from the OS's perspective, not from the VPS perspective, or, to be more precise, they assume that a vCore is a full vCore/HWT. VPSs, however, are "fair share", that is, a provider packs e.g. 3 VPS vCores onto one HWT and then talks about something like "2 vCPU, fair share 33%" - and if on such a VPS 'steal' shows even 60%, that is normal and to be expected.

    You don't like that, you want a full HWT for yourself? No problem, just buy a VDS.

    Regardless, if one ssh's into their VPS and top reports those numbers, the server is shit and not performing well.

    Out of 20+ VMs, the ones with 2, 5, and 7 for those numbers are noticeably shittier than lower-resource VPSs. It's like, "shit, why is this so slow?" *checks top* "ah, ok".

    Also, why even talk about a VDS when that isn't the product being reviewed? (You probably should summarize the DUT at the start.) Anyway, it's irrelevant, since nobody is talking about needing to use 100% 24/7; people just want a responsive server, not a laggy one.

  • @laoban said:
    Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

    He no longer makes it available to the public because I (and maybe others) question the validity of the results, and releasing it publicly would allow others to prove or disprove his results and their reproducibility.

  • JabJab Member
    edited August 2021

    @TimboJones said: Regardless, if one ssh's into their VPS and top reports those numbers, the server is shit and not performing well.

    You sure that means server is shit? :D

    Imagine how long it took to log into that server and start top.

    Fair share, not used cores for bursting, yeah right :D

  • @JabJab said:

    @TimboJones said: Regardless, if one ssh's into their VPS and top reports those numbers, the server is shit and not performing well.

    You sure that means server is shit? :D

    Imagine how long it took to log into that server and start top.

    Fair share, not used cores for bursting, yeah right :D

    If it takes more than 2 seconds for the bash prompt after sshing in, a keyboard is getting destroyed.

  • jsg Member, Resident Benchmarker

    @TimboJones said:

    @laoban said:
    Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

    He no longer makes it available to the public because I (and maybe others) question the validity of the results, and releasing it publicly would allow others to prove or disprove his results and their reproducibility.

    What a pile of BS and lies!

    The core algorithms are still the same as in v1, for which I made the source code available. The result of all the "we need the source code!!!" virtue signalling? Fewer than 3 source-code downloads and zero feedback, positive or negative.

  • @jsg said:

    @TimboJones said:

    @laoban said:
    Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

    He no longer makes it available to the public because I (and maybe others) question the validity of the results, and releasing it publicly would allow others to prove or disprove his results and their reproducibility.

    What a pile of BS and lies!

    The core algorithms are still the same as in v1, for which I made the source code available. The result of all the "we need the source code!!!" virtue signalling? Fewer than 3 source-code downloads and zero feedback, positive or negative.

    I didn't mention the source code, and the topic was providing your script/program, not the source code. I don't know what the fuck you're responding to. It's like you're deliberately responding to something else entirely.

    You didn't even prove I was lying by providing the program used for this review.

    I still have the v1 programs if someone wants to try them on their VPS and see random numbers. It's just a waste of time. Anyone can search your old threads and see my posts and screenshots from the app.

    (Also, you 100% got feedback. Lots of negative from me, and another user asked you about the code specifically).

  • jsg Member, Resident Benchmarker

    @TimboJones said:

    @jsg said:

    @TimboJones said:

    @laoban said:
    Great review. Would you like to include your benchmarking script so we can use it on other providers' servers to compare?

    He no longer makes it available to the public because I (and maybe others) question the validity of the results, and releasing it publicly would allow others to prove or disprove his results and their reproducibility.

    What a pile of BS and lies!

    The core algorithms are still the same as in v1, for which I made the source code available. The result of all the "we need the source code!!!" virtue signalling? Fewer than 3 source-code downloads and zero feedback, positive or negative.

    I didn't mention the source code and the topic was providing your script/program, not the source code. I don't know what the fuck you're responding to. It's like you're deliberately responding to something else, entirely.

    You didn't even prove I was lying by providing the program used for this review.

    I still have the v1 programs if someone wants to try it on their VPS and see random numbers. It's just a waste of time. Anyone can search your old threads and see my posts and screenshots from the app.

    (Also, you 100% got feedback. Lots of negative from me, and another user asked you about the code specifically).

    Not only is the binary available, but I in fact even created a Windows version for a LET user who asked for it.

    And no, I did not get feedback re the source code - zero, none. I got feedback re the benchmarks/reviews I did with my software: some of it - yours - utterly clueless but consistently salty, occasionally negative from some others, and mostly positive.

    While I'm at it, let me make it clear: I had and have multiple providers - small, mid-size, and large - expressly asking me to benchmark their products, sometimes even just for their own internal use. Also, some authors of other benchmarks (scripts, in their case) have publicly declared interest in my providing a library to them, e.g. for disk testing, because, as at least one of them honestly said, he doesn't really know much about it. Plus, I have been invited to be LET/LEB's official benchmarker and have done plenty of benchmarks as LET's benchmarker.

    To make it clear: I'm absolutely ready to stop doing and publishing my benchmarks and reviews if that is what LET wants - if they prefer your clueless trash talking and bashing over what I have done and am doing for LET/LEB.

    In fact, I'm tagging @jbiloh and @raindog308 who also happen to know about certain agreements I happened to honour and you happened to break - again.

  • @jsg said:
    In fact, I'm tagging @jbiloh and @raindog308 who also happen to know about certain agreements I happened to honour and you happened to break - again.

    That's true. I didn't think I was starting anything new, just restating the past, but I'll bow out now.

  • jsg Member, Resident Benchmarker

    @TimboJones said:

    @jsg said:
    In fact, I'm tagging @jbiloh and @raindog308 who also happen to know about certain agreements I happened to honour and you happened to break - again.

    That's true. I didn't think I was starting anything new, just restating the past, but I'll bow out now.

    No.

    Taking a big dump on and fervently bashing me and my work again, then quickly "bowing out", doesn't cut it anymore, because you have clearly demonstrated that you don't stick to agreements, nor do you treat a hand reached out to you in friendship decently.

    Sorry, this time I want a clear and binding solution, because with you, generosity, patience, and even friendliness just lead to yet another dump attack sooner or later - as you just demonstrated again.

  • Arkas Moderator

    Time out, guys. I for one appreciate what @jsg has done thus far; it is a very positive service. There is no need to continue your disagreements, as it's obvious that you don't agree with each other. I'm following this thread, and it has derailed into a personal feud. Enough.

  • @Arkas said:
    Time out, guys. I for one appreciate what @jsg has done thus far; it is a very positive service. There is no need to continue your disagreements, as it's obvious that you don't agree with each other. I'm following this thread, and it has derailed into a personal feud. Enough.

    Things have only just begun :)

  • Lol,
    TL;DR: somebody PMS-ing..

  • dufu Member

    Can someone confirm that nested virtualization (the svm CPU flag on AMD) is indeed enabled on the production systems? @jsg mentioned the hypervisor flag is enabled on his test systems, but I'm not sure that's the same as the svm and vmx flags?

    Contabo support told me nested virtualization is not possible. Maybe they are misinformed?
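
    A quick way to check from inside a guest, btw: the hypervisor bit only says "this OS runs under a hypervisor"; nested virtualization additionally needs svm (AMD) or vmx (Intel) exposed to the guest. Here is an illustrative sketch, assuming an x86 guest and gcc/clang - not an official tool:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 31 = hypervisor present, bit 5 = vmx */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("hypervisor : %s\n", (ecx >> 31) & 1 ? "yes" : "no");
            printf("vmx (Intel): %s\n", (ecx >> 5)  & 1 ? "yes" : "no");
        }
        /* CPUID leaf 0x80000001: ECX bit 2 = svm (AMD) */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            printf("svm (AMD)  : %s\n", (ecx >> 2)  & 1 ? "yes" : "no");
        }
        return 0;
    }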
