
Ramnode doing NVMe now


Comments

  • greattomeetyou
    edited April 2018

    Nick_A said: We've seen up to 2.1GB/s dd within test guests on an empty node. fio writes just under 3000MB/s. Obviously, speed will vary depending on node usage, caching, etc. I would just expect it to be generally faster than standard SSD.

    Is server NVMe the same as desktop NVMe in a single-drive configuration, or does server NVMe have extra PCIe lanes to make it faster?
    Can a server RAID NVMe drives to double R/W performance?
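
    For reference, numbers like the ones Nick_A quotes usually come from invocations along these lines (a rough sketch; the file names and sizes here are guesses, not RamNode's actual test):

    # sequential write with dd; oflag=direct bypasses the page cache
    dd if=/dev/zero of=ddtest.bin bs=1M count=1024 oflag=direct
    # sequential write with fio: 1 GiB file, direct I/O
    fio --name=seqwrite --rw=write --bs=1M --size=1G --direct=1 --filename=fiotest.bin
    rm -f ddtest.bin fiotest.bin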

  • Harambe Member, Host Rep

    greattomeetyou said: Can a server RAID NVMe drives to double R/W performance?

    I'd assume they're running them in RAID 1 or 10.

  • jackb Member, Host Rep

    @greattomeetyou said:

    Nick_A said: We've seen up to 2.1GB/s dd within test guests on an empty node. fio writes just under 3000MB/s. Obviously, speed will vary depending on node usage, caching, etc. I would just expect it to be generally faster than standard SSD.

    Is server NVMe the same as desktop NVMe in a single-drive configuration, or does server NVMe have extra PCIe lanes to make it faster?
    Can a server RAID NVMe drives to double R/W performance?

    I doubt it will double single-drive performance (it's already so high that I suspect both hardware and software RAID would have a hard time extracting 2x from it). But yes, you can use NVMe drives in RAID, and I'd be surprised if RamNode weren't.
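
    For illustration, software RAID 10 across four NVMe drives would look roughly like this (the device names and filesystem are assumptions, not RamNode's actual setup):

    # assemble four NVMe namespaces into a RAID 10 array (hypothetical device names)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    mkfs.ext4 /dev/md0
    # watch the initial sync progress
    cat /proc/mdstat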

  • FHR Member, Host Rep

    @Clouvider said:

    @FHR said:

    Nick_A said: caching

    You are caching NVMe? Is it even worth it?

    Ram’s still faster, to start with ;-).

    Much more expensive though, especially now.

    greattomeetyou said: does server NVMe have extra PCIe lanes to make it faster

    Server-grade NVMe can actually be slower. But it has much greater durability (DWPD).
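
    If you're curious how much endurance a drive has burned through, nvme-cli will report it; a quick sketch, assuming the nvme-cli package and a /dev/nvme0 device:

    # percentage_used tracks consumed rated endurance; data_units_written feeds DWPD math
    nvme smart-log /dev/nvme0 | grep -E 'percentage_used|data_units_written'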

  • edited April 2018

    So are people not bothering with SW RAID on this stuff? What happens when the drives fail? Do they just turn read-only, allowing a backup? Of course, offline backups are done just like with HDDs.

  • Nick_A Member, Top Host, Host Rep

    NL is available now - I believe some of you were waiting for that.

  • Adam1 Member

    Would one of these generally help with DB performance? I'm currently on a RamNode OpenVZ premium plan and I/O is sometimes a bottleneck. Will RamNode's NVMe IOPS be better than whatever SSDs are in their (presumably) SATA SSD VPSes?

  • Nick_A Member, Top Host, Host Rep

    @Adam1 said:
    Would one of these generally help with DB performance? I'm currently on a RamNode OpenVZ premium plan and I/O is sometimes a bottleneck. Will RamNode's NVMe IOPS be better than whatever SSDs are in their (presumably) SATA SSD VPSes?

    Yep!

  • Jones Member

    @Nick_A

    LA is available now???

  • Nick_A Member, Top Host, Host Rep

    @Jones said:
    @Nick_A

    LA is available now???

    Not yet, sorry.

  • MikeA Member, Patron Provider

    @Adam1 said:
    Would one of these generally help with DB performance? I'm currently on a RamNode OpenVZ premium plan and I/O is sometimes a bottleneck. Will RamNode's NVMe IOPS be better than whatever SSDs are in their (presumably) SATA SSD VPSes?

    It does, but it probably wouldn't be noticed with most apps.
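
    If you want to check whether your workload would notice, a 4K random-I/O fio run approximates database access patterns reasonably well; a sketch (the file name, size, and read/write mix are arbitrary):

    # 70/30 random read/write at 4K, direct I/O, queue depth 16
    fio --name=dbtest --rw=randrw --rwmixread=70 --bs=4k --size=512M \
        --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based \
        --filename=dbtest.bin
    rm -f dbtest.bin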

  • willie Member
    edited May 2018

    I had a RamNode OpenVZ 128MB premium SSD plan for many years (finally cancelled because of too many idlers), and it was always very speedy while I had it. Definitely fast enough for any sort of light-access database use. For heavier duty, yeah, you want something heavier duty.

  • sin Member

    I picked up one of their NVMe VPSes last night and the performance is pretty damn good! RamNode just enabled CPU passthrough for me (so I could have AES-NI), and the CPU cores are 3.7GHz.
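
    To confirm AES-NI is actually exposed to a FreeBSD guest, the saved boot messages are the quickest check; a small sketch:

    # AESNI shows up in the CPU Features2 line when passthrough exposes it
    grep -i 'Features2=.*AESNI' /var/run/dmesg.boot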

    Here's a quick FreeBSD diskinfo test:

    root@freebsd:~ # diskinfo -cti /dev/vtbd0
    /dev/vtbd0
        512             # sectorsize
        26843545600     # mediasize in bytes (25G)
        52428800        # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        52012           # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
                        # Disk ident.
    
    I/O command overhead:
        time to read 10MB block      0.013958 sec   =    0.001 msec/sector
        time to read 20480 sectors   2.374770 sec   =    0.116 msec/sector
        calculated command overhead         =    0.115 msec/sector
    
    Seek times:
        Full stroke:      250 iter in   0.022587 sec =    0.090 msec
        Half stroke:      250 iter in   0.029474 sec =    0.118 msec
        Quarter stroke:   500 iter in   0.062339 sec =    0.125 msec
        Short forward:    400 iter in   0.047338 sec =    0.118 msec
        Short backward:   400 iter in   0.050634 sec =    0.127 msec
        Seq outer:   2048 iter in   0.111607 sec =    0.054 msec
        Seq inner:   2048 iter in   0.106352 sec =    0.052 msec
    
    Transfer rates:
        outside:       102400 kbytes in   0.094869 sec =  1079383 kbytes/sec
        middle:        102400 kbytes in   0.093738 sec =  1092406 kbytes/sec
        inside:        102400 kbytes in   0.089908 sec =  1138942 kbytes/sec
    
    Asynchronous random reads:
        sectorsize:    137690 ops in    3.000539 sec =    45888 IOPS
        4 kbytes:      134037 ops in    3.000558 sec =    44671 IOPS
        32 kbytes:     127723 ops in    3.000570 sec =    42566 IOPS
        128 kbytes:     90931 ops in    3.006833 sec =    30241 IOPS
    
  • yomero Member

    sin said: Here's a quick FreeBSD diskinfo test:

    Pretty interesting tool! That's new here =) Thanks!

  • sin Member
    edited May 2018

    Here are some tests done from Debian 9.4 (using a backported kernel):

    root@ram:~# ioping . -c 10
    4 KiB <<< . (ext4 /dev/vda1): request=1 time=110.9 us (warmup)
    4 KiB <<< . (ext4 /dev/vda1): request=2 time=231.7 us
    4 KiB <<< . (ext4 /dev/vda1): request=3 time=219.0 us
    4 KiB <<< . (ext4 /dev/vda1): request=4 time=246.0 us
    4 KiB <<< . (ext4 /dev/vda1): request=5 time=211.4 us
    4 KiB <<< . (ext4 /dev/vda1): request=6 time=243.3 us
    4 KiB <<< . (ext4 /dev/vda1): request=7 time=232.8 us
    4 KiB <<< . (ext4 /dev/vda1): request=8 time=213.4 us
    4 KiB <<< . (ext4 /dev/vda1): request=9 time=228.9 us
    4 KiB <<< . (ext4 /dev/vda1): request=10 time=216.7 us
    
    --- . (ext4 /dev/vda1) ioping statistics ---
    9 requests completed in 2.04 ms, 36 KiB read, 4.40 k iops, 17.2 MiB/s
    generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
    min/avg/max/mdev = 211.4 us / 227.0 us / 246.0 us / 11.9 us
    
    root@ram:~# ioping -RL /dev/vda1
    
    --- /dev/vda1 (block device 23.0 GiB) ioping statistics ---
    14.6 k requests completed in 2.96 s, 3.56 GiB read, 4.93 k iops, 1.20 GiB/s
    generated 14.6 k requests in 3.00 s, 3.56 GiB, 4.86 k iops, 1.19 GiB/s
    min/avg/max/mdev = 166.0 us / 202.9 us / 4.44 ms / 106.8 us
    root@ram:~# ioping -R /dev/vda1
    
    --- /dev/vda1 (block device 23.0 GiB) ioping statistics ---
    25.9 k requests completed in 2.96 s, 101.1 MiB read, 8.76 k iops, 34.2 MiB/s
    generated 25.9 k requests in 3.00 s, 101.1 MiB, 8.63 k iops, 33.7 MiB/s
    min/avg/max/mdev = 34.8 us / 114.2 us / 2.65 ms / 34.5 us
    
    root@ram:~# hdparm -Tt /dev/vda1
    
    /dev/vda1:
     Timing cached reads:   22950 MB in  1.99 seconds = 11512.35 MB/sec
     Timing buffered disk reads: 5334 MB in  3.00 seconds = 1777.46 MB/sec
    
    root@ram:~# wget https://freevps.us/downloads/bench.sh -O - -o /dev/null|bash
    Benchmark started on Sun May 20 14:42:33 EDT 2018
    Full benchmark log: /root/bench.log
    
    System Info
    -----------
    Processor   : Intel(R) Xeon(R) CPU E3-1240 v6 @ 3.70GHz
    CPU Cores   : 2
    Frequency   : 3696.048 MHz
    Memory      : 1997 MB
    Swap        :  MB
    Uptime      : 7 min,
    
    OS      : Debian GNU/Linux 9
    Arch        : x86_64 (64 Bit)
    Kernel      : 4.16.0-0.bpo.1-amd64
    Hostname    : xxx
    
    
    Speedtest (IPv4 only)
    ---------------------
    Your public IPv4 is x.x.x.x
    
    Location        Provider    Speed
    CDN         Cachefly    106MB/s
    
    Atlanta, GA, US     Coloat      111MB/s 
    Dallas, TX, US      Softlayer   52.1MB/s 
    Seattle, WA, US     Softlayer   25.0MB/s 
    San Jose, CA, US    Softlayer   28.9MB/s 
    Washington, DC, US  Softlayer   45.3MB/s 
    
    Tokyo, Japan        Linode      4.80MB/s 
    Singapore       Softlayer   8.12MB/s 
    
    Rotterdam, Netherlands  id3.net     7.05MB/s
    Haarlem, Netherlands    Leaseweb    34.3MB/s 
    
    
    Disk Speed
    ----------
    I/O (1st run)   : 1.8 GB/s
    I/O (2nd run)   : 1.8 GB/s
    I/O (3rd run)   : 1.8 GB/s
    Average I/O : 1.8 GB/s
    

    And last but not least, here's a Geekbench.

    I really like this RamNode NVMe VPS, def a keeper :-).

  • Sofia_K Member
    edited May 2018

    OMG, the CPU cores are 3.7GHz! That's terrific. Will try them.

  • sin Member

    @Sofia_K said:
    OMG, the CPU cores are 3.7GHz! That's terrific. Will try them.

    Yeah it's not just the disks that are blazing fast, they give you some fast cores too!

  • sin Member

    Whelp, I ended up migrating my websites from a Vultr NJ VPS to my RamNode NVMe VPS in ATL, and I couldn't be happier! The performance is really freaking good for $12/month. The only issue I came across is that, for some reason, my RamNode IPv6 can't reach any OVH IPv6 addresses (I discovered this when I tried ipv6-test.com, which is hosted at OVH, then ran mtr -6 and ping6 between my RamNode ATL VPS and my Kimsufi server), but it works fine with every other IPv6 network.
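
    For anyone hitting something similar, the checks were roughly these (using the OVH-hosted test site as the target; any OVH v6 address would do):

    # trace the IPv6 path toward the OVH-hosted target
    mtr -6 --report ipv6-test.com
    # plain reachability check
    ping6 -c 4 ipv6-test.com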

    I'm probably going to pick up a RamNode NVMe VPS in NY next and maybe set up a DragonFly BSD server there.

  • Nick_A Member, Top Host, Host Rep

    @sin send us a ticket with the IP range you're trying to reach. I bet I know what the problem is.

  • sin Member

    @Nick_A said:
    @sin send us a ticket with the IP range you're trying to reach. I bet I know what the problem is.

    Done, Ticket #190102 - thank you :-).

  • JoeMerit Veteran

    @sin what plan did you come from at Vultr?

  • sin Member
    edited May 2018

    @JoeMerit said:
    @sin what plan did you come from at Vultr?

    I was using a few $10 VPS plans at Vultr and was in the middle of setting up a new one there to consolidate some of them when I saw the post here saying RamNode had new NVMe plans. I hadn't used RamNode in a long time, so I figured I'd give their new NVMe VPSes a try, and I was so impressed with the performance that I decided to set up my new webserver on a RamNode NVMe VPS instead.

    I usually use FreeBSD on most of my servers (currently testing out OpenBSD and DragonFly BSD on some others), so I was happy to see RamNode had all the latest BSD ISOs available too :-).
