
servaRICA - Black Friday 2024 - Dedicated servers, Unified Plans and Storage, Incredible


Comments

  • servarica_hani Member, Patron Provider

    @sitss said:

    @servarica_hani said:

    @sitss said:
    @servarica_hani Can I do an early renewal of my existing VDSs and upgrade/change to this promo? Thanks

    Open a ticket; most likely we will give you a new VPS and you handle the data transfer.

    Can this be done through a temp snapshot?

    Honestly, the team will be able to suggest the best solution;
    they usually know more about these processes than I do.

  • servarica_hani Member, Patron Provider

    @nick_ said:
    Fat and Slim plans differ only in storage size, while the hardware is exactly the same? If so, the 16GB Slim plan is an 8GB Fat plan killer: $2 more for 2x the RAM, CPU, and bandwidth.

    Yeah, the only difference is the NVMe slice storage; everything else is the same.
    And yes, you can find some configs that are much cheaper than others depending on your needs :)

    Thanked by 1 nick_
  • servarica_hani Member, Patron Provider

    @zed3473 said:
    Opossum 1: 1GB RAM, 1 core, 1TB HDD, 4TB @ 1Gbps, 1 IPv4 = $29/year (Order Here)
    How is excess traffic calculated, and is there a speed limit?

    Yes, we will switch the port to 10 Mbps once the included traffic is used up.
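    To keep an eye on that quota from inside the VPS, a minimal sketch using vnstat (assumptions: the vnstat daemon is installed, e.g. via apt install vnstat, and the interface is named eth0 — check yours with ip link):

    # Monthly totals per interface, to compare against the plan's traffic quota
    vnstat -m
    # Daily breakdown for one interface
    vnstat -i eth0 -d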

  • servarica_hani Member, Patron Provider

    @MaxTakeba said:

    Just need to get AMD-V turned on, which I've already sent a ticket for. Not bad to be honest... I hate Xen, and I have a feeling the NVMe is being held back a little (by the software RAID in the VM, on top of the overhead of it being in a VM alongside everyone else).

    You are correct 100%.
    Xen is great for many features that are not available on KVM out of the box,
    but disk performance is not its strong point.

    The NVMe drives on the server are rated at 1.21M read IOPS and 480K write IOPS,
    and in some cases we run 4 or 5 VMs per server,
    but a VM cannot do a fraction of that.

    To be honest, in real-world situations we have never seen an application run much slower due to disk speed on NVMe,
    so it seems it is only in YABS that the numbers do not look good.

    That being said, we are working on a project to rewrite a good chunk of the storage code in Xen, but it is not an easy task, so we don't have an ETA for it yet.
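    For anyone curious to measure that gap themselves, a minimal fio sketch for 4k random-read IOPS inside the VM (assumptions: fio is installed and there is ~2GB of free space in the current directory; the job name is arbitrary):

    # 4k random reads with direct I/O (bypasses the page cache);
    # compare the reported IOPS with the drive's rated figures
    fio --name=iops-probe --ioengine=libaio --direct=1 --rw=randread \
        --bs=4k --size=2g --numjobs=1 --iodepth=32 --runtime=60 \
        --time_based --group_reporting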

  • servarica_hani Member, Patron Provider

    @smajl said:
    Is it possible to add slices later on?

    Yes, as long as we have stock.
    Within the next 2 or 3 weeks I would say we will definitely have some, but if you are talking about 1 year from now, then it depends on availability at that time.

  • servarica_hani Member, Patron Provider

    @cotc said:

    @itsTomHarper said:

    @cotc said:
    YABS for Unified Slim Plan 8 Slice

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/mapper/unified--nvme-main):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 24.01 MB/s    (6.0k) | 166.59 MB/s   (2.6k)
    Write      | 24.02 MB/s    (6.0k) | 167.47 MB/s   (2.6k)
    Total      | 48.03 MB/s   (12.0k) | 334.07 MB/s   (5.2k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 239.15 MB/s    (467) | 244.85 MB/s    (239)
    Write      | 251.86 MB/s    (491) | 261.15 MB/s    (255)
    Total      | 491.02 MB/s    (958) | 506.00 MB/s    (494)

    Isn't the disk supposed to be NVMe?

    Yep, it's slow. The disk speed test results are very similar to HDD storage VPSes from other providers.

    @servarica_hani should I open a ticket?

    Yes, open a ticket, but don't expect the same numbers as a KVM-based VPS; Xen does not get the same numbers, although the hardware here is excellent.

    We are working on a permanent fix for this, but it is still in very early stages.

  • servarica_hani Member, Patron Provider

    @churongcon said:
    Hello, I bought SSD NVMe but it shows an HDD disk.

    The disk is NVMe, but I am not sure how Windows will show it: in Xen, the drive you see in Windows is called tapdisk, which is a software layer between the actual NVMe and your Windows VPS.
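    On a Linux guest you can check how the virtual disk presents itself to the kernel; a minimal sketch (xvda is the usual Xen virtual disk name, an assumption — verify with lsblk first):

    # ROTA=1 means the kernel reports the disk as rotational (HDD-like),
    # ROTA=0 means non-rotational (SSD/NVMe-like)
    lsblk -d -o NAME,ROTA,SIZE,MODEL
    cat /sys/block/xvda/queue/rotational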

  • servarica_hani Member, Patron Provider

    @RayRedd said:
    Is there any offer of free additional bandwidth for dedicated servers?

    Which server are you looking at?
    For the Greyhound servers we can switch to unlimited 1 Gbps if that is of interest to the users here.

  • The only thing I need from the new Unified package is nested virtualization. Can it be enabled or not? Thx!

  • Until when will this offer be available? And are the cores dedicated (can I do 100% 24/7 using Plex, for example)?

    Thank you

  • @servarica_hani said:

    @MaxTakeba said:

    Just need to get AMD-V turned on, which I've already sent a ticket for. Not bad to be honest... I hate Xen, and I have a feeling the NVMe is being held back a little (by the software RAID in the VM, on top of the overhead of it being in a VM alongside everyone else).

    You are correct 100%.
    Xen is great for many features that are not available on KVM out of the box,
    but disk performance is not its strong point.

    The NVMe drives on the server are rated at 1.21M read IOPS and 480K write IOPS,
    and in some cases we run 4 or 5 VMs per server,
    but a VM cannot do a fraction of that.

    To be honest, in real-world situations we have never seen an application run much slower due to disk speed on NVMe,
    so it seems it is only in YABS that the numbers do not look good.

    That being said, we are working on a project to rewrite a good chunk of the storage code in Xen, but it is not an easy task, so we don't have an ETA for it yet.

    Is Xen worth keeping? You're probably right, I wouldn't see the inefficiency issues. But statistically speaking, the NVMe disks in that YABS test were performing like a SAS SSD would.

    Plus, the way OS installers see the NVMe storage as 64GB blocks (which I need to combine into one volume with LVM) really threw me off when I manually reinstalled Debian (sorry, I don't trust templates, it's just how I roll).

    Hyper-V is just as bad with storage performance.
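    For reference, combining those 64GB slices into one volume with LVM looks roughly like the following — a minimal sketch, assuming the slices appear as /dev/xvdb through /dev/xvde (verify with lsblk) and borrowing the unified-nvme/main naming visible in the YABS output above:

    # Register each slice as an LVM physical volume
    pvcreate /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
    # Pool the slices into a single volume group
    vgcreate unified-nvme /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
    # Carve one logical volume spanning all free space, then format it
    lvcreate -l 100%FREE -n main unified-nvme
    mkfs.ext4 /dev/unified-nvme/main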

  • servarica_hani Member, Patron Provider

    @seedmonster said:
    The only thing I need from the new Unified package is nested virtualization. Can it be enabled or not? Thx!

    Unfortunately, we ran a test and there are exceptions in the hypervisor.
    We are debugging, but so far nested virtualization does not work.
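    A quick way to check from inside a Linux guest whether the virtualization extensions are exposed at all — a minimal sketch:

    # Count vmx (Intel VT-x) / svm (AMD-V) flags in the guest's CPU info;
    # 0 means nested virtualization is not available to this VM
    grep -c -E '(vmx|svm)' /proc/cpuinfo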

  • servarica_hani Member, Patron Provider

    @sanvit said:
    Until when will this offer be available? And are the cores dedicated (can I do 100% 24/7 using Plex, for example)?

    Thank you

    While quantities last.
    Within the next 3 weeks I think we will have stock for sure;
    after that, it depends on how many orders we get and whether we reorder new hardware.

  • dev_vps Member
    edited November 2024

    @servarica_hani said:

    @churongcon said:
    Hello, I bought SSD NVMe but it shows an HDD disk.

    The disk is NVMe, but I am not sure how Windows will show it: in Xen, the drive you see in Windows is called tapdisk, which is a software layer between the actual NVMe and your Windows VPS.

    For Windows Server, Hyper-V and KVM virtualization are better than Xen.

  • servarica_hani Member, Patron Provider

    @MaxTakeba said:

    @servarica_hani said:

    @MaxTakeba said:

    Just need to get AMD-V turned on, which I've already sent a ticket for. Not bad to be honest... I hate Xen, and I have a feeling the NVMe is being held back a little (by the software RAID in the VM, on top of the overhead of it being in a VM alongside everyone else).

    You are correct 100%.
    Xen is great for many features that are not available on KVM out of the box,
    but disk performance is not its strong point.

    The NVMe drives on the server are rated at 1.21M read IOPS and 480K write IOPS,
    and in some cases we run 4 or 5 VMs per server,
    but a VM cannot do a fraction of that.

    To be honest, in real-world situations we have never seen an application run much slower due to disk speed on NVMe,
    so it seems it is only in YABS that the numbers do not look good.

    That being said, we are working on a project to rewrite a good chunk of the storage code in Xen, but it is not an easy task, so we don't have an ETA for it yet.

    Is Xen worth keeping? You're probably right, I wouldn't see the inefficiency issues. But statistically speaking, the NVMe disks in that YABS test were performing like a SAS SSD would.

    Plus, the way OS installers see the NVMe storage as 64GB blocks (which I need to combine into one volume with LVM) really threw me off when I manually reinstalled Debian (sorry, I don't trust templates, it's just how I roll).

    Hyper-V is just as bad with storage performance.

    Hyper-V and XCP have very similar storage stacks, so I wouldn't be surprised if they perform the same.

    It is the other tooling and the stability that sell it for us;
    we have some VMs up for years with zero issues.
    In the past, before the new generations of NVMe, disk performance was not that big of a deal, so we were OK with it, as the difference was only around 20%.
    Now, with many NVMe drives reaching more than 1M IOPS, this has started to become a big issue, and we need to fix it or even switch to KVM for NVMe-based VMs.

    Thanked by 1 MaxTakeba
  • servarica_hani Member, Patron Provider

    @dev_vps said:

    @servarica_hani said:

    @churongcon said:
    Hello, I bought SSD NVMe but it shows an HDD disk.

    The disk is NVMe, but I am not sure how Windows will show it: in Xen, the drive you see in Windows is called tapdisk, which is a software layer between the actual NVMe and your Windows VPS.

    For Windows Server, Hyper-V and KVM virtualization are better than Xen.

    @churongcon
    If you install Windows yourself, make sure to install the XCP tools.
    Ask the team to mount the ISO there;
    it will give you better drivers for network and disk.
    You will see the difference.

    Thanks

    Thanked by 1 dev_vps
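    On Windows the XCP tools ship as an installer on that ISO; on a Linux guest the equivalent looks roughly like this — a minimal sketch, assuming the guest-tools ISO has been attached by the host team and appears as /dev/cdrom:

    # Mount the guest-tools ISO and run the bundled installer script
    mount /dev/cdrom /mnt
    /mnt/Linux/install.sh
    umount /mnt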
  • @discocat said:

    @slantgolf said:
    @servarica_hani and others. :0
    It seems that servarica's website is different from what was posted here, specifically the storage for the Greyhound: it shows 2x 12TB HDD. However, support said that is incorrect.

    Ah, thanks for flagging - I was about to order that exact server for the storage!

    @servarica_hani was fantastic to talk with and explained that the Greyhound servers physically cannot handle that drive count. However, in approximately another 2 months there will be another round of new servers, so stay tuned.

  • Anyone got a disk IO benchmark on the SAN disk? IOPS limit/expectation?

  • @servarica_hani said:

    @MaxTakeba said:

    @servarica_hani said:

    @MaxTakeba said:

    Just need to get AMD-V turned on, which I've already sent a ticket for. Not bad to be honest... I hate Xen, and I have a feeling the NVMe is being held back a little (by the software RAID in the VM, on top of the overhead of it being in a VM alongside everyone else).

    You are correct 100%.
    Xen is great for many features that are not available on KVM out of the box,
    but disk performance is not its strong point.

    The NVMe drives on the server are rated at 1.21M read IOPS and 480K write IOPS,
    and in some cases we run 4 or 5 VMs per server,
    but a VM cannot do a fraction of that.

    To be honest, in real-world situations we have never seen an application run much slower due to disk speed on NVMe,
    so it seems it is only in YABS that the numbers do not look good.

    That being said, we are working on a project to rewrite a good chunk of the storage code in Xen, but it is not an easy task, so we don't have an ETA for it yet.

    Is Xen worth keeping? You're probably right, I wouldn't see the inefficiency issues. But statistically speaking, the NVMe disks in that YABS test were performing like a SAS SSD would.

    Plus, the way OS installers see the NVMe storage as 64GB blocks (which I need to combine into one volume with LVM) really threw me off when I manually reinstalled Debian (sorry, I don't trust templates, it's just how I roll).

    Hyper-V is just as bad with storage performance.

    Hyper-V and XCP have very similar storage stacks, so I wouldn't be surprised if they perform the same.

    It is the other tooling and the stability that sell it for us;
    we have some VMs up for years with zero issues.
    In the past, before the new generations of NVMe, disk performance was not that big of a deal, so we were OK with it, as the difference was only around 20%.
    Now, with many NVMe drives reaching more than 1M IOPS, this has started to become a big issue, and we need to fix it or even switch to KVM for NVMe-based VMs.

    If you do switch to KVM-based solutions (e.g. Proxmox), I would love to be a beta tester :smile:

    Thanked by 1 servarica_hani
  • @gemini_geek said:
    Anyone got a disk IO benchmark on the SAN disk? IOPS limit/expectation?

    Give me half an hour. Just on my way home.

    Thanked by 1 gemini_geek
  • Hard check on ordering IP/address... it's a NO for the Canada location for me :persevere:

  • @servarica_hani - so far very happy. A couple of little issues to iron out, but Sathish, Hari and Gokul have been absolutely superb helping me with two tickets, and now everything is working well. Good service, and you've got a good team.

    Thanked by 1 servarica_hani
  • @gemini_geek said:
    Anyone got a disk IO benchmark on the SAN disk? IOPS limit/expectation?

    Apologies, I ended up making lunch.
    Here's a 1 MiB random write:

    sudo fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    random-write: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
    fio-3.33
    Starting 1 process
    random-write: Laying out IO file (1 file / 16384MiB)
    Jobs: 1 (f=1): [F(1)][100.0%][eta 00m:00s]
    random-write: (groupid=0, jobs=1): err= 0: pid=62592: Thu Nov 28 18:11:58 2024
    write: IOPS=167, BW=168MiB/s (176MB/s)(11.0GiB/67015msec); 0 zone resets
    slat (usec): min=18, max=1519, avg=53.85, stdev=58.25
    clat (usec): min=541, max=64151, avg=5274.63, stdev=6087.86
    lat (usec): min=565, max=64195, avg=5328.48, stdev=6086.96
    clat percentiles (usec):
    | 1.00th=[ 619], 5.00th=[ 676], 10.00th=[ 971], 20.00th=[ 1188],
    | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1418], 60.00th=[ 2024],
    | 70.00th=[ 9765], 80.00th=[12125], 90.00th=[14877], 95.00th=[16188],
    | 99.00th=[21103], 99.50th=[22938], 99.90th=[26346], 99.95th=[30802],
    | 99.99th=[61080]
    bw ( KiB/s): min=61440, max=1348982, per=100.00%, avg=192097.21, stdev=135482.04, samples=119
    iops : min= 60, max= 1317, avg=187.47, stdev=132.29, samples=119
    lat (usec) : 750=8.87%, 1000=1.20%
    lat (msec) : 2=49.79%, 4=8.26%, 10=2.32%, 20=27.86%, 50=1.68%
    lat (msec) : 100=0.02%
    cpu : usr=1.02%, sys=0.37%, ctx=19061, majf=0, minf=23
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued rwts: total=0,11244,0,0 short=0,0,0,0 dropped=0,0,0,0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=11.0GiB (11.8GB), run=67015-67015msec

    Disk stats (read/write):
    xvdg: ios=0/267336, merge=0/584, ticks=0/1426312, in_queue=1426345, util=94.14%

    Thanked by 1 gemini_geek
  • The VPS gives us only Canada; is it possible to switch to another location?

  • @servarica_hani said:

    @dev_vps said:

    @servarica_hani said:

    @churongcon said:
    Hello, I bought SSD NVMe but it shows an HDD disk.

    The disk is NVMe, but I am not sure how Windows will show it: in Xen, the drive you see in Windows is called tapdisk, which is a software layer between the actual NVMe and your Windows VPS.

    For Windows Server, Hyper-V and KVM virtualization are better than Xen.

    @churongcon
    If you install Windows yourself, make sure to install the XCP tools.
    Ask the team to mount the ISO there;
    it will give you better drivers for network and disk.
    You will see the difference.

    Thanks

    Hello, I didn't install it myself. I chose Windows 2022 when I bought it from the website.

  • @servarica_hani Unable to purchase because the credit card and physical location are not the same. I'm now on an overseas trip and will not be back for 2 weeks. Any way to resolve this? Support is not responding.

  • rid Member
    edited November 2024

    @rid said:
    @servarica_hani Unable to purchase because the credit card and physical location are not the same. I'm now on an overseas trip and will not be back for 2 weeks. Any way to resolve this? Support is not responding.

    Support manually reversed the order cancellation and the payment went through. Good service!

    Thanked by 1 servarica_hani
  • @bbn12 said:
    Hard check on ordering IP/address... it's a NO for the Canada location for me :persevere:

    I use it from Europe, and it's fine. Are you developing games? Lag is around 100-110 ms.

    Thanked by 1 servarica_hani
  • nice offer

    Thanked by 1 servarica_hani
  • @mitnick2 said:

    @bbn12 said:
    Hard check on ordering IP/address... it's a NO for the Canada location for me :persevere:

    I use it from Europe, and it's fine. Are you developing games? Lag is around 100-110 ms.

    I cancelled my order due to the location/IP check... no, not for gaming.
