New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Hey @servarica_hani, wondering if you all accept CAD per chance?
Yes, I definitely saw people paying in CAD instead of USD.
They didn't mention how though, probably some switch either in the store or in the cart.
If not, then support should be able to change the currency associated with your account.
Unfortunately I couldn't find a CAD option anywhere on the website. I'll wait until I hear from @servarica_hani before proceeding.
If you are in Canada, the default currency is actually CAD.
If you are outside Canada, the currency is USD.
If you still want to pay in CAD while outside Canada, you need to set it on the order URL:
add &currency=1 to your order URL and it will switch to CAD.
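For example, building the order URL in a shell looks like this. This is just a sketch: the WHMCS-style cart path and the `pid=123` product id are placeholders, not a real ServaRica plan link.

```shell
# Hypothetical cart URL -- the path and pid=123 are placeholders,
# not a real ServaRica plan id.
BASE_URL="https://clients.servarica.com/cart.php?a=add&pid=123"

# Appending currency=1 switches the cart currency to CAD (per the post above).
ORDER_URL="${BASE_URL}&currency=1"
echo "$ORDER_URL"
```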
Please activate my plan
Invoice #153926
I see the team already activated it
welcome to servaRICA
Any possibility of offering snapshots for free?
YABS
root@ubuntu:~# curl -sL https://yabs.sh | bash
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Yet-Another-Bench-Script
v2025-01-01
https://github.com/masonr/yet-another-bench-script
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Sat Feb 1 11:50:56 AM UTC 2025
Basic System Information:
Uptime : 0 days, 0 hours, 14 minutes
Processor : AMD EPYC 7551P 32-Core Processor
CPU cores : 6 @ 1996.256 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ❌ Disabled
RAM : 23.5 GiB
Swap : 487.0 MiB
Disk : 740.0 GiB
Distro : Ubuntu 24.04 LTS
Kernel : 6.8.0-35-generic
VM Type : XEN
IPv4/IPv6 : ✔ Online / ❌ Offline
IPv4 Network Information:
ISP : Rica Web Services
ASN : AS26832 Rica Web Services
Host : Rica Web Services
Location : Montreal, Quebec (QC)
Country : Canada
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/mapper/unified--nvme-main):
iperf3 Network Speed Tests (IPv4):
Provider | Location (Link) | Send Speed | Recv Speed | Ping
----- | ----- | ---- | ---- | ----
Clouvider | London, UK (10G) | 1.86 Gbits/sec | 573 Mbits/sec | 77.5 ms
Eranium | Amsterdam, NL (100G) | 2.62 Gbits/sec | 2.18 Gbits/sec | 80.1 ms
Uztelecom | Tashkent, UZ (10G) | 1.15 Gbits/sec | 379 Mbits/sec | 176 ms
Leaseweb | Singapore, SG (10G) | 764 Mbits/sec | 461 Mbits/sec | 232 ms
Clouvider | Los Angeles, CA, US (10G) | 2.33 Gbits/sec | 831 Mbits/sec | 71.6 ms
Leaseweb | NYC, NY, US (10G) | 8.41 Gbits/sec | 1.65 Gbits/sec | 9.69 ms
Edgoo | Sao Paulo, BR (1G) | 1.54 Gbits/sec | 72.7 Mbits/sec | 132 ms
Running GB6 benchmark test... cue elevator music
Geekbench 6 Benchmark Test:
Test | Value
---- | -----
Single Core | 725
Multi Core | 2578
Full Test | https://browser.geekbench.com/v6/cpu/10233062
YABS completed in 13 min 27 sec
Download is kinda slow as well, like 100 Mbps on a limited 10 Gbps port, eh.
Any update on improving NVMe speeds?
I wish they'd move to KVM to get a performance increase.
That would also fix several bugs they have right now.
I suggested it, but they have no plans...
Out of curiosity, what kind of bugs are they dealing with?
Honestly, the prices these are given at leave no room to offer snapshots for free.
Especially with these servers we give a lot of storage, and snapshots can eat into that storage even more, so we could run out of available storage.
Sorry about that.
Download is hard to optimize, as it depends on the sender plus the path taken.
For upload we have at least limited control in choosing the first hop in the path, but for download it is not the same.
Anyway, for a server you will mostly depend on upload, as you are serving files etc.
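You can check the two directions separately yourself with iperf3: by default it measures upload (client to server), and the -R flag reverses it to measure download. A minimal sketch, assuming the iperf3 package is installed; the endpoint is Clouvider's public London iperf3 server (the one appearing in the YABS run above).

```shell
# Measure both directions from the VPS. Default mode tests upload
# (client -> server); -R reverses it to test download (server -> client).
# Assumes iperf3 is installed and the public Clouvider endpoint is reachable.
if command -v iperf3 >/dev/null 2>&1; then
  iperf3 -c lon.speedtest.clouvider.net -t 5 || echo "upload test failed (network?)"
  iperf3 -c lon.speedtest.clouvider.net -t 5 -R || echo "download test failed (network?)"
else
  echo "iperf3 not installed"
fi
```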
This is a completely ridiculous request given their prices, jfc.
We have 2 paths to fix it:
1- Move totally to KVM. This will fix most of our performance issues, but the amount of work needed to move all our addons and features over is big.
2- Work on an improved storage subsystem for Xen, which is what I am working on now, and it is also very complex code.
I am currently working on option 2 and will give it a couple of months. If that fails, we will switch to option 1, which I am trying to avoid due to the amount of changes needed.
I don't have an ETA, but I am planning to have a solution for the disk performance issues in 2025.
It is still an option, but let's try the easier option first.
Mainly, nested virtualisation is not working due to Xen bugs, as well as IO performance issues; AMD-V can't be switched on.
It's not using the hardware to the fullest.
I understand choosing the easiest option; however, I don't know anyone else who uses Xen, and unfortunately it's a bit late, as I've been waiting for a while (with a ticket, I'm sure you know who I am) and I've asked for a refund.
Xen isn't good; there are many reasons why not many people run Xen and why many choose KVM (with Proxmox in mind).
Even me, someone who self-hosts and actually has stuff in production, I use KVM because it's less work and it does perform better.
Waiting a couple of months when I can get more performance for the same price and more feature sets enabled... isn't a good solution, to be honest. The only reason I did pick it up was for a future project related to storage, but with the above problems I can't move forward.
Well, you can start with limited packages and keep expanding over time. Some packages with VirtFusion would look nice.
It seems like only a matter of time until you will have to migrate off Xen, so why keep postponing it? Unless there is something behind it we don't know (which is fine).
I wish you all the best!
I have requested a refund, which is acceptable within the ToS of RWS. No response so far. 😊 Please look.
Edit: I got refund.
This offer has great potential once the issues are resolved.
Got it
One important factor in why we went with Xen 14 years ago is stability.
Xen is not on the same level as KVM in disk and network performance, but in terms of stability it is rock solid.
For storage VMs, stability is very important (we run our servers in pools and we do a lot of upgrades and maintenance on our servers, and no one notices because we live-migrate VMs before doing so).
Over the years KVM started to catch up to Xen in terms of stability, and now there are solutions to do everything that was done in Xen (XenServer or XCP-ng) using KVM.
But from my own experience a few years ago when we tried Proxmox locally, XenServer was miles ahead in terms of stability and a more refined solution.
I think it is time to retest Proxmox and see where we are now.
OK, then we have a plan.
The main limitation is that we need to reevaluate all KVM panels and see if any fit, or whether we need to create our own.
But after this discussion, let's say within this year we will do some KVM-based offers (especially the ones on NVMe; for pure storage offers it will take some time to make sure it is as stable as the current solution), unless I can make Xen great again.
I have also been watching this, and currently the disk performance is not suitable for storing data because the speed is very poor.
I'm waiting for Servarica's speed to improve.
@johngko Which plan do you have and where are you connecting from? What speed is your service?
I've been looking for a remote backup storage option, and of course speed is important both when backing up and restoring. My connection is gigabit fiber (AT&T), so ideal speeds would top out around ~110 MB/s (give or take).
I'm in Chicago, so not that far from Montreal. I wonder what kind of practical speeds I can expect in either direction?
I have an HDD-only plan and have been using it from the other side of the ocean for more than two years.
Disk write speed (dd) is higher than that; I've seen it drop below 150-160 MB/s only once in all this time.
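For reference, a rough sequential-write check like the dd one mentioned can be run as below. The file name and size are arbitrary; conv=fdatasync forces the data to disk before dd reports the rate, so the figure reflects the disk rather than the page cache.

```shell
# Write 64 MiB of zeros, flushing to disk before dd prints the MB/s figure.
# conv=fdatasync keeps the number honest (disk speed, not RAM cache speed).
dd if=/dev/zero of=./ddtest.bin bs=1M count=64 conv=fdatasync

# Clean up the test file afterwards.
rm -f ./ddtest.bin
```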
Check yourself here: https://ping.servarica.com/ and https://speedtest.servarica.net
Results from the ping.servarica.com site are ~300 Mbps on the 1 GB file download. Speedtest yields ~400 Mbps down, 500+ up on my own connection, or ~500 down, 900 up (Mbps) when routing through Cloudflare WARP.
I guess I'll just start out with the $5 package to test things out, then either upgrade to what I need, or cancel entirely if speeds are too slow.
I'm looking at one of the expanding offers, or the penguin.
I'm talking about the hard disk IO speed.
Their hard drive performance is not suitable for hosting WordPress, because WordPress has high requirements for hard drive IO; otherwise pages load very slowly.
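For a WordPress-style workload, small random reads (PHP files, database pages) matter more than the sequential figures elsewhere in this thread. A quick way to gauge that is a 4k random-read fio job; the parameters below are illustrative rather than a tuned benchmark, and assume fio is installed.

```shell
# 4k random reads approximate WordPress's many small PHP/DB accesses.
# --direct=1 bypasses the page cache; drop it if the filesystem rejects O_DIRECT.
if command -v fio >/dev/null 2>&1; then
  fio --name=wp-randread --rw=randread --bs=4k --size=32M \
      --filename=./fiotest.bin --direct=1 --numjobs=1 \
      --runtime=5 --time_based
  rm -f ./fiotest.bin   # remove the scratch file
else
  echo "fio not installed"
fi
```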
Are you doing any caching at all? (Filesystem and anything in WP)
What plugins are you using?
Ram usage?
Yeah, the SAN is not the fastest thing, as proven by several YABS runs in this thread, but they can't make it faster either without caching at the host level, which would be a significant change. Or maybe this is just another issue with Xen (yes, it's easy to blame; can you blame me?).
Opossum 1 still available ?