New on LowEndTalk? Please Register and read our Community Rules.
Comments
Nothing works correctly.
The VPS reboots without warning.
The storage partition disappears randomly or becomes inaccessible. And when it is available, the write speed is so awful that it's totally unusable.
Hello, the RAM was upgraded yesterday; the node now has 512 GB of RAM, and I have ordered more 64 GB RAM DIMMs (right now I only have 32 GB DIMMs). There is enough headroom on the node; it never uses more than about 76% of its RAM. Could you send me your IP in private? Some customers report bad I/O speed and others report good I/O speed.
Did you mount the /storage partition correctly?
Check with:
nano /etc/fstab
Does it have an entry similar to this?
Regards,
Calin
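For reference, a typical /etc/fstab entry for a secondary data volume looks something like this (the UUID is a placeholder; `nofail` keeps the system booting even if the volume is missing at boot):

```
# <device>                                 <mountpoint>  <fs>  <options>        <dump> <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /storage      ext4  defaults,nofail  0      2
```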
Yes, I am going to verify this strange thing. On Monday the new employee starts work and will fix these strange things.
Regards
This is easy to replicate. Navigate to /storage/ and run yabs:
root@debian:/# cd /storage/
root@debian:/storage# wget -qO- yabs.sh | bash
Just generating the fio test file takes 10 minutes, and sometimes the benchmark itself shows 0 KB/s for all tests.
Copying files swings from mostly <100 KB/s to 3-4 MB/s, with maybe 300 KB/s on average.
This VPS is not usable unless this is solved.
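To isolate the write path without a full yabs run, a direct fio test against the mounted volume can be used. A sketch, assuming fio is installed and /storage is the mount in question; `--direct=1` bypasses the page cache so the numbers reflect the actual disk:

```shell
# 4k random-write test against /storage, bypassing the page cache
fio --name=storage-writetest --filename=/storage/fio-test \
    --size=256M --rw=randwrite --bs=4k --iodepth=16 \
    --direct=1 --runtime=30 --time_based --group_reporting
rm -f /storage/fio-test   # clean up the test file afterwards
```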
I didn't touch this file since getting my vps.
Disk was mounted as UUID=some string /storage ext4 defaults 0 2
Changed it to match your screenshot and rebooted.
And now the whole VPS is down and not accessible anymore.
Great....
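For what it's worth, a safer workflow when editing /etc/fstab is to validate the file before rebooting, so a typo can't take the box offline (assuming a reasonably recent util-linux for `findmnt --verify`):

```shell
# sanity-check fstab syntax and targets without rebooting
findmnt --verify
# try to mount everything listed in fstab; errors show up here, not at boot
mount -a
```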
Small update
Yes, the VPS was rebooted (by @Calin, not myself)
Establishing an SSH connection was much faster now.
The proc/mem results are consistent and good; an indication that the processor is not overloaded. Calin's proc info sounds credible.
That said, the 'Recent DOWNTIMES' section of his uptime page looks shockingly like an 'experimental zone'.
Now, on to the beef ...
Also ran a benchmark on the large storage disk again. Results:
4 threads/ 4k blocksize result is 7.77 MB/s and 1988 IOPs.
Huh ? Reminds me of something ...
I also tested the SSD again and found improvement, albeit quite small (translation: might be simply due to node load being a bit lower now). It's 8.33 MB/s and 2132 IOPs.
So, the storage volume now is about the same performance as the small SSD. Really nice improvement, kudos!
I observed the same, but it worked for just 20 minutes or so. I am back to 100 KB/s.
well, mine has been down since the reboot.
Can't even SSH to it, as the connection is refused.
Sent an email to @Calin; waiting for him to fix it, hopefully.
exactly this!
https://www.truenas.com/docs/references/zfsdeduplication/#ram
"Pools with deduplication ratios of 3x or more (data can be reduced to a third or less in size), might need 1-3 GB of RAM per 1 TB of data"
Deduplication for this server would require at least 468 GB - 1.4 TB of RAM at a deduplication ratio of >= 3. In reality you will probably need a lot more for it to perform well.
So unless you are going to save 66% of storage space, you are better off disabling deduplication.
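The arithmetic behind that range can be sketched as follows. The pool size is an assumption here (the thread never states it); ~468 TB of data is what the 468 GB lower bound implies at the quoted 1-3 GB of RAM per TB:

```shell
# ZFS dedup rule of thumb: 1-3 GB of RAM per 1 TB of pool data
pool_tb=468                    # assumed pool size implied by the 468 GB lower bound
ram_min_gb=$(( pool_tb * 1 ))  # at 1 GB per TB
ram_max_gb=$(( pool_tb * 3 ))  # at 3 GB per TB
echo "dedup RAM estimate: ${ram_min_gb} GB - ${ram_max_gb} GB"
```

That upper bound of 1404 GB is the ~1.4 TB quoted above.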
Sorry for you but I just ran another test and the result is easily within the normal spread: 8.28 MB/s and 2119 IOPs. Btw I've seen (sequential) read speeds of up to 2.5 GB/s (but of course with a storage VPS we're probably all most concerned with write speed).
Maybe your (and some others') problems have to do with some software, your Linux kernel version, or something?
vanilla debian here so not sure.. you?
Plain FreeBSD (13.1 iirc) but usually linux seems to be faster (due to (too) aggressive caching). Strange, that. Just a quick question: Does your linux know that that disk is ZFS?
I enabled dedup in the ZFS config, and RAM consumption almost doubled.
What is even the point of enabling deduplication on a cheap storage server?
Pros:
- saves disk space if customers' data is highly redundant
Cons:
- the dedup table eats RAM (as the TrueNAS numbers above show)
- writes get slower, since every block has to be hashed and looked up in the dedup table
Hello @mountyPython, dedup is enabled to fix this problem: https://github.com/openzfs/zfs/issues/14734
On a test server it stopped the problems, so I enabled it on the production node.
I put in enough RAM; usage is constantly around 76%, and if you check my uptime page (https://uptime.ihostart.com/report/uptime/e91598904c79c0f4ac6b5c2a2851e715/) it never goes higher than 75-76%.
But now there are performance problems, and I am investigating this further.
Regards
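For anyone curious how much the dedup table actually buys (or costs) on a given pool, ZFS reports this directly; a sketch, with the pool name `tank` as a placeholder:

```shell
# achieved deduplication ratio per pool
zpool list -o name,size,alloc,dedupratio
# dedup table (DDT) histogram, including its in-core size
zpool status -D tank
# turn dedup off for new writes (already-deduped blocks stay deduped)
zfs set dedup=off tank
```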
However, in the linked GitHub issue, dedup seems to have been the cause of the problems.
When I woke up, the VPS was down again. I could not ping it or connect via SSH. Feeling sad.
It's really annoying not knowing why your VPS rebooted itself and became totally unavailable. I haven't run into this kind of situation with my other storage servers.
Hello @nullptr, I checked your VPS after our last PM conversation and it is online. Did you set up a firewall or something similar?
In the meantime, for anyone else with performance issues: we have added a few more options to the node to improve performance.
Regards
You have also never seen this price on your other storage boxes.
Indeed, now it came back online and uptime is 22 hours. So the server went offline by itself and recovered without me doing anything. The earlier screenshot is from ping.pe. Let me set up an uptime monitor for it.
Or it may be a network issue on my side, but I also visited some online port-scan websites, which indicated the SSH port was closed.
Anyway, it's good to see the performance improving. Thank you for your effort.
Please check your emails.
Still have no access to my VPS since yesterday
Thanks
Disk speed is pinned at 8 MB/s now after many checks, and there has been no reboot since yesterday. Seems it's fixed?
Which is way too low. It would be nice to be able to write backups at at least 100 Mbit/s (12 MB/s) and restore them at full port speed (200 Mbit/s, i.e. 24 MB/s).
@Calin If your ZFS is fixed now, could you please consider changing your PVE limits to the ones above?
I think he said ZFS will be fixed, so I don't expect higher disk speed. But 8 MB/s is not bad for this price.
Yeah, everything is good for this price, but at only 8 MB/s a full restore from this VPS would take about a week (~6.5 days). In that time, I could drive the 1600 km to @Calin, copy the backup to an external HDD, drive back, restore the backup, and still have time left for a wellness weekend, which I would definitely need after this stress...
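The ~6.5-day figure checks out if we assume a backup of roughly 4.5 TB; the actual size isn't stated in the thread, so this is just a back-of-the-envelope sketch:

```shell
# time to move an assumed ~4.5 TB at a sustained 8 MB/s
size_mb=$(( 4500 * 1024 ))          # 4.5 TB expressed in MB (assumed size)
seconds=$(( size_mb / 8 ))          # at 8 MB/s
echo "~$(( seconds / 86400 )) days" # whole days
```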
Of course not. That copy would be just as slow, coming from the same slow shared disk array.
Well, the 8 MB/s limit is configured in PVE. The RAID itself should handle more than 200 MB/s of reads, even with you guys doing I/O at 8 MB/s. And if not (and I have physical access to the HDDs), I might unplug the network ports "by accident"...
Hello, the VPSs are I/O-limited to prevent abuse. I have now doubled the IOPS and R/W limits for all VPSs.
Regards
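On the Proxmox side, per-disk I/O caps are set as disk options with `qm set`; a sketch of raising a write cap, where the VM ID, storage name and volume name are all placeholders:

```shell
# cap VM 101's scsi0 disk at 16 MB/s write and 4000 write IOPS
qm set 101 --scsi0 local-zfs:vm-101-disk-0,mbps_wr=16,iops_wr=4000
```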
Hello everyone, I very much appreciate any feedback. How has the VPS been working over the last two days, and what needs improvement?
Regards,
Calin
Hi @Calin - performance of the large mounted store (/storage) seems much better today than last week (which was baaaad). Good work!