Comments
We keep up to date with KernelCare reboot-less updates.
Me neither. BuyVM used to do it (or still does); it was done because SSD drivers didn't work on one virtualization platform whereas they worked fine on the other, which is perfectly acceptable.
Now, if you slab so you can fit thousands of containers on a box that could never handle that amount, that's obviously bad. Truth be told, I'm not sure why one would slab or not, or in what circumstances, so don't listen to anything I say..
Great provider, no problems, no drama. You can go clear back to the beginning of LET and see examples of him being as dedicated a person to the VPS industry then as he is now.
@Kujoe - Did you manage to screenshot that 1000+ day uptime you had on your FL KVM server? That's reliability. To play the game, I am going to guess this node is in the 100+ VPS range?
This is neat!
Slabbing on OVZ is done also for another reason.
Due to the huge number of threads and context switching, big nodes will not share CPU correctly and get stuck from time to time, which leads to crashes. You cannot really have many containers on one node without a lot of instability. So you either run small containers on small nodes, which still take rack space, are not power efficient, can't aggregate many disks, etc., or you run small containers in a slabbed environment with allocated cores and all.
Big containers do not have this issue, obviously, since there are not so many in a box.
We chose to upgrade everyone to bigger containers and retire the small servers (E3), apart from Biz customers.
So ... Type-1 hypervisors taking on scheduling work that the Linux 2.6.18/32 OVZ kernel would otherwise have to do.
Interesting.
Down memory lane: I vaguely recall BuyVM chatter about this approach. VMware was the Type-1 hypervisor, I think (correct me if I'm not remembering correctly).
I don't remember; might have been Xen, iirc.
Actually, the OVZ kernel continues to do the scheduling on the virtual cores. You allocate some cores to it and it will make do with that: fewer cores, fewer processes, less switching overall.
The kernel might, under heavy switching, hit soft lockups which cascade and bring down the node. It was much worse in the past with 2.6.32 and has been better lately; some of it was probably caused by the changes introduced with vswap. Slowing that down causes more trouble than it is worth, IMO, for a very problematic benefit.
2.6.18 did not have this issue; you could have many more threads and the node was still stable, but of course it would probably collapse too under an extreme number of threads.
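A minimal sketch of the core-allocation idea described above, assuming Linux and util-linux's taskset (in a real slabbed setup you would pin the hypervisor guest's vCPUs, e.g. via virsh vcpupin, rather than a shell process):

```shell
# Confine a process to core 0 only, the way a slab would be confined to a
# fixed core subset so the OVZ kernel inside schedules across fewer cores.
# (Illustrative only: a real deployment pins the whole guest, not a shell.)
taskset -c 0 sh -c 'grep Cpus_allowed_list /proc/self/status'
# -> Cpus_allowed_list: 0
```

With each slab bound to its own subset, one slab's runqueue churn can no longer starve the whole box.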
fl1kvm01 was at 1026 days before the VENOM reboot. And nope, it has a lower number than that.
Very nice info.. haha...
170 neighbors @ phase7, 26.7 MB/s disk I/O right now
315 @ NanoVZ, DE2 with Hetzner
318 @ NanoVZ, KC location
Couldn't check NanoVZ LA; it's been down for 11.6 hours (UptimeRobot).
Every other OVZ VPS I've got has already been patched, obviously.
I'd really love to know how many I've got @ GalaxyHostPlus. The 1GB VPS I have with them is one of the slowest I've seen so far (it took over an hour for a first apt-get update && apt-get upgrade on Debian 7). Maybe on the level of a 256MB box for 58 RUB (~$1.10) @ BeriVDS.ru.
Oneasiahost OVZSSD-128
devices 4 111 1
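For reference, those neighbor counts were typically read from /proc/cgroups inside the container: on an OpenVZ node, the num_cgroups column of the devices row roughly tracks the number of containers. A minimal sketch of the parse, using the sample row quoted above (the real values only appear like this on an OVZ node):

```shell
# /proc/cgroups columns: subsys_name  hierarchy  num_cgroups  enabled
# On OpenVZ, the devices row's num_cgroups roughly equals container count.
line='devices 4 111 1'          # sample row, as pasted above
neighbors=$(echo "$line" | awk '$1 == "devices" { print $3 }')
echo "approx neighbors: $neighbors"
# -> approx neighbors: 111
```

On a live container you would read the file directly: `awk '$1 == "devices" { print $3 }' /proc/cgroups`.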
Worked smoothly as a proxy for a long time :)
More important than how many neighbors you have is how well the nodes are managed, monitored, et cetera.
And this is where I can see those details about the servers..
Patched.
70 VMs on a server. E3 CPU, so no more than 32GB of RAM. My VM is 1GB.
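A back-of-envelope check on the overcommit those figures imply (numbers taken from the post above; OVZ nodes lean on burst/vswap, so nominal RAM sold can exceed physical):

```shell
vms=70; per_vm_gb=1; host_gb=32        # figures from the post above
nominal=$((vms * per_vm_gb))
echo "$nominal GB sold on $host_gb GB physical (~$((nominal / host_gb))x overcommit)"
# -> 70 GB sold on 32 GB physical (~2x overcommit)
```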
Hello.
Contact our support and we will check your server.
@mlody1039 The month I had the VPS for is already over, and I cancelled it.
Edit: sorry, the VPS is actually still running for 2 more days. I just didn't use it because of the low CPU performance, and since I cancelled it some time ago I thought it wouldn't be running anymore. But nothing has changed about the CPU: I installed some updates just a minute ago and they took no less time for such small packages (apt, apt lib & ssl lib).
Contact our support and we will give you 1 month free as a refund for the speed.
@mlody1039 Well, if you want me to do so... I'll do it.
We have activated your server; hopefully this time the server performance will be better.
Thanks.
What's hope got to do with it?