
How to know how many neighbours you've got in your OpenVZ VPS


Comments

  • SpeedyKVM Banned, Member

    @nightshade said:
    I get nothing back on Wable. Guess the kernel is outdated.

    We keep up to date with kernelcare reboot-less updates.

  • ATHK Member

    @AnthonySmith said:
    yep, and I don't have an issue with slabbing.

    Me neither. BuyVM used to slab (or maybe still does); it was done because the SSD drivers didn't work on one virtualization platform whereas on the other they worked fine, which is perfectly acceptable.

    Now if you slab so you can fit thousands of containers on a box that could never handle that amount, obviously that's bad. Truth be told, I'm not sure why one would slab or not and in what circumstances anyway, so don't listen to anything I say.

  • @inthecloudblog said:
    @KuJoe rocks. I love his service(s). The Tampa location was darn reliable with something like 32/64 MB. Now I only use their shared hosting, which is great. In about 2 years I never opened a support ticket.

    Great provider, no problems, no drama. You can go clear back to the beginning of LET and see examples of him being as dedicated a person to the VPS industry then as he is now.

    @KuJoe said:
    So who wants to take the challenge and see how real-world performance correlates to the number of VPSs on a node?

    @KuJoe - Did you manage to screenshot that 1000+ day uptime you had on your FL KVM server? That's reliability. To play the game, I am going to guess this node is in the 100+ VPS range?

  • kevin Member
    edited May 2015

    This is neat!

  • Maounique Host Rep, Veteran

    Slabbing on OVZ is done also for another reason.

    Due to the huge number of threads and context switching, big nodes will not share CPU correctly and will get stuck from time to time, which leads to crashes. You cannot really have many containers on one node without a lot of instability, so it is either small containers on small nodes, which still take space, are not power efficient, cannot aggregate many disks, etc., or small containers in a slabbed environment with allocated cores and all.
    Big containers do not have this issue, obviously, since there are not so many in a box.

    We chose to upgrade everyone to bigger containers and retire the small servers (E3), apart from Biz customers.

  • @Maounique said:
    Slabbing on OVZ is done also for another reason.

    Due to the huge number of threads and switching, big nodes will not share CPU correctly and get stuck from time to time which leads to crashes.

    So... Type-1 hypervisors taking on the scheduling workload that the Linux 2.6.18/32 OVZ kernel would otherwise have to do.
    Interesting.

    Down memory lane: I vaguely recall BuyVM chatter about this approach. VMware was the Type-1 hypervisor, I think (correct me if I'm not remembering correctly).

  • Maounique Host Rep, Veteran
    edited May 2015

    I don't remember; might have been Xen, IIRC.

    Actually, the OVZ kernel continues to do the scheduling on the virtual cores. You allocate some cores to it and it makes do with those: fewer cores, fewer processes, less switching overall.

    The kernel might, under heavy switching, hit softlocks which cascade and bring down the node. It was much worse in the past with 2.6.32 and has been better lately; some of it was probably caused by the changes introduced with vswap. Slowing things down like that causes more trouble than it is worth, IMO, for a very questionable benefit.

    2.6.18 did not have this issue; you could have many more threads and the node was still stable, but, of course, it would probably collapse too under an extreme number of threads.

  • KuJoe Member, Host Rep

    cncking2000 said: Did you manage to screenshot that 1000+ day uptime you had on your FL KVM server? That's reliability. To play the game, I am going to guess this node is in the 100+ VPS range?

    fl1kvm01 was at 1026 days before the VENOM reboot. And nope, it has a lower number than that. ;)

    Thanked by 1 vimalware
  • Very nice info.. haha...

  • 170 neighbors @ phase7, 26.7 MB/s disk I/O right now
    315 @ NanoVZ, DE2 with Hetzner
    318 @ NanoVZ, KC location
    Couldn't check NanoVZ LA, down for 11.6 hours (uptimerobot)
    Every other OVZ VPS I've got has already been patched, obviously.

    I'd really love to know how many I've got @ GalaxyHostPlus. The 1GB VPS I have with them is one of the slowest I've seen so far (this thing took over an hour for a first apt-get update && apt-get upgrade with Debian 7). Maybe on the level of a 256MB box for 58 RUB (~$1.10) @ BeriVDS.ru.
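Aside: disk figures like the 26.7 MB/s above are usually taken with a quick sequential-write test (the familiar dd ... conv=fdatasync one-liner). Below is a minimal sketch of the same idea in Python; the file name, test size, and block size are arbitrary choices for illustration, not anything reported in the thread:

    # Rough sequential-write throughput check, in the spirit of the usual
    # dd conv=fdatasync test. Sizes are arbitrary; results vary a lot with
    # caching and with how busy the neighbours are.
    import os
    import time

    def write_throughput_mb_s(path="iotest.tmp", size_mb=256, block_kb=64):
        block = b"\0" * (block_kb * 1024)
        blocks = size_mb * 1024 // block_kb
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # include the flush to disk, like conv=fdatasync
        elapsed = time.time() - start
        os.remove(path)
        return size_mb / elapsed

    if __name__ == "__main__":
        print(f"{write_throughput_mb_s():.1f} MB/s sequential write")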

  • BAKA Member
    edited May 2015

    Oneasiahost OVZSSD-128

    devices 4 111 1

    Worked smoothly as a proxy for a long time :)
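The "devices 4 111 1" outputs being swapped in this thread appear to be the devices row of /proc/cgroups (columns: subsys_name, hierarchy, num_cgroups, enabled); on unpatched OpenVZ kernels the num_cgroups column roughly tracks how many containers share the node, which is presumably the counting trick being discussed. A minimal sketch under that assumption:

    # Minimal sketch: estimate the neighbour count from /proc/cgroups inside
    # an OpenVZ container. Assumes the standard four-column layout and a host
    # kernel that has not been patched to hide the real cgroup count.
    def neighbour_estimate(path="/proc/cgroups"):
        with open(path) as f:
            for line in f:
                if line.startswith("devices"):
                    _name, _hierarchy, num_cgroups, _enabled = line.split()
                    return int(num_cgroups)
        return None  # no devices line found

    if __name__ == "__main__":
        count = neighbour_estimate()
        if count is None:
            print("Could not read a devices line from /proc/cgroups")
        elif count <= 1:
            print("Count looks hidden (only your own cgroup is visible)")
        else:
            print(f"Roughly {count} containers visible on this node")

When a provider has patched the kernel, that column collapses to 1, which matches the "devices 4 1 1" lines reported further down.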

  • Steven_F Member
    edited May 2015

    More important than how many neighbors you have is how well the nodes are managed, monitored, et cetera.

  • Catalin Member

    And these are the servers where I can still see those details:

    IP Systems Limited - devices 4 1 1
    TragicServers - devices 4 1 1
    XVMLabs - devices 4 1 1
    
  • FredQc Member

    @Catalin said:
    And these are the servers where I can still see those details:

    IP Systems Limited - devices 4 1 1
    TragicServers - devices 4 1 1
    XVMLabs - devices 4 1 1
    

    Patched.

  • Bruce Member

    70 VMs on a server. E3 CPU, so no more than 32GB of RAM. My VM is 1GB.

  • @nexusrain said:
    170 neighbors @ phase7, 26.7 MB/s disk I/O right now
    315 @ NanoVZ, DE2 with Hetzner
    318 @ NanoVZ, KC location
    Couldn't check NanoVZ LA, down for 11.6 hours (uptimerobot)
    Every other OVZ VPS I've got has already been patched, obviously.

    I'd really love to know how many I've got @ GalaxyHostPlus. The 1GB VPS I have with them is one of the slowest I've seen so far (this thing took over an hour for a first apt-get update && apt-get upgrade with Debian 7). Maybe on the level of a 256MB box for 58 RUB (~$1.10) @ BeriVDS.ru.

    Hello.

    Contact our support and we will check your server.

  • nexusrain Member
    edited June 2015

    @mlody1039 The month I had the VPS is already over and I cancelled it.

    Edit: sorry, the VPS is actually still running for 2 more days. I just didn't use it because of the low CPU performance, and since I cancelled it some time ago I thought it wouldn't be running anymore. But nothing has changed about the CPU: I installed some updates just a minute ago and they took no less time for such small packages (apt, apt lib & ssl lib).

  • @nexusrain said:
    @mlody1039 The month I had the VPS is already over and I cancelled it.

    Contact our support so we can give you 1 month free as a refund for the speed issues.

  • @mlody1039 Well, if you want me to do so... I will.

  • @nexusrain said:
    @mlody1039 Well, if you want me to do so... I will.

    We have activated your server; we hope this time the server performance will be better.

    Thanks.

  • Maounique Host Rep, Veteran

    mlody1039 said: hope this time the server performance will be better

    What's hope got to do with it?

    Thanked by 1 Dylan