VPS6.net initial thoughts

shunnyshunny Member
edited April 2012 in General

I've seen some reviews and thoughts out here on LET about VPS6, but I thought I'd provide my own as well.
So I signed up with VPS6 around a week ago at their Turkey location. I went with their customize-your-own OpenVZ package with 128 MB RAM (256 MB burst), 5 GB disk, 250 GB bandwidth and 1 CPU core. With the package customization I also used one of their promotional codes I found, DOUBLEMYVPS6, and asked support upon activation to double the RAM and bandwidth of my VPS. Support was quick and friendly, and the VPS itself is rather speedy and sufficient for my uses. Below are some of the basic stats:

uptime
19:08:03 up 6 days, 31 min, 1 user, load average: 0.00, 0.00, 0.00

dd if=/dev/zero of=test bs=64k count=20k conv=fdatasync
20480+0 records in
20480+0 records out
1342177280 bytes (1.3 GB) copied, 6.25819 s, 214 MB/s

So the disk is rather impressive, and if my memory serves me correctly it's around the same speed as when I first tested it a week ago. Of course, I've only been with them a short while (I fully set up my VPS 6 days ago), but so far the service is what I expected. Overall, I'm happily satisfied with what VPS6.net is offering, and if anyone out there is looking to get service with VPS6.net, try them out! :)
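For anyone who wants to reproduce the numbers above, here's a minimal sketch of the same dd test wrapped in a shell function. The file name, block size and run count are just illustrative (the counts below are smaller than the ones I used, for a quicker pass), and the throughput parsing assumes GNU coreutils dd:

```shell
#!/bin/sh
# disk_bench FILE BS COUNT: write COUNT blocks of size BS to FILE with a
# final fdatasync so the page cache doesn't inflate the result, print the
# throughput figure from dd's summary line, then clean up.
disk_bench() {
    dd if=/dev/zero of="$1" bs="$2" count="$3" conv=fdatasync 2>&1 \
        | awk '/copied/ {print $(NF-1), $NF}'
    rm -f "$1"
}

# Run it a few times; a single pass can be skewed by noisy neighbors.
for _ in 1 2 3; do
    disk_bench ddtest 64k 2k
done
```

Averaging several passes at different times of day gives a much better picture than one run, since I/O on a shared node varies with what the neighbors are doing.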

Comments

  • flyfly Member

yeah, too bad about that huge deal a couple days ago where my vps was up and down for like the whole night.

  • Really? Which location? I didn't see my VPS with them in Turkey go blah!

  • flyfly Member
    edited April 2012

    chicago

    their network pooped, then the vps container pooped, over and over again

    [[email protected] ~]$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync;rm test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 36.5626 s, 29.4 MB/s
    

    i've submitted multiple tickets... nobody can help out.

  • Oh dear... I'm sure they are trying to clean up their network poop. :) Let's hope they get back on track.

  • flyfly Member

    well i mean, i haven't really checked back. i got all my shit off there in a hurry, got a refund this morning. the fastest ticket response was for the refund.

  • @kbar Do you recall which node you were on, or could you perhaps let me know the Ticket ID# that you had opened? I'd like to follow up on this with our network admins, just to make sure the issue has been resolved. Thanks!

  • flyfly Member
    edited April 2012

    975959

  • quirkyquarkquirkyquark Member
    edited April 2012

    @vps6net said: Do you recall which node you were on

    Jeremy: it was node ch-23. I have a Xen-PV on the same node, and @kbar and I were on IRC when the outage happened, so we were talking about it:

    [image: screenshot of the IRC conversation during the outage]

    The glitch was only for a few minutes, but my VPS was not automatically rebooted, so it shows 29 minutes.

    As for @kbar's network/IO problems: I ran a dd at the same time and got 125 MB/s+. After the glitch was resolved, I ascribed @kbar's remaining issues to the fact that he has a Xen HVM and cannot use the PV-on-HVM drivers. So he's using the emulated RTL8139 NIC, which has serious performance issues, as documented on many mailing lists. The emulated ATAPI disk is also going to perform pretty poorly.
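If you're not sure which NIC your guest is actually on, a few quick checks from inside the guest along these lines will tell you (a sketch, not an exhaustive method; assumes pciutils and ethtool are installed, and `eth0` is just the usual interface name):

```shell
#!/bin/sh
# Identify the NIC a Xen guest is using. On an HVM with emulated hardware
# the NIC appears as a PCI device (e.g. RTL8139 or Intel E1000); a PV
# netfront interface has no matching PCI Ethernet entry.
command -v lspci >/dev/null && lspci | grep -i ethernet

# The kernel driver bound to the interface is the most direct clue:
# "8139cp"/"8139too" = emulated Realtek, "e1000" = emulated Intel,
# "vif"/"xen_netfront" = paravirtual.
command -v ethtool >/dev/null && ethtool -i eth0 2>/dev/null | grep '^driver:'

# Interfaces the kernel knows about, regardless of which tools are installed:
ls /sys/class/net
```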

    I had a Xen-HVM initially too, and wasted a weekend trying to get PV-on-HVM working in FC15, Deb 6 and Ubuntu 10.04/11.10 (see ticket #290690). The problem is that you are using Xen 3.4 as your hypervisor, while the newer Linux kernels' built-in PV-on-HVM support works only with Xen 4+. I did manage to get the disk PV drivers working eventually on Xen 3.4, but it was a lot of effort. Network never worked, and the emulated RTL8139 was really crappy. I have since obtained another HVM from a provider using Xen 4.1, and all the above OSes work perfectly out of the box. (see log snippets at the end)

    As mentioned, I requested a switch to PV and have been very happy with it, apart from the minor network dropouts and the glitch above. I may still play around with Xen 3.4 HVM when I have the time, but for now I recommend that folks not use your Xen-HVMs if they plan to run contemporary kernels and don't want to spend oodles of time trawling the xen-devel lists and building a custom kernel, since simply adding the modules to the initrd image certainly does not work for the network part.
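As a rough sketch, here's how one might confirm from inside the guest whether the PV frontends actually bound after boot (module names are the mainline Linux ones; on many distro kernels they're built in rather than loaded as modules, in which case only the dmesg lines show up):

```shell
#!/bin/sh
# Look for evidence that the Xen PV frontends took over from the emulated
# devices. If netfront/blkfront never appear, you're still on the emulated
# NIC and disk.
dmesg | grep -iE 'xen|netfront|blkfront' | head -n 20

# When built as modules rather than into the kernel, they show up here:
lsmod | grep -E 'xen_netfront|xen_blkfront'

# Block devices named xvd* (rather than sd*/hd*) also indicate PV disks:
ls /dev/xvd* 2>/dev/null || echo "no PV block devices found"
```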


    Here's the difference between the two Xen-HVMs from the installation dmesg:

    VPS6/Xen 3.4:

    [    0.000000] Linux version 3.2.0-19-generic-pae...
    ...
    [    0.000000] Hypervisor detected: Xen HVM
    [    0.000000] Xen version 3.4.
    [    0.000000] Xen Platform PCI: I/O protocol version 1
    [    0.000000] HVMOP_pagetable_dying not supported
    

    Amerinoc/Xen 4.1:

    [    0.000000] Linux version 3.2.0-19-generic-pae...
    ...
    [    0.000000] Hypervisor detected: Xen HVM
    [    0.000000] Xen version 4.1.
    [    0.000000] Xen Platform PCI: I/O protocol version 1
    [    0.000000] Netfront and the Xen platform PCI driver have \
    been compiled for this kernel: unplug emulated NICs.
    [    0.000000] Blkfront and the Xen platform PCI driver have \
    been compiled for this kernel: unplug emulated disks.
    
  • vps6netvps6net Member
    edited April 2012

    @quirkyquark Thanks for your in-depth view of things, I appreciate it.

    In truth, our Xen HVM service is not really out of the gate yet, and is still very much in development. We don't even advertise it on our website yet!

    The reason for this is that we have found RHEL 5 + Xen 3.4 to be an incredibly reliable platform, while I would still consider Xen 4.1 to be a few steps away from being suitable for wide-scale deployments.

    That said, our next major release is scheduled to be a "Xen HVM-ISO" line, which will feature Xen 4.1 and full PV-on-HVM compatibility.

    A small aside: Due to the issues you mentioned, our typical practice is to assign the emulated Intel E1000 to all Xen guests. It's a bit strange that either of you was using the Realtek NIC to begin with, so I'd like to note for any current or future clients of ours that we can modify this setting upon request.

  • @vps6net said: That said, our next major release is scheduled to be a "Xen HVM-ISO" line, which will feature Xen 4.1 and full PV-on-HVM compatibility.

    Glad to know Jeremy, I look forward to it!

    @vps6net said: A small aside: Due to the issues you mentioned, our typical practice is to assign the emulated Intel E1000 to all Xen guests. It's a bit strange that either of you were using the Realtek NIC to begin with, so I'd like to make a note to any current or future clients of ours that we can modify this setting upon request.

    From my chat with Chris at the time, it seemed I was the first Xen-HVM client in Los Angeles (he had to spend an hour+ setting things up). That may explain the RTL8139 choice. It seems your support techs may also be unaware of the emulated NIC issue:

    I believe your config with the RTL8139 as eth1 will function, and the timeout problems are caused by a lingering network problem that we have noticed since the Los Angeles migration. We can hopefully update you about this soon, otherwise, our network team may need to reconfigure your VM's host server to get this working for you.

    Regards,

    Pavel Evdokimov ([email protected])
    L3 Support Engineer :: VPS6.NET, LP
    www.vps6.net
