Bad experience from BlueVM

Comments

  • DomainBop Member
    edited April 2013

    As far as @colm, it is really your responsibility to get iptables to run correctly on your vps!??

    I think he meant that not all of the required iptables modules were installed (see the example below), in which case the responsibility for adding the modules lies with the host, not the customer, since this is an OpenVZ VPS. This is a fairly common problem with OpenVZ hosts.

    Results from an OVH 2013 Classic VPS:
    Testing ip_tables/iptable_filter...OK
    Testing ipt_LOG...OK
    Testing ipt_multiport/xt_multiport...OK
    Testing ipt_REJECT...OK
    Testing ipt_state/xt_state...OK
    Testing ipt_limit/xt_limit...OK
    Testing ipt_recent...FAILED [Error: iptables: No chain/target/match by that name.] - Required for PORTFLOOD and PORTKNOCKING features
    Testing xt_connlimit...FAILED [Error: iptables: No chain/target/match by that name.] - Required for CONNLIMIT feature
    Testing ipt_owner/xt_owner...OK
    Testing iptable_nat/ipt_REDIRECT...FAILED [Error: iptables: No chain/target/match by that name.] - Required for MESSENGER feature
    Testing iptable_nat/ipt_DNAT...OK
    
    RESULT: csf will function on this server but some features will not work due to some missing iptables modules [3]
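
    For reference, the output above comes from csf's bundled compatibility test; the script path can vary by csf version:

    perl /etc/csf/csftest.pl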

    2) Host node kept running out of disk space.

    Otherwise known as massive overcommitment. Happened to me at HostSlim. Node hit 0GB available, entire node crashed minutes later. My cancellation followed shortly after because moving to a new node is only a temporary solution when a host is overselling to such a degree that a node runs out of disk space.
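
    A node should never get that close to full before anyone notices. A minimal host-side guard, as a sketch only: the /vz path, the 90% threshold, a working mail setup, and the alert address are all assumptions.

    # Cron this every few minutes on the host node:
    usage=$(df -P /vz | awk 'NR==2 {gsub(/%/,""); print $5}')
    [ "$usage" -ge 90 ] && echo "/vz is at ${usage}% on $(hostname)" \
        | mail -s "disk space alert" admin@example.com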

  • @biplab said: BlueVM is unable to enable correct iptables modules on the host node. End result, firewall is crippled inside vps.

    This isn't incompetence (considering all it takes is editing one line in vz.conf and adding the module to rc.modules so it loads on boot; a sketch of the fix is below), it's simply been overlooked for some reason.

    Let them know, I'm sure they can fix it up pretty fast :)
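
    A sketch of that host-side change, assuming a standard OpenVZ node; the exact module list varies by kernel and needs:

    # /etc/vz/vz.conf -- extend the IPTABLES= line so containers can use
    # the missing modules:
    IPTABLES="ipt_REJECT ipt_LOG ipt_recent ipt_owner ipt_REDIRECT ip_conntrack ipt_state iptable_nat ipt_multiport xt_connlimit"

    # Load the matching kernel modules now, and add them to the boot module
    # list (e.g. /etc/rc.modules) so they survive a reboot:
    modprobe -a ipt_recent xt_connlimit iptable_nat

    # Restart the container so it picks up the new capabilities; 101 is a
    # placeholder CTID:
    vzctl restart 101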

  • biplab Member
    edited April 2013

    @Wintereise,

    I wish things were that easy for them. They have been unable to solve this since 19 March 2013.

    Edit: They replied several times saying that the modules are enabled. However, they are not. I told them to log into my VPS and see the problem for themselves. They still don't understand the problem.
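
    This is what that failure looks like from inside the container; the connlimit match is one of the modules csf flagged above, and the rule itself is just an example:

    # Any rule touching a module the host hasn't enabled fails the same way:
    iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j REJECT
    # iptables: No chain/target/match by that name.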

  • @Wintereise said: This isn't incompetence (considering all it takes is editing one line in vz.conf and adding the module to rc.modules so it loads on boot), it's simply been overlooked for some reason.

    You make it sound easy. Not aiming this at BlueVM, but a lot of users have dealt with summer hosts that fail.

  • Mun Member

    @colm said: 1) They were never able to get iptables working properly for OpenVZ.

    It is working fine right now.

    @colm said: 2) Host node kept running out of disk space.

    I had more issues with CVPS, and I have never had that happen on any of my 4 BlueVM VPSes.

    @colm said: @ZinnVPS OpenVZ needs to be correctly configured on the host node with iptables modules for it to work inside the VPS.

    I am currently using OpenVPN, which requires iptables; no issues (see the sketch at the end of this comment).

    @concerto49 said: You make it sound easy. Not aiming this at BlueVM, but a lot of users have dealt with summer hosts that fail.

    They aren't a summer host and have been around for a while.
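
    For context on the OpenVPN point above: the iptables dependency there is NAT, and a typical rule is sketched below. 10.8.0.0/24 is OpenVPN's default tunnel subnet, venet0 is OpenVZ's usual interface, and some venet setups need SNAT to the VPS IP instead of MASQUERADE. It fails unless the host has iptable_nat enabled for the container.

    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE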

  • I've always had an open-door policy on email and on LET. If you have a problem that isn't getting the attention it deserves, bring it to my attention and I'll fix it.

  • @Mun said: I am currently using OpenVPN, which requires iptables; no issues.

    You already have the issue ... (Hint: iptables)

  • Mun Member

    @Noerman said: @Mun said: I am currently using OpenVPN, which requires iptables; no issues.

    You already have the issue ... (Hint: iptables)

    Hmm, I need iptables for OpenVPN to work, so I have an issue because I have iptables, and therefore shouldn't have OpenVPN working? ... Wow, great logic.

  • @Mun Security does matter

  • Mun Member

    @Noerman said: @Mun Security does matter

    So you are stating that iptables is a security issue? So is running SSH on port 22.

  • @Mun said: @colm said: @ZinnVPS OpenVZ needs to be correctly configured on the host node with iptables modules for it to work inside the VPS.

    I am currently using OpenVPN, which requires iptables; no issues.

    @Mun I meant that iptables without all of its modules loaded will still function, but some features will not work (the csf result above shows this). That's what I was referring to ...

    @Noerman said: @Mun Security does matter

    And about ports: we should always change default ports, whatever the service (except cPanel; I'm still unable to do it there). A sketch for SSH is below.
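
    A minimal sketch of moving SSH off the default port, assuming a stock sshd_config; 2222 is arbitrary, and the firewall rule must go in before the restart or you lock yourself out:

    iptables -I INPUT -p tcp --dport 2222 -j ACCEPT
    sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config
    service sshd restart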

  • Mun Member

    Exactly, there are certain things you just have to work around instead of complaining about.

    I would love anycast, but I'm small and can't get it.

  • IpDo Member

    A few updates:
    The problem was fixed for a day or so; I got I/O of ~110 MB/s.
    After a day the I/O dropped to 8-20 MB/s, the network was very slow, pings were high, and the CPU is clogged: UnixBench is 200-300 per core of an E5.
    So the server is useless for me.
    I submitted another ticket to support; they said that "it might be the RAID array re-syncing" (might?!).
    But it's been slow for a few days now without any response.
    I submitted a feedback ticket as well; no response either.
    I've asked them to: 1. fix the problem, OR 2. move me to another node, OR 3. refund.
    No reply yet (3 days).
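
    For anyone wanting to reproduce numbers like those, this is the same 1 GB fdatasync write that bench.sh reports as "I/O speed"; run it a few times, since a loaded node swings wildly:

    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync; rm -f iotest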

  • rm_ IPv6 Advocate, Veteran
    edited April 2013

    @Mun said: OP I think you are looking for something that is world class in a LEB price range. You aren't going to find it.

    Linode.com?

    Wow, what a load of b/s you are spreading here. "A random LEB provider is bad => all of them are bad, go use Linode."

    My advice to @IpDo: stop chasing random small providers barely anyone uses or has heard of.

    Here: lowendbox.com/blog/lowendbox-top-providers-2013-q1-results/
    The top three are right there; pick a VPS from any of them. You will find yourself on a top-performing, well-tuned node, and if you have any problems, support responds quickly and cares about you. The reason I can say this is that these providers aren't getting lots of good feedback for nothing. They have built a service that many people have verified works very well, so why not give them a chance? It's not like these are Linode-priced either; they are still in LEB land, just the very best of what it has to offer.

  • @IpDo if it helps, I've seen and been on the receiving end of some of these same issues playing out, though not with BlueVM; I've never been a customer of theirs.
    I think you should know that 99% of the time this happens because of the people put on the front lines as CSMs or for CRM.
    The hosts I've learned to stick with either have their processes so refined that I never see these issues, or keep staff minimal enough that each has their own methods that just work.

    That being said, iptables is just the standard mechanism for setting up a firewall.
    As for the rDNS issues, that's weird. I disagree with some replies here: that really is no big deal. And IP ownership?
    I don't know how that comes into play; it really shouldn't, I mean, ya dig.. But that usually won't get you RBL'd, just rejected.
    I didn't read far into it, but your host can't have blocks being blacklisted. That's really bad; those Microsoft emails are standard RBL alerts.
    I'm guessing you saw a 550; if it really was the host/block, that takes some big-time abuse or misconfiguration.
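
    Two quick checks behind that advice, with 192.0.2.10 standing in for the real IP:

    # rDNS: the PTR should resolve, ideally to the sending hostname:
    dig -x 192.0.2.10 +short

    # DNSBL lookup against Spamhaus ZEN -- octets reversed; any answer
    # back means the IP is listed:
    dig 10.2.0.192.zen.spamhaus.org +short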

  • @BlueVM said: There is no reason your support should have been that bad and I will be taking measures to fix the problem.

    @IpDo said: A few updates:

    The problem was fixed for a day or so; I got I/O of ~110 MB/s.
    After a day the I/O dropped to 8-20 MB/s, the network was very slow, pings were high, and the CPU is clogged: UnixBench is 200-300 per core of an E5.
    So the server is useless for me.
    I submitted another ticket to support; they said that "it might be the RAID array re-syncing" (might?!).
    But it's been slow for a few days now without any response.
    I submitted a feedback ticket as well; no response either.
    I've asked them to: 1. fix the problem, OR 2. move me to another node, OR 3. refund.
    No reply yet (3 days).

    @BlueVM are you sure you are taking steps to fix the problem? :)

  • IpDo Member

    Got a Prometeus OpenVZ a few days ago; it works flawlessly.
    Still, I don't like wasting money for nothing.

  • @IpDo said: Got a Prometeus OpenVZ a few days ago; it works flawlessly.

    Still, I don't like wasting money for nothing.

    It's not worth the stress of complaining; just move on to another provider.

  • @zhuanyi We are investigating.

  • I'm having a different, positive experience with BlueVM. The bench below was taken at about 12:47 PM EST; I have seen I/O as low as 12.x MB/s before, but on a subsequent bench (run right after the first) it was back up to a more than acceptable level.

    [root@zpanel ~]# sh bench.sh
    CPU model :  QEMU Virtual CPU version (cpu64-rhel6)
    Number of cores : 2
    CPU frequency :  2000.000 MHz
    Total amount of ram : 3830 MB
    Total amount of swap : 3967 MB
    System uptime :   15 days, 22:03,
    Download speed from CacheFly: 50.6MB/s
    Download speed from Coloat, Atlanta GA: 31.6MB/s
    Download speed from Softlayer, Dallas, TX: 26.7MB/s
    Download speed from Linode, Tokyo, JP: 7.25MB/s
    Download speed from i3d.net, NL: 7.09MB/s
    Download speed from Leaseweb, Haarlem, NL: 11.3MB/s
    Download speed from Softlayer, Singapore: 5.27MB/s
    Download speed from Softlayer, Seattle, WA: 14.4MB/s
    Download speed from Softlayer, San Jose, CA: 16.6MB/s
    Download speed from Softlayer, Washington, DC: 50.5MB/s
    I/O speed :  77.8 MB/s
    
  • BlueVM Member
    edited April 2013

    @connercg - Is that with or without virtio? I ran a test on the main node (and have been running them since this thread was posted) and the I/O was at 219 MB/s...

  • @BlueVM - that is with the virtio disk driver. I just did another bench and it came out at 105 MB/s. I can open a support ticket if you like, as this really isn't the place.
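
    If anyone wants to confirm which driver their KVM guest is on, a quick check from inside the VM; virtio-blk disks show up as vda rather than sda/hda:

    lsblk -d -o NAME,SIZE,TYPE
    # virtio devices are also visible on the PCI bus:
    lspci | grep -i virtio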

  • @connercg Please do. That'll make it easier to track down.

  • imperio Member
    edited April 2013

    Web (www.*****) is UP again at 29.04.2013. 20:27:32, after 11h 0m of downtime.

    Today's victory: 11 hours of downtime... As usual there is no explanation... I guess it is time to move on...

  • @imperio We're working on figuring out the cause of the downtime. Open a ticket for compensation under our SLA.

  • imperio Member
    edited April 2013

    @Magiobiwan If you are still trying to figure out the cause of the downtime after 12 hours, don't offer customers compensation; instead put a tech on finding the cause and making sure it doesn't happen again..

  • @imperio: Which server are you on?

  • imperio Member
    edited April 2013

    s3.c56.il.bluevm.com OpenVZ

  • My VPS was on the same Illinois node that had the downtime. When the server came back up, I was told on IRC that the issue was caused by an outbound DDoS.

    I bought my BlueVM VPS almost a month ago and had just finished migrating the files from my old provider. Unfortunately, the downtime happened right when I updated the DNS to point to the VPS :-(

    I know these are budget providers and there isn't much staff to monitor everything, but more than 10 hours to track down a problem seems a long time to me. I haven't given up on BlueVM yet; the server performance is fine and I have heard good things about them.
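
    As an aside, outbound floods like that are usually easy to spot from the host or inside the offending container by counting connections per remote IP:

    # One address with hundreds of entries is the classic signature:
    netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head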

  • BlueVM Member
    edited April 2013

    @imperio - My level 1 support wasn't informed of the server's status because his job is to manage sales and billing. That said, here's what happened to the server:

    Last night at about 8:36 PM HST, server 3 in Illinois went down. We investigated and found the server had gone into a kernel panic. After running an fsck we found that one of the disks was malfunctioning and in the process of failing. We contacted CC and had them replace the disk under our 8-hour part replacement policy. The remainder of the time was spent rebuilding the disk array and performing a full fsck on the node.

    Let me know if you need any more details about this. In reality we need to do a better job of communicating when problems arise, and I realize that... it's something we're working on and will improve.
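
    For anyone following along, rebuild progress after a disk swap like that can be watched directly, assuming Linux software RAID; /dev/md0 is a placeholder:

    cat /proc/mdstat
    mdadm --detail /dev/md0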
