EvoBurst/VirtWire Global Review - Page 5

Comments

  • Woah... What happened to Virtwire? They used to be recommended...

  • simonindia Member
    edited August 2016

    @SpicyPepper said:
    Woah... What happened to Virtwire? They used to be recommended...

    They are still the same; it's just some negative reviews from unsatisfied customers, same as most providers get. Some customers aren't reasonable enough to work with them. It's not anyone's fault, it's just how it is.

    I have had services with Virtwire (Evoburst) since Dec 2014 with no issues, so they are good ones in my books.

  • Providing you don't need support: 10/10.

    Need support? No chance. If it's software, post it here. If it's hardware, wait 48 hours; no luck? Give up hope or order a new one.

  • The Netherlands server is an absolute joke; all 5 of my servers in other locations are just fine.

    :/

    Thanked by 2: Frecyboy, Jarry
  • No issues on my boxes in NL over here. I haven't been monitoring them closely, though, as I just run one small low-traffic website on them. Benchmarks seem fine.

  • ajgarett Member
    edited August 2016

    Maybe you're not on NL01? It's been a shitshow for me, but only in the past ~4 months. First ~month was fine, but now:

    [root@jenkins ~]# ioping .
    4 KiB from . (ext4 /dev/ploop---p1): request=1 time=49.7 ms
    4 KiB from . (ext4 /dev/ploop---p1): request=2 time=1.56 s
    4 KiB from . (ext4 /dev/ploop---p1): request=3 time=20.7 ms
    4 KiB from . (ext4 /dev/ploop---p1): request=4 time=1.53 s
    4 KiB from . (ext4 /dev/ploop---p1): request=5 time=68.3 ms
    4 KiB from . (ext4 /dev/ploop---p1): request=6 time=57.6 ms
    4 KiB from . (ext4 /dev/ploop---p1): request=7 time=50.5 ms
    4 KiB from . (ext4 /dev/ploop---p1): request=8 time=1.89 s
    ^C
    --- . (ext4 /dev/ploop---p1) ioping statistics ---
    8 requests completed in 12.3 s, 1 iops, 6.11 KiB/s
    min/avg/max/mdev = 20.7 ms / 654.3 ms / 1.89 s / 787.3 ms
    

    I previously got 5 NAT VPSes from VirtWire which were good, so when Ryan was running an awesome $20/year special with 60GB of HDD, 4x CPU cores, and 4GB of RAM (the EvoBurst-4G-AF-2016 plan that's still available, albeit at $30), I got one as a Jenkins build box.

    Then the I/O slowdowns started happening. I didn't care too much because Jenkins still completed my jobs.

    Then Jenkins started getting killed regularly. My monitoring said I was always <1GB RAM used (of my 4GB), and <1% CPU, so I was massively confused. I figured out that anything with "java" in the process name would get killed, which is a problem because Jenkins is Java-powered. (Symlinked /usr/bin/jv to /usr/bin/java, changed the systemd conf to use jv, and everything ran fine.)
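
    In case anyone wants to copy that dodge, here's roughly what it looks like as commands. A sketch only: the jv name is just the arbitrary alias I picked, and the JENKINS_JAVA_CMD line assumes the stock RPM packaging that reads /etc/sysconfig/jenkins; adjust for however your Jenkins actually gets launched.

    # Alias the java binary under a name that doesn't contain "java"
    ln -s /usr/bin/java /usr/bin/jv
    # Point the Jenkins launcher at the alias
    # (JENKINS_JAVA_CMD is the variable the packaged init config uses on RPM installs;
    #  if your unit hardcodes ExecStart or a wrapper script, edit that instead)
    echo 'JENKINS_JAVA_CMD="/usr/bin/jv"' >> /etc/sysconfig/jenkins
    systemctl restart jenkins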

    Opened a ticket to ask what was up (Minecraft people abusing the node?) and the ticket was closed without an answer; the only active announcement was for LA nodes, which mine wasn't. Jenkins stopped getting killed though, so, success?

    Except now Jenkins (and other processes) continue to get killed at random times, without rhyme or reason.

    [root@jenkins ~]# journalctl -xe -u jenkins --since "2016-07-06" --until now|grep "main process exited, code=killed, status=9/KILL"
    Jul 06 09:45:57 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 15 00:13:05 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 15 00:13:37 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 15 00:14:10 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 15 00:14:43 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 16 12:43:16 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 16 12:43:47 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 22 00:07:34 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 22 00:08:06 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:43:51 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:44:23 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:44:56 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:45:28 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:46:01 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:46:31 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:47:03 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:47:35 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:48:05 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:48:36 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:49:07 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 24 12:49:38 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 28 01:18:56 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 28 01:19:29 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 28 01:20:09 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 28 01:21:00 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Jul 28 01:21:34 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:43:22 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:43:54 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:44:25 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:44:56 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:45:35 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:45:58 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:46:30 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:47:02 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:47:33 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:48:05 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:48:36 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:49:07 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:49:38 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 04:50:12 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 11:41:12 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 11:41:43 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    Aug 03 11:42:14 jenkins systemd[1]: jenkins.service: main process exited, code=killed, status=9/KILL
    

    See the Aug 03 messages? Jenkins got killed 14 times in under 7 minutes. I have systemd set to auto-restart Jenkins after 30 seconds, so for those ~7 minutes something was pitching a fit on the host side. Maybe OOM? Jenkins does use ~500MB of RAM, so it's an easily selected target.
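
    For reference, the auto-restart bit is nothing exotic, just a restart policy on the unit. A minimal sketch of the kind of drop-in I mean (the file path and name are illustrative):

    # /etc/systemd/system/jenkins.service.d/restart.conf
    [Service]
    Restart=always
    RestartSec=30
    # then: systemctl daemon-reload && systemctl restart jenkins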

    If so, that means the node is so oversold that it regularly hits an OOM situation, which doesn't bode well for any sort of client balancing.
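
    Since these are OpenVZ/ploop containers, one rough way to tell whether the host is refusing memory (as opposed to my own kernel OOM-killing things) is to watch the failcnt column in the beancounters. A quick sketch, assuming /proc/user_beancounters is exposed inside the container:

    # Print any beancounter row with a non-zero failcnt (last column);
    # failures on privvmpages/oomguarpages point at host-side memory pressure
    awk 'NR > 2 && $NF > 0' /proc/user_beancounters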

    At this point I've pretty much given up on VirtWire. Only have non-critical services there now. It's a pity, considering that I love the grandfathered NAT plan I'm on.

    Something something you live long enough to become the villain.

    Thanked by 2: kkrajk, Jarry
  • I have similar problems with my VPS on NL01. I could live with low I/O speed and high I/O ping, but randomly killing processes in my VPS (apache, mysql, varnish, etc.)? WTF is that? And the support dickhead said it was my problem and closed my ticket...

  • Mhh... I have a couple of smaller plans and the 4G offer as well, all on NL1. I'm only actively using one of the smaller plans, which seems to be doing fine. No weird killing of processes. Just running caddy/php/MariaDB though.

    The others I'm not monitoring as it's just for development purposes. Thanks for the info though. Will keep an eye on them to see if there's anything weird going on.

    Support has always been nice and friendly. I travel a lot so my order got flagged; they got it fixed in minutes. The only issue I ever had was when a VPS got suspended, but that was due to a coding error in a script I was running. Got things up and running again after debugging, without a long wait.

    Anyway, for what I'm paying I guess they're ok but I'm curious how the 4G plan turns out. The given resources should be enough to support a more intensive setup. I'll move something over in the next couple of days and see how things work out :)

  • I have a VPS with them, 1 GB RAM in Miami (QuadraNet), but it's unusable: the I/O is so low that even a simple update takes an eternity. I will let it expire.
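
    For anyone who wants to check their own box, this is the kind of quick spot check I mean by "low I/O" (the testfile name is just an example, and the dd run writes about 1GB, so mind your disk space):

    # Disk latency, then sequential write throughput
    ioping -c 10 .
    dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync; rm -f testfile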

  • @Autosnipe - care to answer some of these reviews?

  • @ScienceOnline said:
    I have a VPS with them, 1 GB RAM in Miami (QuadraNet), but it's unusable: the I/O is so low that even a simple update takes an eternity. I will let it expire.

    Open a support ticket.

  • K4Y5 Member
    edited August 2016

    @ajgarett said:
    Maybe you're not on NL01? It's been a shitshow for me, but only in the past ~4 months...

    Yeah, I bought the same 4GB VPS in NL to use for testing some code and running an instance of GitLab (hah!). Needless to say, things have gotten so bad that I can't even install packages without delays or move files around; forget about running a script and getting any real work done.

    Opening another ticket would probably be a waste of time, as it'll likely get closed with some smartass remark about running some specific benchmark test, reading a message/announcement about ongoing performance issues on said node, or reading some footnote on the nondescript ToS page, while completely ignoring the obvious problem(s).

    At this point, I'd be happy to just get a pro-rated refund and run, because the problem is not with my setup or usage pattern (if one could even call it that, considering the unusable state of the VPS), or with that of the tens of unfortunate clients like me on the NL node, but with the provider and the heavily oversold node.

    @AutoSnipe, comment / help / refund, maybe?

  • @K4Y5 said:
    @AutoSnipe, comment / help / refund, maybe?

    He's got enough stuff on fire right now that you'll be waiting a while for a response, I think :P

    The last announcement I saw was about DDoS attacks, and the billing panel is currently down. ¯\_(ツ)_/¯

  • my one and only comment for here is that this is not, nor ever will be a support channel.

  • Kobe Member
    edited August 2016

    @SpicyPepper said:
    Woah... What happened to Virtwire? They used to be recommended...

    Virtwire and Evoburst are somewhat borderline-decent services, just run by incredibly rude and often incompetent staff members who often act maliciously or harshly, on the basis that providing budget services somehow entitles them to be arseholes.

    Granted, my VPS services were comparatively acceptable for some period of time; it was the customer service side of things that really ruined it for me. I've never had any problems with any VPS host except Virtwire.

    The truth is, you'll have absolutely no problems with Virtwire if you leave your VPS idle. If you actually use your server for something, you're in for a rough time over the long term, filled with petty annoyances and rude replies that subtly tell you to go fuck yourself, until you eventually give up.

  • @AutoSnipe said:
    my one and only comment for here is that this is not, nor ever will be a support channel.

    You are damn right about that! You cannot close comments here the way you can close tickets on your support channel...

  • AutoSnipe Member
    edited August 2016

    That's not a problem, @Jarry. Would you mind reading the last reply you sent in the ticket system, preferably the first paragraph of the last sentence?

    But, all in all, it does not matter, as I won't participate in offering any sort of support on a public forum.

    P.S. I can write in bold too! I'm not sure if you are putting it there to be menacing or what, but...

  • @AutoSnipe said:
    my one and only comment for here...

    I knew this promise was too good to be true...

  • I am not a VirtWire customer, and never have been.

    What I can say is that @AutoSnipe has almost always been extremely unprofessional to potential customers, and most importantly to the ones already paying him. I wouldn't get near his service with any project I cared about at all. This thread simply proves it.

    I don't need to be a customer to know that, and thank God.

    Thanked by 1: SpicyPepper
  • @daily

    As I said above, 10/10 if you don't need support. If you need support, just cancel and go home.

  • The last update time of the announcement about BVZ-LA02 is Jul 30.
    Tickets are unresponsive.
    I don't know what happened, deadpool?

  • I am luckily not a customer. I sense the same thing is happening here as what happened to SeFlow... Another 'once' reputable company turned (from what I am reading here) bad...

  • Nekki Veteran

    @cgs3238 said:
    The last update time of the announcement about BVZ-LA02 is Jul 30.
    Tickets are unresponsive.
    I don't know what happened, deadpool?

    Given that the provider replied to the thread a couple of days ago, no, not deadpool :-/

  • My guess is they have simply sold too many of those cheap high-spec VPSes and are now not able to handle it. A "4GB RAM, 4 CPU, 4TB transfer" VPS for $30/y (initially as a promo for $20/y with 2x IPv4!) sounds great, but imho is not sustainable...

    There's not much support can do about badly oversold hardware. They can just sit and wait until enough customers leave, or at least let their VPSes sit idle; that could improve the situation for the rest...

  • matteob Barred
    edited August 2016

    @SpicyPepper said:
    I am luckily not a customer. I sense the same thing is happening here as what happened to SeFlow... Another 'once' reputable company turned (from what I am reading here) bad...

    Yeah, this is why we stopped doing offers and trying to take customers here: too many kids and abusers. (Not all, but prices that are too cheap attract that kind of person, and we're targeting enterprise-quality customers.)

  • K4Y5 Member
    edited August 2016

    matteob said: Yeah, this is why we stopped doing offers and trying to take customers here: too many kids and abusers. (Not all, but prices that are too cheap attract that kind of person, and we're targeting enterprise-quality customers.)

    Doesn't look good coming from you, considering it was your decision to advertise here and get customers to help you break even.

    Take responsibility for your decision to post an offer here, like a grown up would, and quit trying to portray it as if someone pointed a gun to your head and forced you to fish for customers here at LET.

    Besides, with eloquence and maturity such as yours, you're going to need a lot more than just luck to hold on to those 'enterprise quality' customers that you may (or may not) have.

    Thanked by 1: sandro
  • SpicyPepper Member
    edited August 2016

    @matteob said:

    Yeah, this is why we stopped doing offers and trying to take customers here: too many kids and abusers. (Not all, but prices that are too cheap attract that kind of person, and we're targeting enterprise-quality customers.)

    You are in no position to talk, because you have threatened multiple people, on multiple occasions, with starting a police investigation against them just because they didn't like your services. I bet your company will deadpool soon because your prices keep getting higher.

  • @SpicyPepper said:

    You're right, man; this is why supernap/switch chose us as a mitigation provider in their datacenters, and we're projecting >10 million in revenue this year.

    But that is another story and we're off topic.

  • @matteob said:

    You're right, man; this is why supernap/switch chose us as a mitigation provider in their datacenters, and we're projecting >10 million in revenue this year.

    But that is another story and we're off topic.

    Nobody said your services were bad. We only complain about your attitude towards your customers and how you always threaten to launch a police investigation whenever someone is unhappy with your services.

    matteob said: You're right, man; this is why supernap/switch chose us as a mitigation provider in their datacenters, and we're projecting >10 million in revenue this year.

    I could also make 10 million in revenue with a 9.99 million loss per year; the profit would be far more interesting. Revenue does not mean anything.

    Thanked by 1: Falzo