
30 day review/first impression of VULTR.com

myhken Member
edited March 2014 in Reviews

It seems like 2014 will be the "cloud" year; more and more cloud providers are coming. VULTR is not a real cloud service, but it offers cloud-like features, the same way DigitalOcean does. The difference is that VULTR has more locations (coming soon) and offers better prices on some packages. Here is my review of VULTR so far. (see the complete review here with all images)

Product/hardware/price

VULTR offers "fake" KVM cloud instances with SSD storage (like DigitalOcean), so you only get some cloud features, like hourly billing, quick deployment, many locations, etc. VULTR has promised that more features will come in the next months.

VULTR has the same plans as DigitalOcean, but the price is a little lower on some of the plans. Take a look at these pictures to see the plans, prices and locations.

As you can see, VULTR plans to offer 12 locations around the world soon; that's three times more locations than DigitalOcean has.

VULTR guarantees more than 3 GHz CPUs on all their plans (except their storage plans in NY), so you get fast CPU power on all servers. They offer SSD storage as standard, but you don't get RamNode speeds, more like 3-460 MB/s. Still, I can't complain about that.
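
To put those MB/s numbers in context: they presumably come from a simple sequential disk test. A rough equivalent, purely as a sketch (the scratch path and ~1 GiB size are arbitrary choices, and a buffered write plus fsync only approximates what a typical dd test measures), could look like this:

    # Crude sequential-write test: write ~1 GiB, fsync, report throughput.
    # The scratch path and size are arbitrary choices for illustration.
    import os
    import time

    PATH = "/tmp/disktest.bin"
    BLOCK = b"\0" * (1 << 20)   # 1 MiB block
    COUNT = 1024                # ~1 GiB total

    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start

    print("%.1f MiB/s sequential write" % (COUNT / elapsed))
    os.remove(PATH)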

Their hardware seems solid, and using CentOS 6.5 and Virtualmin gives me really good results. The servers (both France and Netherlands) work really fast.

You can deploy new servers within minutes, depending on which location you are deploying in. It's really simple, as you can see in the picture above: you just select location, package and OS, and then your server is online.

Service/Support

Their service seems really good. They have already run several good offers, like "double your first payment" and 2x RAM coupons last weekend. They are active on Twitter and on LET ( @DaveA ).

I have opened one support ticket in this period, and they took around 30 minutes to reply to it. Not the best, nor the worst, reply time for an unmanaged host.

Benchmark

France

Netherlands

Great speeds internally in Europe, and OK speeds to the US from my Netherlands and France servers.

Network/uptime

I have had 100% uptime on my Netherlands server and on my France server over the last 30 days.

Conclusion

If you need a host that offers lots of locations, rapid deployment and hourly billing, take a look at VULTR.com. They also promise to deploy lots of new features soon that will take them closer to being a "real" cloud provider.

They do compete with DigitalOcean, but if they deploy everything they say they will, they can actually give better service than DO.

They offer fast servers, good network speeds, great prices, many locations, hourly billing and more, and I recommend them. Of course, the best thing might be to wait a while until they get all their locations online. But if you can live with the current locations, go for them now.

(see the complete review here with all images)


Comments

  • Well, 30 min support is very fast, I think. It's not just mediocre.

  • @trexos said:
    Well, 30 min support is very fast, I think. It's not just mediocre.

    It's not bad, but since I only have one support ticket, I can't say anything about the average support time.

  • Brad Member

    Thanks for the brief review. I currently have one of their Tokyo plans. So far, I haven't had a chance to test anything yet. My first ticket, filed today, asking when their "coming soon" locations will be available, received a response in 15 minutes.

  • tchen Member

    I kinda wish they'd double up DCs at some of the primary locations like NY and NL instead of just trying to spread themselves wide.

  • udk Member

    @Brad, what was their response? I'm curious too.

  • Brad Member

    @udk (I also asked them if they were still going to provide DDoS protection) Here was their response:

    Hello,

    We are currently not offering ddos protection. We cannot say when Australia will be available, please ensure to sign up for a notification when it comes online.

  • Lee Veteran

    They seem to have sold out of all the non-primary/French locations within hours of launch and nothing has been back in stock since. Either provide them or don't.

  • France has been available from time to time; it's available today. Tokyo was also available for some hours today. I'm betting a location comes online when somebody deletes their VPS from it.
    All other locations have never been available as far as I have seen.

  • One month is not enough to make a complete review of the service; a bit more time should pass. As you can see yourself, yesterday they enabled speed limits on all VPSes without any notice to clients.

  • @alexvolk Hence the "30 day review/first impression" title. I usually follow up with a 3-6 month review, if I'm still using the host.
    I was not aware of the speed limits when I posted the review, and it is not a positive change. Still, VULTR can be a good "cloud" provider if they really deploy all of their features and locations.

  • @alexvolk

    As you can see yourself, yesterday they enabled speed limits on all VPSes without any notice to clients.

    They have not reduced the speed on my France server, only in the Netherlands.
    Have they reduced speed in the US also?

  • @myhken said:
    @alexvolk
    Have they reduced speed in the US also?

    Well, I have tested the speed on a France VPS as well.

  • myhken Member
    edited March 2014

    @alexvolk said:

    >

    And? I just tested again on my France node, and I still have a 1 Gbit connection there.

    System uptime: 9 days, 2:37
    **Download speed from CacheFly: 64.6 MB/s**
    Download speed from Coloat, Atlanta GA: 13.3 MB/s
    Download speed from Softlayer, Dallas, TX: 15.2 MB/s
    Download speed from Linode, Tokyo, JP: 5.75 MB/s
    **Download speed from i3d.net, Rotterdam, NL: 57.1 MB/s**
    **Download speed from Leaseweb, Haarlem, NL: 65.6 MB/s**
    Download speed from Softlayer, Singapore: 6.24 MB/s
    Download speed from Softlayer, Seattle, WA: 12.3 MB/s
    Download speed from Softlayer, San Jose, CA: 10.7 MB/s
    Download speed from Softlayer, Washington, DC: 22.3 MB/s

  • @myhken said:

    Could you please check the speed on a newly created server in France? It looks like it's limited only on new servers, except in the Netherlands.

  • udk Member

    A new server in NJ is gigabit (at least from CacheFly; I couldn't get higher than ~3 MB/s anywhere else).

  • What's the test script? I'll run it from one of my old Chicago VPSes and try it. (A rough sketch of this kind of test is included below.)

  • Edit: Looks like they restricted existing VPSes too :( 11.2 MB/s from CacheFly on a Chicago VPS.
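
    (Not the actual script used above, but a minimal download-speed check along those lines might look like the Python sketch below; the CacheFly test-file URL is an assumption based on what common bench scripts use, not something confirmed in this thread.)

        # Rough single-source download-speed check (a sketch, not the thread's script).
        # The CacheFly 100 MB test-file URL is an assumption.
        import time
        import urllib.request

        URL = "http://cachefly.cachefly.net/100mb.test"

        start = time.time()
        data = urllib.request.urlopen(URL, timeout=120).read()
        elapsed = time.time() - start

        mb = len(data) / 1e6
        print("Downloaded %.1f MB in %.1f s -> %.1f MB/s" % (mb, elapsed, mb / elapsed))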

  • Brad Member

    57.7 MB/s to Cachefly from their Tokyo location.

  • edan Member
    edited March 2014

    Snapped up their Tokyo location and so far so good; I like it more than DO Singapore. Fast network, and the doubled credit/RAM are great offers.

  • It will be interesting to see how they handle capacity planning and resource throttling: whether they pull the cliché "see TOS, don't use a core for more than X hrs", or whether they can pull off the DO/Linode style where only extreme/deliberate resource abuse triggers a call to action. It will probably be many months before we can gauge this; not only do they have to fill up their nodes, but the nodes also need to be used (migration of existing sites/services, development of greenfield projects).

  • Maounique Host Rep, Veteran

    Their deployment model makes capacity planning easy, after all: they don't need to factor in redundancy, for example, nor offer arbitrary disk or RAM plans, etc. The CPU allocation seems very conservative, giving just a few cores, so there is likely reserve capacity.

    This was well thought out, probably after carefully studying DO.

  • nonuby Member
    edited March 2014

    @Maounique said:
    Their deployment model makes capacity planning easy, after all: they don't need to factor in redundancy, for example, nor offer arbitrary disk or RAM plans, etc. The CPU allocation seems very conservative, giving just a few cores, so there is likely reserve capacity.

    This was well thought out, probably after carefully studying DO.

    I mean more in terms of having the correct governors/throttling settings in place to ensure a worst-case performance metric (instead of an average or day-one metric) once a node is full and the guests are active. Linode/DO both have the deployment numbers and the experience from a vast number of deployments to get this right (or as close as possible). The number of virtual cores doesn't matter; you can give 8 virtual cores (à la the "wow, Linode gave me a free upgrade" nonsense), or 16, or 1, and it has very little effect on managing the host node or on ensuring all guests have a minimum performance expectation.

  • Maounique Host Rep, Veteran
    edited March 2014

    nonuby said: has very little effect on managing the host node.

    Yes and no. If they use E3s, as it looks like they do, they can have at most 8 threads per node, so it is imperative that they do not give any one instance too many cores: an abuser, even with the correct throttling, will still take a lot of the CPU's power because there are only so many cores available. They must steer clear of miners, among other things, by making mining obviously ineffective, and it looks like they do that.
    By giving a low number of cores compared with the other resources, but at a high clock rate, they can balance this effectively, at least I think so. It shows they did some thinking beforehand, based on DO's usage and growing pains.
    It is probably possible to create complex scripts for throttling, but they will likely break every now and then and serious customers will suffer; that is to be avoided, because they seem to target customers without perpetual free credits, unlike DO, for instance.
    I think they thought it out well; it remains to be seen how it will pan out in the end.

  • nonuby Member
    edited March 2014

    The virtual cores are not a 1-1 mapping to physical cores. I can attempt to simultaneously use all 8 allocated virtual cores on my cheapest Linode, but the reality is I won't see true 8-way parallelism, at least not at full clock rate on the underlying E3/E5 CPU (say I built something CPU-bound that could take full advantage of it, e.g. in golang), due to the priority settings on the host. That makes reality quite different from what appears in /proc/cpuinfo, and it is what enables Linode to pull the free 4-core to 8-core upgrade without moving host nodes. It's the same thing that allows AWS to permit mining on a $17.60 Xen instance without caring or even batting an eyelid. Simplifying my original thought, it boils down to: can they deliver a "lower bound" at five nines?

    A simple way to think of it (a tiny worked example with made-up numbers follows below):

    a) Capacity = clock rate * real host cores

    b) Your lower-bound performance = (a / (number of guests * average priority of all guests)) * your priority

    c) Your lower-bound performance per core = b / number of virtual cores assigned

    b remains static. Virtual cores - because it's all bullshit (tm).

    The hypervisor has to do whatever is required (interleaving guests to yield the illusion of a lower clock rate per guest, "stealing" CPU time as the kernel sees it, etc.) to ensure that the lower bound is delivered. This is why AWS is popular with growing startups; most LEB hosts focus on the average or best case, which from an engineer's perspective is a non-starter.

    Note: I'm not familiar with OpenVZ container virtualization, so perhaps this is different, but given the flood of CPU abuse complaints, and the silly TOS clauses about not hogging a core for long, either it's fundamentally flawed or SolusVM/whatever doesn't set it up right.
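
    As a rough illustration of the arithmetic above, with made-up numbers (everything below is hypothetical, only showing how the lower bound falls out of a, b and c):

        # Hypothetical numbers only, illustrating the lower-bound formulas above.
        clock_ghz = 3.4        # a) host clock rate, GHz
        host_cores = 8         # a) real threads on an E3-class host
        guests = 40            # assumed number of guests packed on the node
        avg_priority = 1.0     # assumed average scheduling weight of all guests
        my_priority = 1.0      # this guest's scheduling weight
        my_vcores = 2          # virtual cores assigned to this guest

        capacity = clock_ghz * host_cores                               # a)
        lower_bound = capacity / (guests * avg_priority) * my_priority  # b)
        per_core = lower_bound / my_vcores                              # c)

        print("Lower bound: %.2f GHz total, %.2f GHz per virtual core"
              % (lower_bound, per_core))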

  • tchen Member

    @Maounique said:
    It is probably possible to create complex scripts for throttling, but they will likely break every now and then and serious customers will suffer.

    Or you could do it more easily and just build the kernel with the CPU bandwidth control cap :P GoodHosting was messing with it earlier on the other board, with good results.
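
    (For reference, the "CPU bandwidth control cap" mentioned here is the kernel's CFS bandwidth controller, CONFIG_CFS_BANDWIDTH, exposed through the cpu cgroup. A minimal sketch of capping one group of processes, assuming a cgroup v1 cpu hierarchy mounted at /sys/fs/cgroup/cpu, root privileges, and a made-up group name:)

        # Sketch: cap a cgroup to roughly half a core via CFS bandwidth control
        # (cgroup v1 cpu controller). The group name and exact paths are assumptions.
        import os

        CG = "/sys/fs/cgroup/cpu/vps_demo"   # hypothetical per-guest cgroup

        os.makedirs(CG, exist_ok=True)

        def write(name, value):
            with open(os.path.join(CG, name), "w") as f:
                f.write(str(value))

        write("cpu.cfs_period_us", 100000)   # 100 ms accounting period
        write("cpu.cfs_quota_us", 50000)     # 50 ms of CPU time per period (~0.5 core)
        write("tasks", os.getpid())          # move this process into the capped group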

  • Maounique Host Rep, Veteran

    Xen has an excellent way of sharing CPU. This is why we built our largest servers on Xen: it does this great. KVM is not as good, but it can be tweaked. OVZ is miserable, but I presume that is because of the massive switching that occurs; adding some throttling to it does not really help, you probably lose more than you gain overall by increasing the switching and its latency.

    I know the theory; it works with a few VPSes and smaller servers, but as the numbers increase and the servers grow, both in number of threads and in number of VMs, it starts to degrade, of course: first on OVZ, then on KVM and last on Xen.

    One reason high-RAM OVZ plans are offered is that small ones create a much higher chance of race conditions and softlocks due to the number of processes, while fewer containers with larger RAM allow the size of the server to increase while keeping the number of threads under control. In theory, it should behave the same whether you have 100x 1 GB RAM VMs or 25x 4 GB RAM ones with a similar total load, but in reality this is not true: there are massive load spikes, and servers become unresponsive or plainly locked due to various issues generated by the thousands of processes and the crazy switching. Add some throttling to this and it all crumbles. It was less of an issue on .18 kernels, though.

    Having 256 GB RAM Xen servers with the same CPU power that can barely drive 64-128 GB OVZ ones, with rock-solid stability and happy customers, shows the difference, even if the prices are not so different once we consider all aspects: not only RAM, but also storage, cores and traffic.

  • kyaky Member

    @Maounique said:
    Xen has an excellent way of sharing CPU. This is why we built our largest servers on Xen: it does this great. KVM is not as good, but it can be tweaked. OVZ is miserable, but I presume that is because of the massive switching that occurs; adding some throttling to it does not really help, you probably lose more than you gain overall by increasing the switching and its latency.

    I know the theory; it works with a few VPSes and smaller servers, but as the numbers increase and the servers grow, both in number of threads and in number of VMs, it starts to degrade, of course: first on OVZ, then on KVM and last on Xen.

    One reason high-RAM OVZ plans are offered is that small ones create a much higher chance of race conditions and softlocks due to the number of processes, while fewer containers with larger RAM allow the size of the server to increase while keeping the number of threads under control. In theory, it should behave the same whether you have 100x 1 GB RAM VMs or 25x 4 GB RAM ones with a similar total load, but in reality this is not true: there are massive load spikes, and servers become unresponsive or plainly locked due to various issues generated by the thousands of processes and the crazy switching. Add some throttling to this and it all crumbles. It was less of an issue on .18 kernels, though.

    Having 256 GB RAM Xen servers with the same CPU power that can barely drive 64-128 GB OVZ ones, with rock-solid stability and happy customers, shows the difference, even if the prices are not so different once we consider all aspects: not only RAM, but also storage, cores and traffic.

    Your XenPower products and the ones from Dallas have been very stable. I'm running over 30 sites on your VPSes. Stable and fast.

  • The IP issue was solved, but still no reply on the reduced network speed. Support is not working very fast at VULTR.

  • Reply from support now about the reduced speeds:

    Hello,

    The Vultr network will be the highest performance instance you can buy in our price range when we complete our roll out. Instances in certain locations are limited to provide a consistent quality product across all our nodes. When we complete the roll out in the coming days we will provide the fastest performance (GigE and 10GbE) of anyone out there. Thank you for your understanding.

    Thank you,

    Mike Marinescu

    Nothing about why they have reduced the speed, nothing about fixing it now, just that it will be better later on.
