Comments
Hetzner has a great offer, not gonna say otherwise. I'm sure if you spend some time thinking about it you'll see that these are two companies with two very different things going on. I'll give you one to start: locations.
If you think on it and none of the differences matter to you, and you prefer the most resources for the least cost, you've always had cheaper options, and today is no different than the day before it. OVH and BuyVM have been right there the whole time. You do what's right for you; I won't think less of you or anything like that. You know I'm a customer of everyone else too.
That's definitely true about locations and it will probably stay that way for now. Hetzner is currently behind in features but I wonder how big a gap there really is.
DO is (up til now) quite competitive for hourly servers, even for cheapskates like me. Even monthly, the $5 base instance isn't that much more than the LET favorite $3.50 BuyVM slice, and droplets have the advantage of always being in stock. DO only gets expensive when you compare the big instances (32GB, say) to budget dedis of the same size. That's where I think Hetzner has disrupted things.
Anyway I'm sure your biz folks have noticed this development and will take it into account in their decision making.
Nah. OVH and BuyVM were not direct competitors, because we are talking about per-hour pricing. The LET/LEB scene really is about good pricing for long-term usage, which DO technically became more relevant to with the price decrease.
OVH has hourly pricing, and most of LET thinks hourly pricing is stupid anyway, so BuyVM is definitely in the running for the average member here.
@jarland Hourly pricing is stupid. Most of us only need a couple minutes!
It's a race, last one to finish is the loser.
And for you DO is doing per-second billing.
The poll shows that I am already on per-second billing.
ba dum tss
I'm sure someone said it was OpenVZ last night. My bad.
The price really depends on what you are valuing. RAM? Sure. CPUs? It's slightly cheaper. And my concern with those prices is that there would be a lot of people per core once this gets large-scale adoption.
I saw OpenVZ in that thread too at some point. Someone did say it, before they replied saying it was KVM.
I wouldn't call the CPUs "slightly" cheaper either. An 8-CPU Hetzner instance is €30/month, while an 8-CPU DO instance is $160. DO does include more SSD space, but you can back a Hetzner instance with a StorageBox or a local dedi, to which DO has nothing comparable.
I think you're right that the unloaded CPUs at Hetzner can't last. We'll see. I don't know what the corresponding situation is for DO's non-optimized instances.
@willie, I use $5 / 1 vCore / 1GB RAM instances for near everything. $36 vs $40 for 8 vCores. Granted, there is 4x the RAM, which would be nice for some of our applications.
Most of the time I prefer to scale out rather than up; you can usually integrate HA when you do that, plus scaling becomes easy.
I'd like to know if you actually can run eight $5 instances at full CPU speed on DO for any length of time. Plus, lots of stuff is much easier to multi-thread than to split across multiple servers. I just compiled ffmpeg in about 70 seconds on a Hetzner 32GB instance with "make -j10". It's much harder to do that on a bunch of separate single-core VMs.
Typically I'd say not a problem. Seen people max out those cores for a year, and not just because they were super lucky or something.
A side note though: it's not a problem until it causes a problem. You and I both know there's an industry trend right now that might increase the probability of running into a problem when combined with a particular neighbor who works in the mines. A statistical increase in something that theoretically impacts every provider, to varying degrees of course. So if you do happen to run into trouble, just hit me up and I'll help, and make it up to you.
That is one of the things you pay for too, you have my direct support.
Interesting, thanks. I wonder why optimized instances are needed in that case. I appreciate your invitation, but your extending it might have been unwise. I have some apps that would nicely fit a pattern of spinning up 20-100 instances, computing at 100% CPU on all of them for maybe 5 hours, then shutting down. (It's logistically a lot easier to use fewer instances with more cores each, though.)
Of course optimized instances are intended for that, so doing it with regular instances doesn't seem nice. In practice I usually run this type of task on one or more dedis and let it take several days as needed (it's non-urgent batch stuff). If you're sure you don't mind my using $5 droplets that way, I might give it a try. I've usually thought bigger instances would get proportionately more CPU.
The optimized should be a more sure thing, but you're welcome to try it and let me know if you run into trouble. If it causes a problem and someone has to respond, we'll chalk it up to the combo not being a good fit, and we'll both learn some limits and I'll help you change course (you're welcome to guess what help means, I'm avoiding saying it).
@willie with 14 droplets doing the work over different HNs you are pretty much able to guarantee some reasonable CPU % when you need it.
Those sit at <30% normally but fire up to 100% for a few hours at a time at most.
How do you know they're on different HN's?
Ok, but if they're online all the time, that's $70 a month; you can get a nice dedi for that. More interesting IMHO is the dynamic compute case: spin them up, compute something, spin back down. Even more extreme:
https://www.usenix.org/system/files/conference/nsdi17/nsdi17-fouladi.pdf
Too bad AWS Lambda is so expensive compared to this other stuff.
Chances of landing on the same node are slim to none, heavy logic built in to prevent it unless it's unavoidable.
https://www.digitalocean.com/community/questions/making-sure-droplets-are-on-different-physical-servers-or-server-racks
@willie I'm performing more computing during that peak 100% time than a single dedi would provide. Plus I'd need at least 4 dedis to get the same level of HA (2 masters, 2 data). I used to use Hetzner dedis for this exact workload and was using 5 of them (I also had 3 other droplets integrated on that 5th). DO was cheaper and returned queries just as fast (might even be faster due to more peak combined IOPS).
I could spin them up on demand, but re-balancing currently takes ~3 minutes and I don't have that kind of storage on the processing side. I may do that in the future, but for now $70/month is acceptable and very, very reliable.
I'm confused: is there a way to know whether two droplets are running on the same HN, other than by using multiple locations? Otherwise, even if there are too many droplets for a single node, how do you know how they are distributed? Maybe both masters are on the same node, so you have no HA at all.
Also, didn't you say you had 14 droplets? DO's host nodes might very well have >14 cores, so just going by CPU cycles doesn't establish that your droplets are on multiple nodes. They probably are, but it's not guaranteed.
You do have me intrigued by this many-small-droplet scheme, so thanks. I had always discounted it before, preferring to use dedis and big VM's. Using lots of small ones will require concocting or figuring out a distribution scheme, but maybe it's worth it.
If you create fewer than 500 droplets at once in the same region and hit the same HV, I'd be surprised. The system is that good.
(Number not chosen from actual math, just an example that communicates the point as intended)
SplitIce's setup works for him.
If you have a workload that needs 4+ E3 threads running full-bore, you'd get a Hetzner dedi.
What Hetzner's offering now makes possible is combining Failover IP, 2GB-4GB cloud instances, and a variety of bare-metal specs (+ free internal bandwidth!) to build a value-for-money hybrid cloud for fixed+variable workloads. The hybrid part is now possible with hourly billing.
I haven't been in the loop for a while, but has DO introduced bare metal recently?
No bare metal. Guaranteed CPU though.
Oh, intelligent allocation like Dediserve. I wasn't sure of this until now.
Cuntwaffles Per Unit?
@willie it's a bit funky to do but https://developers.digitalocean.com/documentation/v2/#list-neighbors-for-a-droplet
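For anyone curious how you'd actually use that: per the docs linked above, the neighbors endpoint (GET /v2/droplets/{droplet_id}/neighbors) returns the droplets that share a physical server with a given droplet. Here's a rough sketch of checking whether any of your own droplets (say, your two masters) are co-located; the helper name and the plain dict standing in for the API responses are my own, not part of DO's API.

```python
# Sketch: given neighbor lists (as the neighbors endpoint would return
# them, here stubbed out as a dict of droplet ID -> neighbor IDs), flag
# which of YOUR droplets share a hypervisor.

def colocated_pairs(my_droplet_ids, neighbors_of):
    """Return sorted pairs of owned droplet IDs that share a hypervisor.

    my_droplet_ids: iterable of droplet IDs you own
    neighbors_of:   dict mapping droplet ID -> list of neighbor droplet IDs
                    (per-droplet output of the neighbors endpoint)
    """
    mine = set(my_droplet_ids)
    pairs = set()
    for droplet in mine:
        for neighbor in neighbors_of.get(droplet, []):
            # Only care about neighbors that are also ours.
            if neighbor in mine and neighbor != droplet:
                pairs.add(tuple(sorted((droplet, neighbor))))
    return sorted(pairs)

# Example: droplets 1 and 2 report each other as neighbors; 3 is alone.
sample = {1: [2], 2: [1], 3: []}
print(colocated_pairs([1, 2, 3], sample))  # -> [(1, 2)]
```

An empty result means none of your droplets landed on the same HN, which is the HA property being discussed; a non-empty result tells you exactly which pair to destroy and recreate.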
I've kept an eye on them too. Although one got migrated a year or so ago, the rest have stayed put. The one that got migrated ended up on a unique HN.
@jarland I had a couple on the same HN at one point, but that was probably early in AMS2 days. But yes I agree the allocation is very good.
Thanks, SplitIce. Jarland, is there a way to tell how many droplets I'm allowed to spin up at once? Using 250 or so would be fantastic, but of course a lot fewer would be fine. Do you recommend a particular region for this?
Also, are there separate Spaces servers for each region? It would be nice if the droplets have local access to the object store. Thanks!
Message me your account email again and I'll check on it in the morning and let you know (about to hit the pillow, I hope)