CPU fair usage on public clouds
CPU fair usage on public clouds

seed4useed4u Member
edited October 2020 in General

Hello,
There is a question bothering me, and UpCloud support doesn't seem to be able to provide a definite answer. On public cloud providers like DigitalOcean, Linode, or UpCloud, how much CPU can you use 24/7? Is it 50%? Is it 80%? What is common practice for fair CPU usage on such providers? What number should I keep my server below to avoid any service disruptions?

Right now I am constantly pushing 40-60% with my Laravel app on a 4-core instance.

Thanks

Comments

  • edited October 2020

    I believe the ones you mentioned have a shared CPU core policy. There is no specific rule, just don't be a dick, meaning that if you are constantly pushing CPU, then you need a dedicated one.

    Thanked by 1ViridWeb
  • It depends, but if you have the budget and your application doesn't generate that much real egress traffic, you can get an EC2 m5a.large from AWS for under $60 on a savings plan including storage and bandwidth. You get 2 EPYC hardware threads and 8GB of RAM all to yourself.

  • raindog308raindog308 Administrator, Veteran

    On the big clouds (Amazon, DO, etc) , I always assume I can use as much CPU as I want. Managing resources is their problem. They can cap me if they wish.

  • raindog308raindog308 Administrator, Veteran

    @sundaymouse said:
    It depends, but if you have the budget and your application doesn't generate that much real egress traffic, you can get an EC2 m5a.large from AWS for under $60 on a savings plan including storage and bandwidth. You get 2 EPYC hardware threads and 8GB of RAM all to yourself.

    Including bandwidth?

  • sundaymousesundaymouse Member
    edited October 2020

    @raindog308 said:

    @sundaymouse said:
    It depends, but if you have the budget and your application doesn't generate that much real egress traffic, you can get an EC2 m5a.large from AWS for under $60 on a savings plan including storage and bandwidth. You get 2 EPYC hardware threads and 8GB of RAM all to yourself.

    Including bandwidth?

    I suspect this is running a CPU-hungry interpreter which doesn't actually serve that much traffic a month.

  • @raindog308 said:
    On the big clouds (Amazon, DO, etc) , I always assume I can use as much CPU as I want. Managing resources is their problem. They can cap me if they wish.

    I was under the same impression; however, besides Amazon, all the others will suspend you, open tickets, etc., and demand that you use fewer resources, yet they won't say how much is too much. Also, I don't really understand why I should manage this or idle the server if I am paying money.

  • @sundaymouse said:
    @raindog308 said:

    @sundaymouse said:
    It depends, but if you have the budget and your application doesn't generate that much real egress traffic, you can get an EC2 m5a.large from AWS for under $60 on a savings plan including storage and bandwidth. You get 2 EPYC hardware threads and 8GB of RAM all to yourself.

    Including bandwidth?

    I suspect this is running a CPU-hungry interpreter which doesn't actually serve that much traffic a month.

    It is an eCommerce app which combines information from various sources for 5 million items, so traffic is about 3 TB per month; that alone would cost me $300 on AWS.

  • jarjar Patron Provider, Top Host, Veteran
    edited October 2020

    The truth is if you need a straight answer on that you need to go ahead and use a dedicated server. That's the most honest and direct answer you're going to get.

    There is no way to make a sane policy with that much detail without raising even the highest cloud prices for the sole purpose of defining it for people who can't wrap their mind around it not being so strictly defined. You're the outlier and if you can't get over not having that detail, you're the one who will be excluded.

    Consider that an answer from DigitalOcean, because that was my answer to customers during my time there. No one cares if you use 100% CPU until enough of your neighbors do the same. And if you set everyone at the level of dedicated CPU allocation that prevents any potential overlap, just so that event never happens (even though it's a one-in-tens-of-thousands event to even hit), then everyone suffers poor performance or increased prices, and that cloud gets ditched by everyone for the cloud that isn't doing that. Which, of course, ensures that the vagueness of this policy has to continue and nothing is gained by anyone.

    Thanked by 2MechanicWeb MrH
  • @seed4u said:

    @raindog308 said:
    On the big clouds (Amazon, DO, etc) , I always assume I can use as much CPU as I want. Managing resources is their problem. They can cap me if they wish.

    I was under the same impression; however, besides Amazon, all the others will suspend you, open tickets, etc., and demand that you use fewer resources, yet they won't say how much is too much. Also, I don't really understand why I should manage this or idle the server if I am paying money.

    Your impression is incorrect in this case, unfortunately. m5 is a shape where everything allocated to you is dedicated. And no, they won't open a ticket and demand that you use less even on shared-CPU shapes, as they already have a well-defined CPU throttling and credit system for shared shapes like t2 and t3.
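That credit system can be sketched roughly like this. It's a simplified model, not AWS's exact implementation (the real one also caps accrued credits and has launch-credit and unlimited-mode subtleties); the t3.micro figures (2 vCPUs, 12 credits earned per hour, 10% per-vCPU baseline) are from AWS's published tables, and everything else is illustrative:

```python
# Rough sketch of T-series CPU credit accounting (simplified).
# One credit = one vCPU running at 100% for one minute.

def simulate_credits(vcpus, credits_per_hour, utilization, hours, start=0.0):
    """Return the credit balance after each hour (floored at zero).

    utilization: average fraction of ALL vCPUs used each hour (0..1).
    """
    balance = start
    history = []
    for _ in range(hours):
        earned = credits_per_hour
        spent = utilization * vcpus * 60  # vCPU-minutes burned this hour
        balance = max(0.0, balance + earned - spent)
        history.append(balance)
    return history

# t3.micro: 2 vCPUs earning 12 credits/hour implies a 10% per-vCPU baseline.
# Running flat out spends 120 credits/hour while earning only 12, so a
# full balance of 144 drains within a couple of hours:
print(simulate_credits(2, 12, 1.0, 3, start=144))  # [36.0, 0.0, 0.0]

# At exactly the 10% baseline, earn and spend cancel out:
print(simulate_credits(2, 12, 0.10, 2))  # [0.0, 0.0]
```

Once the balance is empty (and you're not in unlimited mode), the instance is throttled down to the baseline rather than suspended, which is why these shapes never generate abuse tickets.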

    Thanked by 1JasonM
  • But I take your point on traffic: AWS isn't for your budget, given how much they charge for that.

  • This is the very reason why I don't go for services which can't offer me 100% CPU usage for anything that remotely needs constant CPU.

    Most providers don't allow you to use even 50% constant CPU usage 24/7.

  • I have never been told to settle down by Vultr. But my account is 6+ years old, and I never do 100% for 24 hours.

  • Just use a dedi mate. And front it with VMs/CDN later if you want to scale out the HTTP/cache delivery horizontally

  • @vimalware said:
    Just use a dedi mate. And front it with VMs/CDN later if you want to scale out the HTTP/cache delivery horizontally

    You can't cache dynamic content... Or can you?

  • @vimalware said:
    Just use a dedi mate. And front it with VMs/CDN later if you want to scale out the HTTP/cache delivery horizontally

    It's an app doing its thing; no HTTP server is even installed, so a CDN is definitely not required. Guess I will just upgrade to higher plans so I'm in the 25% CPU usage zone. I just don't want to give up the flexibility of backups, creating snapshots, and launching additional dev machines to talk to the DB server when needed.

  • DigitalOcean monitors droplets that use 100% CPU for a long time and may limit CPU for those droplets. See here: https://www.digitalocean.com/community/questions/restrictions-on-the-cpu

    If you really want to utilize 100% CPU all the time, then use their dedicated CPU plans (a full hyperthread all the time). See their explanation here: https://www.digitalocean.com/docs/droplets/resources/choose-plan/

    I think this kind of policy is the same for the other big cloud players. AWS has shared plans (the T series), and GCP has shared-core plans. For the other plans, you can safely assume the core (hyperthread) really is dedicated to you.

  • Reading all the comments, I am curious to ask: what is the difference between a cloud instance and a VPS or dedicated server? It seems a dedicated server is much better than cloud, but every new client of mine these days wants to use cloud for their big-budget projects. Why?

  • @bdspice said:
    Reading all the comments, I am curious to ask: what is the difference between a cloud instance and a VPS or dedicated server? It seems a dedicated server is much better than cloud, but every new client of mine these days wants to use cloud for their big-budget projects. Why?

    The cloud offers more flexibility in terms of pay-per-hour billing and scaling. Also, the hardware is managed by the provider. If you use AWS or another big cloud provider, you also get their ecosystem of services like S3, SES, etc.

  • @bdspice said:
    Reading all the comments, I am curious to ask: what is the difference between a cloud instance and a VPS or dedicated server? It seems a dedicated server is much better than cloud, but every new client of mine these days wants to use cloud for their big-budget projects. Why?

    Dedicated - comparatively inflexible, almost always monthly billing, need to care about hardware, best value for money in opex.

    Cloud - a high number of add-ons available, billing as granular as per second, hardware is irrelevant (live migration is the norm), worst option for value for money.

    It's just flexibility vs performance, at the end of the day.

    Thanked by 1bdspice
  • My daily usage averages less than 10%: 100% CPU for a maximum of 2 hours a day.
    I personally think that's quite fair usage, but I don't know if the provider thinks so. :p
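The arithmetic behind that estimate, as a tiny sketch (nothing provider-specific, just the duty-cycle average):

```python
# Duty-cycle average: 2 hours at 100% CPU spread over a 24-hour day.
def avg_utilization(hours_at_full, hours_per_day=24):
    """Average CPU fraction implied by running flat out for part of the day."""
    return hours_at_full / hours_per_day

print(f"{avg_utilization(2):.1%}")  # 8.3%
```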

  • jsgjsg Member, Resident Benchmarker

    When I need to ask that question I put that project on a good and real VDS or a dedi.

  • Another method I use to estimate this is to compare the provider's prices for a VDS and a VPS.

    By calculating VPS_price/VDS_price for the same number of cores, you should be able to get a broad idea of how much CPU the provider expects you to use.

    The real figure is probably much less than that, since a VPS also bundles more RAM and disk per dollar, but I think if you use that as a threshold, your provider won't pay too much attention to you.
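A back-of-the-envelope version of that heuristic, reading the ratio as shared price over dedicated price (the prices below are made-up illustration values, not any real provider's):

```python
# Heuristic: the shared-plan price as a fraction of the dedicated-plan
# price at the same core count hints at the sustained CPU share the
# provider priced in for shared customers.

def implied_cpu_share(vps_price, vds_price):
    """Fraction of the cores the shared plan's price implies you 'own'."""
    return vps_price / vds_price

# e.g. a hypothetical $10/mo shared 4-core vs $40/mo dedicated 4-core
share = implied_cpu_share(10, 40)
print(f"{share:.0%}")  # 25%
```

As the comment above notes, the true intended share is likely lower still, since the shared plan's price also covers RAM and disk, but staying under this ratio is a reasonable conservative threshold.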

  • @AC_Fan said:

    @bdspice said:
    Reading all the comments, I am curious to ask: what is the difference between a cloud instance and a VPS or dedicated server? It seems a dedicated server is much better than cloud, but every new client of mine these days wants to use cloud for their big-budget projects. Why?

    Dedicated - comparatively inflexible, almost always monthly billing, need to care about hardware, best value for money in opex.

    Cloud - a high number of add-ons available, billing as granular as per second, hardware is irrelevant (live migration is the norm), worst option for value for money.

    It's just flexibility vs performance, at the end of the day.

    So Dedicated for performance and Cloud for Flexibilty in one word, right?

  • @bdspice said:

    @AC_Fan said:

    @bdspice said:
    Reading all the comments, I am curious to ask: what is the difference between a cloud instance and a VPS or dedicated server? It seems a dedicated server is much better than cloud, but every new client of mine these days wants to use cloud for their big-budget projects. Why?

    Dedicated - comparatively inflexible, almost always monthly billing, need to care about hardware, best value for money in opex.

    Cloud - a high number of add-ons available, billing as granular as per second, hardware is irrelevant (live migration is the norm), worst option for value for money.

    It's just flexibility vs performance, at the end of the day.

    So Dedicated for performance and Cloud for Flexibilty in one word, right?

    Yes, if your load is extremely stable, then a dedi or VDS is your best bet. But if it's variable enough, the cloud is your best bet.

    Or, if you're big enough, the most reasonable option is hybrid: a dedi for the base load, cloud for the peaks.

    Thanked by 1bdspice