What are your feelings on Host CPU throttling when you're using less than 25% of the core?

Comments

  • Maounique Host Rep, Veteran

    Buy a 128 MB VPS at $3 a year, run yum update, fail.

    Thanked by: netpioneer
  • jar Patron Provider, Top Host, Veteran

    @Amitz said:
    As. A. Man.

    Thanked by: Amitz
  • Buy a cheap VPS, generate 4096-bit DH params for your VPN, feel bad about yourself.
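
    For anyone who hasn't felt that particular pain, the command in question; on a weak shared core it can grind away for a very long time:

      time openssl dhparam -out dhparam.pem 4096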

  • jvnadr Member
    edited December 2017

    Janevski said: One can have it, if one doesn't use it, but if one doesn't use it, then why would one pay for it?

    Not at all. Let's say you are a student and you cannot afford to rent a whole apartment where your university is. You find 3 other guys and rent a nice flat. You have a room of your own, but you share the rest of the house (kitchen, bathroom, living room, etc.) with the others.
    Or, you cannot afford a hotel room and, because you have to sleep, you grab a bed in a hostel. There are rules: you can stay there and sleep, but you cannot make noise and disturb the others sleeping in the same room.
    It's not marketing; there are real needs for this. The rules just have to be clear.

    Thanked by: kkrajk
  • Maounique said: Buy a 128 MB VPS at $3 a year, run yum update, fail.

    FTFY! (Except if it's a LES from @AnthonySmith: $3/y and it works marvelously!)

  • I've got an OpenVZ VPS and I haven't connected to it for like 8 months.

  • Gamma17 Member
    edited December 2017

    IMO it just shows how, with all the price competition and overselling, rented VMs become less and less suitable for production use over time.

    I recently migrated a small, completely static HTML site with some PDFs to a cheap online.net dedi, because the shared hosting provider which hosted it started complaining that it was using too much CPU and "throttling" it by serving HTTP 500 to visitors. Sure, the site, by its nature, has traffic spikes on certain days of the year (maybe ~1000-2000 page loads/minute for a few days), but CPU load from static HTML?!

    I also had a case, maybe half a year ago, when a $12/y KVM VPS was suspended for "IO abuse" while I was reinstalling the OS on it (I was installing OpenBSD by booting bsd.rd using GRUB)...
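
    For the curious, roughly what that reinstall looked like, assuming GRUB 2 with its BSD loader (the partition is illustrative), from the GRUB console:

      grub> set root=(hd0,msdos1)
      grub> kopenbsd /bsd.rd
      grub> boot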

    Thanked by: netpioneer
  • MaouniqueMaounique Host Rep, Veteran

    jvnadr said: FTFY! (Except if it's a LES from @AnthonySmith: $3/y and it works marvelously!)

    Hmm, how do you run yum in 128 MB of RAM? Debian fan here; I am not using CentOS on any VPS I have, and the smallest ones are 512 MB anyway, but I heard 128 MB and yum don't mix.

  • brueggus Member, IPv6 Advocate

    Maounique said: Hmm, how do you run yum in 128 MB of RAM?

    Disable the fastestmirror plugin, and reduce the number of packages updated per run.

    It's somewhat ridiculous that the services you want to run on the VPS work flawlessly while you need to jump through hoops to run the package manager...
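
    Those two tweaks look something like this, assuming stock CentOS paths:

      # turn the plugin off permanently...
      sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/fastestmirror.conf
      # ...or just for one run, updating a few packages at a time instead of everything at once
      yum --disableplugin=fastestmirror update openssl bash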

  • @Janevski said:
    No, you get nothing, marketed as something.

    The Rules of the Con

    • Find somebody who wants something for nothing, then give him nothing for something
    • You can’t cheat an honest man
    • Never give a sucker an even break
    • Feed the greed
    • It's all in the detail
      ...

    https://en.wikiquote.org/wiki/Hustle

    Thanked by: Janevski
  • Maounique said: Hmm, how do you run yum in 128 MB of RAM? Debian fan here

    I am also a Debian user, not using CentOS much... But on CentOS 6 (the last time I used it on a LES) it worked. Now, with CentOS 7, I dunno...

  • @brueggus said:

    Maounique said: Hmm, how do you run yum in 128 MB of RAM?

    Disable the fastestmirror plugin, and reduce the number of packages updated per run.

    Then get sick of it, and reinstall Debian Wheezy. :D

  • @WSS said:

    @brueggus said:

    Maounique said: Hmm, how do you run yum in 128 MB of RAM?

    Disable the fastestmirror plugin, and reduce the number of packages updated per run.

    Then get sick of it, and reinstall Debian Wheezy. :D

    Slackware. Enough said.

  • @AuroraZ said:
    Slackware. Enough said.

    Because of you and @Angstrom, I burned a Slack CD to do some HD testing on the one desktop that's partially running now (it doesn't like that PC2-6400 RAM you sent, but it's happy with the 512MB of PC2-5300 I had). The installer and tools are exactly the same as they were in 1996. Hell, it wanted to install LILO!

  • AnthonySmith Member, Patron Provider

    Well, I think the key thing I would add to this is: if you need dedicated CPU resources, even a partial core, you don't need a VPS, you need a cheap dedi.

    I have limits (some monitoring only) in place on a per-hour, per-day, per-week, and per-month basis (a rough sketch of such a check is at the end of this post). If it turns out someone needs, let's say, 40% of a core on average over 30 days, I would suggest that they need a dedicated server, not a VPS.

    CPU is by far the most contested resource (in terms of numbers/potential, at least) in any VPS environment. If you are going to guarantee that a certain amount is eaten up rather than just using it for general use/burst, you need a dedi, or you need to pay for your use.

    I never intervene unless I feel it is impacting others; when it does, those that use dedicated CPU resources are the first to get warning shots fired.

    I don't know of a fairer way of doing things; common sense has to go both ways.
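
    For the curious, a rough sketch of what one of those per-window checks can look like host-side. This is illustrative only (cgroup v1 cpuacct, made-up cgroup path), not my actual tooling:

      CG=/sys/fs/cgroup/cpuacct/machine/guest123   # illustrative path
      t0=$(cat "$CG/cpuacct.usage")                # CPU time consumed so far, in nanoseconds
      sleep 3600
      t1=$(cat "$CG/cpuacct.usage")
      # delta ns / (3600 s * 1e9 ns/s) = average cores used over the hour
      echo "scale=2; ($t1 - $t0) / 3600000000000" | bc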

  • @WSS said:

    @AuroraZ said:
    Slackware. Enough said.

    Because of you and @Angstrom, I burned a Slack CD to do some HD testing on the one desktop that's partially running now (it doesn't like that PC2-6400 RAM you sent, but it's happy with the 512MB of PC2-5300 I had). The installer and tools are exactly the same as they were in 1996. Hell, it wanted to install LILO!

    If it works, don't fuck with it. More distros could learn this lesson. E.g.... systemd.

  • WSS Member
    edited December 2017

    @AnthonySmith said:
    I never intervene unless I feel it is impacting others; when it does, those that use dedicated CPU resources are the first to get warning shots fired.

    I don't know of a fairer way of doing things; common sense has to go both ways.

    I feel this is perfectly fair. I also find that if resources are being oversold to the point that actually using them causes detriment, there's a problem with the service provided.

    I mean, c'mon, if my load of 0.25 (peak) is causing that much issue on a multi-GB, multi-vCore VPS, then how is that really my fault? That's like one poorly formed SQL query in a somewhat larger table. ;)

    @AuroraZ said:
    If it works don't fuck with it. More distros could learn this lesson. Eg....systemd

    Hey hey, ho ho, LILO is wack and has to go.

    Thanked by: Cam
  • AnthonySmith Member, Patron Provider
    edited December 2017

    WSS said: I mean, c'mon, if my load of 0.25 (peak) is causing that much issue on a multi-GB, multi-vCore VPS, then how is that really my fault?

    It is not, if you are only 'peaking'. If you are actually using 25% of a core 24x7 and 20 other people are doing the same thing on the same node, then that is when it becomes a problem (which is rare).

    On LES, of course, anyone requiring dedicated resources is just gone; it is what it is :)

    Thanked by: netpioneer
  • FAT32 Administrator, Deal Compiler Extraordinaire

    I think hosts should at least give a clear cut-off on the maximum load allowed per core; quantifying things is always better. We all know CPUs are shared, but without any clear info, that just means the host can have a double standard.

    Thanked by: netpioneer
  • AnthonySmith Member, Patron Provider

    FAT32 said: We all know CPUs are shared

    :) I wish that were true. Sadly, the number of "Can I use a core 24x7 on your 3.50 p/year service for mining" emails I get suggests otherwise, and they are the ones that actually ask first, haha.

    Thanked by: netpioneer
  • @AnthonySmith said:

    WSS said: I mean, c'mon, if my load of 0.25 (peak) is causing that much issue on a multi-GB, multi-vCore VPS, then how is that really my fault?

    It is not, if you are only 'peaking'. If you are actually using 25% of a core 24x7 and 20 other people are doing the same thing on the same node, then that is when it becomes a problem (which is rare).

    It's closer to a sustained ~18% when that's available; I haven't really got super-good granular control, since I'm actually stuck polling the system load (the crude loop at the end of this post). I don't have a problem rescheduling things, either (it's not something I've got to run 24/7), but to just crank down the CPU to the point where things fail, instead of asking "What're you up to?", or even opening an "abuse" ticket pending a response?

    Again, I was under the impression that I was paying for a multi-vCore, multi-GB VPS with the idea that I'd be able to actually use at least a portion of it.

    On LES, of course, anyone requiring dedicated resources is just gone; it is what it is :)

    I wouldn't really expect otherwise... even if I have been known to occasionally cross-compile for/to my LES boxes, because 128MB doesn't go very far anymore. ;)
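
    To be clear, the "polling" I mean is nothing fancier than sampling /proc/loadavg on a timer; a minimal sketch (the log path is arbitrary):

      while true; do
        awk -v t="$(date +%s)" '{ print t, $1, $2, $3 }' /proc/loadavg >> ~/load.log
        sleep 60
      done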

  • FAT32 Administrator, Deal Compiler Extraordinaire

    Oops, put more tick boxes on the order page next time then =)

    [-] ... I understand the CPU is SHARED
    [-] ... I will NOT mine crypto on this machine

  • jar Patron Provider, Top Host, Veteran
    edited December 2017

    @FAT32 said:
    I think hosts should at least give a clear cut-off on the maximum load allowed per core; quantifying things is always better. We all know CPUs are shared, but without any clear info, that just means the host can have a double standard.

    Which doesn't help you when you're with a decent host because either:

    1. They don't enforce it until someone causes a problem, making it no different than normal unstated circumstances.

    2. They limit you more than they had to, purely to satisfy your need to know.

    Coercing providers into stating these limits, when the provider is someone you should be spending money with anyway (meaning solid admins, not extreme overselling), can only reduce your quality of service. Just let them deal with the one person, who isn't even you I'm sure, abusing one node every 3-4 months, and don't ask them to reduce everyone's service to satisfy the mental state of obsessive people.

    Anyone who obsessively needs to know how much CPU they can use, down to two decimal places on load, should be purchasing a dedicated server, because they're not mentally prepared to handle shared resources. I assume that if they enter an "all you can eat" buffet, this person freezes up in the line to the food, because they realize that not everyone in the restaurant can eat everything and there are no signs stating limits, meaning it can't mathematically work in a theoretical reality that isn't taking place, and therefore they cannot proceed to put food on their plate, because they don't know how to function without everything making 100% mathematical sense at all times under every theoretical scenario. Pretty sure it's called autism tbh, and I'm not saying that jokingly; a lot of people suffer from it and there are things they just can't experience the way that the rest of us do (again, not poking fun or insulting, I legitimately think this is common in our community/industry).

    Thanked by: netpioneer
  • WSS Member
    edited December 2017

    @jarland said:
    Anyone who obsessively needs to know how much CPU they can use, down to two decimal places on load, should be purchasing a dedicated server, because they're not mentally prepared to handle shared resources.

    This is a completely unfair statement, and you knew that when you were writing it, by putting the burden on me to refute this magic number when my question remains "I'm paying for X, but I am using Y, at what point is expecting a portion of that too much?"

    E: You've made a second edit since this edit. I don't care for the recently-appended AYCE/Autism allegory, and won't be appending it to my response.

  • FAT32FAT32 Administrator, Deal Compiler Extraordinaire

    If the host usually doesn't care until something happens (I am pretty sure there are scripts that check every x hours), why isn't there a way to limit it automatically?

    For example, how do cloud providers like AWS, GCE, Azure and Aliyun handle CPU steal so well?
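
    For what it's worth, the Linux kernel does expose a knob for exactly this kind of hard cap: CFS bandwidth control. A minimal sketch, with an illustrative cgroup path:

      # cap the guest at 25% of one core: 25 ms of CPU time per 100 ms period
      echo 100000 > /sys/fs/cgroup/cpu/machine/guest123/cpu.cfs_period_us
      echo 25000 > /sys/fs/cgroup/cpu/machine/guest123/cpu.cfs_quota_us

    So the mechanism exists; whether hosts want to advertise a hard number is exactly what's being argued here.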

  • jar Patron Provider, Top Host, Veteran
    edited December 2017

    WSS said: This is a completely unfair statement, and you knew that when you were writing it, by putting the burden on me to refute this magic number when my question remains "I'm paying for X, but I am using Y, at what point is expecting a portion of that too much?"

    No it isn't. If you NEED to know how much you can use at all times, then you NEED a dedicated server. Otherwise you are asking for artificial limits or higher prices. Any number you state is either mathematically sustainable under the worst case scenario (meaning you can't oversell to any degree above the assumption that everyone hits that number at the same time, because you're stating it as policy, and therefore you have to raise your prices and/or limit everyone further than you ever would have based on actual system administration responding to actual real world events), or the number stated is a lie.

    If I've sold 4 cores to 8 people then no one can sustain a load over 0.5, but when 7 of them use 0.1 for 6 years straight I think it's okay if you hit a load of 5 every now and then. You're asking for that to not be okay if you're asking for an absolute defined value of what you're allowed to hit. That's not okay, that's idling resources for the purpose of satisfying someone's obsessive mental state (or it's lying because the number STILL relies on a gamble). It puts a theory of something that almost never occurs ahead of reality, and that's insanity. You can't ignore reality because it doesn't match worst case scenario theory.

    Thanked by: netpioneer
  • WSS Member
    edited December 2017

    @jarland said:

    WSS said: This is a completely unfair statement, and you knew that when you were writing it, by putting the burden on me to refute this magic number when my question remains "I'm paying for X, but I am using Y, at what point is expecting a portion of that too much?"

    No it isn't. If you NEED to know how much you can use at all times, then you NEED a dedicated server. Otherwise you are asking for artificial limits or higher prices. Any number you state is either mathematically sustainable under the worst case scenario (meaning you can't oversell to any degree above the assumption that everyone hits that number at the same time, because you're stating it as policy, and therefore you have to raise your prices and/or limit everyone further than you ever would have based on actual system administration responding to actual real world events), or the number stated is a lie.

    At no point, beyond simplifying the thread, did I state that I absolutely need to have X% of my resources 24/7, etc. It was an example of "Hey, I do expect to use what I believe I am paying for", and although I have no problem moving some stuff off-peak, if I'm expected to use none of it, then what am I paying for?

    Overselling? Fine, but if you want to sell me 6 gigs and 6 cores, then throttle me down to the point that I can't use them, there's a problem with your math more than mine. I follow the "don't be a dick" methodology, and I have no use for fake coinage.

    For what it's worth, @Veesp actually have these numerics in their SLA, and I don't remember anyone talking shit about their selling/overselling, and they seem to be doing quite well for themselves.

    If I've sold 4 cores to 8 people then no one can sustain a load over 0.5, but when 7 of them use 0.1 for 6 years straight I think it's okay if you hit a load of 5 every now and then. You're asking for that to not be okay if you're asking for an absolute defined value of what you're allowed to hit. That's not okay, that's idling resources for the purpose of satisfying someone's obsessive mental state. It puts a theory of something that almost never occurs ahead of reality, and that's insanity. You can't ignore reality because it doesn't match worst case scenario theory.

    I'm asking for the community to weigh in on "just how much is too much, and how little is too little" from the expectations of both sides. You're obviously taking this far too personally, reading the illustrative numbers in my questions as a steady/stable/rock-solid figure rather than just that: an illustrative load. Will I use some timeslices doing nothing? Well, yeah. The service is online, isn't it?

    The numerics I've given are what I have personally observed, along with the response from this provider, and I'm asking whether or not I'm expecting too much for them to do anything other than throttle my shit out of being functional when it peaks beyond this use. You're treating asking for this numeric as though it were the holy grail and what I'm seeking, rather than "am I off base for expecting this much service, or not?"

    If I actually had the ability to use, or a need for, X percent of a core consistently, I'd probably just pay for an hourly instance; it'd be cheaper than paying monthly for a service I can't actually use.

    I'm also assuming your metrics above are based on your actual core counts, and not throttled vCores (because we all know how trivial it is to lock down CPU utilization from there)... my numerics are on MY vCores, not global system utilization.

  • FAT32 Administrator, Deal Compiler Extraordinaire

    I agree, or else the number of cores is just for marketing and for multi-threaded programs only. I would rather request a dedicated core instead of 6 shared cores.

    Maybe this is what future VPS providers should do:

    • Pay per CPU cycle
    • Pay per Disk IO byte
    • Pay per Network packet

    No more arguments :)

  • jar Patron Provider, Top Host, Veteran
    edited December 2017

    WSS said: I follow the "don't be a dick" methodology, and I have no use for fake coinage.

    Me too, that's why I'm suggesting that the resolution to such a topic is to not host with shitty providers, not to attempt to coerce other providers into stating and enforcing limits to compensate for bad experiences with bad providers. A good provider should be able to let you do whatever you want 99.999% of the time because they've balanced their servers reasonably and significantly reduced the risk that enough people on one node are going to simultaneously power on their laser.

    WSS said: For what it's worth, @Veesp actually have these numerics in their SLA

    You see that as admirable; I see it as artificially limiting where it is likely not needed in an actual real-world situation, and also possibly a lie. If you're going to state it in policy, then everyone should be able to sustain it. If everyone on a node sustains 74% of a CPU core indefinitely, they have placed in their policy that they will not take action against them. That means one of these things is true:

    1. Their prices are high enough and their distribution effective enough that they actually can allow everyone on a single node to sustain 74% of a CPU core indefinitely.

    2. They have to violate their TOS in an extreme scenario. That extreme scenario won't occur more than maybe once a year, I'm sure, but it technically COULD, and probably will for someone at some time. That makes the number dishonest.

    This is why you should just choose providers that have real system administrators at the helm. Like @Francisco.

    FAT32 said: I agree, or else the number of cores is just for marketing and for multi-threaded programs only

    Shared resource available for burst =/= "just marketing." You need flexibility in your servers so there's flex room for everyone, not guaranteed room to sustain 100% resource usage at all times. It's the very, very safe bet that not everyone needs to bang against the flexible wall at the same time. It reduces costs. It was the whole point of the VPS industry to begin with. The entire industry and its purpose is now a failure or "just marketing"?

    Thanked by: Aidan
  • FAT32 Administrator, Deal Compiler Extraordinaire
    edited December 2017

    FAT32 said: I agree, or else the number of cores is just for marketing and for multi-threaded programs only

    Shared resource available for burst =/= "just marketing."

    I recall that argument. That's true, because I haven't had any projects that run the CPU consistently yet; it is just a lot of peaks (where the 15-minute average is still below the limits, because the code usually runs for only a fraction of a second).
