New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Look who's talking
#dicks
You're missing a hashmark, dick.
I am sorry, fixed.
3.4x? There are providers that fit 150-200 containers onto a single server!
I should have said "25% of a vCore", but it's not like that matters.
SPINLOCK DAT SHIT, DJ WANKSTANK!
This made my sides hurt from laughing so much.
It depends on how big the server is; nowadays 200 containers on a big server with many cores should not be too bad.
OVZ has issues with too many threads, though, which can cause lockups. It is different from Xen or KVM, where each guest uses one thread no matter how many processes run inside, and switching is more sane, even if guests are broken down into some components in the rings. Xen also does CPU sharing far better and more fairly. So, yeah, we have Xen servers running 256 and 512 MB VMs with hundreds of tenants, and they crash less often than an oversold OpenVZ node with only 100.
I also smiled broadly on that one :P
Good providers don't do this. Period.
This is what process priorities are for. Hard CPU usage caps should be a last resort, only in cases where your usage threatens the stability of the entire node and normal process priorities (i.e., nice) aren't good enough.
On my service, even 25% of 1 core won't get you deprioritized. You'd have to use a lot more than that, and it'd have to be sustained, before it'd even register as an issue for me, the provider. And even when it does register as an issue, the first thing that would happen is you'd get the process reniced, which you'd hardly notice at all.
This is doable on OpenVZ and KVM with standard Linux utilities the host has access to, and it can be scripted to happen automatically. Xen is a bit weird, though; I don't use it myself.
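The scripted renicing described above can be sketched with standard utilities. This is only a sketch under assumptions: the 25% threshold, the nice value of 10, and the canned `ps` output are all illustrative, not anyone's actual policy, and it assumes the host can see guest processes (as on OpenVZ).

```shell
#!/bin/sh
# pick_heavy: read "<pid> <%cpu>" lines on stdin, print the PIDs whose
# %CPU is at or above the given threshold.
pick_heavy() {
  awk -v t="$1" '$2 >= t { print $1 }'
}

# Canned sample of `ps -eo pid=,pcpu=` output; on a real node you would run:
#   ps -eo pid=,pcpu= | pick_heavy 25 | xargs -r renice -n 10 -p
printf '101 3.0\n202 87.5\n303 25.0\n' | pick_heavy 25
# prints 202 and 303, the two PIDs at or above 25% CPU
```

In practice you would run something like this from cron and only renice processes that stay above the threshold across several samples, matching the "sustained" condition mentioned above.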
In that case, aren't the disks (even if they are SSDs) suffering from an I/O perspective, and isn't the network very slow from time to time?
OK, I am not a provider and I thought 200 containers was way too many! Mea culpa!
serverhand is gonna be my goto vote for all of your future polls
You'd be amazed what overselling can do.
I recall one of the OpenVZ guys did an experiment a few years ago and had something like 15,000 containers running on a single machine, and apparently it was fine.
:O
We use SAN storage on most products. The I/O has some latency, but it is double-cached and easy to move/snapshot.
SSDs wore down too fast; after 2-3 years there was already a noticeable difference.
Yes, we do have I/O issues in some setups. We are continuously tweaking the settings, and it is manageable because people do not do 20 MB/s sustained unless hacked. We warn them if they keep up even 10 MB/s for a day or more. This is where mutual respect comes into play: the customers do not treat us as enemies, and we do not spy on them, monitoring for each small issue to throttle or terminate.
As for the network, that is not a problem; most of the time it is at 10% of wirespeed. We did have people doing 800 Mbps, but only torrenters or booters and "stressers", which were kicked because of that. That also happened years ago.
We allow up to 40k pps also, because we can, as long as it is legit (we monitor for attacks automatically).
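A pps ceiling like that comes down to simple counter arithmetic. In this sketch the two counter samples and the 5-second window are made up for illustration; only the sysfs path shown in the comment is a real Linux source for such counters.

```shell
#!/bin/sh
# rate: packets per second, given two counter samples taken $3 seconds apart.
rate() { echo $(( ($2 - $1) / $3 )); }

# On a real Linux host a packet counter can be read from sysfs, e.g.:
#   cat /sys/class/net/eth0/statistics/rx_packets
# Here, two made-up samples taken 5 seconds apart:
p1=1000000
p2=1250000
pps=$(rate "$p1" "$p2" 5)
echo "${pps} pps"
if [ "$pps" -gt 40000 ]; then
  echo "over the 40k pps limit"
fi
```

A monitoring loop would sample the counter periodically and only flag sustained breaches, since a single busy second is not an attack.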
Honestly, in one scenario this is intelligent system administrators like you versus a minority of customers. You have some customers who kick and scream if they can't know the precise values and timestamps you'll hold yourself to before limiting them. I've seen this, rarely. They work against their own self-interest, claiming that providers like you are too lazy to calculate the necessary variables, or that you're dishonest and trying to scam them by not being up front about the numbers you'll limit them at.
The reality is that you cannot predict user behavior with enough consistency to publish attractive numbers for where you'll limit people. So providers that fall prey to this idiotic demand are forced to use conservative numbers, at which point they decide to automate enforcement, because rather than addressing problems as they occur, they've now committed to policing minor variables.
Several times in this industry I've observed customers making demands that can only decrease their quality of service, simply because they don't understand how to do the job that isn't theirs to do anyway.
Of course, that's probably a minority. The rest are probably just taking overselling far beyond what's reasonable.
15k cannot work in production unless people use them for a VPN or something with 1-2 threads and no waiting on disks, and even then...
OVZ crashes when there are too many threads, because the CPU cannot switch between them fast enough, which brings everything down in a cascade. We had a node with 24 real cores that never crashed, but everything below that failed at least once. And I suspect even such nodes would be brought to their knees if they had a lot of RAM and threads.
That seems kinda low; I guess they are grossly overselling. It's as if, could you just give them your money and leave, they would be sooo happy. But this doesn't surprise me: a 25% sustained-load cap would mean 4 users per node, versus their situation of probably hundreds of people. They should at least allow temporary bursts, and that settles it for me: it's the provider's fault. The node price versus the price of the service per customer is what actually decides how much of dicks they are, but we don't know those numbers.
You know my feelings, man: if you can't use what you are paying for, ass rape them. That little usage should not be a problem.
That's how they made money: by overselling and tempting with deals. Many of us bought the cheap ones and idle them, since we don't feel safe running mission-critical stuff there.
Let's face it, a lot of offers are just silly. I am not really angry with providers trying to stay competitive, but they should do some maths before running those crazy deals and decide how much of a loss they can take.
Let's say the node is some cheap-ass 3770 at $30/mo. That means $360/yr for the server alone. Doing $10/yr deals would mean 36 VMs just to break even on the hardware, with no IPs and no paychecks for anyone involved. At 36 VMs that would mean an average of 0.22 threads per VM, which imo would be somewhat reasonable.
Once you add IPs on top of that, it just gets unsustainable. Say $0.80/mo per IP, and your IPs will cost almost as much as the server (~$346/yr). The worst part is that it's hardly possible to offset this by adding more VMs, as more VMs mean more IPs. So basically, unless your IPs are dirt cheap, any kind of $10/yr offer has to be a loss.
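The back-of-envelope math above works out as follows. All figures are the ones quoted in the posts ($30/mo node, $10/yr plans, $0.80/mo per IPv4, the 3770's 8 threads); the ~$1 difference from the "$346/yr" above is just integer rounding here.

```shell
#!/bin/sh
# Break-even arithmetic for a $30/mo node selling $10/yr VPS plans.
node_yearly=$((30 * 12))                      # $360/yr for the server
breakeven_vms=$((node_yearly / 10))           # VMs needed to cover hardware: 36
ip_yearly_cents=$((80 * 12 * breakeven_vms))  # IPv4 cost in cents/yr at $0.80/mo each
echo "${breakeven_vms} VMs to break even on hardware"
echo "IPs: \$$((ip_yearly_cents / 100))/yr"
# Threads per VM on an 8-thread i7-3770:
awk -v t=8 -v n="$breakeven_vms" 'BEGIN { printf "%.2f threads per VM\n", t/n }'
```

So the IP bill alone nearly doubles the cost base, which is the whole argument: each extra VM sold brings its own IP cost with it.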
I really don't see how anyone would expect to turn a profit on these kinds of offers.
They can't; thus NAT was born, to offset the costs some more, but even then it's very difficult.
Most are loss leaders, thus turning no profit at all, in the hope that customers will buy something more expensive.
Basically VPSes are a scam.
One can have it, if one doesn't use it, but if one doesn't use it, then why would one pay for it?
Come to think of it, this goes for many things... It's a dog eat dog world, constantly biting on the neck.
You get what you pay for.... If you want dedicated resources buy a slice....
No, you get nothing, marketed as something.
If you buy a time share and feel scammed because you can't spend the whole year living in it, did they scam you or are you stupid?
People here are mostly looking for deals too good to be true, and that's what they get. Most of them accept the limitations as a man.
If i buy a time share then i am gullible, which means a little bit of both.
Unless, you know, a time share weirdly fits your needs perfectly. Probably more common than a good VPS doing so, though; most people are hosting websites, and if your website is running at 100% CPU, you done failed.
As. A. Man.
yum update