Pricing up Shared Hosting
Howdo,
I'm struggling to find what I'm looking for on Google, but how do shared hosting providers take their real VPS resource allocation and work out how many packages they can split it into, without totally taking the p*ss? I'm using DirectAdmin, and obviously hoping to make a few quid whilst staying reasonably priced.
The whole subject confuses me, because you have companies selling 1/2 vCPU and I'm sure they're selling to more than 16 customers on a 16-core box, and selling more than, say, 64 GB of overall RAM to those same customers.
Logic tells me there's an average-use approximation with a safety net, but how is that calculated, please?
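For what it's worth, the "average use plus safety net" idea can be sketched as back-of-envelope arithmetic like this. Every figure here (average utilisation, safety margin) is a made-up assumption for illustration, not anyone's real ratio:

```python
# Hypothetical oversell estimate: how many 0.5 vCPU customers fit on a
# 16-core box if the average customer only uses a fraction of their
# allocation. All numbers are illustrative assumptions.

cores = 16
vcpu_per_customer = 0.5
avg_utilisation = 0.10   # assume a customer averages 10% of their allocation
safety_margin = 0.25     # keep 25% of the box in reserve for spikes

usable_cores = cores * (1 - safety_margin)
# On average each customer consumes avg_utilisation * vcpu_per_customer cores
customers = int(usable_cores / (vcpu_per_customer * avg_utilisation))
print(customers)  # 240 with these assumptions
```

That's how a host can "sell" far more vCPU than physically exists and mostly get away with it, until everyone spikes at once.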
I suspect most are little more than a scam: big headline numbers, strict "fair use" policies (engineered so the headline numbers are never met) and clever limits on I/O speed, processes etc.
Thanks
Comments
Hello sir, check our deals: https://panel.ihostart.com/index.php?rp=/store/vps-kvm-ssd . We have dedicated CPUs, and it's possible to check; we don't have CPU steal or IOPS limits.
Regards,
Calin
Dear Calin,
Please read the whole message next time before replying to the topic.
Kind regards,
COLBYLICIOUS
I never oversell a shower.
It depends what type of service you want to offer: lower priced, more customers per server and a potentially worse service, vs higher priced, fewer customers per server and a potentially better service. A lot of hosts oversell resources because they know a customer isn't going to use 100% of their allocation all the time, yes - but a lot of hosts don't oversell either.
The three resource limits are: CPU, memory and disk space. You should choose one or two of these to 'portion' your server by. Disk space is a good one to choose, and the one I recommend, as it's something that is usually used and not regained. So, if you have a 1,000 GB disk and want to sell 10 GB packages, that equates to 100 customers - there's your number.
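That disk-based portioning is just a division, but in practice you'd probably hold some space back for the OS, logs and backups. A minimal sketch (the 50 GB reserve is an assumed figure, not a rule):

```python
# Portioning a server by disk space: 1,000 GB disk, 10 GB packages,
# with a hypothetical reserve held back for the OS, logs and backups.

total_disk_gb = 1000
package_gb = 10
reserved_gb = 50  # assumed reserve, adjust to taste

sellable_gb = total_disk_gb - reserved_gb
max_customers = sellable_gb // package_gb
print(max_customers)  # 95 once the reserve is taken out
```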
Memory should be used as a secondary indicator. It's hard to gauge how much memory a customer will use, as it depends on the type of website and its traffic pattern, but an idle website uses next to no resources, and most memory goes to shared processes such as the web server and MySQL/MariaDB anyway, so the actual memory footprint of a low-to-medium traffic website is quite low. Memory should be closely monitored, though: once you hit 100% memory, processes are killed or can't spawn, and those customers will have a bad experience.
CPU is less of a concern these days, as we have so many cores and GHz available. It also works differently to memory, in that processes can wait for CPU time rather than being killed, so it can be shared easily. Processes take their CPU time and drop off, ready for the next process. Obviously CPU still needs to be monitored, as any server pegged at 100% CPU will be having a bad time.
In my opinion: sell based on available disk space, and closely monitor memory usage while the server fills up. I can't see you having any issues with CPU.
All of this assumes you have a proportionally specced server, though - a 1,000 GB disk space server with 2 vCPU isn't going to cut it, as you'll definitely hit a CPU bottleneck.
In general, this is how you'll see enterprise IT work too, though storage there is usually expandable when needed, as it will be an external array connected via FC or iSCSI etc. You'll always see hosts keeping a closer eye on memory than CPU, for the reasons explained above.
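Pulling the advice in this post together: sell on disk, then sanity-check that memory doesn't become the binding constraint as the server fills. A rough sketch, where the per-site RAM average and headroom figures are assumptions you'd tune from your own monitoring:

```python
# Rough capacity check: disk sets the customer count, memory is the
# secondary sanity check. Per-customer averages are assumptions.

disk_gb, ram_gb = 1000, 64
package_disk_gb = 10
avg_ram_per_site_mb = 150      # assumed average for a low-traffic site
ram_headroom = 0.30            # leave 30% of RAM for MySQL, web server, OS

disk_limit = disk_gb // package_disk_gb
ram_limit = int(ram_gb * 1024 * (1 - ram_headroom) / avg_ram_per_site_mb)

customers = min(disk_limit, ram_limit)
print(disk_limit, ram_limit, customers)  # 100 305 100
```

With these (assumed) numbers, disk is the binding limit at 100 customers and memory has plenty of slack, which is exactly the shape you want.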