Comments
Does anybody pin threads on their dedicated vCPU offers?
PHP-Friends, netcup and Hetzner don't. Maybe ServerFactory does, I'm not sure. No idea about the big bois like AWS, DO compute-optimised, etc.
You can't oversell at all with pinning; any overselling would be instantly noticeable to the neighbouring VMs.
Context switching is something to worry about, and I'm curious about the real performance differences, because pinning also stops the scheduler from putting two tasks onto the same core/thread, which can decrease performance. (Turbo boost doesn't kick in on that single busy core. Not a big problem on EPYC, because it is more heat-limited than power-limited, but Intel Skylake-based Xeon chips are heavily power-limited: max turbo won't kick in when multiple cores are loaded even if you have thermal headroom.) Have you seen any benchmarks?
Any provider want to chip in with 5 cents?
You also can't oversell with true "dedicated cores", and any provider that advertises "dedicated cores" without actually dedicating them to the VM would probably be considered misleading. "Dedicated cores" is different from "unlimited" or "unthrottled" CPU usage.
Threads can only run one thing at a time. You can still have multiple tasks running: the pinning just means only one VM runs on that core, but multiple processes inside that VM can still be scheduled onto it.
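What pinning means at the OS level can be sketched in a few lines of Python using `os.sched_setaffinity` (Linux-only). This is an illustration, not how any particular provider does it: a hypervisor applies the same kind of affinity mask to each vCPU thread, so that thread is only ever eligible to run on its assigned host core.

```python
import os

# Pin the current process to logical CPU 0 (Linux-only).
# A host pinning a vCPU does the same thing to the vCPU thread:
# the thread may only be scheduled on that one core, so no other
# VM's work lands there.
os.sched_setaffinity(0, {0})  # pid 0 = the calling process

print(sorted(os.sched_getaffinity(0)))
```

Processes inside the VM are unaffected by this: the guest kernel still schedules as many of its own tasks onto its vCPUs as it likes.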
Looking at the pure hardware over a single nanosecond, then yes, but we need to account for the massive inefficiency involved. The scheduler tries to mitigate that inefficiency, and I'm talking purely about that. Intel and AMD themselves make tweaks to the kernel all the time. Intel has its own distro called Clear Linux; it delivers better performance even on a single-core CPU when you run multiple tasks at once, because the scheduler is heavily optimised to work with the hardware prefetcher, so "CPU wait time" is a lot shorter.
Cores also have their own L1 cache. If you schedule a task and the cache hits, performance jumps drastically.
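The cache-locality effect is easy to glimpse even from Python, though the interpreter's overhead masks most of it, so treat this as illustrative only. The sketch below sums the same list twice, once in sequential index order (cache- and prefetcher-friendly) and once in a shuffled order (frequent cache misses); on most machines the shuffled walk is noticeably slower.

```python
import random
import time

N = 1_000_000
data = list(range(N))
seq_idx = list(range(N))
rnd_idx = seq_idx[:]
random.shuffle(rnd_idx)

def walk(indices):
    """Sum `data` in the given index order and return elapsed seconds."""
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - t0

t_seq = walk(seq_idx)  # sequential access: prefetcher/cache friendly
t_rnd = walk(rnd_idx)  # shuffled access: far more cache misses
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s")
```

The same principle is why a scheduler prefers to wake a task on the core whose cache still holds that task's working set.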
That's why I say: if the host scheduler puts two tasks (i.e. work from two VMs) onto one core, that doesn't mean it will be slower than pinned cores by some fixed percentage, because of how context switching plays out. Theoretically it should be slower (by how much?), but then turbo boost comes into play and that single core can gain something like 300 MHz.
It's just an interesting question whether purely dedicated, pinned cores will always be faster than dynamically scheduled cores - is the Linux scheduler already good enough to mitigate the cost of context switching?
Would love to see some benchmarks or experiences.
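Lacking published numbers, a crude microbenchmark along these lines is a starting point (results on an idle machine will show little difference; the interesting case is a loaded host, which this sketch does not simulate). It times the same CPU-bound loop with the default affinity mask, where the scheduler is free to migrate the process, and again pinned to a single core via `os.sched_setaffinity` (Linux-only).

```python
import os
import time

def busy(n=5_000_000):
    """A CPU-bound loop; returns elapsed wall-clock seconds."""
    t0 = time.perf_counter()
    x = 0
    for i in range(n):
        x += i
    return time.perf_counter() - t0

full_mask = os.sched_getaffinity(0)        # remember the original mask
t_free = busy()                            # scheduler may migrate us freely

os.sched_setaffinity(0, {min(full_mask)})  # pin to one core, like a pinned vCPU
t_pinned = busy()

os.sched_setaffinity(0, full_mask)         # restore the original affinity
print(f"free: {t_free:.3f}s  pinned: {t_pinned:.3f}s")
```

A real comparison would need a noisy-neighbour workload on the host and repeated runs; single measurements like this are dominated by run-to-run variance.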
We also have a few dedicated machines in Frankfurt with unlimited traffic, as rack servers including redundant network (active LACP). I don't know if this is interesting for LowEndTalk users:
Intel Xeon E3-1270 v6
64 GB DDR4 ECC-RAM
2x 480 GB SSD
2x 1 Gbit/s Uplink incl. Traffic-Flat
1x IPv4 / 1x /64-IPv6
single NT
DDoS-Protection
No setup
For 95,99 EUR per month incl. 19 % VAT.
Current deployment 5-7 days.
And the price?
Was too quick to ask, as it was edited.
Yes, they are pinned.