Comments
That will depend on which service you have, but yeah, if you have an auction V1000 on a server that was just filled to the brim, it's gonna be sloooow.
An M10G, Dragon-R, or older M1000/V1000 server will be much snappier. SSD too, ofc.
Right now nearly all new users on M10G and SSD should even land on Ryzen servers.
Though M10G is not meant to be a Ryzen series, and we will probably not add more Ryzens to it, because the difference between the old Xeon 5600 series and a Ryzen is just too great. So it's a happy bonus for the few who get a Ryzen server.
We still have plenty of servers from that era available, with 10G, so in the mid-term (~a year) we will probably separate the older-gen servers from the new Ryzen servers by introducing a Ryzen-only line-up for the 10G tier. Same goes for the SSD servers.
We should probably do a better job of informing people what kind of performance to expect on each tier of service.
Regardless, I would recommend it this way:
You guys probably know better than I do how much CPU is required for transcoding.
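If you want to put a number on it, a quick way is to time a transcode and compare wall-clock time to CPU time. A rough sketch (the input file and codec settings are placeholders, swap in your real workload):

```python
import os
import subprocess
import time

# Placeholder ffmpeg invocation -- adjust codec/preset/input to your workload.
start = time.perf_counter()
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mkv",
     "-c:v", "libx264", "-preset", "veryfast",
     "-c:a", "copy", "out.mp4"],
    check=True,
)
wall = time.perf_counter() - start

# os.times() includes CPU time of reaped child processes (Unix only).
t = os.times()
cpu = t.children_user + t.children_system
print(f"wall {wall:.1f}s, cpu {cpu:.1f}s, ~{cpu / wall:.1f} cores used")
```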
The V1000 servers are literally hand-me-downs from the old M1000, Super250, Super50, etc. series.
The server is very slow! Download speed has dropped to 100 kB/s.
I'm usually pretty laid back when it comes to response times, but I am a bit disappointed by the handling of my open ticket. I'm still waiting for a reply to my October 25th response. I understand that they can get busy, but 5 days for something that should be fairly straightforward seems a bit excessive.
try virmach)
The 6TB plan is wiped out. But I must say, the speed has decreased a lot.
Stats show ever-higher bandwidth usage all around -- but none of the links are anywhere near maxed out.
That being said, the M1000/V1000 series servers are busier than ever, and several abusers have been caught maxing out everything they can.
They run stuff like parpar, pigz, rclone, the *arrs, etc. all at the same time with zero regard for server resources or for the other users on the server. People configuring their Deluge instances to consume all the server's RAM does not help either.
Oh well, that's always been the nature of this business. Some people couldn't give a shit that they are sharing the server with other users and feel entitled to everything they can get their hands on, while refusing to get the service that actually suits their needs. I guess we will soon have to develop hard limits for everything so there is no need for manual intervention in the future.
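(For what it's worth, on a systemd + cgroup v2 host such hard limits can be hung on each customer's user slice. A minimal sketch, with made-up UIDs and limit values:)

```python
import subprocess

# Minimal sketch, assuming a systemd + cgroup v2 host.
# UIDs and limit values below are made up for illustration.
HEAVY_USERS = [1001, 1002]  # hypothetical customer UIDs

for uid in HEAVY_USERS:
    # Persistently cap the whole user slice: RAM, CPU, and disk I/O priority.
    subprocess.run(
        [
            "systemctl", "set-property", f"user-{uid}.slice",
            "MemoryMax=4G",    # hard ceiling: the kernel OOM-kills past this
            "CPUQuota=200%",   # at most two cores' worth of CPU time
            "IOWeight=50",     # deprioritize their disk I/O (default is 100)
        ],
        check=True,
    )
```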
Ultimately though, there is only so much performance in a single server, and if the whole server is sold at 40-50€ a month, there just is not much left for hardware after colocation, power, bandwidth, etc. The sad reality is that with seedboxes very, very few users are willing to pay for performance; so most of the time when we set up high-performance servers, it's a loss leader for us with very weak ROI.
As for the ticket response times, really sorry about that. Help is coming soon.
We have been so busy setting up datacenter infra to bring more servers online that ticket responses have occasionally been a bit slow lately.
Further, we prioritize tickets whose subject looks critical (whole service down) and leave trivial stuff/basic questions for later.
So as a provider, do you plan to take any action against these abusers?
Why don't you create individual SEEDBOXes? Like, create 5 KVMs on your 50EUR server and host 1 seedbox on each VM for $10. Also, you could use CloudLinux, Varnish, and so many other things to improve your clients' experience. You run a DC - don't tell me you never thought about any solution?
He can't oversell as much then
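(For reference, the 5-KVM split suggested above is mechanically simple enough -- something like this with virt-install. Names, sizes, and the install source are assumptions:)

```python
import subprocess

# Purely illustrative: guest names, sizes, and the install source are assumed.
for i in range(1, 6):
    subprocess.run(
        [
            "virt-install",
            "--name", f"seedbox{i}",
            "--memory", "8192",        # MiB of RAM per guest
            "--vcpus", "2",
            "--disk", "size=1000",     # ~1 TB volume per guest
            "--location", "https://deb.debian.org/debian/dists/stable/main/installer-amd64/",
            "--os-variant", "debian11",
            "--graphics", "none",
            "--extra-args", "console=ttyS0",
        ],
        check=True,
    )
```

The hard part, as the provider's reply below lays out, is not the provisioning but the economics and the support burden.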
I want to share a seedbox with another user. Is that allowed, and how do I create a second user?
Already taken.
We move swiftly to suspend, and repeat offenders get terminated.
Single user per instance only. Typically a second user doubles the load.
Each IP increases cost by ~2€ a month; they now cost about $50 apiece.
Further, a VM takes a performance hit on I/O, though we need to retest this as our benchmarks are now like 7 years old -- it ought to have gotten better ... unless you've got an Intel CPU.
Now, such a 50€-a-month-revenue server might cost something like 10€ a month just to power on; the five additional IPs add ~10€ and CloudLinux another 11€ a month, so we are now talking about 31€ a month just to have it powered on. Now we are taking a loss with that server. Bandwidth is super expensive too; IP transit prices are nothing like the consumer pricing you see.
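(Putting the numbers above in one place -- these are the rough figures quoted, not official pricing:)

```python
# Figures are the rough ones quoted above, not official pricing.
revenue    = 50.0       # €/month for the whole server
power      = 10.0       # €/month just to keep it powered on
extra_ips  = 5 * 2.0    # five extra IPs at ~2 €/month each
cloudlinux = 11.0       # €/month license

fixed = power + extra_ips + cloudlinux
print(f"fixed cost: {fixed:.0f} €/month")            # 31 €/month
print(f"left over:  {revenue - fixed:.0f} €/month")  # 19 € before colo,
                                                     # bandwidth, hardware,
                                                     # and support
```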
Now, because it's a VM, that extra cost has to be justified: root access, any distro, all the QA & dev that comes with that, all the tickets asking "how do I connect with SSH?", and all those "qwerty123" root passwords (people have actually asked us to set that password, and not just one user!) -- so we ended up needing 75+€ a month from that server to be profitable. It would also require building all the backend management stuff and processes, documentation for users, etc., which is quite a bit!
But users will not want that, as they seemingly get fewer resources for more money. Most people choose by whatever costs the least, numbers-wise. Even you, @webontop, are essentially asking for more for the same or less money. Further, the most popular price point is in the 5€ neighbourhood, and sadly PayPal can take more than 10% of that...
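(A rough illustration of the fee problem at that price point -- the flat + percentage structure and the exact rates are assumptions; actual PayPal fees vary by account type and region:)

```python
# Assumed flat + percentage structure; actual PayPal rates vary by
# account type and region (e.g. cross-border surcharges).
price    = 5.00   # € -- the popular price point
flat_fee = 0.35   # € fixed fee per transaction (assumption)
pct_fee  = 0.034  # 3.4% variable fee (assumption)

fee = flat_fee + price * pct_fee
print(f"fee on {price:.2f} €: {fee:.2f} € ({fee / price:.1%})")  # ~0.52 € (~10.4%)
```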
I agree, we should offer VMs, but we do not see our current stuff directly translating into that; it would be a completely separate service -- with servers tailored for it.
--
Ultimately it boils down to a lack of human resources: huge taxes in Finland prevent us from effectively hiring more highly skilled people, plus there are huge risks involved. If we had the time to develop all kinds of stuff, we would.
Don't get me wrong, we are in the process of hiring someone who will work solely on customer care, but damn, the bureaucracy is taking a lot of time.
Operating a DC takes a huge amount of effort on its own; servers don't build, test, and rack themselves. And you also need to get that rack in place, plus switches: wire them, configure them, manage all the IP allocations and network configs. Someone needs to install the PDUs and wire them in, make sure power monitoring works, keep the documentation up to date, etc.
Then you need to do maintenance on all of it.
Getting servers to "just work" is not happenstance; it takes a lot of work and diligence.
Sometimes you even need to fix manufacturer defects yourself at the hardware level: we swapped chipset heatsinks and added fans on hundreds of servers, because the motherboard manufacturer had used full copper but the system integrator swapped them for aluminium with garbage thermal paste. The BIOS/BMC also had a monitoring issue, showing the chipset temperature ~20°C lower than it really was. It took like 1½ years to find the reason for the random halts and crashes on those servers, as we could not reproduce them reliably; only after winter, as summer was coming, did we get on the right trail! With an ever-so-slight temperature increase at the DC (~1.5°C), servers started crashing again left and right. Of course, of ALL the things, 40mm fans were nowhere to be found in Finland except at insane prices: 15€ a pop! This is very typical for Finland; due to absurd taxation no one wants to keep anything in stock.