Pulsed Media Seedbox Auctions Are Back! 1TB and 6TB 1Gbps RAID0 Options Available NOW! - Page 4


Comments

  • PulsedMedia Member, Patron Provider
    edited October 2021

    @codelock said:

    @Wicked said:
    Anyone running Jellyfin or Emby on one of these and can share some feedback?

    Please don't. If you just want to stream directly, without extras like thumbnails, it may work fine; otherwise the disk speed is too low. With thumbnails and such enabled it will literally take an eternity to load.

    That will depend on which service you have, but yeah, if you have an auction V1000 on a server that was just filled to the brim, it's gonna be sloooow.

    M10G, Dragon-R, or older M1000/V1000 server will be much snappier. SSD ofc.
    Right now nearly all new M10G and SSD users should even land on Ryzen servers :)
    THO, M10G is not meant to be a Ryzen series and we will probably not add more Ryzens to it, because the difference between the old Xeon 5600 series and a Ryzen is just too great. So it's a happy bonus for the few who get a Ryzen server.

    We still have plenty of servers from that era available, with 10G, so in the mid-term (~a year) we will probably separate the older-gen servers from the new Ryzen servers by introducing a Ryzen-only line-up for the 10G tier. The same goes for the SSD servers.

    We should probably do a better job of informing people what kind of performance to expect from each tier of service.

    Regardless, I would recommend it this way:

    • V1000: Direct streaming only via Kodi, VLC, etc. (i.e. over an SSHFS mount. Note that Kodi cannot stream high-bitrate content; that's a limit of the program, not the server. Even locally it cannot handle typical high-bitrate 4K HDR.)
    • M1000: Emby or Jellyfin fine, no transcoding tho
    • V1000 Older server: Same as M1000
    • M10G Regular server: same as M1000
    • M10G Ryzen server: Transcoding is fine too, just keep the simultaneous users at max ~2
    • SSD Ryzen server: same as M10G
    • SSD Old server: Same as M1000 or max 1 ~1080p transcode (not enough CPU remaining for other users if you do heavy transcoding)
    • Dragon-R Servers: Stream and transcode to your heart's content. Every single server is currently a 32-core/64-thread EPYC, with plenty of I/O perf too thanks to RAID10 etc. Multiple simultaneous transcodes? Sure, why not :)
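    The recommendations above boil down to a rough per-tier transcode budget, which can be sketched as a small lookup. The tier names come from the post; the numeric budgets are my own simplified reading, not official limits:

    ```python
    # Rough simultaneous-transcode budget per tier, per the post above.
    # Tier names are from the post; numbers are a simplified reading.
    TIER_MAX_TRANSCODES = {
        "V1000": 0,        # direct streaming only (SSHFS + Kodi/VLC)
        "M1000": 0,        # Emby/Jellyfin OK, but no transcoding
        "M10G": 0,         # same as M1000 on the regular (Xeon) hardware
        "M10G-Ryzen": 2,   # transcoding fine, keep it to ~2 simultaneous users
        "SSD-Ryzen": 2,    # same as M10G Ryzen
        "SSD-Old": 1,      # at most one ~1080p transcode
        "Dragon-R": 99,    # 32c/64t EPYC + RAID10: effectively unconstrained
    }

    def can_transcode(tier: str, simultaneous_users: int) -> bool:
        """True if that many simultaneous transcodes fit the tier's rough budget."""
        return simultaneous_users <= TIER_MAX_TRANSCODES.get(tier, 0)

    print(can_transcode("Dragon-R", 4))   # True
    print(can_transcode("M1000", 1))      # False
    ```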

    You guys probably know better than I do how much CPU transcoding requires :)

    The V1000 servers are literally hand-me-downs from the old M1000, Super250, Super50, etc. series.

  • The server is very slow! Download speed has dropped to 100 KB/s :(

  • I'm usually pretty laid back when it comes to response times, but I am a bit disappointed by the response to my open ticket. I'm still waiting for a response to my October 25th reply. I understand that they can get busy, but 5 days for something that should be fairly straightforward seems a bit excessive.

  • bruh21 Member, Host Rep

    @user123 said:
    I'm usually pretty laid back when it comes to response times, but I am a bit disappointed by the response to my open ticket. I'm still waiting for a response to my October 25th reply. I understand that they can get busy, but 5 days for something that should be fairly straightforward seems a bit excessive.

    try virmach)

  • The 6TB plan is sold out. But I must say the speed has dropped considerably.

  • PulsedMedia Member, Patron Provider

    Stats show ever higher BW usage all around -- but nowhere near is any of the links maxed out.
    That being said, the M1000/V1000 series servers are busier than ever, and several abusers have been caught maxing out everything they can.

    Running stuff like parpar, pigz, rclone, the *arrs, etc. all at the same time with zero regard for server resources or for the other users on the server. People configuring their Deluge instances to consume all the server's RAM does not help either.

    Oh well, that's always been the nature of it. Some people couldn't give a shit that they are sharing the server with other users and feel entitled to everything they can get their hands on, while refusing to get the service that suits their needs best. I guess we will soon have to develop hard limits for everything, so there is no need for manual intervention in future.
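    As a hedged illustration of what such per-user hard limits could look like on a Linux host, systemd user slices can cap RAM, CPU, and process counts per user. The values below are made-up examples, not anything Pulsed Media has announced:

    ```ini
    # /etc/systemd/system/user-.slice.d/50-limits.conf
    # Drop-in applied to every per-user slice (systemd v239+).
    # Numbers are illustrative only.
    [Slice]
    MemoryMax=2G        # hard RAM cap; e.g. a runaway Deluge gets OOM-killed
    CPUQuota=200%       # at most two cores' worth of CPU time per user
    TasksMax=512        # cap processes/threads per user
    ```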

    Ultimately, tho, there is only so much performance in a single server, and if the whole server sells for 40-50€ a month, there just is not much left for hardware after colocation, power, bandwidth, etc. The sad reality is that very few seedbox users are willing to pay for performance; so most of the time when we set up high-performance servers, it's a loss leader for us with very weak ROI.

    As for the ticket response times, really sorry about that. Help is coming soon.
    We have been so busy setting up datacenter infra so we can bring more servers online that ticket responses have occasionally been a bit slow lately.

    Further, we prioritize tickets whose subject looks critical (whole service down) and leave trivial stuff/basic questions for later.

  • plumberg Veteran, Megathread Squad

    @PulsedMedia said:
    Stats show ever higher BW usage all around -- but nowhere near is any of the links maxed out.
    [...]

    So as a provider, do you plan to take any action against these abusers?

  • @PulsedMedia said:
    Stats show ever higher BW usage all around -- but nowhere near is any of the links maxed out.
    [...]

    Why don't you create individual seedboxes? Like, create 5 KVM VMs on your 50€ server and host one seedbox on each VM for $10. You could also use CloudLinux, Varnish, and so many other things to improve your clients' experience. You run a DC -- don't tell me you never thought about any solution?

  • He can't oversell as much then.

  • I want to share a seedbox with another user, is that allowed and how do I create a second user?

  • PulsedMedia Member, Patron Provider

    @plumberg said: So as a provider do you plan to take any action on these said abusers?

    Already taken.
    We move swiftly to suspend, and repeat offenders get terminated.

    @tr1cky said: I want to share a seedbox with another user, is that allowed and how do I create a second user?

    Single user per instance only. A second user typically doubles the load.

    @webontop said: Why don't you create individual SEEDBOX? [...]

    Each IP increases the cost by ~2€ a month, and they now cost about $50 apiece to acquire.
    Further, a VM has a performance hit on I/O, tho we need to retest this as our benchmarks are now like 7 years old -- it ought to have gotten better... unless you have an Intel CPU.

    Now, such a 50€-a-month-revenue server might cost something like 10€ a month just to power on; add the extra IPs, plus another 11€ a month for CloudLinux, and we are talking about ~31€ a month just to have it powered on. Now we'd be taking a loss on that server. Bandwidth is super expensive too; IP transit prices are nothing like the consumer pricing you see.
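    The arithmetic there works out as follows. All the figures come from the post itself; splitting the 31€ as power plus five extra IPs plus CloudLinux is my reading of the numbers, not an official breakdown:

    ```python
    # Figures from the post; the 5-IP split is my assumption
    # (one extra IP per proposed KVM slice).
    power_eur = 10            # monthly cost just to have the server powered on
    ip_eur = 2                # extra cost per additional IP, per month
    n_vms = 5                 # five KVM slices, as @webontop suggested
    cloudlinux_eur = 11       # CloudLinux licensing per month

    fixed_costs = power_eur + n_vms * ip_eur + cloudlinux_eur
    print(fixed_costs)        # 31 -- before bandwidth, hardware, and labour

    revenue = n_vms * 10      # five seedboxes at ~$10 each (treating $ ~ €)
    print(revenue - fixed_costs)  # 19 left per month for everything else
    ```

    Which is why the post concludes there is little margin left once bandwidth, support tickets, and development are factored in.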

    Now, because it's a VM, that extra cost has to be justified: root access, any distro, all the QA & dev that comes with that, all the tickets asking "how do I connect with SSH?", and all those "qwerty123" root passwords (people have actually asked us to set that password, and not just one user!) -- so we ended up needing 75+€ a month from that server to be profitable. It would also require building all the backend management tooling and processes, documentation for users, etc., which is quite a bit!

    But users will not want that, as they seemingly get fewer resources for more money. Most people choose whatever costs least, numbers-wise. Even you, @webontop, are essentially asking for more for the same or less money. Further, the most popular price point is in the 5€ neighbourhood; sadly, PayPal can take more than 10% of that...

    I agree we should offer VMs, but we don't see our current services translating directly into that; it would be a completely separate service, with servers tailored for it.

    --

    Ultimately it boils down to a lack of human resources, and huge taxes in Finland preventing us from effectively hiring more highly skilled people, plus the huge risks involved. If we had time to develop all kinds of stuff, we would.
    Don't get me wrong, we are in the process of hiring someone who will work solely on customer care, but damn, the bureaucracy is taking a lot of time.

    Operating a DC takes a huge amount of effort on its own; servers don't build, test, and rack themselves. You also need to get the rack in place, and the switches; wire them, config them, manage all the IP allocations and network configs. Someone needs to install the PDUs and wire them in, make sure power monitoring is working, keep the documentation up to date, etc.

    Then you need to do maintenance on them.

    Getting servers "to just work" is not happenstance; it takes a lot of work and diligence.

    Sometimes you even need to fix manufacturer defects yourself at the hardware level. We swapped chipset heatsinks and added fans on hundreds of servers: the mobo manufacturer had used full copper, but the system integrator swapped them for aluminium with garbage thermal paste. The BIOS/BMC also had a monitoring issue, showing the chipset temp ~20C lower than reality. It took like 1½ years to find the reason for the random halts and crashes on those servers, as we could not reproduce it reliably either; only after winter, as summer was coming, did we get on the right trail! With an ever-so-slight temp increase at the DC (~1.5C), servers started crashing again left and right. Of course, of ALL the things, 40mm fans were nowhere to be found in Finland except at insane prices: 15€ a pop! This is very typical for Finland; due to absurd taxation, no one wants to keep anything in stock.
