NEW 10Gbps Value Seedbox From Pulsed Media! V10G Series: UP TO 40TB of Storage WITH 10Gbps

124 Comments

  • PulsedMedia Member, Patron Provider

    @Setsura said: Do you have any job listings? I'm a developer and competent sysadmin who could handle the transmission thing and other stuff for you, depending on the job specs.

    Sorry, we don't have a public listing, as we mostly look for local people.
    You can contact us via sales with an open-format application, though.

    @niknar1900 said: This is one of the reasons why I left, using Torrnado on my phone is so convenient. Also HTTPS wasn't turned on for my account.

    That must've been a really, really long time ago, close to 12 years. I believe HTTPS was enabled a few weeks or months after we opened the doors and the first customers were set up.

    A self-signed cert, on the other hand, has been the norm most of the time, since there was no automated way to get certificates in bulk and the cost was initially prohibitive a decade ago.
    These days, if there is a self-signed cert, it's because certbot has broken again for some third-party reason. EFF intentionally breaks it every now and then for old users.

  • @PulsedMedia thank you for your time explaining to the peanut gallery. I despise the crypto scam, but I can certainly understand that if you are building a server for high I/O throughput, combining a very bursty high-priority load with a low-priority background load just makes sense for optimal server use. I also hate capitalism, but I get that servers gotta pay the bills, and that it's gonna be a more usable server with a background load than with twice as many burst loads competing.
    If people don't wanna benefit from the crypto scam even indirectly, as a moral principle, I guess I get it, but either way the scam is gonna keep running. It isn't gonna change until we regulate; you harming yourself won't change it. And this being the far less energy-intensive version, who knows, it may even do a tiny bit of good moving people off proof-of-work systems.

  • PulsedMedia Member, Patron Provider

    Just restocked with a few big servers :)
    The next servers for this lineup might take until the end of January.

  • yoursunny Member, IPv6 Advocate

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

  • @PulsedMedia any chance the year 1TB flash deal can get sonarr as well?

  • PulsedMedia Member, Patron Provider

    @TheBrokenBee said:
    @PulsedMedia any chance the year 1TB flash deal can get sonarr as well?

    Sonarr is on all plans; just configure and launch it from the shell.

  • PulsedMedia Member, Patron Provider

    @yoursunny said:

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

    It's trivial to encrypt everything, btw; just pipe the data through gpg :) For example:

    tar -c [data] | gpg -c | ssh user@server 'cat > filename'

    That tars whatever data you want, gpg encrypts it (it can handle compression too), and finally the stream is passed through ssh into a file on the remote server. For a local copy, just drop the ssh user@server part.

    Decryption is just gpg in reverse, piped back into tar. If you have a big data set you can even use pigz -9 to compress on multiple cores at maximum compression level :)
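
    For instance, a minimal sketch of a backup and the matching restore, assuming pigz is installed on the client side and using placeholder file names:

    # back up: tar the data, compress on all cores, encrypt, stream to the server
    tar -c [data] | pigz -9 | gpg -c | ssh user@server 'cat > backup.tar.gz.gpg'
    # restore: stream back, decrypt, decompress, unpack
    ssh user@server 'cat backup.tar.gz.gpg' | gpg -d | pigz -d | tar -x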

    Thanked by 2: yoursunny, user123
  • @yoursunny said:

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

    "Family photos" is the new Linux ISO'S. But I give @Daniel15 first credit.

  • @PulsedMedia said:

    @yoursunny said:

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

    It's trivial to encrypt everything, btw; just pipe the data through gpg :) For example:

    tar -c [data] | gpg -c | ssh user@server 'cat > filename'

    That tars whatever data you want, gpg encrypts it (it can handle compression too), and finally the stream is passed through ssh into a file on the remote server. For a local copy, just drop the ssh user@server part.

    Decryption is just gpg in reverse, piped back into tar. If you have a big data set you can even use pigz -9 to compress on multiple cores at maximum compression level :)

    Encrypt his photos? Have you seen his posts? He'd be happy to have some random hacker download and view all those pictures.

  • Andrews Member
    edited December 2021

    @PulsedMedia said:

    tar -c [data] | gpg -c | ssh user@server 'cat > filename'
    

    Am I right that, at your current level of inter-user isolation (or rather the lack of it), if users were able to spot the Chia harvester process, they would also be able to snoop on the above command and find that secret "user@server" (the heart of the worldwide pushup CDN infrastructure) involved in processing yoursunny's precious "family photos"?

    EDIT: or maybe that command is meant to be run on the client side, in which case there's no issue at all

  • @TimboJones said:

    @yoursunny said:

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

    "Family photos" is the new Linux ISO'S. But I give @Daniel15 first credit.

    LOL. I did actually mean it legitimately.

  • @PulsedMedia I just claimed one of the seedboxes, thank you for what you do!

    Thanked by 1: PulsedMedia
  • PulsedMedia Member, Patron Provider

    @TimboJones said:

    @PulsedMedia said:

    @yoursunny said:

    @TimboJones said:
    Well, I wouldn't be too concerned about the security aspect; it's just Linux ISOs in 100% of customer data.

    I have the 450GB plan.
    I'm thinking about canceling the HostHatch 250GB KVM and moving the backups here.
    They are not movies or ISOs, but family photos (unencrypted) and tax returns (encrypted).

    It's trivial to encrypt everything, btw; just pipe the data through gpg :) For example:

    tar -c [data] | gpg -c | ssh user@server 'cat > filename'

    That tars whatever data you want, gpg encrypts it (it can handle compression too), and finally the stream is passed through ssh into a file on the remote server. For a local copy, just drop the ssh user@server part.

    Decryption is just gpg in reverse, piped back into tar. If you have a big data set you can even use pigz -9 to compress on multiple cores at maximum compression level :)

    Encrypt his photos? Have you seen his posts? He'd be happy to have some random hacker download and view all those pictures.

  • PulsedMedia Member, Patron Provider

    @DogbertPrime said:
    @PulsedMedia i just claimed one of the seedboxes, thank you for what you do!

    Thank You for choosing Pulsed Media :)

  • PulsedMedia Member, Patron Provider

    Getting low on stock, so get yours now if you are looking to get one.

    I don't think we can get new nodes online before mid-January.
    We just racked roughly a rack full of 4x3.5" nodes. Drives, CPUs, RAM, network gear etc. are already in stock, though. With QA, two more racks of servers still to rack, the holidays, etc., it might even take until around February before new stock starts rolling in in significant numbers.

    Dragon-R is sold out, but a new mega server is already on order, and another one is planned to be ordered by February. The first has 36x 3.5" direct-attached, and the second will use a SAS 12G expander and SAS drives, and should be even faster. Both will be 32-core EPYC, as is normal for the Dragon-R series :) These are north of 20,000€ each in a maxed-out config, but to push performance even further the current plan is to go with 8-12TB drives instead of 18TB. Not your typical seedbox server, then, and it's always fun to see GB/s rates for simultaneous read+write at 100% random I/O on this level of storage; the performance these mega servers put through is just insane :)

    M10G has barely any stock either.

    All of the other offers are practically sold out.

    One or two server installments might appear here and there until we get the new servers running. We're also considering an atypical config for many of them, leaving the second CPU in. Many of the servers are 2x 12-core Opterons, which would make for wicked budget compute servers, so we might just offer them as dedis.

  • Ok..

  • @PulsedMedia said:

    We just racked roughly a rack full of 4x3.5" nodes

    What configuration are you using with these? Hardware RAID? Linux md raid? bcache? Chunk size for the raid set?

    Curious minds would love to know :)

  • PulsedMedia Member, Patron Provider

    @Quartermaster said:

    @PulsedMedia said:

    We just racked roughly a rack full of 4x3.5" nodes

    What configuration are you using with these? Hardware RAID? Linux md raid? bcache? Chunk size for the raid set?

    Curious minds would love to know :)

    These will mostly be RAID0 V10G servers, typically something along the lines of:

    mdadm --create /dev/md1 -l0 -n4 --chunk=2048 /dev/sd[abcd]4
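
    To sanity-check the resulting array afterwards, the standard mdadm tooling works (nothing Pulsed Media specific, just a generic sketch):

    # show array status and confirm the 4-drive RAID0 with 2048K chunks
    cat /proc/mdstat
    mdadm --detail /dev/md1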

    Thanked by 1: Quartermaster
  • PulsedMedia Member, Patron Provider

    There is currently reaaaally bad congestion on the Twelve99/Telia network: https://pulsedmedia.com/clients/announcements.php?id=537

    Hopefully resolved by tomorrow.

    Thanked by 1: nordmann
  • nordmann Member
    edited December 2021

    @PulsedMedia said: hence Deluge support was added, and we are now working on adding qBittorrent next. qBittorrent shows potential UI-wise to replace rtorrent.

    Thanks a lot for already implementing qBittorrent <3
    I just gave it a try on a V1k L instance and it runs pretty well (with only a few torrents so far), although in direct comparison with rtorrent I've seen some >150% CPU usage when reaching max download speed (1Gbps) from some peers.

    Speaking of the recent implementation, is there a special reason it's still on v4.1.5?
    Seems this is from Dec 24th, 2018, which is even older than the last rtorrent release, 0.9.8 from 2019 😇

    Needless to say, qBT has gotten many improvements over the last few years, but to fully replace rtorrent for me personally I would want at least the v4.3.0 release, for: FEATURE: Add RSS functionality in Web UI (Sepro)

    There are also alternative WebUIs worth checking out, e.g. https://flood.js.org/ (a single super-sleek UI supporting rTorrent, qBt, Transmission and, experimentally, Deluge as backends)

    I got it up for testing in a few minutes, but of course I don't wanna run anything "unofficially" on the server.

    git clone https://github.com/jesec/flood.git
    npm install && npm run build
    npm run start -- --host 0.0.0.0 --port $port --qburl http://httpUser:httpPass@localhost/user/...

    Cheers!

    Thanked by 1: PulsedMedia
  • PulsedMedia Member, Patron Provider
    edited December 2021

    @nordmann said:

    @PulsedMedia said: hence Deluge support was added, and we are now working on adding qBittorrent next. qBittorrent shows potential UI-wise to replace rtorrent.

    Thanks a lot for already implementing qBittorrent <3
    I just gave it a try on a V1k L instance and it runs pretty well (with only a few torrents so far), although in direct comparison with rtorrent I've seen some >150% CPU usage when reaching max download speed (1Gbps) from some peers.

    Speaking of the recent implementation, is there a special reason it's still on v4.1.5?
    Seems this is from Dec 24th, 2018, which is even older than the last rtorrent release, 0.9.8 from 2019 😇

    Needless to say, qBT has gotten many improvements over the last few years, but to fully replace rtorrent for me personally I would want at least the v4.3.0 release, for: FEATURE: Add RSS functionality in Web UI (Sepro)

    There are also alternative WebUIs worth checking out, e.g. https://flood.js.org/ (a single super-sleek UI supporting rTorrent, qBt, Transmission and, experimentally, Deluge as backends)

    I got it up for testing in a few minutes, but of course I don't wanna run anything "unofficially" on the server.

    git clone https://github.com/jesec/flood.git
    npm install && npm run build
    npm run start -- --host 0.0.0.0 --port $port --qburl http://httpUser:httpPass@localhost/user/...

    Cheers!

    Good thing seedbox servers generally have their CPUs almost fully idle then, ain't it? ;)

    We currently use the repo version of qBittorrent. No sense in us going through the trouble of maintaining it from source until we see it gets enough usage and there's a solid reason for it.

    Flood has been requested by many indeed.

    You can run it "unofficially" no problem; we don't have time to implement everything under the sun, so there is no "unofficial" usage. Just be mindful of other users, that's all :)

    Thanked by 2: nordmann, chedenaz
  • netomx Moderator, Veteran

    Mine's working great. Thanks @PulsedMedia

    Thanked by 1: PulsedMedia
  • @PulsedMedia any ETA for the official qBittorrent rollout on the M1000 A2 series?
    It would be a godsend!
    I always hated that rutorrent starts throwing timeouts all the time once there are ~2800+ torrents.

  • @PulsedMedia said: You can run it "unofficially" no problem; we don't have time to implement everything under the sun, so there is no "unofficial" usage. Just be mindful of other users, that's all

    Nice, thank you kindly for that relaxed point of view :)

    Regarding the version, the oldstable Debian repo is really pretty conservative 🙈

    There is also a Git repo constantly publishing statically linked binaries of the latest qBt, running great out of the box 👌
    https://github.com/userdocs/qbittorrent-nox-static
    Implementation would be as simple as:
    wget -qO ~/bin/qbittorrent-nox https://github.com/userdocs/qbittorrent-nox-static/releases/latest/download/x86_64-qbittorrent-nox
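
    Making it executable and starting the headless WebUI would then be something like this (the port here is a hypothetical choice, not an assigned one):

    # mark the downloaded binary executable and launch qBittorrent's WebUI
    chmod +x ~/bin/qbittorrent-nox
    ~/bin/qbittorrent-nox --webui-port=8080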

    Thanked by 1: chedenaz
  • yoursunny Member, IPv6 Advocate

    @PulsedMedia said:
    Good thing seedbox servers generally have their CPUs almost fully idle then, ain't it? ;)

    You can run it "unofficially" no problem; we don't have time to implement everything under the sun, so there is no "unofficial" usage. Just be mindful of other users, that's all :)

    I see 6 cores and 48GB RAM.
    Since it's 24 users per server, everyone gets 25% of a core and 2GB RAM, right?

    However, I can't figure out how to set CPU and RAM limits.
    Last time I tried the pre-installed ffmpeg, it went up to 200% CPU, so I quickly stopped it so as not to ruffle any feathers.
    The two methods I know are Docker and systemd, but neither is possible without root.

    As for running programs unofficially, one issue is port number allocation.
    I notice that uTorrent and lighttpd are each assigned a port number whose last two digits are the same as the account UID.
    Does that mean I can use any port number that ends with the account UID?
    Are there any other guidelines I should know about?

    Then, I see I could insert custom lighttpd config, but how do I make lighttpd reload the config?
    If lighttpd config changes could take effect, there would be no port number problem, because I could then have the app listen on a Unix socket and use lighttpd as a reverse proxy.

  • PulsedMedia Member, Patron Provider

    @that_guy said:
    @PulsedMedia any ETA for the official qBittorrent rollout on the M1000 A2 series?
    It would be a godsend!
    I always hated that rutorrent starts throwing timeouts all the time once there are ~2800+ torrents.

    It is already rolling out on all servers. If it's not on yours, open a ticket requesting that your server be updated.

    rutorrent works with 14k+ torrents on our setup; if not, that server is busy, and that comes with the territory of entry-level shared.

    @nordmann said:

    @PulsedMedia said: You can run it "unofficially" no problem; we don't have time to implement everything under the sun, so there is no "unofficial" usage. Just be mindful of other users, that's all

    Nice, thank you kindly for that relaxed point of view :)

    Regarding the version, the oldstable Debian repo is really pretty conservative 🙈

    There is also a Git repo constantly publishing statically linked binaries of the latest qBt, running great out of the box 👌
    https://github.com/userdocs/qbittorrent-nox-static
    Implementation would be as simple as:
    wget -qO ~/bin/qbittorrent-nox https://github.com/userdocs/qbittorrent-nox-static/releases/latest/download/x86_64-qbittorrent-nox

    If that's the same userdocs who was a mod on reddit, as I think he is, we will not touch anything he does or has done. He would cause a shitstorm just to spite us. You would not believe the shit he tried to pull off on his moderator power trip.

    @yoursunny said:

    @PulsedMedia said:
    Good thing seedbox servers generally have their CPUs almost fully idle then, ain't it? ;)

    You can run it "unofficially" no problem; we don't have time to implement everything under the sun, so there is no "unofficial" usage. Just be mindful of other users, that's all :)

    I see 6 cores and 48GB RAM.
    Since it's 24 users per server, everyone gets 25% of a core and 2GB RAM, right?

    However, I can't figure out how to set CPU and RAM limits.
    Last time I tried the pre-installed ffmpeg, it went up to 200% CPU, so I quickly stopped it so as not to ruffle any feathers.
    The two methods I know are Docker and systemd, but neither is possible without root.

    As for running programs unofficially, one issue is port number allocation.
    I notice that uTorrent and lighttpd are each assigned a port number whose last two digits are the same as the account UID.
    Does that mean I can use any port number that ends with the account UID?
    Are there any other guidelines I should know about?

    Then, I see I could insert custom lighttpd config, but how do I make lighttpd reload the config?
    If lighttpd config changes could take effect, there would be no port number problem, because I could then have the app listen on a Unix socket and use lighttpd as a reverse proxy.

    It's not a VPS, so you can burst-use all 6 of those cores.

    ffmpeg, if I recall right, has a flag for how many cores to use, and just start it with nice 19; then it doesn't matter even if you try to use all cores, as only idle cycles would be used.
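
    For example, a minimal sketch with placeholder file names (-threads is a standard ffmpeg option):

    # run at the lowest CPU priority, capped at 2 encoder threads
    nice -n 19 ffmpeg -i input.mkv -threads 2 output.mp4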

    Port numbers ending in the UID are purely coincidental.

    To restart lighttpd: kill the process, and automation will start it again within 2 minutes -- or you can start it yourself :)
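
    Assuming the per-user lighttpd instance runs under your own account, that could look like:

    # kill your own lighttpd; the watchdog respawns it within ~2 minutes
    pkill -u "$USER" lighttpd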

  • yoursunny Member, IPv6 Advocate

    @PulsedMedia said:
    rutorrent works with 14k+ torrents on our setup; if not, that server is busy, and that comes with the territory of entry-level shared.

    I can never imagine how someone could mentally keep track of 14K torrents.
    I have 10 movies and I'm already confused about which ones I haven't watched.
    I know I could delete a movie right after I've watched it, but I read in a magazine that I'm supposed to keep the files for a few weeks before deleting.

    To restart lighttpd: kill the process, and automation will start it again within 2 minutes -- or you can start it yourself :)

    I see, a cron script or the like.
    I didn't find a systemd service, so I was too afraid to kill lighttpd because I worried it wouldn't come up again until who knows when.

    ffmpeg if i recall right has a flag for how many cores to use, and just use nice 19 to start it, then it does not matter if you even try to use all cores as only idle cycles would be used.

    ffmpeg would become nice 19 after a minute.
    Automation strikes again.

  • @yoursunny said:
    I have 10 movies and I'm already confused on which movies I haven't watched.

    You can pull up random tweets from the last decade but can't recall 10 movies you've watched in recent months? Are you sure you're downloading movies worth watching in the first place? Whether you saw a shitty movie is likely harder to remember.

    Remembering the last 14k movies I may have watched would be tricky, not the last 10 in a list.

    Thanked by 1: yoursunny
  • PulsedMedia Member, Patron Provider

    @yoursunny said: I didn't find a systemd service, so I was too afraid to kill lighttpd because I worried it wouldn't come up again until who knows when.

    If we didn't have a watchdog we'd be swamped with tickets X)

    It's actually kind of ridiculous how many watchdogs we need to have, and how flaky most software actually is.

    @yoursunny said: ffmpeg would become nice 19 after a minute.

    Automation strikes again.

    Yup, I think I forgot we might have automation for that too... oh well, so much happens in over a decade that I can't recall every single detail :)

    Thanked by 1: yoursunny
  • @PulsedMedia My service is down: 502 Bad Gateway on the website, and torrent trackers are showing no activity.
