
KVM 2x E5-2680v4 / 2GB DDR4 / 50GB NVMe + 3.6TB HDD RAID10 - located in Bucharest! Telia upstream!


Comments

  • seed2tweet Member
    edited March 2021

    @notarobo said:

    try downloading to NVMe and compare

    Yes, for NVMe the speeds are better, but still not quite good enough. Also, even when I download over FTP from the NVMe, the speed does not saturate my 100Mbps connection...
    Unfortunately, it seems that this plan is not working out for me either :(
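
    For reference, a minimal sketch of the "download to NVMe and compare" test being discussed; the URL and the /mnt/hdd mount point below are placeholders, not details from this thread:

        # network only, no disk writes
        curl -o /dev/null https://example.com/1GB.bin
        # same download written to the NVMe root filesystem
        curl -o /root/test.bin https://example.com/1GB.bin && rm /root/test.bin
        # same download written to the 3.6TB HDD, assuming it is mounted at /mnt/hdd
        curl -o /mnt/hdd/test.bin https://example.com/1GB.bin && rm /mnt/hdd/test.bin

    If the first transfer is fast but the last one crawls, the bottleneck is the HDD (or contention on it), not the network.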

  • @yorkchou said:

    I believe it's due to the slow HDD speed.
    Can't endure this slow speed, so I raised a refund ticket on 21st March; still waiting for a reply.
    https://paste.ubuntu.com/p/KWyRp56Yky/

    I got similar low speed numbers: https://ibb.co/QJQVd7C

  • @seed2tweet said:
    Unfortunately, it seems that this plan is not working out for me either :(

    You want "NVME+3.6TB" + premium service for 6 Eur month :smiley:

  • seed2tweet Member
    edited March 2021

    @momkin said:

    You want "NVME+3.6TB" + premium service for 6 Eur month :smiley:

    Not premium, of course, but usable for my use case. I guess this server configuration is just not good for a seedbox/media server, which does not necessarily require an NVMe disk but needs a large, speedy HDD instead.

    But by all means, thanks to the provider for this offer!

  • @seed2tweet said:
    Not premium, of course, but usable for my use case. I guess this server configuration is just not good for a seedbox/media server, which does not necessarily require an NVMe disk but needs a large, speedy HDD instead.

    I guess you can't combine "large" and "speed", so you must choose one :smile:

  • @momkin said:

    @seed2tweet said:
    Not premium, of course, but usable for my use case. I guess this server configuration is just not good for a seedbox/media server, which does not necessarily require an NVMe disk but needs a large, speedy HDD instead.

    I guess you can't combine "large" and "speed", so you must choose one :smile:

    You can, you just add "price" in large capital letters to the equation.

  • @afn said:
    @tommycai, may I ask when you ordered? I ordered on the 2nd of March and still haven't got mine either.

    I know someone who ordered by the 27th of Feb and got theirs... So I think we should get ours soon.

    Feb 26th and still pending

  • Pixels Member
    edited March 2021

    @seed2tweet said:

    @notarobo said:

    try downloading to NVMe and compare

    Yes, for NVMe the speeds are better, but still not quite good enough. Also, even when I download over FTP from the NVMe, the speed does not saturate my 100Mbps connection...
    Unfortunately, it seems that this plan is not working out for me either :(

    I'd suggest waiting. I have one of cociu's 2TB boxes from BF and it now seems somewhat more responsive. I get 80-90 MB/s now using the same hdparm command.
    This one doesn't even have a small NVMe disk, so the OS feels way slower.

    Yes, it may even take months for things on your host node to calm down lol

    Pricing is hard to beat, tho

    Thanked by seed2tweet
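
    For reference, the hdparm check being referenced looks roughly like this; the device name below is an assumption, so check lsblk for whichever device is the 3.6TB disk on your VM:

        hdparm -tT /dev/vdb    # "-T" = cached reads, "-t" = buffered sequential reads

    hdparm only measures sequential reads, so the numbers swing a lot with host-node load; a dd or fio run gives a fuller picture.
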
  • seed2tweet Member
    edited March 2021

    @Pixels said:

    I'd suggest waiting. I have one of cociu's 2TB boxes from BF and it now seems somewhat more responsive. I get 80-90 MB/s now using the same hdparm command.
    This one doesn't even have a small NVMe disk, so the OS feels way slower.

    Yes, it may even take months for things on your host node to calm down lol

    Do you really think things might "calm down" and why? Or maybe your HDD was faster to begin with? Did you measure the speeds when you just got the box?

  • @seed2tweet said:

    @Pixels said:

    I'd suggest waiting. I have one of cociu's 2TB boxes from BF and it now seems somewhat more responsive. I get 80-90 MB/s now using the same hdparm command.
    This one doesn't even have a small NVMe disk, so the OS feels way slower.

    Yes, it may even take months for things on your host node to calm down lol

    Do you really think things might "calm down" and why? Or maybe your HDD was faster to begin with? Did you measure the speeds when you just got the box?

    Every new user means new torrents and new downloads onto the box. After some time the box will be full and there will be fewer writes.

    Thanked by seed2tweet
  • @seed2tweet said:

    @Pixels said:

    I'd suggest waiting. I have one of cociu's 2TB boxes from BF and it now seems somewhat more responsive. I get 80-90 MB/s now using the same hdparm command.
    This one doesn't even have a small NVMe disk, so the OS feels way slower.

    Yes, it may even take months for things on your host node to calm down lol

    Do you really think things might "calm down" and why? Or maybe your HDD was faster to begin with? Did you measure the speeds when you just got the box?

    Because of people benchmarking and setting up their servers, which puts above-average load on the server.

    Thanked by seed2tweet
  • @seed2tweet said:
    Has anybody tried setting up a seedbox on it?
    So, I mounted the 3.6TB disk as per the instructions above and installed Swizzin with rtorrent/rutorrent set to download to that disk ("/dev/vda"). I get very low download (<10MB/s) and upload (a couple of MB/s) torrent speeds on private trackers (with enough seeders per torrent). Also, upload speed from the server over FTP is just 2-3 MB/s max. I am based in the EU, btw.

    What was your expectation? 80Mbps (i.e. your <10MB/s expressed in bits) isn't "very low download" to most people. It's torrent, so it's not consistent or easily reproducible as a troubleshooting metric in many cases.

  • @TimboJones said:

    What was your expectation? 80Mbps (i.e. your <10MB/s expressed in bits) isn't "very low download" to most people. It's torrent, so it's not consistent or easily reproducible as a troubleshooting metric in many cases.

    I was comparing to the speeds I'm getting on Time4VPS (up to 500Mbps download and stable 100Mbps FTP uploads).

    But what you're all saying makes sense. After this initial setup period, speeds should improve. I'll stick around and see.

    I really appreciate the support I am getting here! Thanks, guys!

    Thanked by TimboJones
  • @fragpic said:
    @yorkchou he responded to my ticket but for some reason I did not get an email. Check in the client area directly.
    He extended the money back guarantee by 10 more days.
    Btw, I'm still monitoring the disk speed and it hasn't improved.

    My ticket from the 21st still has no reply :'(

  • cociu Member

    @yorkchou said: My ticket from the 21st still has no reply

    If you have not pushed it, please send me the ticket number in private.

    Thanked by yorkchou
  • @cociu said:

    @yorkchou said: My ticket from the 21st still has no reply

    If you have not pushed it, please send me the ticket number in private.

    Hi, boss. I sent it to you yesterday; you said you would proceed yesterday :'(

  • afn Member
    edited March 2021

    @cociu said:

    @fragpic said: Okay, I can wait a few days but do you have a min assured speed?

    Yes. If I see this shit continue, the first thing I will do is limit the port speed and you will see how this improves. We just suspended 2 users who were pushing +5Gbps constantly; I understand people are copying their data, but the port is not dedicated to one VM.

    That doesn't make sense... but feel free to correct me if I am wrong...
    These VPSes have a 10TB traffic cap. If a user pushes 5Gbps (~625MB/s) constantly, to the point that it triggered a suspension, I would assume he did that for a long duration. But at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours... (unless they're uploading from the VM, which would be different)

    Also, are you monitoring for CPU abuse?
    I see the CPU at 100%, but I fail to understand how the hell people are able to saturate 2x E5-2680v4 on a storage machine!! It seems like people are using it for transcoding/computing or other CPU-intensive tasks that should definitely not be done on this kind of VM...
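
    The arithmetic above roughly checks out; a quick back-of-the-envelope check, assuming a sustained 5Gbps and decimal units:

        # hours to burn the 10TB traffic cap / fill the 3.6TB disk at 5Gbps
        awk 'BEGIN { r = 5e9/8;                      # bytes per second at 5Gbps
                     printf "10TB traffic: %.1f h\n", 10e12  / r / 3600;
                     printf "3.6TB disk:   %.1f h\n", 3.6e12 / r / 3600 }'
        # prints roughly 4.4 h and 1.6 h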

  • @afn said:

    but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic :smile:

  • cociu Member

    @momkin said: but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic

    No, it is not cociu logic, it is real life. DO NOT FORGET THE 10GBPS PORT IS FREE, SO WE DO NOT ENCOURAGE ABUSING IT.

  • @momkin said:

    @afn said:

    but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic :smile:

    Did you ever hear about burst?

  • afn Member
    edited March 2021

    @cociu said:

    @momkin said: but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic

    No, it is not cociu logic, it is real life. DO NOT FORGET THE 10GBPS PORT IS FREE, SO WE DO NOT ENCOURAGE ABUSING IT.

    What I am trying to say is: if the entire problem was just due to 2 users abusing the server, the problem would have ended by now whether you suspended them or not, because they would run out of traffic and/or disk space before you even took action and suspended them. Not to mention the fact that it would be extremely hard for users to reach 5Gbps on shared nodes.

    The reason I am asking is that I am getting similar disk speeds to seed2tweet and yorkchou (can't even reach 3MB/s), so I don't understand how exactly this problem will be solved. It's been a couple of days, so I would expect people to be done copying files and the drives to give decent speed (nearly 800MB/s-1GB/s), especially as they're in RAID 10 (and maybe with an SSD cache?). Of course I am not expecting premium speed all to myself; if I reached (even occasionally) speeds between 20MB/s-60MB/s out of a supposedly fast RAID 10 setup I would be more than happy... but I never get anywhere near that, and 1-2MB/s isn't making the service usable...

    P.S.: Network speeds are fine; I am getting good download speeds (70-120MB/s) when I download to the SSD, but things get shitty when I use the storage drive...
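
    For a number that's easier to compare than torrent speeds, a minimal direct sequential test on the storage disk (the /mnt/hdd path is an assumption; adjust it to wherever the 3.6TB disk is mounted):

        # sequential write, bypassing the page cache
        dd if=/dev/zero of=/mnt/hdd/ddtest bs=1M count=1024 oflag=direct status=progress
        # sequential read of the same file
        dd if=/mnt/hdd/ddtest of=/dev/null bs=1M iflag=direct status=progress
        rm /mnt/hdd/ddtest

    An idle HDD RAID 10 should sustain well over 100MB/s sequential, so single-digit MB/s here points at contention on the array rather than the network.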

  • @afn said:

    @cociu said:

    @momkin said: but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic

    No, it is not cociu logic, it is real life. DO NOT FORGET THE 10GBPS PORT IS FREE, SO WE DO NOT ENCOURAGE ABUSING IT.

    What I am trying to say is: if the entire problem was just due to 2 users abusing the server,

    Where did this come from?

  • afn Member

    @TimboJones said:

    @afn said:

    @cociu said:

    @momkin said: but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic

    No, it is not cociu logic, it is real life. DO NOT FORGET THE 10GBPS PORT IS FREE, SO WE DO NOT ENCOURAGE ABUSING IT.

    What I am trying to say is: if the entire problem was just due to 2 users abusing the server,

    Where did this come from?

    I just quoted cociu saying this! Check my reply above; he mentioned at some point that he hopes things will get better by purging abusers, but even after they're purged I still see low performance (actually unusable) on the storage HDD. That's why I explained that if people keep abusing, they will eventually run out of traffic and disk space, so it doesn't make much sense to blame it on them; and that's why I am asking how the I/O on the drives will improve. I am fine with waiting if things will get better, but I would like to understand first what kind of service I should expect to get.

    Thanked by TimboJones
  • I can't manage to connect the 3.6TB drive. When I try to add it from the menu I receive either a connection timeout or the message: Can't lock file '/var/lock/qemu-server/lock-1519.conf' - got timeout. Also, sometimes it's unresponsive and the machine doesn't reboot from the console or reinstall.

  • @afn said:

    @TimboJones said:

    @afn said:

    @cociu said:

    @momkin said: but at 5Gbps the user will run out of the 10TB traffic in ~5 hours. Also, if they're writing to the disk at this speed, they will run out of disk space in less than ~2 hours...

    cociu's logic

    No, it is not cociu logic, it is real life. DO NOT FORGET THE 10GBPS PORT IS FREE, SO WE DO NOT ENCOURAGE ABUSING IT.

    What I am trying to say is: if the entire problem was just due to 2 users abusing the server,

    Where did this come from?

    I just quoted cociu saying this! Check my reply above; he mentioned at some point that he hopes things will get better by purging abusers, but even after they're purged I still see low performance (actually unusable) on the storage HDD. That's why I explained that if people keep abusing, they will eventually run out of traffic and disk space, so it doesn't make much sense to blame it on them; and that's why I am asking how the I/O on the drives will improve. I am fine with waiting if things will get better, but I would like to understand first what kind of service I should expect to get.

    Ah, I see it now, I didn't go back 8 days to see his post. While I see your point, I think cociu wasn't implying 2 abusers caused all issues, simply that he JUST dealt with 2 and if more and more people do the same, he'll need to limit the port speed.

  • afn Member
    edited March 2021

    @anubis, the same happened to me. I turned off my VM and tried to add the drive several times; I kept getting the error and never got a "success" confirmation, but in the end, when I refreshed, I saw the drive was there in the list, so I just restarted my VM and mounted it in my OS.

    @TimboJones we can't know for sure; we will have to wait for him to clarify this and tell us whether he considers the current speeds we're getting normal, or whether he has some plan to improve things...

  • I'm a boy, not a sister. :neutral:

  • @afn said:
    @anubis, the same happened to me. I turned off my VM and tried to add the drive several times; I kept getting the error and never got a "success" confirmation, but in the end, when I refreshed, I saw the drive was there in the list, so I just restarted my VM and mounted it in my OS.

    @TimboJones we can't know for sure; we will have to wait for him to clarify this and tell us whether he considers the current speeds we're getting normal, or whether he has some plan to improve things...

    I tried that too; unfortunately, it doesn't show up when I use df and I can't find the device to mount. I tried with both the Debian and Ubuntu images, and with both virtio and IDE. Same results.

  • veren Member

    @anubis said:
    I tried that too; unfortunately, it doesn't show up when I use df and I can't find the device to mount. I tried with both the Debian and Ubuntu images, and with both virtio and IDE. Same results.

    You won't see the device with df initially. Try fdisk -l and see if that does anything
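
    Putting the hints above together, the usual sequence looks roughly like this, assuming the new 3.6TB disk shows up as /dev/vdb (check the actual device name first; it is an assumption here):

        lsblk                    # or: fdisk -l  -- find the new, empty disk
        mkfs.ext4 /dev/vdb       # format it (this destroys anything already on the disk)
        mkdir -p /mnt/hdd
        mount /dev/vdb /mnt/hdd
        # optional: mount it automatically at boot
        echo '/dev/vdb /mnt/hdd ext4 defaults,nofail 0 2' >> /etc/fstab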

  • anubis Member
    edited March 2021

    @veren said:

    @anubis said:
    I tried that too; unfortunately, it doesn't show up when I use df and I can't find the device to mount. I tried with both the Debian and Ubuntu images, and with both virtio and IDE. Same results.

    You won't see the device with df initially. Try fdisk -l and see if that does anything

    after some restarts it showed up, thank you

This discussion has been closed.