
Black Friday 2020 - NVMe and Storage deals - deploy worldwide


Comments

  • @Daniel15 said:
    Is there a difference in CPU allocation between the storage VPS and the regular NVMe VPS? I've got a 1 GB RAM / 10 GB NVMe and a 4 GB RAM / 2 TB HDD storage, both in Chicago. I'm used to storage VPSes having lower-end CPUs, but both have the same model in this case (Xeon E5-2690 v2 @ 3.00GHz). The NVMe one says the CPU is "30% dedicated" whereas the storage one doesn't explicitly say anything about CPU usage. Can I use the CPU in the storage VPS roughly the same, or should I try to keep it as low as possible?

    Interesting.

    I don't have mine provisioned yet and am still waiting for the deploy button to appear, worrying about when my shiny new machine will arrive, while you people who already have your dream machines are worrying about something else.

    OK, I should not envy you, boss! :D

  • @FrankZ said:

    @danceswithpugs said:
    @hosthatch made a post acknowledging issues with .ISO uploads. Is anyone else battling this? I attempted to upload an .ISO, it failed, and now all attempts to upload any .ISO immediately fail with error message:

    "Sorry, an error has occurred (iso already exists). please try again and submit a support ticket if this keeps happening."

    Same here with an NVMe in Chicago. I tried for a few days and then went with a template instead of ticketing.

    I’ve had this happen before. In the end I did have to ticket in so they could clear it as it wasn’t showing anything in the CP.

    Thanked by 1FrankZ
  • @cazrz said:
    Bandwidth pooling will be an awesome feature.

    Has this been announced?

  • @icry said:

    @ferri said:

    @thedp said:

    @ferri said:
    does anyone have a problem with ssh'ing in Amsterdam?

    ssh: connect to host xx.xx.xxx.xx: Connection timed out

    But nginx + db are still accessible.

    Works fine here.

    Maybe worth checking your fw rules if you have one.

    Thanks. I don't have fw rules atm.

    @tetech said:

    @ferri said:
    does anyone have a problem with ssh'ing in Amsterdam?

    ssh: connect to host xx.xx.xxx.xx: Connection timed out

    But nginx + db are still accessible.

    Works here. Specific to your VPS. Check usual stuff like iptables.

    Thanks. I hadn't touched iptables until now, and it was still working fine last night.


    Update: SSH now back to normal. Thanks all

    How did you fix it? If you don't mind sharing

    I did nothing. It just suddenly went back to normal.

    Thanked by 1icry
  • @Daniel15 said:
    Is there a difference in CPU allocation between the storage VPS and the regular NVMe VPS? I've got a 1 GB RAM / 10 GB NVMe and a 4 GB RAM / 2 TB HDD storage, both in Chicago. I'm used to storage VPSes having lower-end CPUs, but both have the same model in this case (Xeon E5-2690 v2 @ 3.00GHz). The NVMe one says the CPU is "30% dedicated" whereas the storage one doesn't explicitly say anything about CPU usage. Can I use the CPU in the storage VPS roughly the same, or should I try to keep it as low as possible?

    I can't answer your questions about usage, but this is completely normal for HostHatch. I imagine they keep their costs low by buying hardware in quantity and standardizing it across their locations. Abdullah essentially confirmed this in another thread, when he wrote that HostHatch buys capable (but older) CPUs rather than the latest and greatest. This allows them to allocate more resources to each client when compared to companies who need to pay off huge hardware costs and are forced to cram as many users as possible onto a single node.

    They tend to use a lot of the same E5 processors in the 2.8-3 GHz range. The most common for me is the CPU your systems have: E5-2690 v2 @ 3.00GHz. But I also have several Chicago storage servers with the E5-2680 v2 @ 2.80GHz.

    They've obviously found the right technique to balance resources, because I've rarely seen IO problems or CPU steal on any of my servers.

    Thanked by 2dosai lentro
  • @aj_potc said: They've obviously found the right technique to balance resources, because I've rarely seen IO problems or CPU steal on any of my servers.

    this.

    @Daniel15 from my experience you don't need to worry about anything. they are properly balanced, and unless you run at full throttle constantly for days, I don't think there is anything you'll run into.

    of course it always depends on your use case. e.g. I split my initial rsyncs into a few hours each day with quite some breaks in between.
    but after that I won't artificially limit additional runs, as they won't take longer than 1-2 hours max anyway.
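    For reference, another way to pace an initial sync is rsync's built-in bandwidth cap instead of time-slicing the runs (a minimal sketch; the remote host and paths shown in the comment are placeholders, and the runnable part copies between two local directories):

    ```shell
    # Cap rsync's transfer rate so an initial sync doesn't hog the link/disk.
    # Against a real storage VPS this might look like (hypothetical host/paths):
    #   rsync -a --bwlimit=20M /data/ user@storage.example.com:/backups/
    # Safe local demonstration of the same flags:
    mkdir -p src dst
    echo "hello" > src/file.txt
    rsync -a --bwlimit=20M src/ dst/
    cat dst/file.txt    # the file arrived intact
    ```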

    Thanked by 3FrankZ Daniel15 lentro
  • @Falzo said:

    @aj_potc said: They've obviously found the right technique to balance resources, because I've rarely seen IO problems or CPU steal on any of my servers.

    this.

    @Daniel15 from my experience you don't need to worry about anything. they are properly balanced, and unless you run at full throttle constantly for days, I don't think there is anything you'll run into.

    Thanks for the info! That makes sense. The storage VPSes I've used at other providers differentiate themselves from "regular" VPSes at the same provider by having relatively low-end processors (e.g. the one I'm migrating away from has a 4-core Xeon L5630 @ 2.13 GHz) and low amounts of RAM (256 MB or 512 MB), and their policies have been to only use them for storage, nothing else. It seems like HostHatch's storage VPSes are pretty much just regular VPSes with more disk space.

  • hosthatch Patron Provider, Top Host, Veteran

    It's simply because we like to see full storage nodes running at 70-80% idle CPU, not because we want people to be using them as standard compute VMs. We provide NVMe servers for compute, with predictable CPU limits and performance.

    This works out great: most people understand that and do not abuse it, and they are able to use multiple cores for a short period when they really need to, while the ones who abuse this get throttled.

    Most of the benchmarks you have seen people posting here are from full nodes, not brand-new nodes made to shine on benchmarks that then drop that performance a couple of months later.

    @aj_potc said: because I've rarely seen IO problems or CPU steal on any of my servers.

    Good to know our efforts pay off :) Please do ticket in whenever you see steal or iowait so we can check. We monitor both values per node and are quick to act whenever we see it. You should never see either on your servers with us.

    @sgheghele said:

    @cazrz said:
    Bandwidth pooling will be an awesome feature.

    Has this been announced?

    Likely mid 2021. It will be per location, not global.

  • Please do ticket in whenever you see steal or iowait so we can check. We monitor both values per node and are quick to act whenever we see it. You should never see either on your servers with us.

    Niiiice!

  • Daniel15 Veteran
    edited December 2020

    @hosthatch said: and are able to use multiple cores when they really need to for a short period of time

    What would you consider a "short period of time"? For example I'm running Borgbackup on one of my storage VPSes right now to store a backup elsewhere. The initial backup uses a bit of CPU as it needs to compress and encrypt everything (it's using ~45% of my single vcore) but should take less than an hour to complete, and then future backups are fast incremental backups that only take a few minutes. Is that OK or should I throttle the initial backup somehow?
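    For what it's worth, a one-off job like that initial backup can also be deprioritized with nice/ionice so it yields CPU and disk time to other work on the node (a sketch; the borg invocation shown in the comments is hypothetical, not from this thread):

    ```shell
    # Run a CPU/IO-heavy one-off job at the lowest scheduling priority.
    #   nice -n 19  -> lowest CPU priority
    #   ionice -c 3 -> "idle" I/O class: gets disk time only when the disk is idle
    # A real invocation could look like (hypothetical repo/paths):
    #   nice -n 19 ionice -c 3 borg create /backups/repo::initial /data
    # Demonstrated here with a trivial command:
    nice -n 19 ionice -c 3 sh -c 'echo "low-priority run finished"'
    ```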

  • @Daniel15 said:

    @hosthatch said: and are able to use multiple cores when they really need to for a short period of time

    What would you consider a "short period of time"? For example I'm running Borgbackup on one of my storage VPSes right now to store a backup elsewhere. The initial backup uses a bit of CPU as it needs to compress and encrypt everything (it's using ~45% of my single vcore) but should take less than an hour to complete, and then future backups are fast incremental backups that only take a few minutes. Is that OK or should I throttle the initial backup somehow?

    This depends on your plan and how much dedicated CPU you have. If you have the 8 GB plan, 50% is dedicated, so constantly using 45% should be fine as I understand it.

  • @webcraft said:

    @Daniel15 said:

    @hosthatch said: and are able to use multiple cores when they really need to for a short period of time

    What would you consider a "short period of time"? For example I'm running Borgbackup on one of my storage VPSes right now to store a backup elsewhere. The initial backup uses a bit of CPU as it needs to compress and encrypt everything (it's using ~45% of my single vcore) but should take less than an hour to complete, and then future backups are fast incremental backups that only take a few minutes. Is that OK or should I throttle the initial backup somehow?

    This depends on your plan and how much dedicated CPU you have. If you have the 8 GB plan, 50% is dedicated, so constantly using 45% should be fine as I understand it.

    No, he is referring to storage VPSes, which don't have a dedicated-usage percentage like the NVMe VPSes do.

  • Hello, I submitted a ticket to upgrade my machine on Black Friday.
    The staff approved my application a few days ago. After I paid the invoice, my VPS could not be started.
    I sent an urgent ticket right away, but no one has replied to me; it has been three days now.
    What should I do?
    Will my data still be kept? Please help me! :s
    I use translation software, so I apologize if the language is incorrect.

  • @Moenis said:
    Hello, I submitted a ticket to upgrade my machine on Black Friday.
    The staff approved my application a few days ago. After I paid the invoice, my VPS could not be started.
    I sent an urgent ticket right away, but no one has replied to me; it has been three days now.
    What should I do?
    Will my data still be kept? Please help me! :s
    I use translation software, so I apologize if the language is incorrect.

    Did it never start (in which case you have no data), or was it running (for a brief time) but no longer?

  • Moenis Member
    edited December 2020

    @TimboJones said:

    @Moenis said:
    Hello, I submitted a ticket to upgrade my machine on Black Friday.
    The staff approved my application a few days ago. After I paid the invoice, my VPS could not be started.
    I sent an urgent ticket right away, but no one has replied to me; it has been three days now.
    What should I do?
    Will my data still be kept? Please help me! :s
    I use translation software, so I apologize if the language is incorrect.

    Did it never start (in which case you have no data), or was it running (for a brief time) but no longer?

    It had been running normally for two months before I upgraded the VPS, and it failed to boot after I upgraded the configuration and restarted it.
    Before the upgrade, I asked whether the VPS data would be retained after the upgrade, and the reply was "Your data will be retained".
    So I didn't make a backup, and the VPS holds all of my data from before. Can it be recovered?

    https://imgur.com/Nq0J8m3
    Now it looks like this at boot time

  • DP Administrator, The Domain Guy

    @Moenis said: It had been running normally for two months before I upgraded the VPS, and it failed to boot after I upgraded the configuration and restarted it.

    What configuration?

    Let me guess, disk upgrade?

  • @thedp said:

    @Moenis said: It had been running normally for two months before I upgraded the VPS, and it failed to boot after I upgraded the configuration and restarted it.

    What configuration?

    Let me guess, disk upgrade?

    I upgraded from 1C/1G/15G to 2C/8G/40G.

  • @Moenis said: So I didn't do a data backup

    Lunacy

    Thanked by 1Falzo
  • probably something off with grub or the like. boot into rescue mode and investigate. did you use encryption/LUKS or something like that?
    normally, when they grow the disk, you still have to resize your partition and filesystem yourself. did you try that already?

    also, in rescue mode you should be able to find your data partition, mount it manually, and take a proper backup before you start making things worse...

    why in the hell wouldn't you have a backup of your valuable production data in any case?
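    For the resize step mentioned above, the usual sequence on an ext4 disk is: grow the partition, check the filesystem, then grow the filesystem (a sketch, not provider-specific; growpart comes from the cloud-utils package, and the runnable part below uses a file-backed image instead of a real disk):

    ```shell
    # On the real VPS (as root, ideally from a rescue ISO) this would be roughly:
    #   growpart /dev/vda 1    # grow partition 1 to fill the enlarged disk
    #   e2fsck -f /dev/vda1    # check the filesystem first (resize2fs requires it)
    #   resize2fs /dev/vda1    # grow ext4 to fill the partition
    # Safe demonstration on a file-backed image:
    truncate -s 64M disk.img               # "old" 64 MB disk
    mkfs.ext4 -q -F disk.img               # put an ext4 filesystem on it
    truncate -s 128M disk.img              # provider "grows" the disk
    e2fsck -f -p disk.img >/dev/null       # required before an offline resize
    resize2fs disk.img                     # grow the filesystem to fill the image
    ```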

  • @Falzo said:
    probably something off with grub or the like. boot into rescue mode and investigate. did you use encryption/LUKS or something like that?
    normally, when they grow the disk, you still have to resize your partition and filesystem yourself. did you try that already?

    also, in rescue mode you should be able to find your data partition, mount it manually, and take a proper backup before you start making things worse...

    why in the hell wouldn't you have a backup of your valuable production data in any case?

    I didn't use encryption or anything like that.
    As an amateur who doesn't know much about Linux partitioning, I tried using the simple fdisk -l command to check the disk.
    However, BusyBox does not appear to include this tool, and it prompts "fdisk: not found".
    I was worried that doing too much extra would cause further damage, so I started waiting for HostHatch's technical support.
    What I said above is that the data was not backed up before this machine upgrade.
    (I did have backups before, but at long intervals.)
    Eventually I found a copy of the data from about 30 days ago, so I'm hoping to recover the missing 30 days.

    Thanked by 1Falzo
  • @Moenis said:

    @Falzo said:
    probably something off with grub or the like. boot into rescue mode and investigate. did you use encryption/LUKS or something like that?
    normally, when they grow the disk, you still have to resize your partition and filesystem yourself. did you try that already?

    also, in rescue mode you should be able to find your data partition, mount it manually, and take a proper backup before you start making things worse...

    why in the hell wouldn't you have a backup of your valuable production data in any case?

    I didn't use encryption or anything like that.
    As an amateur who doesn't know much about Linux partitioning, I tried using the simple fdisk -l command to check the disk.
    However, BusyBox does not appear to include this tool, and it prompts "fdisk: not found".
    I was worried that doing too much extra would cause further damage, so I started waiting for HostHatch's technical support.
    What I said above is that the data was not backed up before this machine upgrade.
    (I did have backups before, but at long intervals.)
    Eventually I found a copy of the data from about 30 days ago, so I'm hoping to recover the missing 30 days.

    I had this problem yesterday after migrating from MBR to GPT to resize my upgraded disk. So now the only way to boot my VPS is to upload the Super Grub2 Disk and boot from it.

    Thanked by 2Falzo jokotan
  • @Moenis waiting for a response from support totally makes sense. though they might only be able to tell you that they increased the existing disk and everything should still be there; adjusting partition sizes, boot records and such might still be a task on your end, as it's an unmanaged service.

    as written before, I'd mount a systemrescue ISO and boot from that to investigate. check the partition layout there with fdisk -l. usually you should find your old partition and be able to mount it somewhere (e.g. /mnt) and pull out the data to have a fresh backup.

    after that you can bind-mount dev/proc/sys into it and make sure the system's boot partition is also in place. then chroot into it and run update-grub to rewrite the boot record...

    @cpsd this is similar; changing to GPT will change the label/UUID and probably break grub. this should also be solvable with a rewrite. if you get your system booted up normally, you could try that from within, without the systemrescue ISO.
    however, depending on the underlying setup you might need a small BIOS boot partition (1-2MB is sufficient) so that GRUB's boot code can be written into it.
    worst case, you need to resize/move your partitions around with gparted to free that up.
    (a fresh install and restore from backups might be easier in that case)

    Thanked by 1cpsd
  • Moenis Member
    edited December 2020

    @Falzo said:

    As you said, it's really unmanaged, and I'll have to try to fix it myself.
    I entered rescue mode and successfully mounted the partition.
    Then I typed in the following commands:
    mount /dev/vda1 /mnt
    mount -t proc proc /mnt/proc
    mount -t sysfs sys /mnt/sys
    mount -o bind /dev /mnt/dev
    chroot /mnt /bin/bash
    /usr/sbin/grub-install /dev/vda
    (For some reason, after chroot, commands must be run by their absolute path; otherwise they are reported as not found.)
    I don't know if this is the reason why I can't boot the system.
    https://imgur.com/a/EouFhUk
    Then I tried to restart the system.
    The system still doesn't boot...

  • @Moenis said:

    @Falzo said:

    As you said, it's really unmanaged, and I'll have to try to fix it myself.
    I entered rescue mode and successfully mounted the partition.
    Then I typed in the following commands:
    mount /dev/vda1 /mnt
    mount -t proc proc /mnt/proc
    mount -t sysfs sys /mnt/sys
    mount -o bind /dev /mnt/dev
    chroot /mnt /bin/bash
    /usr/sbin/grub-install /dev/vda
    (For some reason, after chroot, commands must be run by their absolute path; otherwise they are reported as not found.)
    I don't know if this is the reason why I can't boot the system.
    https://imgur.com/a/EouFhUk
    Then I tried to restart the system.
    The system still doesn't boot...

    I believe the commands above are mounting vda1 of the system rescue image, not your storage partition. Try the lsblk command; you can see the other partitions too.

  • @chocolateshirt said:

    @Moenis said:

    I'm convinced that vda1 is my storage partition, because I see my data in /mnt.
    But I did execute the lsblk command and took a look:
    https://imgur.com/YnC8k69

  • OK, just back up your data and reinstall your VPS.

  • Has anyone here signed up for both LA and Chicago VPSes? If so, did you notice any connection issues between them? I had been trying to back up some data from LA to Chicago, but the connection kept dropping after a few gigabytes of data were transferred.

    Running a traceroute, it seems like a server in the LA datacenter is unresponsive. The connection eventually restored itself, but as soon as I started another transfer it went down again.

    My home connection to both VPSes is solid though, no connection issues.

    Thanked by 1Moenis
  • hosthatch Patron Provider, Top Host, Veteran
    edited December 2020

    @letitrain said:
    Has anyone here signed up for both LA and Chicago VPSes? If so, did you notice any connection issues between them? I had been trying to back up some data from LA to Chicago, but the connection kept dropping after a few gigabytes of data were transferred.

    Running a traceroute, it seems like a server in the LA datacenter is unresponsive. The connection eventually restored itself, but as soon as I started another transfer it went down again.

    My home connection to both VPSes is solid though, no connection issues.

    Yes, there are some prefixes that have bad speeds between LA and Chicago. We're working with our upstreams in these locations to get this resolved. This should likely be fixed tomorrow or Tuesday.

    (as you already noticed - this only affects traffic between our LA and Chicago locations - does not affect any other traffic)

    Thanked by 2letitrain kalimov622
  • The data was recovered successfully
    Thank you for your help

  • @Moenis said:
    The data was recovered successfully
    Thank you for your help

    did you go the backup & restore route, or did you somehow fix the boot problem?
