Special Offers: 2 vCPUs, 4 GB RAM, 100 GB HDD: 3.33 EUR | 4 vCPUs, 8 GB RAM, 200 GB HDD: 6.66 EUR - Page 4
Comments

  • Well ... I do still have that first installation attempt (on the 2nd disk) to play around with, since I haven't done anything to it yet.

    I suspect the initial problem had to do with specifying /dev/vdb instead of /dev/vda for installing GRUB - chainloading from /dev/vda to /dev/vdb seems viable.

    However ... I expect there will be at least one more complication due to using the "LVM on LUKS" encrypted setup on both disks. The default clickthrough setup (repeated on the first disk) has used the same LVM volume group and other names.

    I'm guessing that specifying volumes via UUID somewhere along the line might be a feasible workaround if I want to keep the existing LVM-on-LUKS setup on each disk ... but I probably don't want to do that anyway if it means I have to repeat typing the 666-character random gibberish passphrase required to unlock both disks every time I want to boot ... using the VNC console, which I've not yet figured out how to cut-and-paste into.

    Now ... I could shoehorn dropbear into the boot system to allow entering the LUKS passphrase(s) via ssh rather than the VNC console. (Most likely I will set that up anyway at some point ... once I get hungry for another can of worms.)

    Or, I could set up /dev/vda without LVM and LUKS just to have GRUB chainload to /dev/vdb ...

    Still expecting to see whatever best-laid plans go sideways once I throw that ramdisk installation into the mix. But the last thing I want to do is overthink any of this!

    So I might just run this post through google translate a few times and then put it in a ticket for UltraVPS to sort out. (I'll be sure to start a few threads on LET about my urgent setup issues while waiting for their timely response.)

  • angstrom Moderator

    @uptime said: But the last thing I want to do is overthink any of this!

    I'm pleased to hear that there's no risk of overengineering. :smile:

    Thanked by uptime
  • Falzo Member

    uptime said: I suspect the initial problem had to do with specifying /dev/vdb instead of /dev/vda for installing GRUB

    this. grub simply has to be in the mbr of that first disk...

  • neph Member
    edited April 2019

    Hey,

    I ordered the offer with 4 GB of RAM and 100 GB of storage but I can't install the VPS as I want...

    In fact, I would like all my data to be on vdb and to fully use the 100G storage. I managed to have more than 90G on /mnt (default installation) or /home (thanks to @OsirisBlack) but all the storage on my server isn't necessarily in these mount points. It's sometimes in /srv, /home, /var...

    How can I simply put everything (/) on vdb, and possibly keep vda for boot and backups?

    Thanks.

    Thanked by 1uptime
  • uptime Member
    edited April 2019

    @neph said:
    Hey,

    I ordered the offer with 4 GB of RAM and 100 GB of storage but I can't install the VPS as I want...

    In fact, I would like all my data to be on vdb and to fully use the 100G storage. I managed to have more than 90G on /mnt (default installation) or /home (thanks to @OsirisBlack) but all the storage on my server isn't necessarily in these mount points. It's sometimes in /srv, /home, /var...

    How can I simply put everything (/) on vdb, and possibly keep vda for boot and backups?

    Thanks.

    I imagine many a sysadmin's eyelid would start to twitch at the mere thought of using symbolic links in the root directory ...

    ... but (not knowing any better) that's what I'd try first :)

    mount the 100 GB as whatever - let's say "/data"

    create whatever directories /data/srv, /data/home, /data/var (and so forth) as needed

    then ln -s /data/srv /srv would create a symbolic link to point /srv to the actual directory /data/srv on your 100 GB disk. (And so on.)

    Maybe some good reasons to be wary of doing things this way (potential for confusion at the very least) ... but if it works, it works. (Until it doesn't. Possibly screwing up some important backup or other essential function at the worst possible time.)
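    The steps described above can be sketched as a small shell function. This is only a sketch: /data, the directory names, and the helper name link_dirs are examples, and on the real box it would be run as root after /dev/vdb1 is mounted at /data.

```shell
# link_dirs DATA ROOT: put the real directories on the big disk (DATA)
# and leave symbolic links behind at ROOT/srv, ROOT/home, ROOT/var.
link_dirs() {
    data=$1
    root=$2
    for dir in srv home var; do
        mkdir -p "$data/$dir"
        # move any existing real directory aside before replacing it with a link
        if [ -d "$root/$dir" ] && [ ! -L "$root/$dir" ]; then
            mv "$root/$dir" "$root/$dir.old"
        fi
        ln -s "$data/$dir" "$root/$dir"
    done
}

# on the real box (after `mount /dev/vdb1 /data`) this would be:
#   link_dirs /data /
```

    Moving the old directories aside (rather than deleting them) leaves an easy way back if some service misbehaves with the symlinked path.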

    Thanked by vimalware and neph
  • neph Member

    @uptime : I didn't think about a "simple" symbolic link (I've used it for other things, though) but it's a good idea indeed. I will test that. Thank you :smile:

    There are probably cleaner solutions (do the partitioning "properly" at installation, for example) but they may be too complex for me ^^

    Thanked by uptime
  • emperor Member
    edited April 2019

    @neph

    Well, I already set mine up the way I want, so I can't test it, but you can try this setup:

    Boot the image you want to install. When you get to disk setup, use guided LVM on the entire disk and choose vda; at least on Debian 9 this creates two partitions: one LVM (holding / and swap) and one boot. You will notice the swap partition is big; it probably sizes swap equal to RAM. You can delete these partitions and create swap/backup partitions, or only backup, etc., under the Configure the LVM option. Then make vdb LVM as well, create an LVM partition on it with ext4 and mount it as /.

    This was my case when making vdb /home, but since the boot partition is on vda, you can try doing / on vdb.
    I don't know if this will work, but it would only take 5 minutes of your time to test it.
    Good luck though.
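    In principle the same vdb layout could also be built by hand with the LVM command-line tools instead of the guided installer. This is an untested sketch of that idea (the volume group and LV names are made up), not what emperor actually did:

```
# untested sketch: one LVM volume spanning /dev/vdb1, formatted ext4 for /
pvcreate /dev/vdb1                      # second-disk partition as a physical volume
vgcreate vg_data /dev/vdb1
lvcreate -l 100%FREE -n root vg_data    # one logical volume spanning the disk
mkfs.ext4 /dev/vg_data/root             # filesystem to be mounted as /
```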

  • emperor Member
    edited April 2019

    uptime said: mount the 100 GB as whatever - let's say "/data"

    I've noticed their CentOS 7 image does this. After testing their image, df -h showed vdb mounted as /data.

    Thanked by uptime
  • uptime Member
    edited April 2019

    @neph said:
    @uptime : I didn't think about a "simple" symbolic link (I've used it for other things, though) but it's a good idea indeed. I will test that. Thank you :smile:

    There are probably cleaner solutions (do the partitioning "properly" at installation, for example) but they may be too complex for me ^^

    Right - one ultimately "cleaner" solution would be to install via ISO and put your entire system on /dev/vdb (LVM may be useful), but be sure to install GRUB on /dev/vda as well as /dev/vdb - then do some GRUB configuration magic to chainload from /dev/vda to boot /dev/vdb ...

    It may be a messy bit of work, but it will produce a tidier system once done.

    I guess I'd do a perfunctory install twice (first on /dev/vda and only after that on /dev/vdb) just to be able to boot a working system with which to do the needful GRUB wrangling and so forth.

    (Also think I've seen a "Rescue Mode" in the UltraVPS panel so that might be another option for noodling around on an initially unbootable filesystem.)

    Good luck, have fun

    EDIT2:

    And please do be sure to test your backup/restore setup carefully if you go with the symbolic links in root directory. (Because paranoia is its own reward.)
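    For the "GRUB configuration magic" part, a chainload entry on the first disk might look roughly like the fragment below. This is an untested sketch; on Debian it could go into /etc/grub.d/40_custom, followed by update-grub.

```
menuentry "Chainload system on second disk" {
    set root=(hd1)
    chainloader +1
}
```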

  • Falzo Member
    edited April 2019

    @neph said:
    @uptime : I didn't think about a "simple" symbolic link (I've used it for other things, though) but it's a good idea indeed. I will test that. Thank you :smile:

    There are probably cleaner solutions (do the partitioning "properly" at installation, for example) but they may be too complex for me ^^

    if you don't want to go through a full manual installation process, you can achieve what you want with a box installed from a template as well:

    # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            2.0G     0  2.0G   0% /dev
    tmpfs           396M  5.4M  391M   2% /run
    /dev/vdb1        99G  1.9G   97G   2% /
    tmpfs           2.0G     0  2.0G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    

    please note vda1 is still available, I just did not mount it anywhere else in that test.

    to achieve that just head to the control panel and reboot in rescue mode.
    you'll be provided with a temporary root pw, so make sure to note it down.
    open the vnc console from the control panel, and once asked for a login use root + that pw...

    mount /dev/vda1 /mnt/vda1
    mount /dev/vdb1 /mnt/vdb1
    # dirs as mountpoint are already there in rescue
    cd /mnt/vda1
    mv * ../vdb1
    # ignore lost+found error
    nano /mnt/vdb1/etc/fstab
    # adjust the line for / from that UUID to /dev/vdb1 and remove or comment the last line which mounts vdb1 to /data
    mount --bind /dev /mnt/vdb1/dev
    mount --bind /sys /mnt/vdb1/sys
    mount --bind /proc /mnt/vdb1/proc
    chroot /mnt/vdb1
    grub-install /dev/vda
    # that's no typo!
    update-grub
    reboot

    just checked on a fresh debian 9 install, no guarantees though. use at your own risk.

  • angstrom Moderator
    edited April 2019

    @uptime said: Maybe some good reasons to be wary of doing things this way (potential for confusion at the very least) ... but if it works, it works. (Until it doesn't. Possibly screwing up some important backup or other essential function at the worst possible time.)

    I use symbolic links in my setup, and it works. :smile:

    As far as I can tell, the biggest risk is that if (say) /data doesn't get mounted for some reason, then /home and any other directory under /data won't be available. But /root would still be available, so you could mount /data manually in such a case.

    But, yes, an intelligent manual partitioning at installation time from an ISO is probably the best strategy. :smile:
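    If /data is mounted via /etc/fstab, one possible mitigation for exactly that failure mode (my own suggestion, not something tested on this setup) is the nofail option, which lets the boot continue even when the disk is missing:

```
# /etc/fstab -- example line; device name and filesystem type are assumptions
/dev/vdb1   /data   ext4   defaults,nofail   0   2
```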

    Thanked by uptime and vimalware
  • Sorry if this has been asked before, but what is the point of 2 drives?

    I attempted a manual install of Ubuntu 18, but there were no LVM options? Did I miss something?

    @UltraVPS could I get 1 HDD please?
    Or include an article (instead of this) in your KB on how we can use LVM to merge vda and vdb?

  • angstrom Moderator

    @twhstr said:
    Sorry if this has been asked before, but what is the point of 2 drives?

    I attempted a manual install of Ubuntu 18, but there were no LVM options? Did I miss something?

    @UltraVPS could I get 1 HDD please?

    I don't think that they can make exceptions. It's the way that they've designed their system.

    As for Ubuntu 18.04 and LVM, you probably have a better chance with the traditional installer:

    http://cdimage.ubuntu.com/releases/18.04.2/release/ubuntu-18.04.2-server-amd64.iso

  • @angstrom I thought as much...

    It doesn't look like I can load a custom ISO from anywhere in the panel though?

  • angstrom Moderator

    @twhstr said:
    @angstrom I thought as much...

    It doesn't look like I can load a custom ISO from anywhere in the panel though?

    You can open a ticket and request a custom ISO. In this case, you may want to explain to them that this is the "traditional installer" for Ubuntu Server 18.04. On this page,

    https://www.ubuntu.com/#download

    click on "Use the traditional installer".

    Otherwise, go for Debian. :smile:

    Thanked by tafa2
  • UltraVPS Member, Patron Provider
    edited April 2019

    twhstr said: @UltraVPS could I get 1 HDD please?

    I am sorry, but that is not possible. However, you can install your server manually and place / on /dev/vdb. We have just published an article for Debian 9 in our KB:

    http://kb.ultravps.eu/kvm-cloud-server/installation/index.html#manual-installation-of-linux-distributions-general-guide

    Other distributions (CentOS 7 and Ubuntu 18.04) will follow soon.

    Thanked by uptime, neph, and nqservices
  • @OsirisBlack said:

    @andrew1995 said:

    @OsirisBlack said:
    umount /mnt
    fdisk /dev/vdb
    g
    w
    fdisk /dev/vdb
    n
    w
    mkfs.ext4 /dev/vdb1
    nano /etc/fstab
    /dev/vdb1 /home ext4 defaults 0 1
    mount -a
    reboot

    simples.....

    will this work with ubuntu 18.04?

    Yes.

    It took me almost 2 hours to follow this guide, but I'm happy and satisfied now, and I also learned some basic Linux commands like fdisk, lsblk, and nano. Thanks @OsirisBlack, and also @magnoman and @uptime.
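    As a footnote for anyone scripting this: the interactive fdisk keystrokes above (g to create a GPT label, n for a new partition, w to write) can be replaced with a non-interactive sfdisk equivalent. An untested sketch, and destructive, so double-check the device name:

```
# one GPT partition spanning /dev/vdb, then format and mount it as /home
printf 'label: gpt\n;\n' | sfdisk /dev/vdb
mkfs.ext4 /dev/vdb1
echo '/dev/vdb1 /home ext4 defaults 0 1' >> /etc/fstab
mount -a
```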

    Thanked by uptime
  • @emperor said:
    I guess Falzo is on other loc in dus. For those who will land in de-dus2

    I'm having similar nench results on an E5-2690 0 CPU with ~200 MB/s sequential write speed. I set up LVM with LUKS on the first disk, booted into the system and then manually added the second disk.

    $ df -h
    Filesystem               Size  Used Avail Use% Mounted on
    udev                     2.0G     0  2.0G   0% /dev
    tmpfs                    396M  5.4M  391M   2% /run
    /dev/mapper/de--vg-root  105G  1.1G   99G   2% /
    tmpfs                    2.0G     0  2.0G   0% /dev/shm
    tmpfs                    5.0M     0  5.0M   0% /run/lock
    tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/vda1                236M   65M  159M  29% /boot
    tmpfs                    396M     0  396M   0% /run/user/1000
    
    Thanked by Falzo
  • Bought two SAS-Special-2's in two different locations today. Did 'yum update'. Rebooted. Both servers unreachable now, failing to boot, hanging at 'Probing EDD'. Not a very promising start...

  • Nice offer, thought it was serverhunter at first since the logo is so similar. Don’t have a use for a vm in Europe right now : /

  • Falzo Member

    @mrysbekov said:
    Bought two SAS-Special-2's in two different locations today. Did 'yum update'. Rebooted. Both servers unreachable now, failing to boot, hanging at 'Probing EDD'. Not a very promising start...

    maybe reinstall and try again, watching the output for errors/warnings before rebooting, or use a boot image and go through the manual install process... I don't use CentOS, so I can't really speak from experience, but I have a spare box so I can try to reproduce it if you tell me what version you started with ;-)

  • @Falzo said:

    @mrysbekov said:
    Bought two SAS-Special-2's in two different locations today. Did 'yum update'. Rebooted. Both servers unreachable now, failing to boot, hanging at 'Probing EDD'. Not a very promising start...

    maybe reinstall and try again, watching the output for errors/warnings before rebooting, or use a boot image and go through the manual install process... I don't use CentOS, so I can't really speak from experience, but I have a spare box so I can try to reproduce it if you tell me what version you started with ;-)

    Reinstalled CENTOS-7 on both boxes. Reboots worked for some time. Did 'yum update' on one box. Reboots worked. Added a user / public_key on one box. Reboots stopped working on both boxes. So I assume this isn't caused by what I do inside the servers.

  • @mrysbekov said:

    @Falzo said:

    @mrysbekov said:
    Bought two SAS-Special-2's in two different locations today. Did 'yum update'. Rebooted. Both servers unreachable now, failing to boot, hanging at 'Probing EDD'. Not a very promising start...

    maybe reinstall and try again, watching the output for errors/warnings before rebooting, or use a boot image and go through the manual install process... I don't use CentOS, so I can't really speak from experience, but I have a spare box so I can try to reproduce it if you tell me what version you started with ;-)

    Reinstalled CENTOS-7 on both boxes. Reboots worked for some time. Did 'yum update' on one box. Reboots worked. Added a user / public_key on one box. Reboots stopped working on both boxes. So I assume this isn't caused by what I do inside the servers.

    Contacted support. Soon after that my servers were accessible again, and I got a reply saying everything should be fine now. Rebooted the servers just to see them hang again. Not sure if these guys know what they're doing. :(

  • Falzo Member

    @mrysbekov said:

    Contacted support. Soon after that my servers were accessible again, and I got a reply saying everything should be fine now. Rebooted the servers just to see them hang again. Not sure if these guys know what they're doing. :(

    sorry to hear that. I rarely need support at all, but whenever I had a request they have been nothing but helpful.

    can't reproduce your problem though. how do you reboot? within the console or the control panel? from my experience the control panel will give an error message if the process of resetting or shutting down gets stuck; then you just have to wait for a minute or two until it is fully reset and comes up again.

    the vnc console should give more clues on what's the problem with the boot process after all. if you're not satisfied or convinced, I am sure they'll handle a refund request fairly.

  • @Falzo said:

    @mrysbekov said:

    Contacted support. Soon after that my servers were accessible again, and I got a reply saying everything should be fine now. Rebooted the servers just to see them hang again. Not sure if these guys know what they're doing. :(

    sorry to hear that, I rarely need support at all, but whenever I had a request they have been nothing but helpful.

    can't reproduce your problem though, how do you reboot? within the console or control panel? from my experience the control panel will give a error message if the process of resetting or shutting down gets stuck, then you just have to wait for a minute or two, after it gets reset in full and comes up again.

    the vnc console should give more clues on what's the problem with the boot process after all. if you're not satisfied or convinced I am sure they'll handle a refund request fair.

    I'm rebooting the servers via their "BK Manager v2" management console. I use "Force immediate execution" option. I actually see part of the boot process in NoVNC console, I can choose the boot option in Grub, but when the kernel actually starts loading, everything just hangs after "Probing EDD (edd=off to disable)... ok" line. I've been looking at this line for the past 20 minutes for both servers, and the last comment I got from support was "The boot process just take a little longer. There are no known issues on our node and the load is ok."

  • mrysbekov said: I'm rebooting the servers via their "BK Manager v2" management console. I use "Force immediate execution" option.

    Forcing execution of a reboot is never a good idea unless your machine hangs. How about you just perform regular reboots?

    Thanked by vimalware
  • mrysbekov said: Probing EDD (edd=off to disable)... ok

    So have you tried enabling/adding the edd=off line to your kernel cmdline (via Grub) and does that help your VM reboot without issue?

    I'm running an out of the box Debian kernel and I've not resorted to any trickery to get it to boot (since you're on CentOS - I don't have too much insight into the kernels they use and why it seems like only you're running into this issue).

    Also, are you booting off the default vda partition/disk or have you changed to vdb?
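    If edd=off (or any other cmdline tweak) turns out to help when added at the GRUB prompt, it can be made persistent. A sketch for CentOS 7; the helper name add_edd_off and the sed approach are mine, untested on the VPS in question:

```shell
# add_edd_off FILE: append edd=off to the GRUB_CMDLINE_LINUX="..." line in FILE
# (FILE is normally /etc/default/grub)
add_edd_off() {
    sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 edd=off"/' "$1"
}

# on the real box, regenerate the config afterwards and reboot:
#   add_edd_off /etc/default/grub
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```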

  • Falzo Member

    @mrysbekov said:

    I'm rebooting the servers via their "BK Manager v2" management console. I use "Force immediate execution" option. I actually see part of the boot process in NoVNC console, I can choose the boot option in Grub, but when the kernel actually starts loading, everything just hangs after "Probing EDD (edd=off to disable)... ok" line. I've been looking at this line for the past 20 minutes for both servers, and the last comment I got from support was "The boot process just take a little longer. There are no known issues on our node and the load is ok."

    yeah, I can see that part as well: the "Probing EDD" line stays a few seconds and then it switches to a black screen in the console, another few seconds and then the login prompt appears. eventually you may want to refresh the console for the last step; however, if it gets stuck earlier I agree something is off.

    but I think we are missing something here. the boot menu is available, so the system finds the disks/MBR etc.
    afaik the EDD message is kernel related. did you change something in grub, or use a specific yum repo or a self-compiled kernel?

    I installed centos from their template, ran yum update and rebooted quite a few times via the control panel (immediate reboot). still can't replicate that issue...

  • @solaire said:

    mrysbekov said: I'm rebooting the servers via their "BK Manager v2" management console. I use "Force immediate execution" option.

    Forcing execution of a reboot is never a good idea unless your machine hangs. How about you just perform regular reboots?

    I tried a normal reboot again now, and it worked for one of the servers; the other appears to be stuck in the "rebooting" state now (this is why I started using forced reboots in the first place). But this all started with a normal "reboot" command in a shell as root, so I'm not sure if it's related to their management console's reboot functionality.

    Thanked by solaire
  • @nullnothere said:

    mrysbekov said: Probing EDD (edd=off to disable)... ok

    So have you tried enabling/adding the edd=off line to your kernel cmdline (via Grub) and does that help your VM reboot without issue?

    I'm running an out of the box Debian kernel and I've not resorted to any trickery to get it to boot (since you're on CentOS - I don't have too much insight into the kernels they use and why it seems like only you're running into this issue).

    Also, are you booting off the default vda partition/disk or have you changed to vdb?

    With edd=off it just hangs with no output. I assume the EDD part works fine, and whatever makes the servers hang happens after that.

    As for the disks, I didn't change anything at all. Just plain CENTOS-7 installation that is offered by their management console.
