
Scaleway VC1-x, X64-x, Start1-x bootscript to local boot in-place migration on the same instance

ValdikSS Member
edited June 2023 in Tutorials

I've had a VC1S VPS instance since 2016. Scaleway has deprecated the "bootscript" booting method that was used by default some years ago, unfortunately without providing a way to perform an in-place migration to the current "local boot" method for this type of instance.

The official migration documentation only covers the newer instance types that were initially configured for UEFI booting; for everything else, including the VC1-x, X64-x and Start1-x types, it tells you to export all the data, recreate the disk and attach it to a new instance.

My VC1S machine is a legacy type and not offered anymore, but more importantly, it's much cheaper than the current instance families. That's why I'd like to keep it instead of creating a new instance with the old data.

The major inconvenience is that VC1-x, X64-x and Start1-x instances come with no partition table and no partitions on the local disk: the ext4 filesystem is created directly on the raw disk, without a bootloader or even a kernel (those are provided by the bootscript/hypervisor). This layout cannot be used for regular UEFI booting. We'll move the filesystem, create a partition table and add an EFI partition so the instance can boot on its own.

BEFORE YOU BEGIN

Create a snapshot of your disk using the Scaleway interface, just in case something goes wrong. Wait until it finishes creating.

The steps we'll take to convert the data:

  1. Install the kernel and grub packages to the OS
  2. Shrink the filesystem size
  3. Move the filesystem data in-place further down the disk
  4. Create GPT partition table, EFI and data partitions
  5. Install the bootloader (GRUB) in UEFI mode
  6. Fill in fstab

Everything below is based on Debian 10.

1. Install the packages

You'll need at least the kernel and the bootloader. I've also installed "ifupdown", as for some reason it was missing from my installation and no network was configured after completing all the steps (console access still worked, though).

# apt install linux-image-amd64 grub-efi-amd64 ifupdown

Make sure that /boot contains the vmlinuz and initrd files:

# ls /boot
config-4.19.0-24-cloud-amd64  initrd.img-4.19.0-24-cloud-amd64  System.map-4.19.0-24-cloud-amd64  vmlinuz-4.19.0-24-cloud-amd64

Make sure the grub-install command is present (you don't need to install the bootloader yet):

# grub-install --version
grub-install (GRUB) 2.06-3~deb10u3

2. Shrink the filesystem size

Now reboot into Scaleway's rescue image.
Run poweroff from the console, go to Advanced Settings, select "Use rescue image", and boot the machine.

Once it has booted, log in to the machine over SSH; you'll be in rescue mode. It boots a special image from another disk, allowing unrestricted manipulation of your local disk.
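Before touching anything, it's worth confirming which block device is the local disk, since the rescue image itself runs from another device. A quick overview (on my instance the local disk showed up as /dev/vda, but verify on yours):

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT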

Make sure the local disk with your filesystem is there:

# cat /dev/vda | file -
/dev/stdin: Linux rev 1.0 ext4 filesystem data, UUID=1865917f-d4e9-4bcf-b624-de9dc97d0f20 (extents) (large files) (huge files)

Perform the filesystem check:

# fsck.ext4 -f /dev/vda

e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vda: 178933/3055616 files (1.1% non-contiguous), 2852413/12207031 blocks

Check the filesystem size:

# dumpe2fs /dev/vda | grep -F 'Block count'
dumpe2fs 1.46.5 (30-Dec-2021)
Block count:              12207031

This is 12207031 blocks × 4096 block size = 49999998976 bytes (50 GB)
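If you don't want to do the math by hand, dumpe2fs can also report the block size (with -h it prints the superblock only), and the shell can do the multiplication. A minor convenience, using the numbers from my instance:

# dumpe2fs -h /dev/vda | grep -E 'Block (count|size)'
# echo $((12207031 * 4096))
49999998976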

Now shrink the filesystem to its minimum size, to make it easier to move:

-M     Shrink the file system to minimize its size as much as possible, given the files stored in the file system.

# resize2fs -M /dev/vda
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/vda to 3045809 (4k) blocks.
The filesystem on /dev/vda is now 3045809 (4k) blocks long.

This took about 7 minutes and shrunk the filesystem from 50 GB down to about 12.5 GB, leaving essentially no free space inside it.

Now the filesystem is 3045809 blocks in size.

3. Move the filesystem data in-place further down the disk

This is the most important step, and it can and will irreversibly corrupt your filesystem if done incorrectly.

In order to create the partition table and an additional EFI partition, we need to move the existing filesystem, which currently begins at the very first sector of the local drive, further down the disk.
I recommend moving it by 128 MiB (134217728 bytes) to the right. This is more than enough room for the partition table and the EFI partition where the bootloader will be stored.

We'll need to install GNU ddrescue in the rescue environment:

# apt install gddrescue

GNU ddrescue can copy the data backwards (in reverse, starting from the last block), which is essential in our case, as we're copying the data "from left to right" on the same disk into an overlapping region. A forward copy would overwrite data it hasn't read yet, corrupting the filesystem irrecoverably.

The syntax is as follows:

ddrescue --reverse --size=<filesystem size IN BYTES> --input-position=0 --output-position=<new position IN BYTES> --sector-size=512 --cluster-size=256 --same-file --force --ask /dev/vda /dev/vda

The filesystem size can be calculated as shown in the example above (multiply the filesystem block count by 4096, the block size). I'm going to move the filesystem 134217728 bytes to the right. The command in my case is:

# ddrescue --reverse --size=12475633664 --input-position=0 --output-position=134217728 --sector-size=512 --cluster-size=256 --same-file --force --ask /dev/vda /dev/vda
GNU ddrescue 1.23
About to copy 12475 MBytes
from '/dev/vda' [UNKNOWN] (50000000000)
to '/dev/vda' [UNKNOWN] (50000000000)
Proceed (y/N)? y
Press Ctrl-C to interrupt
    ipos:    8131 MB, non-trimmed:        0 B,  current rate:    210 MB/s
    opos:    8265 MB, non-scraped:        0 B,  average rate:    160 MB/s
non-tried:    8131 MB,  bad-sector:        0 B,    error rate:       0 B/s
rescued:    4343 MB,   bad areas:        0,        run time:         27s
pct rescued:   34.81%, read errors:        0,  remaining time:         51s
                            time since last successful read:          0s
Copying non-tried blocks... Pass 1 (backwards)

This took about 2 minutes.
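For reference, the --size and --output-position values used above are simply the shrunk block count multiplied by the 4 KiB block size, and the 128 MiB offset expressed in bytes:

# echo $((3045809 * 4096))
12475633664
# echo $((128 * 1024 * 1024))
134217728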

Check that everything is fine by issuing:

# dd if=/dev/vda iflag=skip_bytes skip=134217728 | file -
/dev/stdin: Linux rev 1.0 ext4 filesystem data, UUID=1865917f-d4e9-4bcf-b624-de9dc97d0f20 (extents) (large files) (huge files)

4. Create GPT partition table, EFI and data partitions

Now we need to create a partition table and two partitions: one for EFI booting, the other for our moved filesystem.

Use cfdisk for that:

# cfdisk /dev/vda

Create a GPT partition table, then one new partition of "260096s" size (260096 sectors) and set its type to "EFI System", then create another partition covering the rest of the disk.

You should see something like this:

                            Disk: /dev/vda
            Size: 46.6 GiB, 50000000000 bytes, 97656250 sectors
        Label: gpt, identifier: B6341233-F14A-334D-A2F7-74BE5649F848

    Device            Start         End    Sectors    Size Type
>>  /dev/vda1          2048      262143     260096    127M EFI System        
    /dev/vda2        262144    97656216   97394073   46.5G Linux filesystem

Make sure that the start sector of /dev/vda2, multiplied by the sector size, equals the offset you moved your filesystem by. The sector size is 512 bytes here (while the filesystem block size is 4096).

262144 × 512 = 134217728

Press "write" to save your data on disk.

Now we'll mount our filesystem as a partition, to make sure everything was done correctly:

# mkdir m
# mount /dev/vda2 m
# ls m
bin   dev  home        initrd.img.old  lib64       media  opt   root  sbin  sys  usr  vmlinuz
boot  etc  initrd.img  lib             lost+found  mnt    proc  run   srv   tmp  var  vmlinuz.old

It's time to create the EFI file system. It should be FAT32 (vfat):

# mkfs.vfat /dev/vda1 
mkfs.fat 4.2 (2021-01-31)
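Note that mkfs.vfat picks the FAT variant automatically based on the partition size; if you want to force FAT32 explicitly (the variant the UEFI specification expects on the system partition), you can instead run:

# mkfs.vfat -F 32 /dev/vda1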

5. Install the bootloader (GRUB) in UEFI mode

Installing a bootloader from another system is always a bit tricky.
For maximum compatibility, we'll install Debian's own bootloader from a chroot environment.
First we need to bind-mount /proc, /sys and /dev from the host (rescue system) into our filesystem, and then chroot into it.

# mount /dev/vda2 ./m
# mount --bind /proc ./m/proc
# mount --bind /sys ./m/sys
# mount --bind /dev ./m/dev
# mkdir ./m/boot/efi
# mount /dev/vda1 ./m/boot/efi
# chroot m

In the chroot environment, execute the following:

# grub-install --removable --target=x86_64-efi
# grub-install --target=x86_64-efi
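The first invocation installs GRUB to the removable-media fallback path (EFI/BOOT/BOOTX64.EFI), which does not depend on firmware boot entries; the second installs the regular, distribution-specific entry (EFI/debian on Debian). As a quick sanity check, list the EFI partition; you should see at least a BOOT directory, plus a debian directory if the second command succeeded:

# ls /boot/efi/EFI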

Next we need to generate the GRUB configuration file. This requires some modifications for Scaleway. Open /etc/default/grub and change the GRUB_CMDLINE_LINUX_DEFAULT line to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty1 console=ttyS0"

Generate the configuration file:

# update-grub

This will copy all the needed files to our EFI partition and generate the grub.cfg bootloader configuration file.
Don't be scared by the "warning: Discarding improperly nested partition" messages; they are safe to ignore.
On a non-Debian/non-Ubuntu OS you'll need to generate the configuration file with grub-mkconfig, something like:

# grub-mkconfig -o /boot/grub/grub.cfg

6. Fill in fstab

Add the following lines to the /etc/fstab file, otherwise your root mount point will be mounted read-only on the next boot:

/dev/vda2 / ext4 defaults 0 1
/dev/vda1 /boot/efi vfat defaults 0 2
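Plain device names work fine here; if you'd rather use the more robust UUID notation, look the UUIDs up with blkid and substitute them (the <...> values below are placeholders, not real UUIDs):

# blkid /dev/vda1 /dev/vda2
UUID=<uuid-of-vda2> / ext4 defaults 0 1
UUID=<uuid-of-vda1> /boot/efi vfat defaults 0 2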

And that's it! All that's left is to exit the chroot and unmount the partitions:

# exit # exit from chroot
# umount ./m/boot/efi
# umount ./m/proc
# umount ./m/sys
# umount ./m/dev

Power off the machine

# poweroff

Go to Advanced Settings in the Scaleway panel, select the "Local Boot" method, and boot the machine.

If everything is done correctly, the machine should boot and be reachable from the network.
If you can't reach it over SSH, open the console in the panel and debug it from there.

Comments

  • Zyra Member

    nice

  • darkimmortal Member
    edited June 2023

    That’s a clever trick with ddrescue

  • zerbum Member
    edited September 2023

    I was following these instructions, but I'm using Ubuntu 16.04 LTS (GNU/Linux 4.5.7-std-3 x86_64), so I tried sudo apt-get install linux-image-generic, but it installed vmlinuz-4.15.0-213-generic, which I believe is 20.2, so needless to say it wouldn't boot after completing the steps.

    I thought the simplest solution was to restore my snapshot to a new volume and start over, then attempt to either install the correct kernel or upgrade my distribution. Unfortunately, Scaleway has let me down: when I disconnected my broken volume and connected the one restored from the snapshot, the web UI gave the error:

    invalid argument(s): volumes.0.volume_type does not respect constraint, not a valid value

    And I see the same error if I try to re-attach the original volume! What the heck is up with Scaleway?

    This is in no way a complaint about the tutorial, which I thought was excellent; it was my mistake, and Scaleway has let me down.

  • Please ignore my earlier post. Thanks for this tutorial; I got it to work with Ubuntu 18.04.6 LTS, and I had to change linux-image-amd64 to linux-image-generic.

    I also had no networking, so I would recommend that after step 5, while still in the chroot environment, you edit /etc/network/interfaces and replace both occurrences of eth0 with ens2. That will make SSH work on the first boot, so you don't have to deal with Scaleway's web CLI, which was just showing a black screen for me.
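    A non-interactive way to make that replacement inside the chroot (an untested one-liner, assuming GNU sed):

    # sed -i 's/eth0/ens2/g' /etc/network/interfaces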

  • zerbum Member
    edited October 2023

    What about resizing the filesystem back to 50GB?

    root@venus:~# cat /dev/vda2 | file -
    /dev/stdin: Linux rev 1.0 ext4 filesystem data, UUID=bd32e6c7-48ab-4bf9-85a6-e5501f8cd030 (extents) (large files) (huge files)
    root@venus:~# fsck.ext4 -f /dev/vda2
    e2fsck 1.46.5 (30-Dec-2021)
    Pass 1: Checking inodes, blocks, and sizes
    Inode 395519 extent tree (at level 1) could be narrower.  Optimize<y>? yes
    Pass 1E: Optimizing extent trees
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    
    /dev/vda2: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/vda2: 182653/720896 files (0.4% non-contiguous), 2598096/2856550 blocks
    root@venus:~# dumpe2fs /dev/vda2 | grep -F 'Block count'
    dumpe2fs 1.46.5 (30-Dec-2021)
    Block count:              2856550
    root@venus:~# resize2fs /dev/vda2
    resize2fs 1.46.5 (30-Dec-2021)
    Resizing the filesystem on /dev/vda2 to 12174259 (4k) blocks.
    The filesystem on /dev/vda2 is now 12174259 (4k) blocks long.
    
  • @zerbum said: What about resizing the filesystem back to 50GB?

    That was forgotten :smile:

  • Thanks very much for these notes! I wish to add that on Debian Stretch (4.9.0-19-amd64)
    eth0 changed to ens2, and SSH does not work until this is fixed (thanks also to the people on the Slack channel who pointed this out!). This works for me:

    root@cor:~# more /etc/network/interfaces
    # interfaces(5) file used by ifup(8) and ifdown(8)
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    #auto eth0
    #iface eth0 inet dhcp
    auto ens2
    iface ens2 inet dhcp
    
    # Include files from /etc/network/interfaces.d:
    source-directory /etc/network/interfaces.d
    root@cor:~# 
    
  • Just a final remark: as eth0 changed to ens2, my iptables config was broken in a non-obvious way. 10 out of 15 Docker containers worked fine; the rest only partly.
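    If you run into the same problem, a quick way to spot rules that still reference the old interface name (just a starting point; adjust for your own setup):

    # iptables-save | grep eth0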
