Help, Trying to get out of Rescue mode on a dedicated server

Tony40 Member
edited February 2023 in Help

Hello, I need help from the pros. My remote server only boots into rescue mode, and I need to mount the root partition so I can get the server back into regular mode. This is my first time doing this.

Can anyone tell me which is the correct root partition and disk to mount?

Disk /dev/sdb: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20PURZ-85G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xf0cc43ef

Device     Boot    Start        End    Sectors  Size Id Type
/dev/sdb1           2048       4095       2048    1M 83 Linux
/dev/sdb2           4096   16005119   16001024  7,6G fd Linux raid autodetect
/dev/sdb3  *    16005120   18006015    2000896  977M fd Linux raid autodetect
/dev/sdb4       18006016 3907028991 3889022976  1,8T fd Linux raid autodetect

Disk /dev/loop0: 687,33 MiB, 720711680 bytes, 1407640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@grml ~ # df -H
Filesystem Size Used Avail Use% Mounted on
udev 8,4G 0 8,4G 0% /dev
tmpfs 1,7G 1,2M 1,7G 1% /run
/dev/loop0 721M 721M 0 100% /run/live/rootfs/grml64-full.squashfs
tmpfs 8,4G 8,8M 8,4G 1% /run/live/overlay
overlay 8,4G 8,8M 8,4G 1% /
tmpfs 8,4G 0 8,4G 0% /dev/shm
tmpfs 5,3M 4,1k 5,3M 1% /run/lock
tmpfs 8,4G 0 8,4G 0% /sys/fs/cgroup
tmpfs 8,4G 0 8,4G 0% /tmp
tmpfs 1,7G 0 1,7G 0% /run/user/0
root@grml ~ #

root@grml ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 687,3M 1 loop /lib/live/mount/rootfs/grml64-full.squashfs
sda 8:0 0 1,8T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 7,6G 0 part
├─sda3 8:3 0 977M 0 part
└─sda4 8:4 0 1,8T 0 part
sdb 8:16 0 1,8T 0 disk
├─sdb1 8:17 0 1M 0 part
├─sdb2 8:18 0 7,6G 0 part
├─sdb3 8:19 0 977M 0 part
└─sdb4 8:20 0 1,8T 0 part

root@grml ~ # parted -l
Model: ATA WDC WD20PURZ-85G (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 2097kB 1049kB primary
2 2097kB 8195MB 8193MB primary raid
3 8195MB 9219MB 1024MB primary boot, raid
4 9219MB 2000GB 1991GB primary raid

Model: ATA WDC WD20PURZ-85G (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 2097kB 1049kB primary
2 2097kB 8195MB 8193MB primary raid
3 8195MB 9219MB 1024MB primary boot, raid
4 9219MB 2000GB 1991GB primary raid

root@grml ~ # lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME SIZE FSTYPE TYPE MOUNTPOINT
loop0 687,3M squashfs loop /lib/live/mount/rootfs/grml64-full.squashfs
sda 1,8T disk
├─sda1 1M part
├─sda2 7,6G linux_raid_member part
├─sda3 977M linux_raid_member part
└─sda4 1,8T linux_raid_member part
sdb 1,8T disk
├─sdb1 1M part
├─sdb2 7,6G linux_raid_member part
├─sdb3 977M linux_raid_member part
└─sdb4 1,8T linux_raid_member part

Comments

  • jackb Member, Host Rep

    mdadm --assemble --scan

    Then grml-chroot into the correct mdadm raid member, which will probably be raid1 of sda4 + sdb4
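
    In full, the sequence would look roughly like this (a sketch only; which /dev/mdX turns out to hold the root filesystem is an assumption until /proc/mdstat confirms it):

    # assemble every array described in the on-disk superblocks
    mdadm --assemble --scan
    # list what was assembled and from which partitions
    cat /proc/mdstat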

  • Tony40 Member

    @jackb said:
    mdadm --assemble --scan

    Then grml-chroot into the correct mdadm raid member, which will probably be raid1 of sda4 + sdb4

    root@grml ~ # mdadm --assemble --scan
    mdadm: /dev/md/0 has been started with 2 drives.
    mdadm: /dev/md/1 has been started with 2 drives.
    mdadm: /dev/md/2 has been started with 2 drives.
    root@grml ~ # grml-chroot
    Usage: "grml-chroot" NEWROOT [COMMAND....]

    grml-chroot is a chroot wrapper with proc/sys/pts/dev filesystem handling

    Error: Wrong number of arguments.

  • jackb Member, Host Rep

    @Tony40 said:
    root@grml ~ # grml-chroot
    Usage: "grml-chroot" NEWROOT [COMMAND....]

    grml-chroot is a chroot wrapper with proc/sys/pts/dev filesystem handling

    Error: Wrong number of arguments.

    You need to pass in whichever raid member /dev/md<X> is your root partition. cat /proc/mdstat to list - the one with sda4 and sdb4 is my bet.
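
    A quick way to verify that guess before chrooting (a sketch; the md device names assume the three arrays assembled above):

    # show the filesystem type on each assembled array
    blkid /dev/md0 /dev/md1 /dev/md2
    # or peek read-only at a candidate to see whether it holds /etc, /usr, and so on
    mount -o ro /dev/md2 /mnt && ls /mnt && umount /mnt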

    Thanked by 1 MikeA
  • Tony40 Member

    @jackb said:

    @Tony40 said:
    root@grml ~ # grml-chroot
    Usage: "grml-chroot" NEWROOT [COMMAND....]

    grml-chroot is a chroot wrapper with proc/sys/pts/dev filesystem handling

    Error: Wrong number of arguments.

    You need to pass in whichever raid member /dev/md<X> is your root partition. cat /proc/mdstat to list - the one with sda4 and sdb4 is my bet.

    root@grml ~ # cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md2 : active (auto-read-only) raid1 sda4[0] sdb4[1]
    1944379392 blocks super 1.2 [2/2] [UU]
    bitmap: 0/15 pages [0KB], 65536KB chunk

    md1 : active (auto-read-only) raid1 sda3[0] sdb3[1]
    999424 blocks super 1.2 [2/2] [UU]

    md0 : active raid0 sda2[0] sdb2[1]
    15990784 blocks super 1.2 512k chunks

    unused devices: <none>

  • jackb Member, Host Rep

    @Tony40 said:
    root@grml ~ # cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md2 : active (auto-read-only) raid1 sda4[0] sdb4[1]
    1944379392 blocks super 1.2 [2/2] [UU]
    bitmap: 0/15 pages [0KB], 65536KB chunk

    md1 : active (auto-read-only) raid1 sda3[0] sdb3[1]
    999424 blocks super 1.2 [2/2] [UU]

    md0 : active raid0 sda2[0] sdb2[1]
    15990784 blocks super 1.2 512k chunks

    unused devices: <none>

    /dev/md2 looks like your root partition to me.
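
    If that holds, mounting and chrooting would look roughly like this (a sketch; mounting md1 on /boot is an assumption based on its 977M size and boot flag):

    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    grml-chroot /mnt /bin/bash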

  • Tony40 Member

    @jackb said:

    @Tony40 said:
    root@grml ~ # cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md2 : active (auto-read-only) raid1 sda4[0] sdb4[1]
    1944379392 blocks super 1.2 [2/2] [UU]
    bitmap: 0/15 pages [0KB], 65536KB chunk

    md1 : active (auto-read-only) raid1 sda3[0] sdb3[1]
    999424 blocks super 1.2 [2/2] [UU]

    md0 : active raid0 sda2[0] sdb2[1]
    15990784 blocks super 1.2 512k chunks

    unused devices: <none>

    /dev/md2 looks like your root partition to me.

    So I just mount /dev/md2 /mnt/ and reboot the server?

  • jackb Member, Host Rep
    edited February 2023

    @Tony40 said:
    So I just mount /dev/md2 /mnt/ and reboot the server?

    No, you need to fix whatever is preventing the system from booting - when chrooted into the system.

    Mounting the partition and doing nothing will achieve nothing.

    Once you're done make sure to unmount the grml cd or change boot order to HD first, otherwise you'll always boot into rescue.
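
    Inside the chroot, the usual repairs look something like this (a sketch of common fixes, not a diagnosis; grub-install and update-initramfs assume a Debian-style install):

    # check the logs for why the last boot failed
    less /var/log/syslog
    # reinstall the bootloader on both disks of the mirror and rebuild the initramfs
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub
    update-initramfs -u -k all
    # then leave the chroot, unmount everything, and reboot from disk
    exit
    umount -R /mnt
    reboot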

    Thanked by 1 Tony40
  • Tony40 Member
    edited February 2023

    @jackb said:

    @Tony40 said:
    So I just mount /dev/md2 /mnt/ and reboot the server?

    No, you need to fix whatever is preventing the system from booting - when chrooted into the system.

    Mounting the partition and doing nothing will achieve nothing.

    Once you're done make sure to unmount the grml cd or change boot order to HD first, otherwise you'll always boot into rescue.

    OK, thanks a lot; your help is much appreciated!

  • vsys_host Member, Patron Provider

    Hello, try running fsck on all partitions and fixing any errors that exist. Sometimes it helps.
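
    For example (a sketch; the arrays must be unmounted first, and -f forces a full check on ext filesystems):

    umount /mnt/boot /mnt 2>/dev/null
    fsck -f /dev/md1
    fsck -f /dev/md2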

    Thanked by 1 Tony40