
slowing down everything soyoustart

Hi,

I have been facing this issue since yesterday. Here is the iotop output:

975 be/3 root 0.00 B/s 15.40 K/s 0.00 % 99.99 % auditd
18672 be/4 root 0.00 B/s 3.85 K/s 0.00 % 99.99 % sshd: root [priv]
13743 be/4 root 0.00 B/s 0.00 B/s 0.00 % 99.99 % [kworker/u24:2]
18738 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 99.99 % mysqld --defaults

How can I resolve this issue, and what could it be?

thanks

Comments

  • jarjar Patron Provider, Top Host, Veteran

    I don't see a problem from this output. What were the conditions that caused you to run iotop and grab this output?

  • CPU usage is too high, and

    wa (I/O wait) is also too high. Normally the CPU goes above the 50 mark easily and stays there.

    I am unable to do anything due to this issue.
    I am using Virtualizor with 1 container in it.
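    To confirm whether the slowdown really is I/O wait rather than raw CPU load, something like the following can help (the device names in the output will differ per system, and iostat requires the sysstat package):

```shell
# CPU utilisation breakdown every second, five samples;
# a persistently high "wa" column means the CPU is idle waiting on disk I/O
vmstat 1 5

# Extended per-device I/O statistics; a failing disk usually shows
# much higher await/%util than its siblings in the same array
iostat -x 1 3
```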

    thanks

  • Advin Member, Patron Provider

    Check your CPU clock speed/frequency and CPU temperatures. Could be a hardware issue.
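    On Linux, clock speed and temperatures can be checked roughly like this (a sketch only; `sensors` comes from the lm-sensors package, and the sysfs cpufreq path can vary by driver):

```shell
# Current clock speed of each core as reported by the kernel
grep "cpu MHz" /proc/cpuinfo

# Current scaling frequency (in kHz) for core 0, if cpufreq is available
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

# CPU/board temperatures, if lm-sensors is installed and configured
sensors
```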

    Thanked by 1Dilstar
  • Void Member
    edited December 2022

    SoYouStartSlowly
    SlowYouStart

    Thanked by 1yoursunny
  • YouSoSlow

  • The drive with serial number: K5G is defective and should be replaced.

    I have 4x 2TB SATA HDDs in software RAID.

    If I tell them to replace the drive, will my data be safe? And since it's soft RAID, what procedure do I have to follow after changing the drive?

    Or do I just tell them to replace it to fix the problem?

    Thanks

  • @Dilstar said:
    or i just tell them to replace to make it fix ?

    No. If they replace the disk, the server won't start; you will have to rebuild the RAID manually from the rescue system.

    OVH won't help with that!

    Thanked by 1Dilstar
  • After their intervention the server node is fine, but I have not started the container/VPS yet, so I don't know whether it works. The wa (I/O wait) time is also fine now. I am thinking of starting the container and trying to back up.

    And I don't know how to rebuild the RAID, as I have never encountered such an issue before and have no knowledge of rebuilding RAID.

    thanks

  • RapToN Member, Host Rep

    I would recommend checking your RAID status:
    cat /proc/mdstat

    Google for mdadm if you need to add the new disk to the RAID.
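    For more detail than /proc/mdstat gives, mdadm itself can report the state of each array member (the array names md2 and md4 are examples and must match your own setup):

```shell
# Per-array status: which member disks are active, which are failed or missing
mdadm --detail /dev/md2
mdadm --detail /dev/md4

# One-line summary of every assembled array
mdadm --detail --scan
```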

    Thanked by 1Dilstar
  • Here is the current result:

    [root@bigserver ~]# cat /proc/mdstat
    Personalities : [raid1]
    md4 : active (auto-read-only) raid1 sda4[0] sdb4[1] sdc4[2]
    1837642688 blocks [4/3] [UUU_]
    bitmap: 0/14 pages [0KB], 65536KB chunk

    md2 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
    104856512 blocks [4/4] [UUUU]

    unused devices: <none>

  • @Dilstar said:
    after their intervention server node is fine, but i did not start the container/vps yet. if it works fine or not, WA wait time is also fine. i am thinking to start container and try to backup.

    and here i dont know how to rebuild raid. as i did not encounter such issue before. i have no knowledge regarding rebuild the raid.

    thanks

    Here is a good description by OVH: https://docs.ovh.com/ie/en/dedicated/raid-soft/
    The only thing I do differently is that I use gdisk and generate new UUIDs on the new disk:
    sgdisk /dev/sda -R /dev/sdb # copy sda partition table to sdb
    sgdisk -G /dev/sdb # generate new UUIDs

    And double-check the commands before issuing them (e.g. disk identifiers, RAID array identifiers, and so on), because it is easy to damage data. Remember that you might have more than one RAID array (for example, if you use disk encryption with SSH unlock and place /boot on a different array).
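    Put together, a replacement typically looks something like this sketch. The device names are assumptions: sda stands in for a healthy array member and sdd for the replacement disk, and the partition numbers must match your actual layout — verify everything against your own mdstat output first.

```shell
# 1. Copy the partition table from a healthy disk to the new, empty disk
sgdisk /dev/sda -R /dev/sdd

# 2. Give the new disk its own random GUIDs (mandatory after -R)
sgdisk -G /dev/sdd

# 3. Add the new partitions back into their arrays;
#    the kernel starts resyncing automatically
mdadm /dev/md2 --add /dev/sdd2
mdadm /dev/md4 --add /dev/sdd4

# 4. Watch the rebuild progress
cat /proc/mdstat
```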

    Good luck:)

    Thanked by 1Dilstar
  • Currently wa looks fine. Did they remove the device that was causing the issue, or is it still there? As the server looks OK currently, should I start my VPS container and try to back up (I don't have a backup), or tell them to replace the drive?

    thanks

  • Seems like it is all in RAID 1, aka mirroring. That means you could probably lose two more disks and still be fine.

    Doing backups right now is the right thing to do.

    It does not look like they replaced the disk at all. More likely they only removed it from the RAID array md4.

    Thanked by 1Dilstar
  • hi,

    I have booted up the container; wa is still high, but I am trying my best to take a backup.
    It looks like that drive still exists there, which is causing the high wait time.

    thanks

  • Check what smartmontools reports there. If you rebooted, the md4 RAID might have been reassembled with the 4th disk and be running a rebuild.
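    Checking the SMART data for the suspect drive might look like this (smartctl is part of the smartmontools package; /dev/sdd is an assumption — match the drive against the serial number from OVH's report):

```shell
# Overall SMART health verdict for the drive
smartctl -H /dev/sdd

# Full attribute dump; watch Reallocated_Sector_Ct,
# Current_Pending_Sector and UDMA_CRC_Error_Count
smartctl -a /dev/sdd

# If md4 is currently resyncing, the progress shows up here
cat /proc/mdstat
```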

    Get a (paid) sysadmin to analyze and fix it for you.
