Proxmox VE 5.4 software RAID using MDADM

Proxmox Virtual Environment is an open source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage and networks with an integrated, easy-to-use web interface or via CLI. Proxmox VE code is licensed under the GNU Affero General Public License, version 3. The project is developed and maintained by Proxmox Server Solutions GmbH.

My setup

Proxmox Version 5.4-5-c6fdb264
Install disk /dev/sda: 20GB
Mirrored disk /dev/sdb: 20GB
RAID setup: mdadm --level=1 (mirror)

Step 1: Install and setup

Disk preparation

Before you even start your Proxmox install I highly recommend, unless you are sure about your disk history, that you boot into something like Kali Linux or SystemRescueCd and clear the two disks you intend to use for the OS mirror. When setting up RAID it is a good idea to use the exact same disk models, but it is not a requirement.

Once you boot into your live CD of choice, use parted or gparted to remove all partitions on the disks you want to use. A second and very important step before setting up RAID is making sure the disks don't carry any leftover hardware or software RAID metadata. Rather than worry about whether the disks have such data, simply clear the regions where it lives with the two commands below, after you have removed all partitions.

The following command clears the first 512*10 bytes of the disk. This is slightly overzealous on purpose: mdadm metadata lives at different offsets depending on its version (1.1 at the very start of the device, 1.2 at 4 KiB in, and 0.90/1.0 at the end), so wiping the first ten sectors covers the start-of-disk variants.

dd if=/dev/zero of=/dev/sdX bs=512 count=10

Then, because the 0.90 and 1.0 metadata versions live at the end of the device, clear the last sector as well.

dd if=/dev/zero of=/dev/sdX bs=512 seek=$(( $(blockdev --getsz /dev/sdX) - 1 )) count=1

Make sure you run both of these commands for each sdX, where X is your drive (a/b/c/d etc.); see the loop sketch below. Now we can be certain no previous RAID data is going to cause major headaches and confusion with our install going forward.
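
If you are clearing both disks, a small loop saves typing. This is a minimal sketch, assuming the two mirror disks show up as /dev/sda and /dev/sdb in your live environment; adjust the list to match your devices. On a reasonably recent util-linux you can also use wipefs, which erases every filesystem and RAID signature it recognizes in one pass.

for disk in /dev/sda /dev/sdb; do
    # zero the first 10 sectors (start-of-disk RAID metadata)
    dd if=/dev/zero of=$disk bs=512 count=10
    # zero the last sector (end-of-disk RAID metadata)
    dd if=/dev/zero of=$disk bs=512 seek=$(( $(blockdev --getsz $disk) - 1 )) count=1
done

# alternative: remove all signatures wipefs recognizes
# wipefs -a /dev/sda /dev/sdb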

Proxmox Install
Boot from your ISO or burned disk and perform the install. Keep track of which /dev/sdX you install to; in my example I am using /dev/sda as the install disk. Once the installation is complete, make sure you have internet connectivity.

There is a good chance you don't have a Proxmox subscription, so if you don't have a license key follow the instructions at the link below to switch over to the "pve-no-subscription" repo.

https://www.prado.lt/proxmox-ve-5-4-fix-updates-upgrades
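
For reference, the switch boils down to disabling the enterprise repo and enabling the no-subscription one. A minimal sketch, assuming PVE 5.x on Debian Stretch (which is what 5.4 ships on); see the linked article for the full walkthrough.

# disable the enterprise repo (it requires a valid subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# enable the no-subscription repo
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list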

Once you have completed these steps, run the following commands to bring your Proxmox install up to date:

apt-get update
apt-get dist-upgrade
apt-get upgrade

Now that Proxmox is up to date, install mdadm so you can build the RAID array later on.

apt-get install mdadm

During mdadm's install you will be prompted with a few questions; feel free to leave the answers at their defaults unless you know why you want to change them. Once this completes we can move on to the next stage.
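
If you would rather skip the prompts entirely, apt can accept the package defaults for you. An optional variant, not required for this guide:

# take the default answers non-interactively
DEBIAN_FRONTEND=noninteractive apt-get install -y mdadm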

Understanding Proxmox partition layout
The default Proxmox partitioning after a fresh install looks like this:

GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                            
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB   fat32              boot, esp
 3      538MB   21.5GB  20.9GB                     lvm

(parted)               

We can see from this listing that partition 3 is the one we need to mirror through MDADM. Remember that in my setup I am using /dev/sda and /dev/sdb; if you decided to use other disks, substitute your own device names throughout.
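
If you are ever unsure which disk is which, lsblk gives a quick overview before you start copying partition tables; this is purely a convenience check:

# list disks, partitions, sizes and current mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT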

Cloning the installed partitions into RAID
Because Proxmox VE 5.4 uses GPT rather than MSDOS partition tables, we have to use sgdisk instead of sfdisk. Run the following commands to prep your blank second disk.

sgdisk -R=/dev/sdb /dev/sda
sgdisk -t 3:fd00 /dev/sdb

The first command copies the partition table from sda to sdb, while the second sets the partition type of /dev/sdb3 to RAID instead of LVM. If you want to learn more about partition flags and types, sgdisk --list-types shows everything you can set. Your partition table should now look like this:

GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb                                                  
Using /dev/sdb
(parted) print                                                            
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   21.5GB  20.9GB                     raid
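
One aside worth knowing (not part of the original steps): sgdisk -R also copies the disk and partition GUIDs onto the target. If you ever run into tools confused by the duplicate GUIDs, sgdisk can randomize them on the clone:

# give /dev/sdb fresh disk and partition GUIDs after replication
sgdisk -G /dev/sdb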

Clone the EFI system partition to /dev/sdb2

Copying /dev/sda2 wholesale gives /dev/sdb2 an identical ESP, including the same filesystem UUID, so either disk can serve /boot/efi:

dd if=/dev/sda2 of=/dev/sdb2
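
You can confirm the two partitions now share the same UUID; both lines of output should show an identical value:

blkid -s UUID /dev/sda2 /dev/sdb2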

Create your RAID array
Since you already installed MDADM earlier on, creating the RAID array is simple. The following command creates a degraded mirror from /dev/sdb3 plus a deliberately "missing" slot; we will add /dev/sda3 to the array once its data has been moved off:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb3

During this step you will likely see the following complaint. It is harmless here; answer "y":

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
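
At this point the array exists but is running degraded on a single disk, which is expected. You can verify its state before continuing:

# show array membership and state (expect one active device, one 'removed' slot)
mdadm --detail /dev/md0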

Now we just need to fix our /etc/fstab so that /boot/efi is mounted by UUID. Because we cloned the ESP above, both disks carry the same filesystem UUID, so this entry works no matter which disk the system boots from:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=95E3-611A /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

You can find your UUID with this command:

blkid /dev/sda2 -s UUID -o value

Step 2: Move the 3rd LVM partition over to /dev/md0

The root partition for Proxmox is installed on a logical volume managed by LVM. Moving it isn't as simple as copying, because it requires special steps to create the volume on /dev/md0 as well as to remove it from /dev/sda3.

Create a new LVM physical volume
Run the following commands to move the root filesystem from /dev/sda3 to /dev/md0. The first marks /dev/md0 as an LVM physical volume, the second adds it to the pve volume group, and the third migrates all allocated extents onto it:

pvcreate /dev/md0
vgextend pve /dev/md0
pvmove /dev/sda3 /dev/md0

The move command can take a very long time, since it copies every allocated extent.
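
pvmove reports its progress as it works; if you want updates more often you can pass a report interval. An optional variant of the command above:

# print progress every 10 seconds
pvmove -i 10 /dev/sda3 /dev/md0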

Remove /dev/sda3 from the volume group
Once the pvmove above has completed, we can safely remove /dev/sda3 from the volume group, which lets us add it to the /dev/md0 array.

vgreduce pve /dev/sda3
pvremove /dev/sda3

Add /dev/sda3 to the /dev/md0 array
Add our /dev/sda3 partition to the /dev/md0 array with the following two commands. The first changes the partition type to RAID (just as we did for sdb earlier), and the second adds the partition to the mirror:

sgdisk -t 3:fd00 /dev/sda
mdadm --add /dev/md0 /dev/sda3

At this point the array will start rebuilding, syncing /dev/sda3 from /dev/sdb3.
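
You can watch the rebuild progress and wait for it to finish before rebooting:

# live view of the resync status
watch cat /proc/mdstat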

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID array yet) to match the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Step 3: Prepare GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb
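
If your machine boots in UEFI mode rather than legacy BIOS, grub-install targets the ESP instead of a disk. A hedged aside, since this layout carries both a bios_grub and an esp partition and the BIOS-style invocation above is what this walkthrough uses:

# UEFI variant (only if the machine actually boots via UEFI)
grub-install --target=x86_64-efi --efi-directory=/boot/efi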

Then update the GRUB2 bootloader configuration and regenerate the initramfs (the latter also picks up our new mdadm.conf):

update-grub
update-initramfs -u

Reboot the system:

reboot

It should boot without problems.
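
After the reboot, a quick sanity check confirms everything came up on the mirror:

# both disks should show as active in the md0 mirror
cat /proc/mdstat

# the pve volume group should now live on /dev/md0
pvs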

If you made it this far you are now running Proxmox VE 5.4 on software RAID!

6 Replies to “Proxmox VE 5.4 software RAID using MDADM”

  1. After I execute mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb3, the result is "mdadm: Device or resource busy".

    1. Hello, do you have the same partitions as we do? Can you show the output of this command?
      sudo fdisk -l /dev/sdb

  2. Thank you, great guide!!!
    I just added the following to these files (maybe not strictly necessary):
    /etc/initramfs-tools/modules: raid1
    /etc/default/grub: GRUB_CMDLINE_LINUX="raid dmraid rootfstype=ext4"

  3. Add --metadata=0.90 to the mdadm --create … command!

    In my case GRUB couldn't boot from the RAID with the default 1.2 metadata.
    I reinstalled Proxmox, went through this process again with "--metadata=0.90", and it was a success.

  4. Sometimes it is not possible to perform
    pvmove /dev/sda3 /dev/md0
    because the destination device contains fewer physical extents (PEs).
    In my case the destination was 36 PEs smaller for identical sda and sdb drives (no idea why; mdadm probably takes a few sectors for its own needs). The simple solution is to shrink swap, then perform the pvmove.
    Change 48 to the number of extents you need to free. I shrank a little more than necessary to be sure it would fit on md0 🙂

    swapoff -a
    lvreduce -l -48 /dev/pve/swap
    mkswap /dev/pve/swap
    swapon -a
