How to set up software RAID1 on a running LVM system (incl. GRUB2 configuration) (Ubuntu 19.04)

This guide explains how to set up software RAID1 on an already running LVM system (Ubuntu 19.04). The GRUB2 bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).

I do not issue any guarantee that this will work for you!

1 Preliminary note

In this tutorial I’m using an Ubuntu 19.04 system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions (this is the default Ubuntu 19.04 LVM partitioning scheme – you should find something similar on your system unless you chose to partition manually during the installation):

  • /dev/sda1: is used for LVM (volume group ubuntu-vg) and contains / (volume root) and swap (volume swap_1).
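
If you are not sure how your own disks are laid out, lsblk gives a quick overview of the disks, partitions and LVM volumes; this is purely informational and the column selection here is just one possibility:

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT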

In the end I want to have the following situation:

  • /dev/md0 (made up of /dev/sda1 and /dev/sdb1): LVM (volume group ubuntu-vg), contains / (volume root) and swap (volume swap_1).

This is the current situation:

df -h
Filesystem                   Size  Used Avail Use% Mounted on
udev                         1,9G     0  1,9G   0% /dev
tmpfs                        392M  1,9M  391M   1% /run
/dev/mapper/ubuntu--vg-root  8,9G  4,4G  4,1G  52% /
tmpfs                        2,0G     0  2,0G   0% /dev/shm
tmpfs                        5,0M  4,0K  5,0M   1% /run/lock
tmpfs                        2,0G     0  2,0G   0% /sys/fs/cgroup
/dev/loop0                    90M   90M     0 100% /snap/core/6673
/dev/loop1                    54M   54M     0 100% /snap/core18/941
/dev/loop2                   152M  152M     0 100% /snap/gnome-3-28-1804/31
/dev/loop3                   4,2M  4,2M     0 100% /snap/gnome-calculator/406
/dev/loop4                    15M   15M     0 100% /snap/gnome-characters/254
/dev/loop5                   1,0M  1,0M     0 100% /snap/gnome-logs/61
/dev/loop6                   3,8M  3,8M     0 100% /snap/gnome-system-monitor/77
/dev/loop7                    36M   36M     0 100% /snap/gtk-common-themes/1198
tmpfs                        392M   80K  392M   1% /run/user/1000
fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 20969471 20967424  10G 8e Linux LVM
pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               ubuntu-vg
  PV Size               <10,00 GiB / not usable 2,00 MiB
  Allocatable           yes 
  PE Size               4,00 MiB
  Total PE              2559
  Free PE               9
  Allocated PE          2550
  PV UUID               QxXlcp-npps-qdZZ-a5GM-RAfL-Tn2i-WLlu0Q
vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10,00 GiB
  PE Size               4,00 MiB
  Total PE              2559
  Alloc PE / Size       2550 / 9,96 GiB
  Free  PE / Size       9 / 36,00 MiB
  VG UUID               a9OYVe-oI5Q-9b7D-j1x8-RDGt-ss2T-rAwOnR
lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/root
  LV Name                root
  VG Name                ubuntu-vg
  LV UUID                Zdalqe-pSV8-snpv-yQ7e-PDAq-aRGf-FljvoQ
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2019-04-28 18:14:24 +0300
  LV Status              available
  # open                 1
  LV Size                <9,01 GiB
  Current LE             2306
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/swap_1
  LV Name                swap_1
  VG Name                ubuntu-vg
  LV UUID                kwn1rM-KUI9-jtmB-TeXp-et93-bLc1-MCsuW7
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2019-04-28 18:14:25 +0300
  LV Status              available
  # open                 2
  LV Size                976,00 MiB
  Current LE             244
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

2 Installing mdadm

The most important tool for setting up RAID is mdadm. Let’s install it like this:

apt-get install initramfs-tools mdadm
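
If you run this in a script and want to avoid any configuration dialogs that the packages might bring up, you can install them non-interactively instead (this is optional):

DEBIAN_FRONTEND=noninteractive apt-get -y install initramfs-tools mdadm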

Afterwards, we load a few kernel modules (to avoid a reboot):

modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
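
If you want to double-check that the modules are really loaded, lsmod should now list them (the exact list depends on your kernel):

lsmod | grep -E 'raid|linear|multipath'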

Now run

cat /proc/mdstat

The output should look as follows:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>

3 Preparing /dev/sdb

To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.
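
Before changing anything, it can’t hurt to save the current partition table of /dev/sda to a file so that it can be restored with sfdisk if something goes wrong (the file name is just an example):

sfdisk -d /dev/sda > /root/sda-partition-table.txt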

First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0xcb370986.
/dev/sdb1: Created a new partition 1 of type 'Linux LVM' and of size 10 GiB.
/dev/sdb2: Done.

New situation:
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1  *     2048 20969471 20967424  10G 8e Linux LVM

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

The command

fdisk -l

should now show that both HDDs have the same layout:

Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 20969471 20967424  10G 8e Linux LVM


Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1  *     2048 20969471 20967424  10G 8e Linux LVM

Next we must change the partition type of our partition on /dev/sdb to Linux raid autodetect:

fdisk /dev/sdb
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): m

Help:

  DOS (MBR)
   a   toggle a bootable flag
   b   edit nested BSD disklabel
   c   toggle the dos compatibility flag

  Generic
   d   delete a partition
   F   list free unpartitioned space
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table
   i   print information about a partition

  Misc
   m   print this menu
   u   change display/entry units
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table


Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs        
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT            
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor      
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary  
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS    
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE 
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep        
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT            
Hex code (type L to list all codes): fd
Changed type of partition 'Linux LVM' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

The command

fdisk -l

should now show that /dev/sdb1 is of the type Linux raid autodetect:

Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 20969471 20967424  10G 8e Linux LVM


Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcb370986

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1  *     2048 20969471 20967424  10G fd Linux raid autodetect

To make sure that there are no remains of previous RAID installations on /dev/sdb, we run the following command:

mdadm --zero-superblock /dev/sdb1

If there is no previous RAID installation, the command will report an error like this one (which is nothing to worry about):

mdadm: Unrecognised md component device - /dev/sdb1

Otherwise the command will not display anything at all.
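
If you are unsure whether /dev/sdb1 ever carried an md superblock, you can also inspect it first; the following command only reads metadata and changes nothing:

mdadm --examine /dev/sdb1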

4 Creating our RAID arrays

Now let’s create our RAID array /dev/md0.

/dev/sdb1 will be added to /dev/md0.

/dev/sda1 can’t be added right now (because the system is currently running on it), therefore we use the placeholder missing in the following command:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

The command

cat /proc/mdstat

should now show that you have one degraded RAID array ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[1]
      10474496 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

Now we come to our LVM RAID array /dev/md0. To prepare it for LVM, we run:

pvcreate /dev/md0

Then we add /dev/md0 to our volume group ubuntu-vg:

vgextend ubuntu-vg /dev/md0

The output of

pvdisplay

should now be similar to this:

  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               ubuntu-vg
  PV Size               <10,00 GiB / not usable 2,00 MiB
  Allocatable           yes 
  PE Size               4,00 MiB
  Total PE              2559
  Free PE               9
  Allocated PE          2550
  PV UUID               QxXlcp-npps-qdZZ-a5GM-RAfL-Tn2i-WLlu0Q
   
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               ubuntu-vg
  PV Size               <9,99 GiB / not usable 0   
  Allocatable           yes 
  PE Size               4,00 MiB
  Total PE              2557
  Free PE               2557
  Allocated PE          0
  PV UUID               hpkczS-Kxr7-LfBj-lhZ1-4eAO-3txi-7xaADk

The output of

vgdisplay

should be as follows:

  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19,98 GiB
  PE Size               4,00 MiB
  Total PE              5116
  Alloc PE / Size       2550 / 9,96 GiB
  Free  PE / Size       2566 / 10,02 GiB
  VG UUID               a9OYVe-oI5Q-9b7D-j1x8-RDGt-ss2T-rAwOnR

Next we must adjust /etc/mdadm/mdadm.conf (which doesn’t contain any information about our new RAID array yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

In the file you should now see details about our one (degraded) RAID array:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Sun, 28 Apr 2019 18:48:31 +0300 by mkconf
ARRAY /dev/md/0  metadata=1.2 UUID=e063bd40:bd8508ec:cdea4023:2ee496de name=vasilij-virtual-machine:0

Run

update-grub

Next we adjust our ramdisk to the new situation:

update-initramfs -u
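
If you want to verify that mdadm and its configuration actually ended up in the new initramfs, a quick check like this should list the relevant files (the initrd path and kernel version of course depend on your system):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm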

5 Moving our data to the RAID arrays

Now that we’ve modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we’ve made in the previous chapter).

To move the contents of our LVM partition /dev/sda1 to our LVM RAID array /dev/md0, we use the pvmove command (the -i 2 option makes pvmove report its progress every two seconds):

pvmove -i 2 /dev/sda1 /dev/md0

This can take some time, so please be patient.

Afterwards, we remove /dev/sda1 from the volume group ubuntu-vg

vgreduce ubuntu-vg /dev/sda1

… and tell the system not to use /dev/sda1 for LVM anymore:

pvremove /dev/sda1

The output of

pvdisplay

should now be as follows:

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               ubuntu-vg
  PV Size               <9,99 GiB / not usable 0   
  Allocatable           yes 
  PE Size               4,00 MiB
  Total PE              2557
  Free PE               7
  Allocated PE          2550
  PV UUID               hpkczS-Kxr7-LfBj-lhZ1-4eAO-3txi-7xaADk

Next we change the partition type of /dev/sda1 to Linux raid autodetect and add /dev/sda1 to the /dev/md0 array:

fdisk /dev/sda
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs        
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT            
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor      
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary  
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS    
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE 
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep        
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT            
Hex code (type L to list all codes): fd
Changed type of partition 'Linux LVM' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat

… and you should see that the RAID array /dev/md0 is being synchronized:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda1[2] sdb1[1]
      10474496 blocks super 1.2 [2/1] [_U]
      [===>.................]  recovery = 15.2% (1601920/10474496) finish=0.7min speed=200240K/sec
      
unused devices: <none>

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished (the output should then look like this):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda1[2] sdb1[1]
      10474496 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

6 Preparing GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

… and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Reboot the system:

reboot

It should boot without problems.

That’s it – you’ve successfully set up software RAID1 on your running LVM system!

7 Testing

Now let’s simulate a hard drive failure. It doesn’t matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sda has failed.

To simulate the hard drive failure, you can either shut down the system and remove /dev/sda from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
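
Before shutting down, you can confirm that /dev/sda1 has really been marked as failed and removed from the array (this step is optional):

mdadm --detail /dev/md0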

Shut down the system:

shutdown -h now

Then put in a new /dev/sda drive (if you simulate a failure of /dev/sdb, you should now put /dev/sda in /dev/sdb’s place and connect the new HDD as /dev/sda!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      bitmap: 5/8 pages [20KB], 65536KB chunk
unused devices: <none>

Now we copy the partition table of /dev/sdb to /dev/sda:

sfdisk -d /dev/sdb | sfdisk --force /dev/sda

Afterwards we remove any remains of a previous RAID array from /dev/sda

mdadm --zero-superblock /dev/sda1

… and add /dev/sda to the RAID array:

mdadm -a /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (167168/976629760) finish=480.3min speed=33877K/sec
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>

Wait until the synchronization has finished:

cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 3/8 pages [12KB], 65536KB chunk

unused devices: <none>

Then install the bootloader on both HDDs:

grub-install /dev/sda
grub-install /dev/sdb

That’s it. You’ve just replaced a failed hard drive in your RAID1 array.
