This guide explains how to set up software RAID1 on an already running LVM system (Ubuntu 18.04). The GRUB2 bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).
I do not issue any guarantee that this will work for you!
1 Preliminary note
In this tutorial I’m using an Ubuntu 18.04 system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions (this is the default Ubuntu 18.04 LVM partitioning scheme; you should find something similar on your system unless you chose to partition manually during the installation):
- /dev/sda1: is used for LVM (volume group ubuntu-vg) and contains / (volume root) and swap (volume swap_1).
In the end I want to have the following situation:
- /dev/md0 (made up of /dev/sda1 and /dev/sdb1): LVM (volume group ubuntu-vg), contains / (volume root) and swap (volume swap_1).
This is the current situation:
df -h
Filesystem                   Size  Used Avail Use% Mounted on
udev                         1.9G     0  1.9G   0% /dev
tmpfs                        395M  1.9M  393M   1% /run
/dev/mapper/ubuntu--vg-root  915G  3.7G  865G   1% /
tmpfs                        2.0G   36M  1.9G   2% /dev/shm
tmpfs                        5.0M  4.0K  5.0M   1% /run/lock
tmpfs                        2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs                        395M   16K  395M   1% /run/user/120
/dev/loop0                    87M   87M     0 100% /snap/core/4486
/dev/loop1                   141M  141M     0 100% /snap/gnome-3-26-1604/59
/dev/loop2                   1.7M  1.7M     0 100% /snap/gnome-calculator/154
/dev/loop3                    13M   13M     0 100% /snap/gnome-characters/69
/dev/loop4                    21M   21M     0 100% /snap/gnome-logs/25
/dev/loop5                   3.4M  3.4M     0 100% /snap/gnome-system-monitor/36
tmpfs                        395M   52K  395M   1% /run/user/1000
tmpfs                        395M     0  395M   0% /run/user/0
fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM
pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               ubuntu-vg
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               48
  Allocated PE          238418
  PV UUID               Q6lGFj-gTnk-OWow-kyKl-pXK3-U97K-ZWoQ25
vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <931.51 GiB
  PE Size               4.00 MiB
  Total PE              238466
  Alloc PE / Size       238418 / 931.32 GiB
  Free  PE / Size       48 / 192.00 MiB
  VG UUID               kRrKNx-hj0u-nPl0-aRX3-WXFb-YAwO-7kmTuA
lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/root
  LV Name                root
  VG Name                ubuntu-vg
  LV UUID                RBRdtd-bXwj-qI03-F046-NeAN-7cwN-bZJQns
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2018-05-19 13:54:53 -0400
  LV Status              available
  # open                 1
  LV Size                <930.37 GiB
  Current LE             238174
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/swap_1
  LV Name                swap_1
  VG Name                ubuntu-vg
  LV UUID                l3gptO-5boP-qFMq-EZeX-Q2BE-pCjr-Eskqhi
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2018-05-19 13:54:54 -0400
  LV Status              available
  # open                 2
  LV Size                976.00 MiB
  Current LE             244
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
2 Installing mdadm
The most important tool for setting up RAID is mdadm. Let’s install it like this:
apt-get install initramfs-tools mdadm
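If you are scripting this step, the installation can also be run non-interactively (a sketch; the plain apt-get command above is all you need when working by hand):

DEBIAN_FRONTEND=noninteractive apt-get -y install initramfs-tools mdadm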
Afterwards, we load a few kernel modules (to avoid a reboot):
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
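The modules only have to be loaded by hand this one time; on Ubuntu the mdadm initramfs hooks normally take care of them after a reboot. If you want to list them explicitly anyway, they can be appended to /etc/modules (a sketch, entirely optional):

# add each module to /etc/modules unless it is already listed there
for m in linear multipath raid0 raid1 raid5 raid6 raid10; do
  grep -qxF "$m" /etc/modules || echo "$m" >> /etc/modules
done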
Now run
cat /proc/mdstat
The output should look as follows:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
3 Preparing /dev/sdb
To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.
First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
The output should be as follows:
Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x004b78d0.
/dev/sdb1: Created a new partition 1 of type 'Linux LVM' and of size 931.5 GiB.
/dev/sdb2: Done.

New situation:
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
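Note that this system uses an MBR (DOS) disklabel. If your disks use GPT instead, the partition table can be cloned with sgdisk from the gdisk package (a sketch for that case only; it is not needed for the MBR layout shown here):

sgdisk -R=/dev/sdb /dev/sda   # replicate /dev/sda's partition table onto /dev/sdb
sgdisk -G /dev/sdb            # give the copy new random GUIDs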
The command
fdisk -l
should now show that both HDDs have the same layout:
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM
Next we must change the partition type of our partition on /dev/sdb to Linux raid autodetect:
fdisk /dev/sdb
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): m

Help:

  DOS (MBR)
   a   toggle a bootable flag
   b   edit nested BSD disklabel
   c   toggle the dos compatibility flag

  Generic
   d   delete a partition
   F   list free unpartitioned space
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table
   i   print information about a partition

  Misc
   m   print this menu
   u   change display/entry units
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table


Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT

Hex code (type L to list all codes): fd
Changed type of partition 'Linux LVM' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
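If you prefer to change the partition type without the interactive fdisk session, recent sfdisk versions can do it in a single command (a sketch that should be equivalent to the dialog above):

sfdisk --part-type /dev/sdb 1 fd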
The command
fdisk -l
should now show that /dev/sdb1 is of the type Linux raid autodetect:
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x004b78d0

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953523711 1953521664 931.5G fd Linux raid autodetect
To make sure that there are no remains of previous RAID installations on /dev/sdb, we run the following command:
mdadm --zero-superblock /dev/sdb1
If there is no previous RAID installation, the command will print an error like this one (which is nothing to worry about):
mdadm: Unrecognised md component device - /dev/sdb1
Otherwise the command will not display anything at all.
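If you want to double-check that /dev/sdb1 really carries no leftover filesystem, LVM, or RAID signatures, wipefs can list whatever it finds without changing anything (a sketch; empty output means the partition is clean):

wipefs /dev/sdb1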
4 Creating our RAID array
Now let’s create our RAID array /dev/md0.
/dev/sdb1 will be added to /dev/md0.
/dev/sda1 can’t be added right now (because the system is currently running on it), therefore we use the placeholder missing in the following command:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
The command
cat /proc/mdstat
should now show that you have one degraded RAID array ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>
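For a more verbose view of the new array than /proc/mdstat provides, you can also run (a sketch; the exact values will differ on your system):

mdadm --detail /dev/md0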
Now we come to our LVM RAID array /dev/md0. To prepare it for LVM, we run:
pvcreate /dev/md0
Then we add /dev/md0 to our volume group ubuntu-vg:
vgextend ubuntu-vg /dev/md0
The output of
pvdisplay
should now be similar to this:
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               ubuntu-vg
  PV Size               931.51 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               48
  Allocated PE          238418
  PV UUID               Q6lGFj-gTnk-OWow-kyKl-pXK3-U97K-ZWoQ25

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               ubuntu-vg
  PV Size               <931.39 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238434
  Free PE               238434
  Allocated PE          0
  PV UUID               pFv0lS-F01i-IJxN-iNKW-BdaA-UZcL-BsHALQ
The output of
vgdisplay
should be as follows:
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476900
  Alloc PE / Size       238418 / 931.32 GiB
  Free  PE / Size       238482 / 931.57 GiB
  VG UUID               kRrKNx-hj0u-nPl0-aRX3-WXFb-YAwO-7kmTuA
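Note that /dev/md0 offers slightly fewer physical extents than /dev/sda1 (238434 vs. 238466), because the RAID superblock and write-intent bitmap occupy a little space at the start of the member device. On a volume group that is almost fully allocated this matters: if the logical volumes use more extents than /dev/md0 provides, the pvmove in the next chapter will fail with an "Insufficient free space" error, and you would first have to shrink a logical volume (the swap LV, for example) by a few extents. A quick way to compare the numbers before moving anything (a sketch):

pvs -o pv_name,pv_pe_count,pv_pe_alloc_count
vgs -o vg_name,vg_extent_count,vg_free_count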
Next we must adjust /etc/mdadm/mdadm.conf (which doesn’t contain any information about our new RAID array yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Display the contents of the file:
cat /etc/mdadm/mdadm.conf
In the file you should now see details about our one (degraded) RAID array:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Sat, 19 May 2018 14:38:46 -0400 by mkconf
ARRAY /dev/md/0 metadata=1.2 UUID=ec0ff979:a2e3c88b:690ea1c7:cb4733ce name=vasilij-M50Vc:0
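If you ever re-run the scan (for example after recreating an array), make sure you do not end up with duplicate ARRAY lines. One way to rebuild the file cleanly from the backup made above (a sketch):

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf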
Run
update-grub
Next we adjust our ramdisk to the new situation:
update-initramfs -u
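To verify that the new initramfs really picked up the mdadm configuration and the raid1 module, you can list its contents (a sketch; the kernel version in the path is whatever uname -r reports on your system):

lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'mdadm|raid1'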
5 Moving our data to the RAID array
Now that we’ve modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we’ve made in the previous chapter).
To move the contents of our LVM partition /dev/sda1 to our LVM RAID array /dev/md0, we use the pvmove command:
pvmove -i 2 /dev/sda1 /dev/md0
This can take some time, so please be patient.
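If the move gets interrupted (by a reboot, for example), nothing is lost: pvmove records its state in the volume group metadata and can be resumed or rolled back (a sketch):

pvmove          # resume an interrupted move
pvmove --abort  # or roll it back instead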
Afterwards, we remove /dev/sda1 from the volume group ubuntu-vg…
vgreduce ubuntu-vg /dev/sda1
… and tell the system to not use /dev/sda1 anymore for LVM:
pvremove /dev/sda1
The output of
pvdisplay
should now be as follows:
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               ubuntu-vg
  PV Size               <931.39 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238434
  Free PE               16
  Allocated PE          238418
  PV UUID               pFv0lS-F01i-IJxN-iNKW-BdaA-UZcL-BsHALQ

  "/dev/sda1" is a new physical volume of "931.51 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda1
  VG Name
  PV Size               931.51 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Q6lGFj-gTnk-OWow-kyKl-pXK3-U97K-ZWoQ25
Next we change the partition type of /dev/sda1 to Linux raid autodetect and add /dev/sda1 to the /dev/md0 array:
fdisk /dev/sda
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT

Hex code (type L to list all codes): fd
Changed type of partition 'Linux LVM' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
mdadm --add /dev/md0 /dev/sda1
Now take a look at
cat /proc/mdstat
… and you should see that the RAID array /dev/md0 is being synchronized:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (539776/976629760) finish=120.5min speed=134944K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk
(You can run
watch cat /proc/mdstat
to get an ongoing output of the process. To leave watch, press CTRL+C.)
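If the resync is slower than you would like and the machine is otherwise idle, the kernel's rebuild speed limits can be raised temporarily (a sketch; the values are in KiB/s and the defaults are usually sensible):

sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000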
Wait until the synchronization has finished. The output should then look like this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

unused devices: <none>
6 Preparing GRUB2
Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:
grub-install /dev/sda
grub-install /dev/sdb
… and update our GRUB2 bootloader configuration:
update-grub
update-initramfs -u
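On Ubuntu it can also be worth telling the grub-pc package itself about both drives, so that future GRUB package upgrades reinstall the bootloader to /dev/sda and /dev/sdb automatically (a sketch for this BIOS/MBR setup; select both disks in the dialog):

dpkg-reconfigure grub-pc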
Reboot the system:
reboot
It should boot without problems.
That’s it – you’ve successfully set up software RAID1 on your running LVM system!
7 Testing
Now let’s simulate a hard drive failure. It doesn’t matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sda has failed.
To simulate the hard drive failure, you can either shut down the system and physically remove /dev/sda, or (soft-)remove it like this:
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
Shut down the system:
shutdown -h now
Then put in a new /dev/sda drive (if you simulate a failure of /dev/sdb, you should now put /dev/sda in /dev/sdb’s place and connect the new HDD as /dev/sda!) and boot the system. It should still start without problems.
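If you are not sure which physical drive corresponds to /dev/sda or /dev/sdb, the symlinks under /dev/disk/by-id usually include the model and serial number printed on the drive label (a sketch):

ls -l /dev/disk/by-id/ | grep -v part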
Now run
cat /proc/mdstat
and you should see that we have a degraded array:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>
Now we copy the partition table of /dev/sdb to /dev/sda:
sfdisk -d /dev/sdb | sfdisk --force /dev/sda
Afterwards we remove any remains of a previous RAID array from /dev/sda…
mdadm --zero-superblock /dev/sda1
… and add /dev/sda to the RAID array:
mdadm -a /dev/md0 /dev/sda1
Now take a look at
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (167168/976629760) finish=480.3min speed=33877K/sec
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>
Wait until the synchronization has finished:
root@vasilij-M50Vc:/home/vasilij# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[2] sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 3/8 pages [12KB], 65536KB chunk

unused devices: <none>
Then install the bootloader on both HDDs:
grub-install /dev/sda
grub-install /dev/sdb
That’s it. You’ve just replaced a failed hard drive in your RAID1 array.
After you ran:
vgextend ubuntu-vg /dev/md0
I see that the total PE of /dev/sda1 was 238466 and the total PE of /dev/md0 was 238434, so /dev/md0 had 32 fewer PEs than /dev/sda1.
When I tried this on Ubuntu 19.04 with two 250 GB hard drives, after I ran:
pvmove -i 2 /dev/sda1 /dev/md0
I got the error:
Insufficient free space: 63991 extents needed, but only 63967 available
Unable to allocate mirror extents for ubuntu-vg/pvmove0.
Failed to convert pvmove LV to mirrored.
How come you didn’t get a similar error?
Hello, that is a strange issue. In my case it was successful:
pvmove -i 2 /dev/sda1 /dev/md0
/dev/sda1: Moved: 1,33%
/dev/sda1: Moved: 3,65%
/dev/sda1: Moved: 5,53%
/dev/sda1: Moved: 9,65%
/dev/sda1: Moved: 15,06%
/dev/sda1: Moved: 17,92%
/dev/sda1: Moved: 22,78%
/dev/sda1: Moved: 26,35%
/dev/sda1: Moved: 29,02%
/dev/sda1: Moved: 33,37%
/dev/sda1: Moved: 42,35%
/dev/sda1: Moved: 59,84%
/dev/sda1: Moved: 65,61%
/dev/sda1: Moved: 68,86%
/dev/sda1: Moved: 72,27%
/dev/sda1: Moved: 75,45%
/dev/sda1: Moved: 82,08%
/dev/sda1: Moved: 90,43%
/dev/sda1: Moved: 100,00%
What information is displayed after executing this command: fdisk -l?