Posted by: kezhong | July 11, 2009

Manage a RAID10 Array

RAID10 is a combination of RAID 0 and RAID 1. It combines the advantages of both levels, providing mirroring together with better performance. It also performs better than RAID 5 while a failed drive remains unreplaced, so it is widely used in production environments.

In my virtual machine, I had installed Fedora 11 as described in my last article. Before powering it on, I created six virtual disks of the same size (5 GB). Then I began testing the creation of a RAID10 array.

Create a RAID10 array
After switching on my virtual machine, I checked the devices.
[root@localhost ~]# ls /dev/sd*
/dev/sda    /dev/sda2  /dev/sdb    /dev/sdb2   /dev/sdc  /dev/sde  /dev/sdg        
/dev/sda1  /dev/sda3  /dev/sdb1  /dev/sdb3   /dev/sdd  /dev/sdf  /dev/sdh 

From the above list, I found the six new drives: sdc, sdd, sde, sdf, sdg, and sdh. Then I partitioned each of them and listed the devices again.
[root@localhost ~]# ls /dev/sd*
/dev/sda    /dev/sda3  /dev/sdb2   /dev/sdc1   /dev/sde    /dev/sdf1  /dev/sdh    
/dev/sda1  /dev/sdb    /dev/sdb3  /dev/sdd     /dev/sde1  /dev/sdg   /dev/sdh1
/dev/sda2  /dev/sdb1  /dev/sdc    /dev/sdd1   /dev/sdf    /dev/sdg1  
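
A minimal sketch of that partitioning step, assuming a single full-size partition of type fd (Linux raid autodetect) on each of the six disks, could look something like this with sfdisk:
[root@localhost ~]# for d in c d e f g h; do echo ',,fd' | sfdisk /dev/sd$d; done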

Create a RAID10 array using these six disks.
[root@localhost ~]# mdadm -C /dev/md3 -l10 -n6 /dev/sd[c,d,e,f,g,h]1
mdadm: array /dev/md3 started.
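
The short options above are shorthand for the long form; a sketch of the equivalent command, spelling out the layout and chunk size this array ended up with (two near copies, 64K chunks), would be:
[root@localhost ~]# mdadm --create /dev/md3 --level=10 --raid-devices=6 --layout=n2 --chunk=64 /dev/sd[cdefgh]1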

Check whether the synchronization had finished.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10]
md3 : active raid10 sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1] sdc1[0]
      15711168 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
      [>....................]  resync =  3.9% (619392/15711168) finish=2.4min speed=103232K/sec
md0 : active raid1 sdb1[0] sda1[1]
      204736 blocks [2/2] [UU]
md1 : active raid1 sdb2[0] sda2[1]
      1048512 blocks [2/2] [UU]
md2 : active raid1 sdb3[0] sda3[1]
      7132416 blocks [2/2] [UU]
unused devices: <none>
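
Instead of polling cat /proc/mdstat repeatedly, mdadm can also block until the resync is done; assuming a reasonably recent mdadm, one way is:
[root@localhost ~]# mdadm --wait /dev/md3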

After the synchronization had finished, I tried to assemble the array manually, but it failed.
[root@localhost ~]# mdadm -A /dev/md3
mdadm: /dev/md3 not identified in config file. 
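
Without a config file entry, mdadm has to be told which devices belong to the array; assembling it by listing the members explicitly should also work, for example:
[root@localhost ~]# mdadm -A /dev/md3 /dev/sd[cdefgh]1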

Then I found the UUID of the array with the following command.
[root@localhost ~]# mdadm --detail /dev/md3
… …
UUID : 791bfd4c:df96f9e3:bfe78010:bc810f04
… … 

Add the following line to /etc/mdadm.conf so that the system can find the array after a reboot. If you skip this step, md3 will not show up in "cat /proc/mdstat" after the reboot, and the system can become unbootable once you have made a filesystem on it and configured it in /etc/fstab. If you run into that unbootable problem, you can find the solution in my next article, "Solving an unbootable problem caused by modifying fstab".
[root@localhost ~]# vi /etc/mdadm.conf
ARRAY /dev/md3 level=raid10 num-devices=6 UUID=791bfd4c:df96f9e3:bfe78010:bc810f04
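
Alternatively, mdadm can print a ready-made ARRAY line for each running array; something like the following appends the md3 line to the config file:
[root@localhost ~]# mdadm --detail --scan | grep md3 >> /etc/mdadm.conf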

Create LVM on the RAID10 array
Create a physical volume on the array.
[root@localhost ~]# pvcreate /dev/md3
  Physical volume "/dev/md3" successfully created

Create a volume group on the pv.
[root@localhost ~]# vgcreate datavg /dev/md3
  Volume group "datavg" successfully created

Create a logical volume within the vg.
[root@localhost ~]# lvcreate -n datafs1 --size 5G datavg
  Logical volume "datafs1" created
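
To confirm that the physical volume, volume group, and logical volume all line up, the usual LVM summary commands can be used, for example:
[root@localhost ~]# pvs
[root@localhost ~]# vgs
[root@localhost ~]# lvs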

 

Make a filesystem within the lv.
[root@localhost ~]# mkfs.ext4 /dev/datavg/datafs1

Mount the filesystem.
[root@localhost ~]# mkdir /datafs1
[root@localhost ~]# mount /dev/datavg/datafs1  /datafs1

Check if it had been mounted.
[root@localhost ~]# df -Th
Filesystem                  Type  Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootfs   ext4  6.7G  3.1G  3.3G  49% /
/dev/md0                    ext3  194M   14M  170M   8% /boot
/dev/mapper/datavg-datafs1  ext4  5.0G  138M  4.6G   3% /datafs1

Edit the /etc/fstab file and add the following line so that the filesystem is mounted automatically after a reboot.
[root@localhost ~]# vi /etc/fstab
/dev/mapper/datavg-datafs1  /datafs1   ext4  defaults  1 3 
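
Before rebooting, it is worth checking that the new fstab entry really works; a quick sanity check, assuming nothing else is using the mount point, is to unmount the filesystem and let mount re-read fstab:
[root@localhost ~]# umount /datafs1
[root@localhost ~]# mount -a
[root@localhost ~]# df -Th | grep datafs1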

Replace a failed disk in the RAID10 array
Switch off the virtual machine, remove one disk (/dev/sde) from the RAID10 array, add a new disk, and then turn the virtual machine back on. I checked the array and found a disk missing.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10]
md3 : active raid10 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sdd1[1]
      15711168 blocks 64K chunks 2 near-copies [6/5] [UU_UUU]
md0 : active raid1 sdb1[0] sda1[1]
      204736 blocks [2/2] [UU]
md1 : active raid1 sdb2[0] sda2[1]
      1048512 blocks [2/2] [UU]
md2 : active raid1 sdb3[0] sda3[1]
      7132416 blocks [2/2] [UU]
unused devices: <none>
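
In this test the disk simply vanished because I removed it from the virtual machine; on real hardware you would normally mark the failing member faulty and remove it from the array before pulling the drive, roughly like this:
[root@localhost ~]# mdadm /dev/md3 --fail /dev/sde1
[root@localhost ~]# mdadm /dev/md3 --remove /dev/sde1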

Partition the new disk as above.

Add the new disk to the existing RAID10 array.
[root@localhost ~]# mdadm /dev/md3 --add /dev/sde1
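
The array then resynchronizes onto the new member; the rebuild can be watched until md3 shows [6/6] [UUUUUU] again, for example:
[root@localhost ~]# watch -n 5 cat /proc/mdstat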

  


