Environment Preparation

Add six 20 GB disks to the virtual machine: sdb, sdc, sdd, sde, sdf, and sdg.

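Before building any arrays, confirm that the new disks are visible. A minimal check, assuming the disks show up as /dev/sdb through /dev/sdg:

# List only the new whole disks (no partitions) with their names and sizes
lsblk -d -o NAME,SIZE,TYPE /dev/sd{b..g}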

Summary of RAID Levels

RAID Level   Minimum Disks   Fault Tolerance             Disk Space Overhead
RAID 0       2               None                        0%
RAID 1       2               1 disk                      50%
RAID 5       3               1 disk                      1/N
RAID 6       4               2 disks                     2/N
RAID 10      4               1 disk per mirror pair      50%
RAID 50      6               1 disk per RAID 5 group     1/N per group
RAID 60      8               2 disks per RAID 6 group    2/N per group

Linux provides the mdadm utility for creating and managing software RAID.

Managing RAID 0

Creating the RAID

[root@server ~ 11:28:50]# yum install -y mdadm

# Create a RAID 0 device /dev/md0 from 2 block devices
[root@server ~ 11:29:02]# mdadm --create /dev/md0 --level 0 --raid-devices 2 /dev/sd{b,c}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@server ~ 11:32:17]# # mdadm --create creates a new software RAID array
[root@server ~ 11:32:37]# # /dev/md0 is the device name of the new array (md devices are dedicated to software RAID)
[root@server ~ 11:32:43]# # --level 0 sets the RAID level to 0 (striping: no fault tolerance, fastest)
[root@server ~ 11:32:52]# # --raid-devices 2 sets the number of member disks in the RAID 0 (must be >= 2)
[root@server ~ 11:32:58]# # /dev/sd{b,c} are the member disks (/dev/sdb and /dev/sdc, equivalent to writing /dev/sdb /dev/sdc)
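The same command can also be written with mdadm's short options; an equivalent sketch:

# -C = --create, -l = --level, -n = --raid-devices
mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc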

Viewing the RAID

# View a summary of the RAID arrays
[root@server ~ 11:33:07]# cat /proc/mdstat
Personalities : [raid0] 
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks
      
unused devices: <none>

# View detailed information about the RAID device
[root@server ~ 11:33:38]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan  7 11:32:17 2026
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan  7 11:32:17 2026
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : server.migaomei.cloud:0  (local to host server.migaomei.cloud)
              UUID : ff7e96b1:51fdd56e:30ebb5a2:e3efe1c6
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Attributes to pay attention to:

  • Raid Level : raid0
  • State : clean
  • Chunk Size : 512K (see the sketch after the device list below)
  • Device list
[root@server ~ 11:36:05]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0    0  40G  0 raid0 

[root@server ~ 11:38:01]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 
sdc     8:32   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 
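The 512K chunk size shown above is mdadm's default stripe unit; it can be set explicitly at creation time. A sketch, assuming 256K chunks are wanted instead:

# --chunk sets the stripe unit in KiB (the default is 512)
mdadm --create /dev/md0 --level 0 --chunk 256 --raid-devices 2 /dev/sd{b,c}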

Formatting and Mounting

[root@server ~ 11:38:37]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 11:39:01]# mkdir -p /raid/raid0
[root@server ~ 11:39:19]# mount /dev/md0 /raid/raid0/
[root@server ~ 11:39:37]# df -h /raid/raid0/
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         40G   33M   40G   1% /raid/raid0

# Create some data
[root@server ~ 11:39:48]# cp /etc/ho* /raid/raid0/
[root@server ~ 11:40:19]# ls /raid/raid0/
host.conf  hostname  hosts  hosts.allow  hosts.deny
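Note that neither the array definition nor the mount persists across reboots by itself. A minimal sketch for making both persistent, assuming /etc/mdadm.conf is used on this CentOS 7 system:

# Record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Add an fstab entry and verify it with mount -a
echo '/dev/md0 /raid/raid0 xfs defaults 0 0' >> /etc/fstab
mount -a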

Deleting the RAID

# Unmount
[root@server ~ 11:40:26]# umount /dev/md0

# Stop the RAID array; this removes the array device
[root@server ~ 11:40:49]# mdadm --stop /dev/md0 
mdadm: stopped /dev/md0

# Clear the md superblock from the former member devices
[root@server ~ 11:41:52]# mdadm --zero-superblock /dev/sd{b,c}
[root@server ~ 11:58:12]# # This command does not delete filesystem data on the disks (it only clears the RAID metadata). However, if the disks previously belonged to a RAID array, the original filesystem may already have been overwritten by the RAID, so after clearing they must be reformatted before they can be used on their own.
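If leftover filesystem signatures on the former members cause trouble later (as happens in the RAID 1 example below), they can be wiped as well. A sketch, assuming the disks hold nothing worth keeping:

# Remove all remaining filesystem/RAID signatures from the former member disks
wipefs -a /dev/sd{b,c}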

Additional Notes

  • A RAID 0 stripe cannot have new member disks hot-added with --add. (Note: /dev/md0 was already removed above, so the errors below simply report a missing device; a running RAID 0 array rejects these operations as well.)
[root@server ~ 11:58:18]# mdadm --add /dev/md0 /dev/sdd
mdadm: error opening /dev/md0: No such file or directory
  • A RAID 0 stripe has no redundancy, so a member disk cannot be forced into the failed state with --fail.
[root@server ~ 11:58:38]# mdadm --fail /dev/md0 /dev/sdc
mdadm: error opening /dev/md0: No such file or directory

Managing RAID 1

Creating the RAID

# Create a RAID 1 device /dev/md1 from 2 block devices
[root@server ~ 11:58:46]# mdadm --create /dev/md1 --level 1 --raid-devices 2 /dev/sd{b,c}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Viewing the RAID

[root@server ~ 11:59:29]# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan  7 11:59:29 2026
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan  7 11:59:48 2026
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 19% complete

              Name : server.migaomei.cloud:1  (local to host server.migaomei.cloud)
              UUID : 4d3a9ca8:03e27a2b:daed0a67:0b954e40
            Events : 3

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Attributes to pay attention to

  • Raid Level : raid1
  • State : clean, resyncing (the mirror is still synchronizing).
  • Consistency Policy : resync
  • Resync Status : 19% complete (synchronization progress).
  • Device list
[root@server ~ 11:59:48]# lsblk /dev/md1
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md1    9:1    0  20G  0 raid1 
[root@server ~ 11:59:59]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
sdc     8:32   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 

Formatting and Mounting

Wait for synchronization to finish: once the resync reaches 100%, proceed with formatting and mounting.
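Two ways to wait for the resync, as a sketch:

# Watch the sync progress, refreshing every 2 seconds (Ctrl+C to exit)
watch -n 2 cat /proc/mdstat

# Or simply block until the resync/recovery on /dev/md1 has finished
mdadm --wait /dev/md1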

[root@server ~ 12:00:03]# mkfs.xfs /dev/md1
mkfs.xfs: /dev/md1 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
[root@server ~ 12:00:11]# mkdir /raid/raid1
[root@server ~ 12:00:30]# mount /dev/md1 /raid/raid1
mount: /dev/md1: can't read superblock
[root@server ~ 12:00:35]# df -h /raid/raid1
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.6G   16G  10% /
# The mount failed because /dev/md1 still carries the stale XFS signature left over from the RAID 0 exercise, so /raid/raid1 is still just a directory on the root filesystem here; see the note after "Deleting the RAID" below for the fix (mkfs.xfs -f).

# Create some data
[root@server ~ 12:01:14]# cp /etc/ho* /raid/raid1
[root@server ~ 12:01:30]# ls /raid/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny

Adding a Hot Spare

[root@server ~ 12:01:38]# mdadm --add /dev/md1 /dev/sdd
mdadm: added /dev/sdd
[root@server ~ 12:01:45]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

       2       8       48        -      spare   /dev/sdd
# /dev/sdd is in the spare (standby) state

Simulating a Failure

# Force a member disk into the failed state
[root@server ~ 12:01:50]# mdadm --fail /dev/md1 /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md1

# Check member status
[root@server ~ 12:01:56]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
# /dev/sdd immediately takes over for the failed disk and starts rebuilding

# Data is still accessible
[root@server ~ 12:02:01]# ls /raid/raid1/
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 12:02:10]# cat /raid/raid1/hostname
server.migaomei.cloud
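In practice you would want to be notified of a disk failure instead of discovering it by hand. A minimal sketch using mdadm's monitor mode, assuming local mail delivery to root is configured:

# Run a monitoring daemon that scans all arrays and mails root on events
# such as Fail, DegradedArray or SpareActive
mdadm --monitor --scan --daemonise --mail=root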

Removing the Failed Disk

[root@server ~ 12:02:13]# mdadm --remove /dev/md1 /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md1
[root@server ~ 12:02:18]# mdadm --detail /dev/md1 |tail -5
            Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd

Deleting the RAID

# Unmount
[root@server ~ 12:02:26]# umount /dev/md1

# Stop the RAID array; this removes the array device
[root@server ~ 12:02:31]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1

# Clear the md superblock from the former member devices
[root@server ~ 12:02:49]# mdadm --zero-superblock /dev/sd{b..d}

Note: the umount step may report the following, because the earlier mount of /dev/md1 never actually succeeded:

[root@server ~ 12:02:26]# umount /dev/md1
umount: /dev/md1: not mounted

To actually use the RAID 1 array, force-format it first so the stale XFS signature left over from the RAID 0 exercise is overwritten, then mount it again:

mkfs.xfs /dev/md1 -f

Additional Notes

RAID 1 is designed for data redundancy and reliability, not for increasing storage capacity. Because of how RAID 1 works, adding another disk and "growing" the array does not increase its total capacity.
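For example, promoting an added disk from spare to an active mirror only increases redundancy, not capacity. A sketch, assuming /dev/sdd has already been added to /dev/md1 as a spare:

# Turn the 2-way mirror into a 3-way mirror; the array size stays the same,
# the data is simply kept on three disks instead of two
mdadm --grow /dev/md1 --raid-devices 3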

Managing RAID 5

Creating the RAID

# Create a RAID 5 device /dev/md5 from 4 block devices
[root@server ~ 13:41:28]# mdadm --create /dev/md5 --level 5 --raid-devices 4 /dev/sd{b..e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Viewing the RAID

[root@server ~ 13:41:30]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jan  7 13:41:30 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jan  7 13:41:30 2026
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 4% complete

              Name : server.migaomei.cloud:5  (local to host server.migaomei.cloud)
              UUID : 8f077cfc:4d3dbe7c:8e8719b2:6014109e
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde

Attributes to pay attention to

  • Raid Level : raid5
  • State : clean, degraded, recovering (the initial parity build is still running, so the array is temporarily degraded).
  • Consistency Policy : resync
  • Rebuild Status : 4% complete (build progress).
  • Device list
[root@server ~ 13:41:35]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  60G  0 raid5 

[root@server ~ 13:43:22]# lsblk /dev/sd{b..e}
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sdc     8:32   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sdd     8:48   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 
sde     8:64   0  20G  0 disk  
└─md5   9:5    0  60G  0 raid5 

Formatting and Mounting

Note: wait for the RAID build to finish before formatting.
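Besides /proc/mdstat, the build state can be checked through sysfs; a sketch:

# "resync"/"recover" means the build is still running, "idle" means it is done
cat /sys/block/md5/md/sync_action

# Progress shown as "sectors done / total sectors"
cat /sys/block/md5/md/sync_completed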

[root@server ~ 13:43:29]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=982144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=15714304, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=7680, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 13:43:45]# mkdir /raid/raid5
mkdir: cannot create directory ‘/raid/raid5’: No such file or directory
[root@server ~ 13:43:58]# mount /dev/md5 /raid/raid5
mount: mount point /raid/raid5 does not exist
[root@server ~ 13:45:28]# df -h | grep /raid/raid5
[root@server ~ 13:46:16]# ls /raid/raid5
ls: cannot access /raid/raid5: No such file or directory
# The mkdir and mount failed because the parent directory /raid does not exist here; create the mount point with -p
[root@server ~ 13:46:26]# mkdir -p /raid/raid5
[root@server ~ 13:46:31]# mount /dev/md5 /raid/raid5
[root@server ~ 13:46:38]# df -h /raid/raid5/
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5         60G   33M   60G   1% /raid/raid5

# Create some data
[root@server ~ 13:46:45]# cp /etc/ho* /raid/raid5
[root@server ~ 13:46:49]# ls /raid/raid5/
host.conf  hostname  hosts  hosts.allow  hosts.deny

Adding a Hot Spare

# Add one hot spare disk to the RAID 5 array
[root@server ~ 13:46:52]# mdadm --add /dev/md5 /dev/sdf
mdadm: added /dev/sdf

[root@server ~ 13:46:57]# mdadm --detail /dev/md5 |tail -7
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       5       8       80        -      spare   /dev/sdf

Simulating a Failure

# Simulate a disk failure by manually marking /dev/sdb as failed
[root@server ~ 13:47:01]# mdadm --fail /dev/md5 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5

# Check member status
[root@server ~ 13:47:05]# mdadm --detail /dev/md5 |tail -7
    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb
# /dev/sdf immediately takes over for the failed disk and starts rebuilding

# Data is still accessible
[root@server ~ 13:47:10]# ls /raid/raid5/
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 13:47:20]# cat /raid/raid5/hostname
server.migaomei.cloud

Removing the Failed Disk

[root@server ~ 13:47:23]# mdadm --remove /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5

[root@server ~ 13:47:28]# mdadm --detail /dev/md5 |tail -5
    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

Growing the RAID

A RAID 5 array can only be grown, not shrunk.

Note: the array can only be grown while it is in a healthy state; growing is not allowed while it is degraded or rebuilding.
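A quick way to confirm the array is healthy before attempting to grow it, as a sketch:

# The State line should read "clean" with no "degraded" or "recovering"
mdadm --detail /dev/md5 | grep -E 'State :|Rebuild'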

[root@server ~ 13:47:58]# mdadm --add /dev/md5 /dev/sdb /dev/sdg
mdadm: added /dev/sdb
mdadm: added /dev/sdg
[root@server ~ 13:48:18]# mdadm --detail /dev/md5 |tail -8
    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       6       8       16        -      spare   /dev/sdb
       7       8       96        -      spare   /dev/sdg

# Set the number of active member disks to 5; -G is the short form of --grow
[root@server ~ 13:48:22]# mdadm --grow /dev/md5 --raid-devices 5

# Wait for the reshape to complete
[root@server ~ 13:48:36]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jan  7 13:41:30 2026
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 5
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Wed Jan  7 13:48:41 2026
             State : clean, degraded, reshaping 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Reshape Status : 2% complete
     Delta Devices : 1, (4->5)

              Name : server.migaomei.cloud:5  (local to host server.migaomei.cloud)
              UUID : 8f077cfc:4d3dbe7c:8e8719b2:6014109e
            Events : 70

    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       7       8       96        4      active sync   /dev/sdg

       6       8       16        -      spare   /dev/sdb

# Confirm the array capacity: it grows by 20G once the reshape finishes (lsblk still shows 60G here because the reshape is in progress)
[root@server ~ 13:48:42]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  60G  0 raid5 /raid/raid5

# Grow the filesystem
[root@centos7 ~]# xfs_growfs /raid/raid5
[root@centos7 ~]# df -h /raid/raid5/
Filesystem Size Used Avail Use% Mounted on
/dev/md5 80G 604M 80G 1% /raid/raid5
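xfs_growfs is used here because the array carries an XFS filesystem, and it takes the mount point as its argument. If the array were formatted with ext4 instead, the equivalent step would be a sketch like:

# Grow an ext4 filesystem to fill the enlarged device (works while mounted)
resize2fs /dev/md5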

Deleting the RAID

# Unmount
[root@centos7 ~]# umount /dev/md5

# Stop the RAID array; this removes the array device
[root@centos7 ~]# mdadm --stop /dev/md5
mdadm: stopped /dev/md5
# After stopping but before clearing the superblocks, the original md5 device can be rebuilt with the following command without losing data
[root@centos7 ~]# mdadm --assemble /dev/md5 /dev/sd{b..g}

# Clear the md superblock from the former member devices
[root@centos7 ~]# mdadm --zero-superblock /dev/sd{b..g}
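If the exact member devices are not known, mdadm can also find and assemble arrays from their superblocks; a sketch:

# Scan all block devices for md superblocks and assemble any arrays found
mdadm --assemble --scan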