Linux RAID Storage Technology

RAID

  • Redundant Array of Inexpensive Disks.
  • The basic idea of RAID is to combine multiple smaller, relatively cheap disks into one logical unit, obtaining capacity, performance, and reliability comparable to an expensive large-capacity disk at a lower cost.
  • The RAID Advisory Board (RAB) later decided to replace "Inexpensive" with "Independent", so RAID became the Redundant Array of Independent Disks.

RAID Implementation Approaches

  • Software RAID: all functionality is handled by the operating system and CPU; there is no dedicated RAID controller/processing chip or I/O processing chip, so it is the least efficient.
  • Hardware RAID: equipped with dedicated RAID controller/processing chips, I/O processing chips, and an array cache; it consumes no CPU resources but is expensive.
  • Hybrid RAID: has a RAID controller/processing chip but no I/O processing chip, so the CPU and a driver handle part of the work; performance and cost sit between software and hardware RAID. (A quick way to check which kind a host is using is sketched right after this list.)
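
As a quick check of which approach a given host uses, the commands below (a sketch; output varies by distribution and hardware) list active md software arrays and any PCI RAID controller:

# Active Linux software (md) RAID arrays, if any
cat /proc/mdstat
mdadm --detail --scan

# Look for a hardware RAID controller on the PCI bus
lspci | grep -i raid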

RAID Levels

  1. RAID 0 (striping). Key characteristics:

    • No redundancy; built purely for speed.
    • How it works: data is split and written to multiple disks in parallel, so read/write throughput scales roughly linearly with the number of disks.
    • Pros and cons: the fastest level, with 100% capacity utilization; no fault tolerance, so if any one disk fails, all data is lost.
    • Typical uses: temporary file storage, video editing, and other workloads where speed matters and the data can be rebuilt.
    • Minimum disks: 2.
  2. RAID 1 (mirroring). Key characteristics:

    • Maximum redundancy with balanced read/write performance.
    • How it works: data is written synchronously to two disks, forming an exact mirror.
    • Pros and cons: if one disk fails, the data can be fully recovered from the other; reads get faster while writes stay close to single-disk speed; capacity utilization is only 50%.
    • Typical uses: system disks, financial records, and other critical data with the strictest safety requirements.
    • Minimum disks: 2.
  3. RAID 5 (distributed parity). Key characteristics:

    • The classic balance between performance and redundancy.
    • How it works: data is striped across the disks and parity information is distributed among them; the parity is used to reconstruct data after a failure.
    • Pros and cons: survives a single disk failure with good read/write performance; capacity utilization is (n-1)/n, where n is the number of disks; performance drops during a rebuild, and it cannot survive two disks failing at once.
    • Typical uses: databases, file servers, and most general-purpose workloads.
    • Minimum disks: 3.
  4. RAID 6 (dual distributed parity). Key characteristics:

    • Strengthened redundancy to cope with multi-disk failures.
    • How it works: adds a second set of parity on top of RAID 5, providing double fault tolerance.
    • Pros and cons: survives two disks failing at the same time, so it is safer; write performance is slightly below RAID 5; capacity utilization is (n-2)/n.
    • Typical uses: large-capacity storage clusters, data centers, and other scenarios demanding very high reliability.
    • Minimum disks: 4.
  5. RAID 10 (RAID 1+0, mirror + stripe). Key characteristics:

    • High performance plus high redundancy; the first choice for high-end deployments.
    • How it works: disks are first paired into RAID 1 mirrors, and the mirror pairs are then striped together as RAID 0.
    • Pros and cons: combines RAID 0 speed with RAID 1 redundancy and tolerates multiple disk failures (at most one per mirror pair); expensive, with 50% capacity utilization.
    • Typical uses: enterprise databases, high-concurrency systems, and other workloads with extreme performance and safety requirements.
    • Minimum disks: 4 (must be an even number).
  6. Other, less common levels are not covered here. A worked capacity comparison of the levels above is sketched just below.
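
As an illustration of the capacity figures above, the snippet below (illustrative arithmetic only; it assumes n = 4 disks of s = 20G each, matching the lab disks used later) computes the usable capacity per level:

# Usable capacity for n disks of s gigabytes each
n=4; s=20
echo "RAID 0 : $(( n * s ))G"       # 80G, no redundancy
echo "RAID 1 : $(( s ))G"           # 20G, all disks mirror one disk's worth
echo "RAID 5 : $(( (n - 1) * s ))G" # 60G, one disk's worth of parity
echo "RAID 6 : $(( (n - 2) * s ))G" # 40G, two disks' worth of parity
echo "RAID 10: $(( n / 2 * s ))G"   # 40G, half the capacity lost to mirroring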

Managing RAID 0

# Prepare the environment

# Add six 20G disks to the VM: sdb sdc sdd sde sdf sdg

# Create the RAID

[root@server ~ 10:04:56]# mdadm --create /dev/md0 --level 0 --raid-devices 2 /dev/sd{b,c}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@server ~ 10:05:31]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda               8:0    0   200G  0 disk  
├─sda1            8:1    0     1G  0 part  /boot
└─sda2            8:2    0   199G  0 part  
  ├─centos-root 253:0    0    50G  0 lvm   /
  ├─centos-swap 253:1    0   3.9G  0 lvm   [SWAP]
  └─centos-home 253:2    0 145.1G  0 lvm   /home
sdb               8:16   0    20G  0 disk  
└─md0             9:0    0    40G  0 raid0 
sdc               8:32   0    20G  0 disk  
└─md0             9:0    0    40G  0 raid0 
sdd               8:48   0    20G  0 disk  
sde               8:64   0    20G  0 disk  
sdf               8:80   0    20G  0 disk  
sdg               8:96   0    20G  0 disk  
sr0              11:0    1   4.4G  0 rom   
[root@server ~ 10:05:40]# cat /proc/mdstat
Personalities : [raid0] 
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks
      
unused devices: <none>
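
The "512k chunks" shown in /proc/mdstat is mdadm's default stripe chunk size; it can be set explicitly at creation time with --chunk (value in KiB). A sketch using the same disks:

# Create the stripe with a 256 KiB chunk size instead of the default 512 KiB
mdadm --create /dev/md0 --level 0 --raid-devices 2 --chunk 256 /dev/sd{b,c}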

# Inspect the RAID

[root@server ~ 10:05:48]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Nov 14 10:05:31 2025
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Nov 14 10:05:31 2025
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : server.demo.cloud:0  (local to host server.demo.cloud)
              UUID : 526b4d74:3d4cb60a:ee29300e:17b415d7
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
[root@server ~ 10:06:04]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0    0  40G  0 raid0 
[root@server ~ 10:12:39]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 
sdc     8:32   0  20G  0 disk  
└─md0   9:0    0  40G  0 raid0 

# Format and mount

[root@server ~ 10:12:58]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 10:13:23]# mkdir /data/raid0
mkdir: cannot create directory ‘/data/raid0’: No such file or directory
[root@server ~ 10:13:29]# mkdir -p /data/raid0
[root@server ~ 10:13:38]# mount /dev/md0 /data/raid0
[root@server ~ 10:13:47]# df -h /data/raid0
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         40G   33M   40G   1% /data/raid0
[root@server ~ 10:13:52]# cp /etc/ho* /data/raid0
[root@server ~ 10:14:02]# ls /data/raid0/
host.conf  hostname  hosts  hosts.allow  hosts.deny
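
Neither the array name nor the mount survives a reboot by itself. A minimal persistence sketch (/etc/mdadm.conf is the CentOS/RHEL location; Debian-family systems use /etc/mdadm/mdadm.conf, and <uuid-from-blkid> is a placeholder to fill in):

# Record the array so it is reassembled under the same name at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Mount by filesystem UUID at boot
blkid /dev/md0
echo 'UUID=<uuid-from-blkid> /data/raid0 xfs defaults 0 0' >> /etc/fstab
mount -a   # verify the fstab entry before rebooting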

# Remove the RAID

[root@server ~ 10:14:08]# umount /dev/md0
[root@server ~ 10:14:17]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@server ~ 10:14:35]# lsblk /dev/sdb /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk 
sdc    8:32   0  20G  0 disk 
[root@server ~ 10:14:41]# mdadm --zero-superblock /dev/sd{b,c}

# Additional notes

# A RAID 0 stripe cannot have new member disks added

[root@centos7 ~]# mdadm /dev/md0 --add /dev/sdd
mdadm: add new device failed for /dev/sdd as 2: Invalid argument

# A RAID 0 stripe cannot have a member disk forced into the failed state

[root@centos7 ~]# mdadm /dev/md0 --fail /dev/sdc
mdadm: Cannot remove /dev/sdc from /dev/md0, array will be failed.
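
Because a RAID 0 stripe cannot be reshaped in place here, the usual workaround is to back up, re-create the array with more disks, and restore. A sketch (/backup is a hypothetical location with enough free space):

tar -cpf /backup/raid0.tar -C /data/raid0 .   # back up the data
umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd{b,c}
mdadm --create /dev/md0 --level 0 --raid-devices 3 /dev/sd{b..d}
mkfs.xfs /dev/md0
mount /dev/md0 /data/raid0
tar -xpf /backup/raid0.tar -C /data/raid0     # restore the data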

Managing RAID 1

# Create

[root@server ~ 10:14:55]# mdadm --create /dev/md1 --level 1 --raid-devices 2 /dev/sd{b,c}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@server ~ 10:16:19]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
sdc     8:32   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
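
The metadata warning above stops and waits for an interactive "yes". For scripted creation, mdadm's --run option suppresses the confirmation prompt (a sketch):

# Create the mirror without waiting for confirmation
mdadm --create /dev/md1 --level 1 --raid-devices 2 --run /dev/sd{b,c}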

# Inspect

[root@server ~ 10:16:25]# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Nov 14 10:16:19 2025
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Nov 14 10:16:25 2025
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 10% complete

              Name : server.demo.cloud:1  (local to host server.demo.cloud)
              UUID : 2e77ed15:b6cf0b83:6b0bdb83:6d2bd103
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

# Format and mount
# Note: mkfs.xfs detects a stale XFS signature left over from the earlier RAID 0
# test (--zero-superblock clears md metadata, not filesystem signatures), and
# the first mount fails because that leftover 40G superblock does not match the
# new 20G mirror, so the filesystem is re-created with -f.

[root@server ~ 10:16:51]# mkfs.xfs /dev/md1
mkfs.xfs: /dev/md1 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
[root@server ~ 10:16:56]# df -h /dev/md1
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        2.0G     0  2.0G   0% /dev
[root@server ~ 10:17:13]# df -h /dev/sdb
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        2.0G     0  2.0G   0% /dev
[root@server ~ 10:17:26]# mkdir -p /data/raid1
[root@server ~ 10:17:40]#  mount /dev/md1 /data/raid1
mount: /dev/md1: can't read superblock
[root@server ~ 10:17:45]# df -h /data/raid1
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.6G   49G   4% /
[root@server ~ 10:17:53]# mkfs.xfs -f /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 10:18:14]# mount /dev/md1 /data/raid1
[root@server ~ 10:18:20]# df -h /data/raid1
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         20G   33M   20G   1% /data/raid1
[root@server ~ 10:18:24]# ls /data/
raid0  raid1
[root@server ~ 10:19:01]# cp /etc/ho* /data/raid1
[root@server ~ 10:19:09]# ls /data/raid1/
host.conf  hostname  hosts  hosts.allow  hosts.deny

# Add a hot spare

[root@server ~ 10:19:13]# mdadm /dev/md1 --add /dev/sdd
mdadm: added /dev/sdd
[root@server ~ 10:19:26]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

       2       8       48        -      spare   /dev/sdd
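
With a spare attached, mdadm's monitor mode can watch the arrays and raise an alert when a disk fails (the rebuild onto the spare itself is automatic). A sketch, assuming local mail delivery to root works:

# Monitor all arrays as a daemon and mail root on failure events
mdadm --monitor --scan --daemonise --mail=root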
       
# Simulate a failure

[root@server ~ 10:19:32]# mdadm /dev/md1 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md1
[root@server ~ 10:19:42]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
[root@server ~ 10:19:44]# ls /data/raid1/
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 10:20:02]# cat /data/raid1/hostname
server.demo.cloud
[root@server ~ 10:20:08]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
       
# Remove the failed disk
       
[root@server ~ 10:20:31]# mdadm /dev/md1 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md1
[root@server ~ 10:20:45]# mdadm --detail /dev/md1 |tail -5
            Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd
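
Once the failed drive has been physically replaced (or simply cleared, in this lab), the same --add used for the hot spare puts it back into service as a new spare. A sketch:

mdadm /dev/md1 --add /dev/sdc
mdadm --detail /dev/md1 | tail -5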
       
# Remove the RAID
       
[root@server ~ 10:20:51]# umount /dev/md1
[root@server ~ 10:20:59]# lsblk /dev/sdb /dev/sdc
NAME  MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb     8:16   0  20G  0 disk  
└─md1   9:1    0  20G  0 raid1 
sdc     8:32   0  20G  0 disk  
[root@server ~ 10:21:08]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
[root@server ~ 10:21:25]# lsblk /dev/sdb /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk 
sdc    8:32   0  20G  0 disk 
[root@server ~ 10:21:28]# mdadm --zero-superblock /dev/sd{b..d}
[root@server ~ 10:21:36]# lsblk /dev/sdb /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk 
sdc    8:32   0  20G  0 disk 

# Additional notes

# Wipe partition-table and filesystem signatures from a disk
wipefs -a /dev/sdc
parted /dev/sdc print

# Format and mount (generic example; /dev/sdc1 assumes the disk has been partitioned)
mkfs.xfs /dev/sdc1
mkfs.xfs -f /dev/sdc1
mount /dev/sdc1 /webapp/webapp01/

Managing RAID 5

# Create the RAID

[root@server ~ 10:45:59]# mdadm --create /dev/md5 --level 5 --raid-devices 4 /dev/sd{b..e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

# Inspect
# Note: mdadm initializes a new RAID 5 array in degraded mode and syncs the
# last disk into place as though it were a spare, which is why /dev/sde shows
# "spare rebuilding" below until the initial build completes.

[root@server ~ 10:46:53]# mdadm --detail /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Fri Nov 14 10:46:53 2025
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Nov 14 10:47:06 2025
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 16% complete

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 3

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde
[root@server ~ 10:47:10]# mdadm --detail /dev/md5 | tail

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde
[root@server ~ 10:47:18]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  60G  0 raid5 
[root@server ~ 10:47:31]# mount /dev/md5 /data/raid5
mount: mount point /data/raid5 does not exist
[root@server ~ 10:48:03]# mdadm --detail /dev/md5 | tail

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

# Format and mount

[root@server ~ 10:49:02]# mkfs.xfs -f /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=982144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=15714304, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=7680, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 10:49:09]# mkdir /data/raid5
[root@server ~ 10:49:15]# mount /dev/md5 /data/raid5
[root@server ~ 10:49:18]# df -h /data/raid5/
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5         60G   33M   60G   1% /data/raid5
[root@server ~ 10:49:24]# cp /etc/ho* /data/raid5
[root@server ~ 10:49:32]# ls /data/raid5/
host.conf  hostname  hosts  hosts.allow  hosts.deny
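
To see the effect of striping across the data disks, a rough sequential-write check can be done with dd; oflag=direct bypasses the page cache so the figure reflects the array rather than RAM. A sketch writing a temporary 1 GiB test file:

dd if=/dev/zero of=/data/raid5/testfile bs=1M count=1024 oflag=direct
rm -f /data/raid5/testfile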

# Add a hot spare

[root@server ~ 10:49:37]# mdadm /dev/md5 --add /dev/sdf
mdadm: added /dev/sdf
[root@server ~ 10:49:47]# mdadm --detail /dev/md5 | tail
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       5       8       80        -      spare   /dev/sdf
       
       
# Simulate a failure

[root@server ~ 10:49:53]#  mdadm /dev/md5 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5

[root@server ~ 10:50:05]# mdadm --detail /dev/md5 | tail -15
Consistency Policy : resync

    Rebuild Status : 14% complete

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 25

    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb
[root@server ~ 10:50:16]# ls /data/raid5/
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 10:50:25]# cat /data/raid5/hostname
server.demo.cloud

# Remove the failed disk

[root@server ~ 10:50:33]# mdadm /dev/md5 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5

# Grow the RAID

[root@server ~ 10:51:18]# mdadm /dev/md5 --add /dev/sdb /dev/sdg

[root@server ~ 10:51:38]# mdadm --detail /dev/md5 | tail -15

    Rebuild Status : 92% complete

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 47

    Number   Major   Minor   RaidDevice State
       5       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       6       8       16        -      spare   /dev/sdb
       7       8       96        -      spare   /dev/sdg

[root@server ~ 10:51:48]# mdadm -G /dev/md5 --raid-devices 5
[root@server ~ 10:53:04]# mdadm --detail /dev/md5 | tail -15
    Reshape Status : 1% complete
     Delta Devices : 1, (4->5)

              Name : server.demo.cloud:5  (local to host server.demo.cloud)
              UUID : 9cbf1693:13704608:604016ee:d87db0e2
            Events : 73

    Number   Major   Minor   RaidDevice State
       5       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       7       8       96        4      active sync   /dev/sdg

       6       8       16        -      spare   /dev/sdb
[root@server ~ 10:53:06]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  60G  0 raid5 /data/raid5


# Confirm the array capacity once the reshape has completed: it has grown by 20G
[root@centos7 ~]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md5    9:5    0  80G  0 raid5 /data/raid5

# Grow the filesystem to use the new capacity
[root@centos7 ~]# xfs_growfs /data/raid5
[root@centos7 ~]# df -h /data/raid5/
Filesystem      Size  Used Avail Use% Mounted on
/dev/md5         80G  604M   80G   1% /data/raid5

# Tip: refresh the array status every second
watch -n1 'mdadm --detail /dev/md5 | tail -15'

Removing the RAID

[root@server ~ 11:22:19]# umount /dev/md5
[root@server ~ 11:22:37]# mdadm --stop /dev/md5
mdadm: stopped /dev/md5
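
# Before wiping the disks, demonstrate that the stopped array can be reassembled from the member superblocks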
[root@server ~ 11:22:42]# mdadm --assemble /dev/md5 /dev/sd{b..g}
mdadm: /dev/md5 has been started with 5 drives and 1 spare.
[root@server ~ 11:23:36]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda               8:0    0   200G  0 disk  
├─sda1            8:1    0     1G  0 part  /boot
└─sda2            8:2    0   199G  0 part  
  ├─centos-root 253:0    0    50G  0 lvm   /
  ├─centos-swap 253:1    0   3.9G  0 lvm   [SWAP]
  └─centos-home 253:2    0 145.1G  0 lvm   /home
sdb               8:16   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sdc               8:32   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sdd               8:48   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sde               8:64   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sdf               8:80   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sdg               8:96   0    20G  0 disk  
└─md5             9:5    0    80G  0 raid5 
sr0              11:0    1   4.4G  0 rom   
[root@server ~ 11:23:42]# mdadm --stop /dev/md5
mdadm: stopped /dev/md5
[root@server ~ 11:23:50]# mdadm --zero-superblock /dev/sd{b..g}

[root@server ~ 11:24:19]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   200G  0 disk 
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0   199G  0 part 
  ├─centos-root 253:0    0    50G  0 lvm  /
  ├─centos-swap 253:1    0   3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 145.1G  0 lvm  /home
sdb               8:16   0    20G  0 disk 
sdc               8:32   0    20G  0 disk 
sdd               8:48   0    20G  0 disk 
sde               8:64   0    20G  0 disk 
sdf               8:80   0    20G  0 disk 
sdg               8:96   0    20G  0 disk 
sr0              11:0    1   4.4G  0 rom  
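
Since this array was recorded only in the member superblocks, it had to be assembled by naming the devices explicitly. With the array listed in /etc/mdadm.conf (as sketched in the RAID 0 section), a scan can reassemble everything without naming devices:

# Reassemble all arrays found in the config file or by scanning superblocks
mdadm --assemble --scan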
