Advanced Linux Notes (4): RAID Storage, Logical Volumes, Swap Space, and System Boot Management
Linux RAID Storage
Managing Software RAID (RAID0)
Creating the RAID
# Create a RAID0 device /dev/md0 from two block devices
[root@server ~ 09:37:28]# mdadm --create /dev/md0 --level 0 -n 2 /dev/sd{b,c}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@server ~ 09:37:55]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 52G 0 part
├─centos-root 253:0 0 50G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdc 8:32 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdd 8:48 0 20G 0 disk
sde 8:64 0 20G 0 disk
sdf 8:80 0 20G 0 disk
sdg 8:96 0 20G 0 disk
sr0 11:0 1 4.4G 0 rom
Formatting and Mounting
[root@server ~ 09:37:59]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 09:38:54]# mount /dev/md0 /mnt
[root@server ~ 09:39:09]# ls /mnt/
[root@server ~ 09:39:17]# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         40G   33M   40G   1% /mnt
# Create some test data
[root@server ~ 09:39:24]# cp /etc/* /mnt
[root@server ~ 09:40:43]# ls /mnt/
adjtime      e2fsck.conf  inittab  motd    securetty
aliases      environment  inputrc  mtab    services
aliases.db   ethertypes   issue    my.cnf  sestatus.conf
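To have the mount survive a reboot, the filesystem is normally referenced by UUID in /etc/fstab. A minimal sketch; the fstab entry shown is hypothetical and the UUID must come from blkid:

# Look up the filesystem UUID of the array
blkid /dev/md0
# Hypothetical /etc/fstab entry using that UUID:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt  xfs  defaults  0 0
# Validate the entry by mounting everything listed in fstab
mount -a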
Inspecting the RAID
# View the RAID summary
[root@server ~ 09:40:47]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Aug  4 09:37:55 2025
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Mon Aug  4 09:37:55 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 512K
Consistency Policy : none
              Name : server.yuxb.cloud:0  (local to host server.yuxb.cloud)
              UUID : 3a667db2:3fbbf5c2:7accf21e:11f6bfac
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
[root@server ~ 09:41:39]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  200G  0 disk
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0   52G  0 part
  ├─centos-root 253:0    0   50G  0 lvm   /
  └─centos-swap 253:1    0    2G  0 lvm   [SWAP]
sdb               8:16   0   20G  0 disk
└─md0             9:0    0   40G  0 raid0 /mnt
sdc               8:32   0   20G  0 disk
└─md0             9:0    0   40G  0 raid0 /mnt
sdd               8:48   0   20G  0 disk
sde               8:64   0   20G  0 disk
sdf               8:80   0   20G  0 disk
sdg               8:96   0   20G  0 disk
sr0              11:0    1  4.4G  0 rom
[root@server ~ 09:43:38]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0    0  40G  0 raid0 /mnt
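For the array to be assembled automatically at boot, its definition is usually captured into the mdadm configuration file as well; a sketch:

# Append the array definition (name, level, UUID) to mdadm's config
mdadm --detail --scan >> /etc/mdadm.conf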
Unmounting and Stopping
[root@server ~ 09:44:11]# umount /mnt
[root@server ~ 09:45:36]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@server ~ 09:45:53]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  200G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   52G  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
sdc               8:32   0   20G  0 disk
sdd               8:48   0   20G  0 disk
sde               8:64   0   20G  0 disk
sdf               8:80   0   20G  0 disk
sdg               8:96   0   20G  0 disk
sr0              11:0    1  4.4G  0 rom
Wiping RAID Metadata
[root@server ~ 09:45:59]# wipefs /dev/sdb
offset               type
----------------------------------------------------------------
0x1000               linux_raid_member   [raid]
                     LABEL: server.yuxb.cloud:0
                     UUID:  3a667db2-3fbb-f5c2-7acc-f21e11f6bfac
[root@server ~ 09:47:26]# wipefs -a /dev/sdb
/dev/sdb: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
[root@server ~ 09:48:45]# wipefs -a /dev/sdc
/dev/sdc: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
Managing RAID1
Creating the RAID
# Create a RAID1 device /dev/md1 from two block devices
[root@server ~ 09:59:13]# mdadm --create /dev/md1 --level 1 --raid-devices 2 /dev/sd{b,c}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Inspecting the RAID
[root@server ~ 10:10:26]# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Mon Aug  4 10:10:26 2025
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Mon Aug  4 10:11:12 2025
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : resync
     Resync Status : 47% complete
              Name : server.yuxb.cloud:1  (local to host server.yuxb.cloud)
              UUID : 39a18ab6:5c26bf19:09e72cb4:ed9f2093
            Events : 7

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
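While the mirror is still resyncing, /proc/mdstat gives a lighter-weight progress view than repeated mdadm --detail calls; for example:

# Refresh the kernel's RAID status every 2 seconds; Ctrl+C to stop
watch -n 2 cat /proc/mdstat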
Formatting and Mounting
[root@server ~ 10:11:49]# lsblk /dev/md1
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md1    9:1    0  20G  0 raid1
[root@server ~ 10:12:18]# mkfs.xfs -f /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 10:13:21]# mkdir -p /data/raid1
[root@server ~ 10:13:31]# mount /dev/md1 /data/raid1
[root@server ~ 10:14:01]# df -h /data/raid1
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         20G   33M   20G   1% /data/raid1
# Create some test data
[root@server ~ 10:14:34]# cp /etc/ho* /data/raid1
[root@server ~ 10:14:36]# ls /data/raid1/
host.conf  hostname  hosts  hosts.allow  hosts.deny
Adding a Hot Spare
# After being added, /dev/sdd shows up with the state "spare"
[root@server ~ 10:15:02]# mdadm /dev/md1 --add /dev/sdd
mdadm: added /dev/sdd
[root@server ~ 10:15:18]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        -      spare   /dev/sdd
Simulating a Failure
# Force a member disk to fail
[root@server ~ 10:16:14]# mdadm /dev/md1 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md1
# Check member states
[root@server ~ 10:17:00]# mdadm --detail /dev/md1 |tail -5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      spare rebuilding   /dev/sdd
       1       8       32        -      faulty   /dev/sdc
# /dev/sdd immediately takes over for the failed disk and starts resyncing
# Data remains accessible throughout
[root@server ~ 10:17:44]# ls /data/raid1
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 10:17:53]# cat /data/raid1/hostname
server.yuxb.cloud
Removing the Failed Disk
[root@server ~ 10:18:07]# mdadm /dev/md1 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md1
[root@server ~ 10:18:47]# mdadm --detail /dev/md1 |tail -5
            Events : 40

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       48        1      active sync   /dev/sdd
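mdadm accepts several manage-mode operations in one invocation, so marking a disk faulty and hot-removing it can be combined into a single command; a sketch:

# Mark /dev/sdc faulty and remove it from the array in one step
mdadm /dev/md1 --fail /dev/sdc --remove /dev/sdc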
Deleting the RAID
# Unmount
[root@server ~ 10:19:28]# umount /dev/md1
# Stopping the RAID array tears down the assembled device
[root@server ~ 10:19:34]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
# Clear the md superblock from the former member devices
[root@server ~ 10:19:58]# mdadm --zero-superblock /dev/sd{b..d}
Managing RAID5
Creating the RAID
# Create a RAID5 device /dev/md5 from four block devices
[root@server ~ 10:20:45]# mdadm --create /dev/md5 --level 5 --raid-devices 4 /dev/sd{b..e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@server ~ 10:42:44]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
md5 9:5 0 60G 0 raid5
Inspecting the RAID
[root@server ~ 10:43:46]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Mon Aug  4 10:42:44 2025
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Mon Aug  4 10:44:29 2025
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : server.yuxb.cloud:5  (local to host server.yuxb.cloud)
              UUID : fe3b1dcf:9e5529a4:1a211bdd:efef5a9f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
Formatting and Mounting
[root@server ~ 10:45:48]# mkfs.xfs /dev/md5
mkfs.xfs: /dev/md5 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
[root@server ~ 10:45:59]# mkfs.xfs /dev/md5 -f
meta-data=/dev/md5               isize=512    agcount=16, agsize=982144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=15714304, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=7680, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Create some test data (the array is mounted on /mnt before this step)
[root@server ~ 10:46:38]# cp /etc/ho* /mnt
[root@server ~ 10:46:49]# ls /mnt/
hgfs  host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 10:46:52]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Mon Aug  4 10:42:44 2025
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Mon Aug  4 10:46:15 2025
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : server.yuxb.cloud:5  (local to host server.yuxb.cloud)
              UUID : fe3b1dcf:9e5529a4:1a211bdd:efef5a9f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
Simulating a Failure
[root@server ~ 10:47:17]# mdadm /dev/md5 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
[root@server ~ 10:48:57]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Mon Aug  4 10:42:44 2025
        Raid Level : raid5
        Array Size : 62862336 (59.95 GiB 64.37 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Mon Aug  4 10:48:57 2025
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : server.yuxb.cloud:5  (local to host server.yuxb.cloud)
              UUID : fe3b1dcf:9e5529a4:1a211bdd:efef5a9f
            Events : 20

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb
# Data remains accessible
[root@server ~ 10:50:49]# ls /mnt
hgfs  host.conf  hostname  hosts  hosts.allow  hosts.deny
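The transcript stops at the degraded state. To restore redundancy, the faulty member is removed and a replacement disk is added, after which the array rebuilds parity onto it automatically; a sketch assuming /dev/sdf is an unused disk:

# Remove the failed member, add a replacement, then watch the rebuild
mdadm /dev/md5 --remove /dev/sdb
mdadm /dev/md5 --add /dev/sdf
cat /proc/mdstat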
Linux Logical Volume Management
Basic Logical Volume Management
LVM (Logical Volume Manager) is a flexible disk-management mechanism that builds logical storage units on top of physical devices, making resizing, snapshots, and dynamic management straightforward.
Creating Physical Volumes
# Create a single PV
[root@server ~ 11:18:04]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
# Create multiple PVs at once
[root@server ~ 11:18:12]# pvcreate /dev/sd{c,d}
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.
# List PVs
[root@server ~ 11:18:28]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 52.00g 4.00m
/dev/sdb lvm2 --- 20.00g 20.00g
/dev/sdc lvm2 --- 20.00g 20.00g
/dev/sdd lvm2 --- 20.00g 20.00g
# Show details for a single PV
[root@server ~ 11:18:34]# pvdisplay /dev/sdb
"/dev/sdb" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID ndFPv2-MJAc-EwGA-KFO3-MGvp-vw3o-0Ken2I
Creating Volume Groups
# Create a VG containing a single PV
[root@server ~ 11:19:20]# vgcreate webapp /dev/sdb
Volume group "webapp" successfully created
# Create a VG containing multiple PVs
[root@server ~ 11:19:38]# vgcreate dbapp /dev/sd{c,d}
Volume group "dbapp" successfully created
[root@server ~ 11:20:01]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 52.00g 4.00m
/dev/sdb webapp lvm2 a-- <20.00g <20.00g
/dev/sdc dbapp lvm2 a-- <20.00g <20.00g
/dev/sdd dbapp lvm2 a-- <20.00g <20.00g
# List VGs
[root@server ~ 11:20:05]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- 52.00g 4.00m
dbapp 2 0 0 wz--n- 39.99g 39.99g
webapp 1 0 0 wz--n- <20.00g <20.00g
# Show details for a single VG
[root@server ~ 11:20:09]# vgdisplay dbapp
--- Volume group ---
VG Name dbapp
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 39.99 GiB
PE Size 4.00 MiB
Total PE 10238
Alloc PE / Size 0 / 0
Free PE / Size 10238 / 39.99 GiB
VG UUID A7XjVq-r5kP-RmHU-0Hdv-FjRi-u00M-exTENd
Creating Logical Volumes
# Create a logical volume in VG webapp: name webapp01, size 5 GiB
[root@server ~ 11:20:23]# lvcreate -n webapp01 -L 5G webapp
  Logical volume "webapp01" created.
# Create a logical volume spanning multiple disks in VG dbapp: name data01, size 25 GiB
[root@server ~ 11:21:08]# lvcreate -n data01 -L 25G dbapp
  Logical volume "data01" created.
# List LVs
[root@server ~ 11:21:24]# lvs
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root     centos -wi-ao---- 50.00g
  swap     centos -wi-ao----  2.00g
  data01   dbapp  -wi-a----- 25.00g
  webapp01 webapp -wi-a-----  5.00g
[root@server ~ 11:21:27]# ls -l /dev/dbapp/data01 /dev/mapper/dbapp-data01
lrwxrwxrwx 1 root root 7 Aug  4 11:21 /dev/dbapp/data01 -> ../dm-3
lrwxrwxrwx 1 root root 7 Aug  4 11:21 /dev/mapper/dbapp-data01 -> ../dm-3
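Besides -L with an absolute size, lvcreate can size an LV in extents with -l, including as a percentage of the available space; a sketch (data02 is a hypothetical name):

# Allocate all remaining free space in VG dbapp to one LV
lvcreate -n data02 -l 100%FREE dbapp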
# Show details for a single LV
[root@server ~ 11:21:45]# lvdisplay /dev/dbapp/data01
--- Logical volume ---
LV Path /dev/dbapp/data01
LV Name data01
VG Name dbapp
LV UUID NQe9Q5-4hew-qK17-Yu8R-u2PN-z64C-BgJCRt
LV Write Access read/write
LV Creation host, time server.yuxb.cloud, 2025-08-04 11:21:24 +0800
LV Status available
# open 0
LV Size 25.00 GiB
Current LE 6400
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:3
# Note: PV /dev/sdc is now fully allocated, and about 5 GiB of PV /dev/sdd has been used
[root@server ~ 11:22:09]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 52.00g 4.00m
/dev/sdb webapp lvm2 a-- <20.00g <15.00g
/dev/sdc dbapp lvm2 a-- <20.00g 0
/dev/sdd dbapp lvm2 a-- <20.00g 14.99g
# The logical volume /dev/dbapp/data01 spans two disks
[root@server ~ 11:22:23]# lsblk /dev/sd{b..d}
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 20G 0 disk
└─webapp-webapp01 253:2 0 5G 0 lvm
sdc 8:32 0 20G 0 disk
└─dbapp-data01 253:3 0 25G 0 lvm
sdd 8:48 0 20G 0 disk
└─dbapp-data01 253:3 0 25G 0 lvm
Creating a Filesystem
[root@server ~ 11:23:08]# mkfs.xfs /dev/webapp/webapp01
meta-data=/dev/webapp/webapp01   isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 11:23:21]# mount /dev/webapp/webapp01 /var/www/html
mount: mount point /var/www/html does not exist
[root@server ~ 11:23:43]# mkdir -p /var/www/html
[root@server ~ 11:25:28]# mount /dev/webapp/webapp01 /var/www/html
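As with the RAID examples, the mount can be persisted through /etc/fstab. Device-mapper paths for LVs are stable across reboots, so the LV path itself can be used; a hypothetical entry:

# /etc/fstab line for the webapp01 logical volume:
# /dev/webapp/webapp01  /var/www/html  xfs  defaults  0 0
# Verify the entry parses and mounts cleanly
mount -a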
Cleaning Up
# Unmount the filesystem
[root@server ~ 11:41:26]# umount /dev/webapp/webapp01
# Remove the LVs
[root@server ~ 11:41:50]# lvremove /dev/webapp/webapp01 /dev/dbapp/data01
Do you really want to remove active logical volume webapp/webapp01? [y/n]: y
Logical volume "webapp01" successfully removed
Do you really want to remove active logical volume dbapp/data01? [y/n]: y
Logical volume "data01" successfully removed
# Remove the VGs
[root@server ~ 11:42:22]# vgremove webapp dbapp
Volume group "webapp" successfully removed
Volume group "dbapp" successfully removed
# Remove the PVs
[root@server ~ 11:42:35]# pvremove /dev/sd{b..d}
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
Labels on physical volume "/dev/sdd" successfully wiped.
Extending and Reducing Volume Groups
Environment Setup
# Create the volume group
[root@server ~ 13:32:16]# vgcreate webapp /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "webapp" successfully created
# When creating a VG, any named block device that is not yet a PV is initialized as one first.
# Create a logical volume
[root@server ~ 13:36:27]# lvcreate -n webapp01 -L 10G webapp
WARNING: xfs signature detected on /dev/webapp/webapp01 at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/webapp/webapp01.
  Logical volume "webapp01" created.
Extending the Volume Group
[root@server ~ 13:37:13]# vgextend webapp /dev/sd{c,d}
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.
Volume group "webapp" successfully extended
Reducing the Volume Group
# Check PV usage
[root@server ~ 13:37:33]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g   4.00m
  /dev/sdb   webapp lvm2 a--  <20.00g <10.00g
  /dev/sdc   webapp lvm2 a--  <20.00g <20.00g
  /dev/sdd   webapp lvm2 a--  <20.00g <20.00g
# Trying to remove PV /dev/sdb from VG webapp fails while it still holds data
[root@server ~ 13:37:37]# vgreduce webapp /dev/sdb
  Physical volume "/dev/sdb" still in use
# Fix: migrate the data on /dev/sdb to other PVs in the VG
[root@server ~ 13:37:55]# pvmove /dev/sdb
  /dev/sdb: Moved: 0.23%
  /dev/sdb: Moved: 100.00%
# Or migrate it to one specific PV in the VG
[root@server ~ 13:38:40]# pvmove /dev/sdb /dev/sdd
  No data to move for webapp.
# Check PV usage again
[root@server ~ 13:40:34]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g   4.00m
  /dev/sdb   webapp lvm2 a--  <20.00g <20.00g
  /dev/sdc   webapp lvm2 a--  <20.00g <10.00g
  /dev/sdd   webapp lvm2 a--  <20.00g <20.00g
# Retry the removal; it now succeeds
[root@server ~ 13:40:58]# vgreduce webapp /dev/sdb
  Removed "/dev/sdb" from volume group "webapp"
[root@server ~ 13:41:46]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g   4.00m
  /dev/sdb          lvm2 ---   20.00g  20.00g
  /dev/sdc   webapp lvm2 a--  <20.00g <10.00g
  /dev/sdd   webapp lvm2 a--  <20.00g <20.00g
Extending and Reducing Logical Volumes
Extending a Logical Volume
# Grow the logical volume by 2 GiB
[root@server ~ 13:47:33]# lvextend -L +2G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 10.00 GiB (2560 extents) to 12.00 GiB (3072 extents).
  Logical volume webapp/webapp01 successfully resized.
[root@server ~ 13:48:03]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-a----- 12.00g
Reducing a Logical Volume
[root@server ~ 13:48:15]# lvreduce -L -2G /dev/webapp/webapp01
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce webapp/webapp01? [y/n]: y
  Size of logical volume webapp/webapp01 changed from 12.00 GiB (3072 extents) to 10.00 GiB (2560 extents).
  Logical volume webapp/webapp01 successfully resized.
[root@server ~ 13:49:10]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-a----- 10.00g
Extending and Shrinking Filesystems
Extending an XFS Filesystem
# Prepare: recreate an XFS filesystem and some test data
[root@server ~ 13:57:21]# mkfs.xfs /dev/webapp/webapp01
meta-data=/dev/webapp/webapp01   isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server ~ 13:57:42]# mount /dev/webapp/webapp01 /var/www/html
[root@server ~ 13:58:00]# cp /etc/host* /var/www/html
[root@server ~ 13:58:18]# ls /var/www/html
host.conf  hostname  hosts  hosts.allow  hosts.deny
# Step 1: extend the logical volume
[root@server ~ 13:59:05]# lvextend -L 15G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
  Logical volume webapp/webapp01 successfully resized.
[root@server ~ 13:59:20]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 15.00g
# Step 2: grow the filesystem (xfs_growfs takes the mount point)
[root@server ~ 13:59:28]# xfs_growfs /var/www/html
meta-data=/dev/mapper/webapp-webapp01 isize=512 agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 3932160
[root@server ~ 13:59:52]# df -h /var/www/html
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   15G   33M   15G   1% /var/www/html
[root@server ~ 14:00:02]# ls /var/www/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny
# Or extend the block device and the filesystem in one step with -r
[root@server ~ 14:00:16]# lvextend -rL 20G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 15.00 GiB (3840 extents) to 20.00 GiB (5120 extents).
  Logical volume webapp/webapp01 successfully resized.
meta-data=/dev/mapper/webapp-webapp01 isize=512 agcount=6, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=3932160, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3932160 to 5242880
[root@server ~ 14:00:57]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 20.00g
[root@server ~ 14:01:05]# df -h /var/www/html
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   20G   33M   20G   1% /var/www/html
Extending an EXT4 Filesystem
[root@server ~ 14:01:54]# umount /var/www/html
[root@server ~ 14:02:02]# mkfs.ext4 /dev/webapp/webapp01 <<< 'y'
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@server ~ 14:02:19]# mount /dev/webapp/webapp01 /var/www/html
[root@server ~ 14:02:36]# df -h /var/www/html
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   20G   45M   19G   1% /var/www/html
[root@server ~ 14:02:44]# cp /etc/host* /var/www/html
[root@server ~ 14:03:14]# ls /var/www/
cgi-bin/  html/
[root@server ~ 14:03:14]# ls /var/www/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found
# Step 1: extend the logical volume
[root@server ~ 14:03:20]# lvextend -L 25G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 20.00 GiB (5120 extents) to 25.00 GiB (6400 extents).
  Logical volume webapp/webapp01 successfully resized.
[root@server ~ 14:04:04]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 25.00g
# Step 2: grow the filesystem (resize2fs can grow ext4 online)
[root@server ~ 14:04:12]# resize2fs /dev/webapp/webapp01
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/webapp/webapp01 is mounted on /var/www/html; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
The filesystem on /dev/webapp/webapp01 is now 6553600 blocks long.
[root@server ~ 14:04:40]# df -h /var/www/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   25G   44M   24G   1% /var/www/html
# Or extend the block device and the filesystem in one step with -r
[root@server ~ 14:05:06]# lvextend -rL 30G /dev/webapp/webapp01
  Size of logical volume webapp/webapp01 changed from 25.00 GiB (6400 extents) to 30.00 GiB (7680 extents).
  Logical volume webapp/webapp01 successfully resized.
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/webapp-webapp01 is mounted on /var/www/html; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 4
The filesystem on /dev/mapper/webapp-webapp01 is now 7864320 blocks long.
[root@server ~ 14:05:36]# lvs /dev/webapp/webapp01
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  webapp01 webapp -wi-ao---- 30.00g
[root@server ~ 14:05:42]# df -h /var/www/html
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01   30G   44M   28G   1% /var/www/html
Shrinking an EXT4 Filesystem
# Step 1: unmount the filesystem
[root@server ~ 14:17:06]# umount /var/www/html
# Step 2: check the filesystem
[root@server ~ 14:17:13]# e2fsck -f /dev/webapp/webapp01
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/webapp/webapp01: 16/1966080 files (0.0% non-contiguous), 167447/7864320 blocks
# Step 3: shrink the filesystem
[root@server ~ 14:17:59]# resize2fs /dev/webapp/webapp01 10G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/webapp/webapp01 to 2621440 (4k) blocks.
The filesystem on /dev/webapp/webapp01 is now 2621440 blocks long.
# Step 4: shrink the logical volume
[root@server ~ 14:18:56]# lvreduce -L 10G /dev/webapp/webapp01
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce webapp/webapp01? [y/n]: y
  Size of logical volume webapp/webapp01 changed from 30.00 GiB (7680 extents) to 10.00 GiB (2560 extents).
  Logical volume webapp/webapp01 successfully resized.
# Step 5: remount the filesystem and verify
[root@server ~ 14:19:23]# mount /dev/webapp/webapp01 /var/www/html
[root@server ~ 14:19:39]# df -h /var/www/html/
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/webapp-webapp01  9.8G   37M  9.2G   1% /var/www/html
[root@server ~ 14:19:47]# ls /var/www/html
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found
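When given -r, lvreduce drives the same unmount/check/resize sequence itself through the fsadm helper, which is less error-prone than running the five steps by hand; a sketch:

# Shrink the filesystem and the LV together (prompts before acting)
lvreduce -r -L 10G /dev/webapp/webapp01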
Logical Volume Snapshots
# Create a snapshot. It only needs enough space for blocks that change on the origin while
# it exists; sizing it as large as the origin (10G here) guarantees it can never fill up.
[root@server ~ 14:40:31]# lvcreate -s -n webapp01-snap1 -L 10G /dev/webapp/webapp01
  Logical volume "webapp01-snap1" created.
# Unmount the origin volume and mount the snapshot
[root@server ~ 14:41:43]# umount /dev/webapp/webapp01
[root@server ~ 14:42:20]# mkdir -p /webapp/webapp01
[root@server ~ 14:44:24]# mount /dev/webapp/webapp01-snap1 /webapp/webapp01/
# Inspect the data
[root@server ~ 14:44:27]# ls /webapp/webapp01/
host.conf  hostname  hosts  hosts.allow  hosts.deny  lost+found
# Create new data
[root@server ~ 14:44:37]# echo hello world > /webapp/webapp01/hello.txt
[root@server ~ 14:44:46]# cat /webapp/webapp01/hello.txt
hello world
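A snapshot can also roll the origin back: lvconvert --merge copies the snapshot state into the origin and removes the snapshot once the merge finishes. A sketch (if either volume is in use, the merge is deferred until the next activation):

# Merge the snapshot back into webapp01, reverting the origin to the snapshot state
umount /webapp/webapp01
lvconvert --merge /dev/webapp/webapp01-snap1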
Creating a RAID1 Logical Volume
Environment Setup
[root@server ~ 14:58:10]# umount /dev/webapp/webapp01-snap1
[root@server ~ 14:59:06]# lvremove /dev/webapp/webapp01*
Do you really want to remove active origin logical volume webapp/webapp01 with 1 snapshot(s)? [y/n]: y
  Logical volume "webapp01-snap1" successfully removed
  Logical volume "webapp01" successfully removed
[root@server ~ 14:59:19]# lvs /dev/webapp
Creation
# Create a RAID1-type logical volume
[root@server ~ 15:02:13]# lvcreate --type raid1 -n webapp01 -L 15G webapp
WARNING: ext4 signature detected on /dev/webapp/webapp01_rmeta_0 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/webapp/webapp01_rmeta_0.
  Logical volume "webapp01" created.
[root@server ~ 15:02:20]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g  4.00m
  /dev/sdb          lvm2 ---   20.00g 20.00g
  /dev/sdc   webapp lvm2 a--  <20.00g  4.99g
  /dev/sdd   webapp lvm2 a--  <20.00g  4.99g
# Create the filesystem
[root@server ~ 15:02:28]# mkfs.xfs /dev/webapp/webapp01
meta-data=/dev/webapp/webapp01   isize=512    agcount=4, agsize=983040 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=3932160, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Mount it and test
[root@server ~ 15:02:46]# mount /dev/webapp/webapp01 /var/www/html/
[root@server ~ 15:02:53]# cp /etc/ho* /var/www/html/
[root@server ~ 15:02:53]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g  4.00m
  /dev/sdb          lvm2 ---   20.00g 20.00g
  /dev/sdc   webapp lvm2 a--  <20.00g  4.99g
  /dev/sdd   webapp lvm2 a--  <20.00g  4.99g
# Wipe one mirror leg's disk metadata (experimental step)
[root@server ~ 15:03:15]# wipefs -a /dev/sdd
wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[root@server ~ 15:03:55]# dd if=/dev/zero of=/dev/sdd bs=1M count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.262357 s, 1.0 GB/s
# Remount and verify the data is still consistent
[root@server ~ 15:04:35]# umount /var/www/html
[root@server ~ 15:04:37]# mount /dev/webapp/webapp01 /var/www/html/
[root@server ~ 15:04:37]# ls /var/www/html/
host.conf  hostname  hosts  hosts.allow  hosts.deny
# Simulate a failure
[root@server ~ 15:11:03]# dd if=/dev/zero of=/dev/sdd bs=1M count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.0754606 s, 3.6 GB/s
[root@server ~ 15:11:04]# pvs
  WARNING: Device for PV RVCiAG-jRyD-Efyw-ySNO-Vi2z-n0C0-h6AE65 not found or rejected by a filter.
  Couldn't find device with uuid RVCiAG-jRyD-Efyw-ySNO-Vi2z-n0C0-h6AE65.
  WARNING: Couldn't find all devices for LV webapp/webapp01_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV webapp/webapp01_rmeta_1 while checking used and assumed devices.
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   52.00g  4.00m
  /dev/sdb          lvm2 ---   20.00g 20.00g
  /dev/sdc   webapp lvm2 a--  <20.00g  4.99g
  [unknown]  webapp lvm2 a-m  <20.00g  4.99g
[root@server ~ 15:12:04]# ls /var/www/html
host.conf  hostname  hosts  hosts.allow  hosts.deny
[root@server ~ 15:12:11]# umount /var/www/html
[root@server ~ 15:12:20]# mount /dev/webapp/webapp01 /var/www/html/
# Repair
[root@server ~ 15:12:37]# vgreduce --removemissing webapp --force
  WARNING: Device for PV RVCiAG-jRyD-Efyw-ySNO-Vi2z-n0C0-h6AE65 not found or rejected by a filter.
  Couldn't find device with uuid RVCiAG-jRyD-Efyw-ySNO-Vi2z-n0C0-h6AE65.
  WARNING: Couldn't find all devices for LV webapp/webapp01_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV webapp/webapp01_rmeta_1 while checking used and assumed devices.
  Wrote out consistent volume group webapp.
[root@server ~ 15:13:17]# vgextend webapp /dev/sdd
  Physical volume "/dev/sdd" successfully created.
  Volume group "webapp" successfully extended
[root@server ~ 15:13:33]# lvconvert --repair webapp/webapp01
  WARNING: Not using lvmetad because of repair.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  WARNING: Disabling lvmetad cache for repair command.
  Faulty devices in webapp/webapp01 successfully replaced.
[root@server ~ 15:13:56]# pvs|grep webapp
  WARNING: Not using lvmetad because a repair command was run.
  /dev/sdc   webapp lvm2 a--  <20.00g 4.99g
  /dev/sdd   webapp lvm2 a--  <20.00g 4.99g
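After the repair, the hidden mirror sub-LVs and their resync progress can be inspected with lvs; for example:

# -a lists the hidden _rimage/_rmeta sub-LVs; copy_percent is the Cpy%Sync column
lvs -a -o name,copy_percent,devices webapp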
Linux Swap Space Management
The Memory Hierarchy
The memory hierarchy organizes storage into tiers, from the fastest, smallest, and most expensive down to the slowest, largest, and cheapest, so that a system balances performance against cost.
A brief description of each tier:
| Tier | Storage device | Characteristics |
|---|---|---|
| Registers | CPU-internal registers | Fastest access, tiny capacity; hold operands for in-flight instructions |
| L1/L2/L3 cache | CPU caches | Slower than registers; buffer frequently accessed data |
| Main memory (RAM) | Memory modules | Working space for running programs; contents are lost on power-off |
| Virtual memory | Swap partition or page file | Uses disk to emulate RAM; far slower than physical memory |
| Disk storage | SSD / HDD | Large capacity, low cost; holds files and the operating system |
| External storage | USB drives, optical media, network storage | Removable or remote; used for backup and sharing |
Swap Space
Checking Memory
[root@server ~ 15:16:32]# free
              total        used        free      shared  buff/cache   available
Mem:        4026116      203072     3535980       11972      287064     3585736
Swap:       2097148           0     2097148
Creating Swap Space
# Use parted to create a partition of the desired size with the linux-swap type
[root@server ~ 15:44:00]# parted /dev/sdb mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.
[root@server ~ 15:44:25]# parted /dev/sdb unit MiB mkpart data01 linux-swap 1 2049
Information: You may need to update /etc/fstab.
[root@server ~ 15:44:28]# parted /dev/sdb unit MiB print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 20480MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start    End      Size     File system  Name    Flags
 1      1.00MiB  2049MiB  2048MiB               data01

# Format the swap space
[root@server ~ 15:44:34]# mkswap /dev/sdb1
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=22f0165b-fb67-410b-bdf0-40399e47479b
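When no spare partition is available, a swap file works the same way; a minimal sketch (/swapfile is an illustrative path):

# Create a 2 GiB file, restrict its permissions, then format and activate it as swap
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile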
Activating Swap Space
# Activate the swap space
[root@server ~ 15:44:57]# swapon /dev/sdb1
# List swap devices
[root@server ~ 15:45:28]# swapon -s
Filename    Type        Size     Used  Priority
/dev/dm-1   partition   2097148  0     -2
/dev/sdb1   partition   2097148  0     -3
Deactivating Swap Space
[root@server ~ 15:45:34]# swapoff /dev/sdb1
[root@server ~ 15:46:08]# swapon -s
Filename    Type        Size     Used  Priority
/dev/dm-1   partition   2097148  0     -2
Making Swap Persistent
Edit /etc/fstab and add a record like the following:
UUID=2bf4e179-3648-4412-9495-3b278df4acd6 swap swap pri=4 0 0
Run swapon -a to activate all swap devices listed in /etc/fstab.
Run swapoff -a to deactivate all of them.
[root@server ~ 15:47:25]# vim /etc/fstab
[root@server ~ 15:50:26]# swapon -a
[root@server ~ 15:50:39]# swapon -s
Filename    Type        Size     Used  Priority
/dev/dm-1   partition   2097148  0     -2
/dev/sdb1   partition   2097148  0     4
[root@server ~ 15:50:43]# swapoff -a
[root@server ~ 15:50:49]# swapon -s
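How aggressively the kernel uses swap is governed by the vm.swappiness tunable (0-100, default 60 on CentOS 7); a sketch of checking and lowering it (the sysctl.d file name is illustrative):

cat /proc/sys/vm/swappiness                                      # current value
sysctl vm.swappiness=10                                          # change until reboot
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf     # persist across reboots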
Linux System Boot Fundamentals
Setting the System Run Target
Switching the Current Target
[root@server ~ 16:35:24]# systemctl isolate multi-user.target
[root@server ~ 16:36:26]# systemctl isolate graphical.target
Setting the Default Boot Target
# Check the default boot target
[root@server ~ 16:37:22]# systemctl get-default
graphical.target
# Set the default boot target
[root@server ~ 16:37:59]# systemctl set-default multi-user.target
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/multi-user.target.
# Reboot to verify
Selecting a Different Target at Boot Time
Reboot the virtual machine. When the GRUB menu appears, press any key to stop the countdown, then press e to edit the boot entry. Move the cursor to the line beginning with linux; this is the kernel command line. The long line wraps onto a second screen line, so go to the end of that wrapped line, add a space, and type systemd.unit=multi-user.target. Press Ctrl+x to boot with these changes; the edit applies to this boot only.
Resetting the ROOT Password
Follow the same GRUB editing steps as above, but append rd.break to the kernel line instead, then press Ctrl+x; the boot stops in an initramfs emergency shell. There, remount the target system read-write with mount -o remount,rw /sysroot, enter it with chroot /sysroot, and set the password with echo password | passwd --stdin root.
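On a system with SELinux in enforcing mode, a relabel request is usually needed as well, or the new password may not work after reboot. A sketch of the full emergency-shell sequence ('password' is a placeholder):

mount -o remount,rw /sysroot          # remount the real root read-write
chroot /sysroot                       # switch into the installed system
echo password | passwd --stdin root   # set the new root password
touch /.autorelabel                   # ask SELinux to relabel the filesystem on next boot
exit                                  # leave the chroot
exit                                  # leave the emergency shell and continue booting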