Deploying Kubernetes with containerd
Kubernetes Overview
What is Kubernetes
- A container cluster management system open-sourced by Google in 2014, commonly abbreviated K8s.
- Used to deploy, scale, and manage containerized applications.
- Provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and more.
- Its goal is to make deploying containerized applications simple and efficient.
- Official website: http://www.kubernetes.io
Kubernetes Features
| Feature | Description |
|---|---|
| Self-healing | Restarts containers when nodes fail, maintains the desired replica count, and kills containers that fail health checks |
| Elastic scaling | Scales workloads via commands, the UI, or CPU usage, keeping services available at peak load while saving cost |
| Automated rollout and rollback | Uses rolling updates and rolls back on failure, so upgrades do not disrupt the service |
| Service discovery and load balancing | Gives containers a unified access point (internal IP and DNS name) and load-balances across them |
| Secret and configuration management | Manages sensitive data and application configuration without exposing secrets in images |
| Storage orchestration | Mounts local storage, public-cloud storage (e.g., AWS), or network storage (e.g., NFS, GlusterFS, Ceph) as cluster resources |
| Batch processing | Provides one-off and scheduled jobs for batch data processing and analytics |
Kubernetes Cluster Architecture and Components
A Kubernetes cluster consists of control-plane (Master) nodes and worker (Node) nodes.
Master components
| Component | Description |
|---|---|
| kube-apiserver | Unified entry point to the cluster; exposes a RESTful API for create/update/delete/query and watch operations on resources |
| kube-controller-manager | Runs the various controllers and handles routine background tasks in the cluster |
| kube-scheduler | Picks a Node for each newly created Pod according to its scheduling algorithm |
| etcd | Distributed key-value store holding cluster state (Pod, Service, and other object data) |
Node components
| Component | Description |
|---|---|
| kubelet | The Master's agent on each Node; manages the local container lifecycle (creating containers, mounting volumes, etc.) |
| kube-proxy | Implements Pod network proxying on the Node; maintains network rules and Layer 4 load balancing |
| docker or rocket | Container engine responsible for actually running containers |
Kubernetes Core Concepts
Pod
- The smallest deployable unit in Kubernetes.
- A group of containers; containers in the same Pod share a network namespace.
- Pods are ephemeral; their lifecycle is managed by controllers.
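As a minimal sketch (the names here are hypothetical, not part of this deployment), a two-container Pod illustrates the shared network namespace — the sidecar can reach nginx on localhost:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
# Same network namespace, so the sidecar reaches nginx via 127.0.0.1:
kubectl exec demo-pod -c sidecar -- wget -qO- http://127.0.0.1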
Controllers
Controllers are higher-level objects used to deploy and manage Pods:
| Controller type | Description |
|---|---|
| ReplicaSet | Ensures the desired number of Pod replicas |
| Deployment | Deploys stateless applications |
| StatefulSet | Deploys stateful applications |
| DaemonSet | Ensures every Node runs a copy of a given Pod |
| Job | Runs one-off tasks |
| CronJob | Runs scheduled tasks |
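For example, the `kubectl create deployment nginx --replicas=3` command used later in this guide corresponds roughly to applying a manifest like this (a sketch, not the exact generated object):
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3               # the ReplicaSet it creates keeps 3 Pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF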
Service
- Keeps Pods reachable: clients are unaffected when Pods are rescheduled or recreated.
- Defines an access policy for a set of Pods, providing service discovery and load balancing.
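A minimal sketch (the Service name is hypothetical): a Service selects Pods by label and balances traffic across whatever Pods currently match:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical name
spec:
  selector:
    app: nginx              # forwards to all Pods labeled app=nginx
  ports:
  - port: 80                # Service port
    targetPort: 80          # container port
EOF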
Other Core Concepts
| Concept | Description |
|---|---|
| Label | Key-value pairs attached to resources, used to associate, query, and filter objects |
| Namespaces | Logical isolation of objects within the cluster |
| Annotations | Notes attached to resources for storing extra information |
Deploying the Kubernetes Cluster
Environment Preparation (all nodes)
Configure name resolution
# All three nodes need these entries and must be able to ping each other
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.70 node1
192.168.100.71 node2
192.168.100.72 master
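A quick sanity check that every hostname resolves and answers (a sketch using the hostnames defined above; run it on each node):
[root@node1 ~]# for h in node1 node2 master; do ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"; done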
Upgrade the kernel
[root@node1 ~]# yum update -y kernel
# Reboot after the upgrade completes
Install base packages
[root@node1 ~]# yum -y install vim lrzsz unzip wget net-tools tree bash-completion conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp git psmisc telnet unzip gcc gcc-c++ make
# Disable the firewall; getenforce confirms SELinux is already off
[root@node1 ~]# systemctl disable firewalld.service --now
[root@node1 ~]# getenforce
Disabled
Disable swap
[root@node1 ~]# vim /etc/fstab
# Comment out the swap line
[root@node1 ~]# swapoff -a
# Immediately disables all swap devices; required for Kubernetes to avoid performance problems
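To confirm swap is fully off after editing /etc/fstab and running swapoff:
[root@node1 ~]# swapon -s              # no entries means no active swap
[root@node1 ~]# free -h | grep -i swap # Swap totals should all read 0B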
Tune kernel parameters
[root@master ~]# cat /etc/sysctl.d/kubernetes.conf
# Let iptables/ip6tables filter bridged traffic (needed for Pod network bridging)
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Enable IPv4 packet forwarding
net.ipv4.ip_forward=1
# Avoid swapping as much as possible, for k8s performance
vm.swappiness=0
# Do not check whether enough physical memory is available (allow overcommit)
vm.overcommit_memory=1
# Apply immediately
[root@node2 ~]# sysctl --system
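A spot check that the values took effect (the bridge keys only resolve once br_netfilter is loaded, which the containerd section below takes care of):
[root@node2 ~]# sysctl net.ipv4.ip_forward vm.swappiness
net.ipv4.ip_forward = 1
vm.swappiness = 0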
Adjust Linux resource limits
[root@node1 ~]# ulimit -SHn 65535
[root@node1 ~]# cat >> /etc/security/limits.conf <<EOF
# Soft limit on open file descriptors for all users
* soft nofile 655360
# Hard limit on open file descriptors for all users (must not be lower than the soft limit)
* hard nofile 655360
# Soft limit on process count for all users
* soft nproc 655350
# Hard limit on process count for all users
* hard nproc 655350
# Soft limit on locked memory: unlimited
* soft memlock unlimited
# Hard limit on locked memory: unlimited
* hard memlock unlimited
EOF
Configure time synchronization
[root@node1 ~]# yum -y install chrony
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources -v
210 Number of sources = 4
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ time.cloudflare.com 3 6 7 1 +8183us[ +502ms] +/- 134ms
^+ ntp8.flashdance.cx 2 6 7 1 +10ms[ +504ms] +/- 177ms
^- stratum2-1.ntp.mow01.ru.> 2 6 7 2 +40ms[ +534ms] +/- 144ms
^* 119.91.217.108 3 6 7 3 -4335us[ +490ms] +/- 94ms
[root@node1 ~]# hwclock -s
Configure IPVS
# Add the IPVS and related kernel modules
[root@node1 ~]# cat >>/etc/modules-load.d/ipvs.conf<<EOF
> ip_vs
> ip_vs_rr
> ip_vs_wrr
> ip_vs_sh
> nf_conntrack_ipv4
> ip_tables
> ip_set
> xt_set
> ipt_set
> ipt_rpfilter
> ipt_REJECT
> ipip
> overlay
> br_netfilter
> EOF
# Restart the systemd module-loading service
[root@node1 ~]# systemctl restart systemd-modules-load
# Verify the modules are loaded
[root@node1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 19149 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 143411 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
Install containerd
[root@master ~]# cd /etc/modules-load.d/
[root@master modules-load.d]# cat >>/etc/modules-load.d/containerd.conf <<EOF
> overlay
> br_netfilter
> EOF
# Load the modules now
[root@master modules-load.d]# modprobe overlay
[root@master modules-load.d]# modprobe br_netfilter
[root@master modules-load.d]# sysctl --system
# Install dependencies
[root@master modules-load.d]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker CE repository (it provides containerd.io)
[root@node1 modules-load.d]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
# Build the yum metadata cache
[root@node1 modules-load.d]# yum makecache fast
Loaded plugins: fastestmirror, langpacks
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): docker-ce-stable/7/x86_64/updateinfo | 55 B 00:00:01
(2/2): docker-ce-stable/7/x86_64/primary_db | 152 kB 00:00:03
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Metadata Cache Created
# List installable versions
[root@node1 modules-load.d]# yum list containerd.io --showduplicates | sort -r
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, langpacks
* extras: mirrors.aliyun.com
containerd.io.x86_64 1.6.9-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.8-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.7-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.6-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.4-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.33-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.32-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.31-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.28-3.2.el7 docker-ce-stable
containerd.io.x86_64 1.6.28-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.27-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.26-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.25-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.24-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.22-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.21-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.20-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.19-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.18-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.16-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.15-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.14-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.13-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.12-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.11-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.6.10-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.5.11-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.5.10-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.9-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.8-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.6-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.4-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.3-3.2.el7 docker-ce-stable
containerd.io.x86_64 1.4.3-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.13-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.12-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.11-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.4.10-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.3.9-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.3.7-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.6-3.3.el7 docker-ce-stable
containerd.io.x86_64 1.2.5-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.4-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.2-3.el7 docker-ce-stable
containerd.io.x86_64 1.2.2-3.3.el7 docker-ce-stable
containerd.io.x86_64 1.2.13-3.2.el7 docker-ce-stable
containerd.io.x86_64 1.2.13-3.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.10-3.2.el7 docker-ce-stable
containerd.io.x86_64 1.2.0-3.el7 docker-ce-stable
containerd.io.x86_64 1.2.0-2.2.rc.2.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.0-2.0.rc.0.1.el7 docker-ce-stable
containerd.io.x86_64 1.2.0-1.2.beta.2.el7 docker-ce-stable
* base: mirrors.aliyun.com
Available Packages
# Install the latest version
[root@master modules-load.d]# yum -y install containerd.io
# Generate containerd's default configuration
[root@master modules-load.d]# mkdir -p /etc/containerd
[root@master modules-load.d]# containerd config default >/etc/containerd/config.toml
# Switch containerd to the systemd cgroup driver
[root@master modules-load.d]# sed -i '/SystemdCgroup/s/false/true/g' /etc/containerd/config.toml
# Change the sandbox (pause) image repository — line 61 of config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# Enable and start containerd
[root@master modules-load.d]# systemctl enable containerd
[root@master modules-load.d]# systemctl start containerd
[root@master modules-load.d]# systemctl status containerd
# Check the containerd version
[root@master modules-load.d]# ctr version
Client:
Version: 1.6.33
Revision: d2d58213f83a351ca8f528a95fbd145f5654e957
Go version: go1.21.11
Server:
Version: 1.6.33
Revision: d2d58213f83a351ca8f528a95fbd145f5654e957
UUID: 537aa849-377b-426c-ab9a-bf4481c8f55d
# Configure registry mirror acceleration
[root@master modules-load.d]# vim /etc/containerd/config.toml
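The edit here presumably points containerd's CRI registry configuration at the certs.d directory created below; in containerd 1.6 that is done with the config_path option (a sketch of the relevant config.toml section, assuming the default layout):
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"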
# Create the mirror configuration directories
[root@master modules-load.d]# mkdir /etc/containerd/certs.d
[root@master modules-load.d]# mkdir /etc/containerd/certs.d/docker.io
[root@master modules-load.d]# vim /etc/containerd/certs.d/docker.io/hosts.toml
[root@master modules-load.d]# cat /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://09def58152000fc00ff0c00057bad7e0.mirror.swr.myhuaweicloud.com"]
capabilities = ["pull","resolve","push"]
[host."https://hub-mirror.c.163.com"]
capabilities = ["pull","resolve","push"]
[host."https://do.nark.eu.org"]
capabilities = ["pull","resolve","push"]
[host."https://dc.j8.work"]
capabilities = ["pull","resolve","push"]
[host."https://docker.m.daocloud.io"]
capabilities = ["pull","resolve","push"]
[host."https://dockerproxy.com"]
capabilities = ["pull","resolve","push"]
[host."https://docker.mirrors.ustc.edu.cn"]
capabilities = ["pull","resolve","push"]
[host."https://docker.nju.edu.cn"]
capabilities = ["pull","resolve","push"]
[host."https://registry.docker-cn.com"]
capabilities = ["pull","resolve","push"]
[host."https://hub.uuuadc.top"]
capabilities = ["pull","resolve","push"]
[host."https://docker.anyhub.us.kg"]
capabilities = ["pull","resolve","push"]
[host."https://dockerhub.jobcher.com"]
capabilities = ["pull","resolve","push"]
[host."https://dockerhub.icu"]
capabilities = ["pull","resolve","push"]
[host."https://docker.ckyl.me"]
capabilities = ["pull","resolve","push"]
[host."https://docker.awsl9527.cn"]
capabilities = ["pull","resolve","push"]
[host."https://mirror.baidubce.com"]
capabilities = ["pull","resolve","push"]
[host."https://docker.1panel.live"]
capabilities = ["pull","resolve","push"]
# Restart containerd
[root@master modules-load.d]# systemctl restart containerd
# Test pulling an image
[root@master modules-load.d]# ctr images pull docker.io/library/httpd:latest --hosts-dir=/etc/containerd/certs.d
# List images
[root@node1 modules-load.d]# ctr i ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/httpd:latest application/vnd.oci.image.index.v1+json sha256:ecfd5ca1bfe1fc5e44a5836c5188bde7f397b50c7a5bb603a017543e29948a01 43.1 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/riscv64,linux/s390x -
Install kubeadm
# Add the Kubernetes yum repo
[root@master modules-load.d]# cat <<EOF>/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Build the yum metadata cache
[root@master ~]# yum makecache fast
Loaded plugins: fastestmirror, langpacks
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
epel | 4.3 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
kubernetes/primary | 137 kB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
kubernetes 1022/1022
Metadata Cache Created
# List installable Kubernetes versions
[root@node1 ~]# yum list kubectl --showduplicates | sort -r
# Install a specific version
[root@master ~]# yum -y install kubectl-1.28.0 kubelet-1.28.0 kubeadm-1.28.0
# Make kubelet's cgroup driver match containerd's (systemd)
[root@master ~]# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
# Enable kubelet at boot
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Check the installed version
[root@master ~]# kubelet --version
Kubernetes v1.28.0
Configure the crictl tool
# Point crictl at the containerd socket
[root@master ~]# cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# tee echoes the file content:
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
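A quick check that crictl can now reach containerd over the socket:
[root@master ~]# crictl version        # should print both the client and the containerd runtime versions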
Master node deployment
# List the images the cluster needs
[root@master ~]# kubeadm config images list --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
# Pull the required images
[root@master ~]# kubeadm config images pull --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
# List the downloaded images
[root@master ~]# crictl images
IMAGE TAG IMAGE ID SIZE
registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df89 16.2MB
registry.aliyuncs.com/google_containers/etcd 3.5.9-0 73deb9a3f7025 103MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.28.0 bb5e0dde9054c 34.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.28.0 4be79c38a4bab 33.4MB
registry.aliyuncs.com/google_containers/kube-proxy v1.28.0 ea1030da44aa1 24.6MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.28.0 f6f496300a2ae 18.8MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB
# The init config below also enables IPVS when the cluster initializes
# Generate the default init configuration
[root@master ~]# kubeadm config print init-defaults > kubeadm-init.yaml
# Edit the init configuration
[root@master ~]# vim kubeadm-init.yaml
[root@master ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.100.72
bindPort: 6443
nodeRegistration:
criSocket: unix:///var/run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
name: master
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# Initialize the cluster
[root@master ~]# kubeadm init --config=kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
......
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.72:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0c6d82ad1f9b01193a1cd3291117e96fd6e160b091776fa93ec3
......
# Follow the instructions from the output above
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
# Persist the KUBECONFIG environment variable
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master ~]# source ~/.bash_profile
# Check control-plane component status
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy ok
Node deployment
# Run the join on node1 and node2
# The command is the last line of the init output above; copy it verbatim
[root@node1 ~]# kubeadm join 192.168.100.72:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:0c6d82ad1f9b01193a1cd3291117e96fd6e160b091776fa93ec303c64896b61f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Check node status from the master
# First output: after only node1 has joined
# Second output: after both nodes have joined
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 140m v1.28.0
node1 NotReady <none> 36s v1.28.0
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 141m v1.28.0
node1 NotReady <none> 97s v1.28.0
node2 NotReady <none> 52s v1.28.0
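Note: the bootstrap token in the join command expires after 24 hours (the ttl in kubeadm-init.yaml). To join a node later, generate a fresh join command on the master:
[root@master ~]# kubeadm token create --print-join-command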
Deploying the CNI Network Component
Concepts
What CNI does
- Short for Container Network Interface; the core networking dependency of a Kubernetes cluster.
- Implements cross-host container communication, which Kubernetes does not provide natively, and allocates the Pod CIDR (IP address range).
- Standardizes the network interface, so a range of third-party plugins can be used to meet cluster networking needs.
Choosing a CNI plugin
Kubernetes supports many network plugins; choose whichever fits your needs:
- flannel: lightweight and simple, good for getting started; relatively basic feature set.
- calico: used here; a pure Layer 3 solution with strong performance and a full feature set; a mainstream choice.
- canal: combines flannel's simplicity with calico's network-policy support, balancing ease of use and functionality.
How Calico works
Core mechanism
- Treats each Node as an independent router, with each Pod as an endpoint behind it.
- Uses BGP (Border Gateway Protocol) to exchange routing information between Nodes.
- Nodes learn each other's routes automatically and forward inter-node Pod traffic directly, with no intermediate forwarding layer.
Key advantages
- Pure Layer 3 architecture: efficient forwarding with no extra overlay overhead.
- Supports Network Policy for fine-grained control over Pod-to-Pod access.
- Stable and scalable in large clusters; a mainstream production choice.
Deployment (Calico)
# Download the Calico manifest
[root@master ~]# wget --no-check-certificate https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
--2025-11-12 15:12:05-- https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
Resolving docs.tigera.io (docs.tigera.io)... 52.74.6.109, 13.215.239.219, 2406:da18:b3d:e201::258, ...
Connecting to docs.tigera.io (docs.tigera.io)|52.74.6.109|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 238089 (233K) [text/yaml]
Saving to: ‘calico.yaml’
100%[========================================================>] 238,089 94.6KB/s in 2.5s
2025-11-12 15:12:09 (94.6 KB/s) - ‘calico.yaml’ saved [238089/238089]
# Edit the Calico manifest
[root@master ~]# vim calico.yaml
# Around line 4601: uncomment the two lines below and keep the YAML indentation aligned
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
# Status before deployment (coredns stays Pending until a CNI is installed)
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-9j6cx 0/1 Pending 0 3h50m
coredns-66f779496c-r47sk 0/1 Pending 0 3h50m
etcd-master 1/1 Running 0 3h50m
kube-apiserver-master 1/1 Running 0 3h50m
kube-controller-manager-master 1/1 Running 0 3h50m
kube-proxy-9hs6v 1/1 Running 0 3h50m
kube-proxy-b8c2f 1/1 Running 0 91m
kube-proxy-htjmc 1/1 Running 0 90m
kube-scheduler-master 1/1 Running 0 3h50m
# Apply the manifest
[root@master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
# Status right after deployment
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-rr7l4 0/1 Pending 0 33s
calico-node-nhksx 0/1 Init:0/3 0 33s
calico-node-vq97r 0/1 Init:0/3 0 33s
calico-node-z2d8t 0/1 Init:0/3 0 33s
coredns-66f779496c-9j6cx 0/1 Pending 0 3h56m
coredns-66f779496c-r47sk 0/1 Pending 0 3h56m
etcd-master 1/1 Running 0 3h56m
kube-apiserver-master 1/1 Running 0 3h56m
kube-controller-manager-master 1/1 Running 0 3h56m
kube-proxy-9hs6v 1/1 Running 0 3h56m
kube-proxy-b8c2f 1/1 Running 0 96m
kube-proxy-htjmc 1/1 Running 0 96m
kube-scheduler-master 1/1 Running 0 3h56m
[root@master ~]# watch kubectl get pod -n kube-system
# watch refreshes the deployment status in real time
# Done once every pod is Running
# Check the status again once everything settles
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-rr7l4 1/1 Running 0 15m
calico-node-nhksx 1/1 Running 0 15m
calico-node-vq97r 1/1 Running 0 15m
calico-node-z2d8t 1/1 Running 0 15m
coredns-66f779496c-9j6cx 1/1 Running 0 4h10m
coredns-66f779496c-r47sk 1/1 Running 0 4h10m
etcd-master 1/1 Running 0 4h10m
kube-apiserver-master 1/1 Running 0 4h10m
kube-controller-manager-master 1/1 Running 0 4h10m
kube-proxy-9hs6v 1/1 Running 0 4h10m
kube-proxy-b8c2f 1/1 Running 0 111m
kube-proxy-htjmc 1/1 Running 0 110m
kube-scheduler-master 1/1 Running 0 4h10m
# Check node status — all Ready now
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 4h13m v1.28.0
node1 Ready <none> 114m v1.28.0
node2 Ready <none> 113m v1.28.0
# Pod IPs have been assigned
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-658d97c59c-rr7l4 1/1 Running 0 18m 10.244.166.130 node1 <none> <none>
calico-node-nhksx 1/1 Running 0 18m 192.168.100.71 node2 <none> <none>
calico-node-vq97r 1/1 Running 0 18m 192.168.100.72 master <none> <none>
calico-node-z2d8t 1/1 Running 0 18m 192.168.100.70 node1 <none> <none>
coredns-66f779496c-9j6cx 1/1 Running 0 4h14m 10.244.166.129 node1 <none> <none>
coredns-66f779496c-r47sk 1/1 Running 0 4h14m 10.244.166.131 node1 <none> <none>
etcd-master 1/1 Running 0 4h14m 192.168.100.72 master <none> <none>
kube-apiserver-master 1/1 Running 0 4h14m 192.168.100.72 master <none> <none>
kube-controller-manager-master 1/1 Running 0 4h14m 192.168.100.72 master <none> <none>
kube-proxy-9hs6v 1/1 Running 0 4h14m 192.168.100.72 master <none> <none>
kube-proxy-b8c2f 1/1 Running 0 115m 192.168.100.70 node1 <none> <none>
kube-proxy-htjmc 1/1 Running 0 114m 192.168.100.71 node2 <none> <none>
kube-scheduler-master 1/1 Running 0 4h14m 192.168.100.72 master <none> <none>
Command completion
[root@master ~]# yum install bash-completion -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Package 1:bash-completion-2.1-8.el7.noarch already installed and latest version
Nothing to do
[root@master ~]# source /usr/share/bash-completion/bash_completion
# Load kubectl completion
# Edit the file and append the line below at the end
[root@master ~]# vim .bashrc
source <(kubectl completion bash)
[root@master ~]# source ~/.bashrc
# kubectl commands now tab-complete
Testing the cluster from the master
# List pods — none yet
[root@master ~]# kubectl get pods
No resources found in default namespace.
# Create an nginx application
[root@master ~]# kubectl create deployment nginx --image=nginx --replicas=3
deployment.apps/nginx created
[root@master ~]# watch kubectl get pods
# List pods again
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7854ff8877-66qw4 1/1 Running 0 45s
nginx-7854ff8877-c2tbj 1/1 Running 0 45s
nginx-7854ff8877-nsrh5 1/1 Running 0 45s
# Check Deployment status to confirm the nginx rollout
[root@master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 91s
# Check the ReplicaSet (controller) created by the Deployment
[root@master ~]# kubectl get replicasets
NAME DESIRED CURRENT READY AGE
nginx-7854ff8877 3 3 3 102s
# Show Pod details (IP, node, etc.)
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-66qw4 1/1 Running 0 5m57s 10.244.104.1 node2 <none> <none>
nginx-7854ff8877-c2tbj 1/1 Running 0 5m57s 10.244.104.2 node2 <none> <none>
nginx-7854ff8877-nsrh5 1/1 Running 0 5m57s 10.244.166.132 node1 <none> <none>
# Expose the nginx Deployment as a Service for stable access to the Pods
# kubectl expose gives a set of nginx Pods a fixed entry point, solving the problem that Pod IPs change when Pods are recreated
# Expose the service port
[root@master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h29m
nginx NodePort 10.108.58.101 <none> 80:32104/TCP 23s
[root@master ~]# curl http://10.108.58.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# What this provides
# Service discovery: a fixed entry point (Service IP and node port) in front of dynamic Pods, with no need to track individual Pod IPs
# Load balancing: the Service automatically spreads requests across the three nginx Pods
# External access: with type NodePort, machines outside the cluster can reach nginx via any node IP on port 32104
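For example, from a machine outside the cluster, any node IP works (the addresses and NodePort 32104 come from the outputs above):
curl http://192.168.100.70:32104   # via node1
curl http://192.168.100.72:32104   # via the master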

Case study: Kubernetes self-healing
# Check Pod status from the master
# node2 is running two of the nginx Pods
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-66qw4 1/1 Running 0 5m57s 10.244.104.1 node2 <none> <none>
nginx-7854ff8877-c2tbj 1/1 Running 0 5m57s 10.244.104.2 node2 <none> <none>
nginx-7854ff8877-nsrh5 1/1 Running 0 5m57s 10.244.166.132 node1 <none> <none>
# On node2, force-remove one of the nginx containers
[root@node2 ~]# crictl ps | grep nginx
e195187f04f7c d261fd19cb632 20 minutes ago Running nginx 0 cbe44236102a4 nginx-7854ff8877-c2tbj
649b17e74c87e d261fd19cb632 20 minutes ago Running nginx 0 e49ba1b11d739 nginx-7854ff8877-66qw4
[root@node2 ~]# crictl rm -f e195187f04f7c
e195187f04f7c
# Check status again from the master and on node2
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-66qw4 1/1 Running 0 33m 10.244.104.1 node2 <none> <none>
nginx-7854ff8877-c2tbj 1/1 Running 1 33m 10.244.104.2 node2 <none> <none>
nginx-7854ff8877-nsrh5 1/1 Running 0 33m 10.244.166.132 node1 <none> <none>
[root@node2 ~]# crictl ps | grep nginx
73c1d3f7d84bb d261fd19cb632 50 seconds ago Running nginx 1 cbe44236102a4 nginx-7854ff8877-c2tbj
649b17e74c87e d261fd19cb632 33 minutes ago Running nginx 0 e49ba1b11d739 nginx-7854ff8877-66qw4
# kubelet recreated the container automatically (RESTARTS is now 1), but with a new container ID
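The same self-healing works one level up: deleting the Pod itself (rather than just its container) makes the ReplicaSet create a brand-new Pod with a different name and IP (a sketch using the Pod name from the output above):
[root@master ~]# kubectl delete pod nginx-7854ff8877-c2tbj
[root@master ~]# kubectl get pods -o wide   # a replacement Pod with a new name and IP appears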