Kubernetes Installation (Containerd + Calico)

I. Deployment Environment

1. Cluster Layout
Role                     Hostname   IP address        Core components
Control plane (Master)   master     192.168.100.130   kube-apiserver, etcd, etc.
Worker (Node)            node1      192.168.100.128   kubelet, kube-proxy, etc.
Worker (Node)            node2      192.168.100.129   kubelet, kube-proxy, etc.
2. Base Environment Requirements
  • OS: CentOS 7.x (64-bit)

  • Kernel: ≥ 4.19 recommended (6.5+ preferred; this guide targets 6.5.5; see the kernel-upgrade note in step 1 below)

  • Hardware: ≥ 2 CPU cores, 2 GB RAM, 20 GB disk per node

  • Network: all nodes reachable from one another, with no firewall blocking ports 6443, 10250, etc.

II. Common Steps on All Nodes (Required)

1. Kernel Upgrade (required; improves stability)

Method: standard upgrade (automatically picks up the latest stable package from the configured repos)

[root@all ~]# yum update -y kernel
[root@all ~]# reboot
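
Note: on CentOS 7, a plain `yum update kernel` stays on the stock 3.10 kernel series. To actually reach the 4.19+/6.x kernel recommended above, one common route is ELRepo's kernel-ml package; a sketch (verify the repo URL and the GRUB entry order on your own system):

[root@all ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@all ~]# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
[root@all ~]# yum --enablerepo=elrepo-kernel install -y kernel-ml
# make the newly installed kernel (first menu entry) the default, then reboot
[root@all ~]# grub2-set-default 0 && grub2-mkconfig -o /boot/grub2/grub.cfg
[root@all ~]# reboot
# after the reboot, confirm:
[root@all ~]# uname -r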
2. Set Hostnames and Hosts Resolution
# Run on the corresponding node (master / node1 / node2)
[root@master ~]# hostnamectl set-hostname master
[root@node1 ~]# hostnamectl set-hostname node1
[root@node2 ~]# hostnamectl set-hostname node2

# Add the hosts mappings on all nodes
[root@all ~]# cat >> /etc/hosts <<EOF
192.168.100.130 master
192.168.100.128 node1
192.168.100.129 node2
EOF
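
A quick sanity check (optional): every hostname should resolve and answer ping from every node:

[root@all ~]# for h in master node1 node2; do ping -c 1 -W 1 $h >/dev/null && echo "$h OK" || echo "$h FAILED"; done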
3. Install Dependency Packages
[root@all ~]# yum -y install vim lrzsz unzip wget net-tools tree bash-completion conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp git psmisc telnet gcc gcc-c++ make
4. Disable the Firewall and SELinux (required by K8s)
# Disable the firewall (permanently, and stop it now)
[root@all ~]# systemctl disable firewalld --now

# Disable SELinux (permanently + immediately)
[root@all ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
[root@all ~]# setenforce 0
5. Disable the Swap Partition (important! affects performance)

An enabled swap partition can severely degrade system performance, so Kubernetes requires swap to be disabled on every node.

# Permanently (edit fstab) + immediately
[root@all ~]# vim /etc/fstab
# Comment out the swap line with '#'

[root@all ~]# swapoff -a
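
An equivalent non-interactive edit, if you prefer (a sketch; double-check /etc/fstab afterwards):

[root@all ~]# sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
# Verify: swapon prints nothing and free shows 0B of swap
[root@all ~]# swapon --show
[root@all ~]# free -h | grep -i Swap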
6. Stop NetworkManager (optional; avoids network conflicts; not actually used in this guide)
[root@all ~]# systemctl stop NetworkManager
[root@all ~]# systemctl disable NetworkManager
7. Tune Kernel Parameters (optimizes K8s networking and performance)
# Create the config file
[root@all ~]# vim /etc/sysctl.d/kubernetes.conf
# Enable bridge netfilter so that bridged traffic is filtered by iptables/ip6tables
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Enable IPv4 packet forwarding
net.ipv4.ip_forward=1
# Avoid swapping as much as possible, for better K8s performance
vm.swappiness=0
# Don't check whether enough physical memory is available (always allow overcommit)
vm.overcommit_memory=1

# Apply immediately (load br_netfilter first so the net.bridge.* keys exist)
[root@all ~]# modprobe br_netfilter
[root@all ~]# sysctl --system
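
Optional verification that the key values took effect:

[root@all ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness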
8. Raise Resource Limits (improves concurrency)
# Temporary setting
# Max number of file handles a process may open
[root@all ~]# ulimit -SHn 65535

# Permanent setting (append to limits.conf)
# By default, a Linux process may open at most 1024 file handles
[root@all ~]# cat >> /etc/security/limits.conf <<EOF
# soft file-descriptor limit for all users
* soft nofile 655360
# hard file-descriptor limit for all users (must not be lower than the soft limit)
* hard nofile 655360
# soft process-count limit for all users
* soft nproc 655350
# hard process-count limit for all users
* hard nproc 655350
# soft memory-lock limit for all users: unlimited
* soft memlock unlimited
# hard memory-lock limit for all users: unlimited
* hard memlock unlimited
EOF
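
Note: limits.conf applies to new login sessions, and the `*` wildcard does not cover root (add explicit `root` lines if you need them). After re-logging in, verify from a fresh session:

[root@all ~]# ulimit -n    # file-descriptor limit
[root@all ~]# ulimit -u    # process-count limit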
9. Configure Time Sync (avoids clock skew across the cluster)
[root@all ~]# yum -y install chrony
[root@all ~]# systemctl restart chronyd && systemctl enable chronyd
# Verify sync status
[root@all ~]# chronyc sources -v
# Set the system clock from the hardware clock
[root@all ~]# hwclock -s
10. Configure IPVS Load Balancing (recommended; replaces iptables)
# Install the dependencies (already installed in step 3; run again to confirm)
[root@all ~]# yum -y install ipvsadm ipset sysstat conntrack libseccomp

# Add the IPVS kernel modules
# Note: modules-load.d files take one module name per line; comments must sit on their own lines
[root@all ~]# cat >>/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
# round-robin scheduling
ip_vs_rr
# weighted round-robin scheduling
ip_vs_wrr
# source-hash scheduling
ip_vs_sh
# connection tracking (kernels < 4.19 use this)
nf_conntrack_ipv4
# on kernels >= 4.19, replace the line above with: nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF

# Note:
# Linux kernels 4.19+ renamed nf_conntrack_ipv4 to nf_conntrack.
# nf_conntrack is the kernel's connection-tracking module: it tracks the state of network connections (setup, maintenance, and teardown) and the handling of the associated packets, and it underpins firewalling, NAT, load balancing, and similar features.


# Load the modules by restarting the service
[root@all ~]# systemctl restart systemd-modules-load
# Check the kernel modules (verify they loaded)
[root@all ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      19149  0 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          143411  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
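
To load everything right away without waiting for systemd-modules-load, a small sketch that feeds each non-comment line of the file to modprobe (an error here just means that module does not exist on your kernel, e.g. nf_conntrack_ipv4 on 4.19+):

[root@all ~]# grep -vE '^\s*(#|$)' /etc/modules-load.d/ipvs.conf | xargs -n1 modprobe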
11. Install Containerd (the default container runtime since K8s v1.24)

Note: Kubernetes v1.24 removed dockershim and with it built-in support for Docker Engine, so this guide uses containerd as the container runtime.

About containerd

containerd is a high-level container runtime (a container manager). It runs as a daemon, and its core job is managing the full lifecycle of containers on a single host:

  • Container operations: create, start, stop, destroy
  • Image management: pull, store, and distribute container images
  • Resource setup: mount-point management, network configuration, and other basic plumbing
Key traits and ecosystem position
  1. Lightweight and embeddable: simple by design, easy to integrate into larger systems such as Docker Engine and Kubernetes
  2. Ecosystem compatibility
    • Docker Engine runs containers on top of containerd
    • Kubernetes drives containerd directly through the CRI (Container Runtime Interface) to manage containers on each node
  3. Focus on the essentials: concentrates on container lifecycle management and strips out extras, for stability and performance
Step 1: Configure kernel modules
# Kernel modules containerd needs loaded at system startup
[root@all ~]# cd /etc/modules-load.d/
[root@all modules-load.d]# cat >>/etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Load the modules now
[root@all modules-load.d]# modprobe overlay
[root@all modules-load.d]# modprobe br_netfilter

# Re-apply the sysctl settings
[root@all modules-load.d]# sysctl --system
Step 2: Install Containerd
# Install prerequisite packages
[root@all modules-load.d]# yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repo (containerd.io ships in the Docker repo)
[root@all modules-load.d]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

[root@all modules-load.d]# yum makecache fast
Loaded plugins: fastestmirror, langpacks
base                                                                       | 3.6 kB  00:00:00     
docker-ce-stable                                                           | 3.5 kB  00:00:00     
epel                                                                       | 4.3 kB  00:00:00     
extras                                                                     | 2.9 kB  00:00:00     
updates                                                                    | 2.9 kB  00:00:00     
(1/2): docker-ce-stable/7/x86_64/updateinfo                                |   55 B  00:00:01     
(2/2): docker-ce-stable/7/x86_64/primary_db                                | 152 kB  00:00:03     
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Metadata Cache Created

# List the available versions
[root@all modules-load.d]# yum list containerd.io --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, langpacks
 * extras: mirrors.aliyun.com
containerd.io.x86_64            1.6.9-3.1.el7                   docker-ce-stable
containerd.io.x86_64            1.6.8-3.1.el7                   docker-ce-stable
containerd.io.x86_64            1.6.7-3.1.el7                   docker-ce-stable
containerd.io.x86_64            1.6.6-3.1.el7                   docker-ce-stable
containerd.io.x86_64            1.6.4-3.1.el7                   docker-ce-stable
containerd.io.x86_64            1.6.33-3.1.el7                  docker-ce-stable
............

# To pin a specific version instead, e.g. (skipped; we'll just use the latest!):
# [root@all modules-load.d]# yum -y install containerd.io-1.6.16
# Install the latest containerd (default)
[root@all modules-load.d]# yum -y install containerd.io
Step 3: Configure Containerd
# Generate the default config file
[root@all modules-load.d]# mkdir -p /etc/containerd
[root@all modules-load.d]# containerd config default >/etc/containerd/config.toml

# Two key changes to containerd's configuration
# 1. Enable SystemdCgroup (matches the K8s cgroup driver)
[root@all modules-load.d]# sed -i '/SystemdCgroup/s/false/true/g' /etc/containerd/config.toml

# 2. Change the sandbox (pause) image to the Aliyun mirror for faster pulls
[root@all modules-load.d]# vim /etc/containerd/config.toml
# around line 63:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"


# Start containerd
[root@all modules-load.d]# systemctl enable containerd
[root@all modules-load.d]# systemctl start containerd
[root@all modules-load.d]# systemctl status containerd
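
A quick check that both edits above actually landed in the file:

[root@all modules-load.d]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml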
Step 4: Configure Registry Mirrors (important! fixes slow image pulls)
# Mirror acceleration config
[root@all modules-load.d]# vim /etc/containerd/config.toml
# Around line 145, make sure config_path is set:
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# Create the registry config directory
[root@all modules-load.d]# mkdir -p /etc/containerd/certs.d/docker.io
# Edit the mirror config
[root@all modules-load.d]# vim /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://09def58152000fc00ff0c00057bad7e0.mirror.swr.myhuaweicloud.com"]
  capabilities = ["pull","resolve","push"]

[host."https://hub-mirror.c.163.com"]
  capabilities = ["pull","resolve","push"]

[host."https://do.nark.eu.org"]
  capabilities = ["pull","resolve","push"]

[host."https://dc.j8.work"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull","resolve","push"]

[host."https://dockerproxy.com"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.nju.edu.cn"]
  capabilities = ["pull","resolve","push"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull","resolve","push"]

[host."https://hub.uuuadc.top"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.anyhub.us.kg"]
  capabilities = ["pull","resolve","push"]

[host."https://dockerhub.jobcher.com"]
  capabilities = ["pull","resolve","push"]

[host."https://dockerhub.icu"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.ckyl.me"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.awsl9527.cn"]
  capabilities = ["pull","resolve","push"]

[host."https://mirror.baidubce.com"]
  capabilities = ["pull","resolve","push"]

[host."https://docker.1panel.live"]
  capabilities = ["pull","resolve","push"]

# Restart containerd
[root@all modules-load.d]# systemctl restart containerd

# Verify the version
[root@all modules-load.d]# ctr version
Client:
  Version:  1.6.33
  Revision: d2d58213f83a351ca8f528a95fbd145f5654e957
  Go version: go1.21.11

Server:
  Version:  1.6.33
  Revision: d2d58213f83a351ca8f528a95fbd145f5654e957
  UUID: 537aa849-377b-426c-ab9a-bf4481c8f55d
Step 5: Test an Image Pull (verify the mirror acceleration works)
[root@all modules-load.d]# ctr images pull docker.io/library/httpd:latest --hosts-dir=/etc/containerd/certs.d
# List the pulled images
[root@all modules-load.d]# ctr i ls
REF                            TYPE                                    DIGEST                                                                  SIZE     PLATFORMS                                                                                              LABELS 
docker.io/library/httpd:latest application/vnd.oci.image.index.v1+json sha256:ecfd5ca1bfe1fc5e44a5836c5188bde7f397b50c7a5bb603a017543e29948a01 43.1 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/riscv64,linux/s390x -      
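
Note: plain `ctr` is a low-level client and only honors the mirror config when --hosts-dir is passed, as above; kubelet/crictl pulls go through the CRI plugin, which reads config_path automatically. Optional cleanup of the test image:

[root@all modules-load.d]# ctr images rm docker.io/library/httpd:latest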
12. Install Kubeadm, Kubelet, and Kubectl

What each tool does

kubeadm: a tool for bootstrapping a Kubernetes cluster quickly
kubelet: runs on every node in the cluster; starts Pods and containers
kubectl: the command-line tool for managing the Kubernetes cluster

Step 1: Add the Aliyun K8s repo
[root@all ~]# cat <<EOF>/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Build the yum cache quickly
[root@all ~]# yum makecache fast
Step 2: Install a Pinned Version (this guide uses v1.28.0)
# List the available versions (optional)
[root@all ~]# yum list kubectl --showduplicates | sort -r

# Install v1.28.0
[root@all ~]# yum -y install kubectl-1.28.0 kubelet-1.28.0 kubeadm-1.28.0
Step 3: Configure Kubelet (compatible with Containerd)
# Make kubelet's cgroup driver match containerd's
# Using systemd as the cgroup driver keeps the system stable and lets K8s limit Pod/container resources such as CPU and memory
# Note: since K8s v1.22, kubeadm-created clusters default to the systemd cgroup driver
# (comments go on their own lines here; a trailing comment could be read as part of the value)
[root@all ~]# cat >/etc/sysconfig/kubelet<<EOF
# match Containerd's cgroup driver
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
# enable IPVS mode
KUBE_PROXY_MODE="ipvs"
EOF

# Enable at boot (don't start it yet; it starts automatically after cluster init)
[root@all ~]# systemctl daemon-reload
[root@all ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

# Verify the version
[root@all ~]# kubelet --version  # should print v1.28.0
Kubernetes v1.28.0
13. Configure the Crictl Tool (for debugging the container runtime)

crictl is a CRI-compatible command-line client for container runtimes; use it to inspect and debug the runtime and images on a kubelet node.

# Point crictl at containerd
[root@all ~]# cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
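
A quick check that crictl can reach containerd over the socket configured above:

[root@all ~]# crictl version
[root@all ~]# crictl ps -a   # empty for now; no Pods have been created yet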

III. Master Node Only

1. Initialize the K8s Cluster
Step 1: List and pull the images the cluster needs
# List the images the K8s cluster needs
[root@master ~]# kubeadm config images list --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1

# Pull the required images
[root@master ~]# kubeadm config images pull --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

# Check the downloaded images
[root@master ~]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/coredns                   v1.10.1             ead0a4a53df89       16.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0             73deb9a3f7025       103MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0             bb5e0dde9054c       34.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0             4be79c38a4bab       33.4MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0             ea1030da44aa1       24.6MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0             f6f496300a2ae       18.8MB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB
Step 2: Generate the Default Init Config
[root@master ~]# kubeadm config print init-defaults > kubeadm-init.yaml
Step 3: Edit the Config File (the complete file follows)
[root@master ~]# vim kubeadm-init.yaml
[root@master ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.130  # line 12: the Master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock  # line 15: the Containerd socket
  imagePullPolicy: IfNotPresent
  name: master            # line 17: set to the Master's hostname
  taints:                 # line 18: note! replace the default null
  - effect: NoSchedule    # line 19: add this taint
    key: node-role.kubernetes.io/control-plane    # line 20: add this key
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # line 32: switch to the Aliyun image repo
kind: ClusterConfiguration
kubernetesVersion: 1.28.0      # line 34: set the K8s version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  # the Service subnet
  podSubnet: 10.244.0.0/16     # line 38: add the Pod subnet (must match Calico)
scheduler: {}

# Append the following at the end of the file
---                    # switch kube-proxy's proxy mode (the default is iptables)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs             # enable IPVS
---                    # switch the kubelet cgroup driver to systemd
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # matches Containerd
Step 4: Initialize the Cluster
# Run the initialization (log also saved to kubeadm-init.log)
[root@master ~]# kubeadm init --config=kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
.........
# Output like the following means the init succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.130:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:65824321c81ca722c65d6062ee804f04ce670ea818948623a07f2f4b998da406
	
	
# If cluster initialization fails, recover as follows
# Reset procedure:
# 1: remove everything under the kubernetes directory
rm -rf /etc/kubernetes/*

# 2: kill the processes holding the ports
pkill -9 kubelet
pkill -9 kube-controll
pkill -9 kube-schedule

# 3: reset, pointing at the containerd socket
kubeadm reset -f --cri-socket=unix:///var/run/containerd/containerd.sock
Step 5: Configure kubectl (following the hints printed after a successful init)
  • kubectl is the command-line tool for operating a Kubernetes cluster.
  • kubectl looks for a file named config in the $HOME/.kube directory. You can point it at another kubeconfig via the KUBECONFIG environment variable or the --kubeconfig flag.
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config


# Configure kubectl (permanent, for root)
# admin.conf is the client kubeconfig for accessing the cluster: it contains the credentials, the cluster endpoint, and the context information.
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master ~]# source ~/.bash_profile

# Verify component status (should show 3 Healthy)
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok        
etcd-0               Healthy   ok    
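
A couple of extra sanity checks, if you like:

[root@master ~]# kubectl cluster-info
[root@master ~]# kubectl get nodes   # the master stays NotReady until the CNI plugin is deployed in section V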

IV. Worker Nodes Only (run on both node1 and node2)

1. Join the K8s Cluster
# Run the join command printed by the master's init (example below; substitute your actual token and hash)
[root@node1,2 ~]# kubeadm join 192.168.100.130:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:65824321c81ca722c65d6062ee804f04ce670ea818948623a07f2f4b998da406
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
2. Verify Node Status (run on the Master)
# Check the cluster's node status
# The cluster has no internal Pod network yet, so every node shows "NotReady"
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE    VERSION
master   NotReady   control-plane   141m   v1.28.0
node1    NotReady   <none>          97s    v1.28.0
node2    NotReady   <none>          52s    v1.28.0

V. Deploy the CNI Network Plugin (on the master node)

Kubernetes does not implement cluster networking natively, and the topic is fairly complex, so a third-party CNI (Container Network Interface) plugin is normally used. Its core job is cross-host container communication; the Pod IP address range is also referred to as the CIDR.

Kubernetes supports flannel, Calico, Canal, and other network plugins; any one of them will do, and this guide uses Calico. Calico is a pure layer-3 networking solution that provides connectivity between containers across nodes: it treats each node as a router and each Pod as an endpoint behind that virtual router; the nodes learn routes from one another over BGP (Border Gateway Protocol) and install routing rules locally, which lets Pods on different nodes reach each other. It is one of the mainstream network options for Kubernetes today.

Note that calico.yaml differs between releases and must match your Kubernetes version. Calico 3.25 supports Kubernetes v1.23 through v1.28. References and resources:

  • Official download: https://docs.tigera.io/calico
  • GitHub: https://github.com/projectcalico/calico
  • Version compatibility and system requirements: https://archive-os-3-25.netlify.app/calico/3.25/getting-started/kubernetes/requirements
1: Download the Calico Manifest
[root@master ~]# wget --no-check-certificate https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
--2025-11-12 15:12:05--  https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
Resolving docs.tigera.io (docs.tigera.io)... 52.74.6.109, 13.215.239.219, 2406:da18:b3d:e201::258, ...
Connecting to docs.tigera.io (docs.tigera.io)|52.74.6.109|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 238089 (233K) [text/yaml]
Saving to: ‘calico.yaml’

100%[========================================================>] 238,089     94.6KB/s   in 2.5s   

2025-11-12 15:12:09 (94.6 KB/s) - ‘calico.yaml’ saved [238089/238089]
2: Set the Pod CIDR (must match the cluster init config)
# Edit the Calico manifest
[root@master ~]# vim calico.yaml
# Around line 4601: uncomment these two lines and set the value; keep the indentation aligned
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
3: Deploy Calico
# State before deploying
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-66f779496c-9j6cx         0/1     Pending   0          3h50m
coredns-66f779496c-r47sk         0/1     Pending   0          3h50m
etcd-master                      1/1     Running   0          3h50m
kube-apiserver-master            1/1     Running   0          3h50m
kube-controller-manager-master   1/1     Running   0          3h50m
kube-proxy-9hs6v                 1/1     Running   0          3h50m
kube-proxy-b8c2f                 1/1     Running   0          91m
kube-proxy-htjmc                 1/1     Running   0          90m
kube-scheduler-master            1/1     Running   0          3h50m

# Deploy
[root@master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
.........
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

# State right after deploying
[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-658d97c59c-rr7l4   0/1     Pending    0          33s
calico-node-nhksx                          0/1     Init:0/3   0          33s
calico-node-vq97r                          0/1     Init:0/3   0          33s
calico-node-z2d8t                          0/1     Init:0/3   0          33s
coredns-66f779496c-9j6cx                   0/1     Pending    0          3h56m
coredns-66f779496c-r47sk                   0/1     Pending    0          3h56m
etcd-master                                1/1     Running    0          3h56m
kube-apiserver-master                      1/1     Running    0          3h56m
kube-controller-manager-master             1/1     Running    0          3h56m
kube-proxy-9hs6v                           1/1     Running    0          3h56m
kube-proxy-b8c2f                           1/1     Running    0          96m
kube-proxy-htjmc                           1/1     Running    0          96m
kube-scheduler-master                      1/1     Running    0          3h56m

# Use watch to follow the rollout live
[root@master ~]# watch kubectl get pod -n kube-system


# Check the status again once it settles
[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-rr7l4   1/1     Running   0          15m
calico-node-nhksx                          1/1     Running   0          15m
calico-node-vq97r                          1/1     Running   0          15m
calico-node-z2d8t                          1/1     Running   0          15m
coredns-66f779496c-9j6cx                   1/1     Running   0          4h10m
coredns-66f779496c-r47sk                   1/1     Running   0          4h10m
etcd-master                                1/1     Running   0          4h10m
kube-apiserver-master                      1/1     Running   0          4h10m
kube-controller-manager-master             1/1     Running   0          4h10m
kube-proxy-9hs6v                           1/1     Running   0          4h10m
kube-proxy-b8c2f                           1/1     Running   0          111m
kube-proxy-htjmc                           1/1     Running   0          110m
kube-scheduler-master                      1/1     Running   0          4h10m
4: Check Node Status
# Check the node status
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   4h13m   v1.28.0
node1    Ready    <none>          114m    v1.28.0
node2    Ready    <none>          113m    v1.28.0
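
Optional cosmetic step: give the workers a role label so ROLES no longer shows <none> (the "worker" name is just a convention; any node-role label works):

[root@master ~]# kubectl label node node1 node-role.kubernetes.io/worker=
[root@master ~]# kubectl label node node2 node-role.kubernetes.io/worker=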


# Pod IPs have now been assigned
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-658d97c59c-rr7l4   1/1     Running   0          18m     10.244.166.130   node1    <none>           <none>
calico-node-nhksx                          1/1     Running   0          18m     192.168.100.129   node2    <none>           <none>
calico-node-vq97r                          1/1     Running   0          18m     192.168.100.130   master   <none>           <none>
calico-node-z2d8t                          1/1     Running   0          18m     192.168.100.128   node1    <none>           <none>
coredns-66f779496c-9j6cx                   1/1     Running   0          4h14m   10.244.166.129   node1    <none>           <none>
coredns-66f779496c-r47sk                   1/1     Running   0          4h14m   10.244.166.131   node1    <none>           <none>
etcd-master                                1/1     Running   0          4h14m   192.168.100.130   master   <none>           <none>
kube-apiserver-master                      1/1     Running   0          4h14m   192.168.100.130   master   <none>           <none>
kube-controller-manager-master             1/1     Running   0          4h14m   192.168.100.130  master   <none>           <none>
kube-proxy-9hs6v                           1/1     Running   0          4h14m   192.168.100.130   master   <none>           <none>
kube-proxy-b8c2f                           1/1     Running   0          115m   192.168.100.128   node1    <none>           <none>
kube-proxy-htjmc                           1/1     Running   0          114m   192.168.100.129   node2    <none>           <none>
kube-scheduler-master                      1/1     Running   0          4h14m   192.168.100.130   master   <none>           <none>
5: Test the Kubernetes Cluster from the Master
# Check the pods first; nothing there yet
[root@master ~]# kubectl get pods
No resources found in default namespace.

# Create the nginx application (Deployment)
[root@master ~]# kubectl create deployment nginx --image=nginx --replicas=3
deployment.apps/nginx created

# Check the pods again
[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7854ff8877-66qw4   1/1     Running   0          45s
nginx-7854ff8877-c2tbj   1/1     Running   0          45s
nginx-7854ff8877-nsrh5   1/1     Running   0          45s
[root@master ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           91s
[root@master ~]# kubectl get replicasets
NAME               DESIRED   CURRENT   READY   AGE
nginx-7854ff8877   3         3         3       102s

# Expose the service port
[root@master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed

[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4h29m
nginx        NodePort    10.108.58.103   <none>        80:31667/TCP   23s

[root@master ~]# curl http://10.108.58.103
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
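
The same check from the command line, against any node IP and the NodePort (31667 here; yours will differ):

[root@master ~]# curl -s http://192.168.100.128:31667 | grep '<title>'
<title>Welcome to nginx!</title>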

From outside the cluster, browsing to http://192.168.100.128:31667 shows the page:

(Screenshot: the nginx welcome page.)

VI. Post-install Polish

Install kubectl command auto-completion on the master node

# Install the bash-completion tool
[root@master ~]# yum install bash-completion -y

# Load bash_completion
[root@master ~]# source /usr/share/bash-completion/bash_completion

# Load the kubectl completion script
# Make completion permanent in the current user's bash environment
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master ~]# source ~/.bashrc

# Tab completion for kubectl commands now works
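
Optional: a short `k` alias that keeps completion working (this is the pattern recommended in the upstream kubectl docs):

[root@master ~]# echo "alias k=kubectl" >> ~/.bashrc
[root@master ~]# echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
[root@master ~]# source ~/.bashrc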