Table of Contents

I. Preparation

II. Environment configuration

1. Edit the hosts file (run on all nodes)

2. Set up passwordless SSH login (master node)

3. Disable the swap partition (run on all nodes)

4. Disable the firewall (run on all nodes)

5. Adjust kernel parameters (run on all nodes)

6. Configure cluster time synchronization (run on all nodes)

7. Configure the Kubernetes yum repository (run on all nodes)

III. Install Docker

1. Remove any existing Docker packages (run on all nodes)

2. Configure the yum repository (run on all nodes)

3. Install Docker (run on all nodes)

IV. Install Kubernetes

1. Install the three core components: kubeadm, kubelet, kubectl (run on all nodes)

2. Add the control-plane endpoint to hosts (add on all nodes)

3. Initialize the Kubernetes cluster (master node)

4. Join the cluster (run on all worker nodes)

V. Install the Calico network plugin (master node)

VI. Install the Kuboard dashboard

1. Download the Kuboard manifest

2. Install Kuboard

VII. Install Jenkins

1. Grant file permissions

2. Create the Jenkins mount directory

3. Write the docker-compose.yml file

4. Start Compose

5. Configure the update-center mirror

6. Retrieve the Jenkins login password

7. Upgrade Jenkins

8. Log in to Jenkins

9. Install plugins

10. Upload the JDK and Maven to the mount directory and extract them

11. Enter the Jenkins container

12. Configure the JDK and Maven in Jenkins global tool configuration

13. Configure Publish Over SSH under Manage Jenkins -> Configure System

14. Configure passwordless SSH from the Jenkins container to the servers

VIII. Install Harbor

1. Download Harbor

2. Extract the archive

3. Edit harbor.yml

4. Run the installer

IX. Configure the pipeline

1. Create a new pipeline

2. Pull the code from the Git repository (I use an Alibaba Cloud Codeup repository)

3. Build the project with Maven (use the full path if no environment variable is configured)

4. Build the image with Docker

5. Push the custom image to Harbor

6. Push the YAML files to the server (split into two files because I plan to modify the YAML with yq)

7. Run kubectl commands on the Kubernetes master remotely

8. Final pipeline version

X. Miscellaneous

1. My Dockerfile

2. k8s-deployment.yml

3. k8s-service.yml


I. Preparation

Create three CentOS servers in VMware.

Host     IP               Spec
master   192.168.199.128  4 CPU cores, 6 GB RAM, 40 GB disk
node1    192.168.199.130  2 CPU cores, 2 GB RAM, 20 GB disk
node2    192.168.199.131  2 CPU cores, 2 GB RAM, 20 GB disk

II. Environment configuration


1. Edit the hosts file (run on all nodes)

vim /etc/hosts
192.168.199.128 master
192.168.199.130 node1
192.168.199.131 node2

2. Set up passwordless SSH login (master node)

Generate a key pair first, then copy it to all nodes.

ssh-keygen
ssh-copy-id master
ssh-copy-id node1
ssh-copy-id node2
#test the connection
ssh node1

3. Disable the swap partition (run on all nodes)

kubelet requires that swap be disabled, so kubeadm checks whether swap is off during initialization and fails if it is not. If you would rather not disable it, you can pass --ignore-preflight-errors=Swap on the command line. To disable swap, run the following on all nodes:

#disable swap temporarily
swapoff -a
#lower swappiness (note: this alone does not keep swap off permanently)
echo vm.swappiness = 0 >> /etc/sysctl.conf
sysctl -p
#In my VM swap showed as enabled again after a reboot. That is expected with only the commands above: vm.swappiness=0 merely discourages swapping, and the swap entry in /etc/fstab re-activates it at boot. It did not affect the rest of the process.
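If swap keeps coming back after a reboot, the usual permanent fix is to comment out the swap entry in /etc/fstab; a minimal sketch, assuming a stock CentOS 7 fstab:

#comment out every swap line so it is not mounted at boot
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
#verify: the Swap line should show 0B
free -h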

4. Disable the firewall (run on all nodes)

systemctl disable firewalld
systemctl stop firewalld

5. Adjust kernel parameters (run on all nodes)

modprobe br_netfilter

echo "modprobe br_netfilter" >> /etc/profile

tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

#reload the configuration
sysctl -p /etc/sysctl.d/k8s.conf
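To confirm the module is loaded and the settings are active, a quick check:

#the module should appear in the list
lsmod | grep br_netfilter
#both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables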

6. Configure cluster time synchronization (run on all nodes)

#install the NTP packages
yum install -y ntp ntpdate

ntpdate cn.pool.ntp.org

systemctl start ntpd
systemctl enable ntpd

7. Configure the Kubernetes yum repository (run on all nodes)

Configure the Aliyun mirror as the Kubernetes yum repository.

vim /etc/yum.repos.d/kubernetes.repo
#paste the following content into the file
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
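A quick sanity check that the repository works and that the 1.23.17 packages used below are available (exact output will vary):

yum clean all && yum makecache fast
yum list kubelet --showduplicates | grep 1.23.17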

III. Install Docker

1. Remove any existing Docker packages (run on all nodes)

yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate

2. Configure the yum repository (run on all nodes)

Use the Aliyun yum repository; downloads from the default upstream repository are slow.

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker (run on all nodes)

yum install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 docker-compose-plugin containerd.io

#start docker
systemctl start docker

#enable docker on boot
systemctl enable docker

#verify the installation
docker -v

Configure the Aliyun registry mirror accelerator:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://p59n3y39.mirror.aliyuncs.com"]
}
EOF
Reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker

IV. Install Kubernetes

1. Install the three core components: kubeadm, kubelet, kubectl (run on all nodes)

  • kubeadm: the command used to bootstrap the Kubernetes cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster, used to view, create, update, and delete resources.
yum install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17

# enable kubelet on boot on all nodes
systemctl enable kubelet
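Before initializing the cluster, it is worth confirming that all three tools report the expected 1.23.17 version:

kubeadm version -o short
kubelet --version
kubectl version --client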

2. Add the control-plane endpoint to hosts (add on all nodes)

echo "192.168.199.128 cluster-endpoint" >> /etc/hosts

3. Initialize the Kubernetes cluster (master node)

kubeadm init \
--apiserver-advertise-address=192.168.199.128 \
--control-plane-endpoint=cluster-endpoint \
--image-repository  registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.23.17 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=172.20.0.0/16

Output on success:

Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join cluster-endpoint:6443 --token e8enii.joiejbe3xoj6v9mt \
        --discovery-token-ca-cert-hash sha256:5756518626710f38bc3b7d4a405990c6274b513628467df94d1aa0a5c5b6f196

4. Join the cluster (run on all worker nodes)

kubeadm join cluster-endpoint:6443 --token e8enii.joiejbe3xoj6v9mt \
        --discovery-token-ca-cert-hash sha256:5756518626710f38bc3b7d4a405990c6274b513628467df94d1aa0a5c5b6f196

List the tokens to see how long each one remains valid:

kubeadm token list

TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
p2hjar.gvqyz2ip3nqyac2c   23h         2024-02-01T08:56:31Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

A token is valid for 24 hours by default; once it expires, generate a new one with:

kubeadm token create --print-join-command
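Back on the master, the workers should now show up in the node list; they will stay NotReady until the network plugin from the next section is installed:

kubectl get nodes -o wide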

V. Install the Calico network plugin (master node)

Download and save calico.yaml:
curl https://docs.projectcalico.org/v3.15/manifests/calico.yaml -O

Edit the configuration: find CALICO_IPV4POOL_CIDR (around lines 3727-3728 in this version of the manifest) and set its value to the pod CIDR passed to kubeadm init:
            - name: CALICO_IPV4POOL_CIDR
              value: "172.20.0.0/16"

Apply the manifest:
kubectl apply -f calico.yaml

Verify the installation:
kubectl get pod -A | grep calico
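Once all of the calico pods report Running, the nodes should switch to Ready:

kubectl get nodes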

VI. Install the Kuboard dashboard (master node)

1. Download the Kuboard manifest

curl https://addons.kuboard.cn/kuboard/kuboard-v3.yaml -O

2. Install Kuboard

kubectl apply -f kuboard-v3.yaml

Check the installation status:

[root@master soft]# kubectl get pods -n kuboard
NAME                                   READY   STATUS    RESTARTS           AGE
kuboard-agent-2-65fdb5df8b-27chx       1/1     Running   13                 26d
kuboard-agent-57ffc5f966-8nnbd         1/1     Running   13                 26d
kuboard-etcd-xtzrb                     1/1     Running   6                  26d
kuboard-loki-0                         1/1     Running   2 (2d ago)         22d
kuboard-loki-grafana-f78869978-qq9kp   1/1     Running   2 (2d ago)         22d
kuboard-promtail-85227                 1/1     Running   5                  22d
kuboard-promtail-kv8b9                 1/1     Running   2 (2d ago)         22d
kuboard-promtail-xhh8z                 1/1     Running   2 (2d ago)         22d
kuboard-pv-browser-cq6v8               2/2     Running   2782 (4m35s ago)   26d
kuboard-pv-browser-xljll               2/2     Running   2776 (2m19s ago)   26d
kuboard-pv-browser-xtpfx               2/2     Running   2779 (4m35s ago)   26d
kuboard-questdb-78d884c786-nrb99       1/1     Running   6                  26d
kuboard-v3-56b4b954c9-zwhtc            1/1     Running   6                  26d
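Once the pods are Running, the Kuboard web UI should be reachable on NodePort 30080 of any node; as far as I know the default Kuboard v3 credentials are admin / Kuboard123 (change them after the first login). A quick reachability check:

curl -I http://192.168.199.128:30080/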

VII. Install Jenkins (master node)

1. Grant file permissions

cd /var/run

#change the ownership of docker.sock
chown root:root docker.sock

#adjust the permissions
chmod o+rw docker.sock

2. Create the Jenkins mount directory

mkdir -p /home/jenkins/jenkins_mount
chmod 777 /home/jenkins/jenkins_mount

3. Write the docker-compose.yml file

vim docker-compose.yml

#expose the service on host port 8085

version: '3.1'
services:
  jenkins:
    image: jenkins/jenkins
    privileged: true
    user: root
    ports:
      - 8085:8080
      - 50000:50000
    container_name: jy_jenkins
    volumes:
      - /home/jenkins/jenkins_mount:/var/jenkins_home
      - /etc/localtime:/etc/localtime
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
      - /etc/docker/daemon.json:/etc/docker/daemon.json

4. Start Compose

docker compose up -d
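To confirm the container started and to follow the startup log, run these from the directory containing docker-compose.yml (the container name comes from the compose file above):

docker compose ps
docker logs -f jy_jenkins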

5. Configure the update-center mirror


[root@master jenkins_mount]# pwd
/home/jenkins/jenkins_mount

Edit the hudson.model.UpdateCenter.xml file in the mount directory and point it at the Tsinghua mirror to speed up plugin downloads:

<?xml version='1.1' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json</url>
  </site>
</sites>
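Jenkins only reads this file at startup, so restart the container for the mirror to take effect:

docker restart jy_jenkins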

6. Retrieve the Jenkins login password

cat /home/jenkins/jenkins_mount/secrets/initialAdminPassword

710f38bc3b7d4a405990c6274b513628467df94d1aa0a5

7. Upgrade Jenkins

Download the Jenkins WAR package:

wget https://mirrors.tuna.tsinghua.edu.cn/jenkins/war-stable/latest/jenkins.war

Then stop the Jenkins container. Remember: stop it, do not remove it!

Find the Jenkins container:
docker ps -a|grep jenkins

docker stop <container_id>

Run docker cp to copy the WAR file into the Jenkins container:

docker cp jenkins.war <container_id>:/usr/share/jenkins/jenkins.war

Start the container again:

docker start <container_id>

8. Log in to Jenkins

The username is admin and the password is 710f38bc3b7d4a405990c6274b513628467df94d1aa0a5 (the initial password retrieved above).

Installing the suggested plugins is sufficient.

9. Install plugins

Manage Jenkins -> Plugin Manager -> Available plugins

Search for and install the following plugins:

Git Parameter Plug-In
Publish Over SSH

10. Upload the JDK and Maven to the mount directory and extract them

Edit the Maven settings (conf/settings.xml) to change the repository configuration:

  • Add the Aliyun mirror
<mirrors>
 	<mirror>
        <id>alimaven</id>
        <name>aliyun maven</name>
        <url>https://maven.aliyun.com/repository/public/</url>
        <mirrorOf>central</mirrorOf>
    </mirror>
 </mirrors>
  • Add a JDK 8 compiler profile
<profiles>
  <profile>
    <id>jdk8</id>
    <activation>
        <activeByDefault>true</activeByDefault>
        <jdk>1.8</jdk>
    </activation>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.compilerVersion>1.8</maven.compiler.compilerVersion>
    </properties>
 </profile>
</profiles>
  • Activate the profile
<activeProfiles>
    <activeProfile>jdk8</activeProfile>
 </activeProfiles>

11. Enter the Jenkins container

docker ps -a|grep jenkins

#enter the container
docker exec -it <container_id> bash

root@26a9d00b6a4c:/var/jenkins_home/apache-maven-3.9.6# pwd
/var/jenkins_home/apache-maven-3.9.6
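Still inside the container, confirm that the extracted Maven (and the JDK it picks up) actually runs; the path matches the directory shown above, adjust it if your version differs:

/var/jenkins_home/apache-maven-3.9.6/bin/mvn -v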

12. Configure the JDK and Maven in Jenkins global tool configuration

13. Configure Publish Over SSH under Manage Jenkins -> Configure System

14. Configure passwordless SSH from the Jenkins container to the servers

# enter the jenkins container
docker exec -it b5a49147b7f5 bash
# generate a key pair, pressing Enter to accept all the defaults
ssh-keygen
#copy the public key
cat ~/.ssh/id_rsa.pub

Add the public key on the master server:
cd /root
mkdir .ssh
cd .ssh
#paste the public key into this file
vim authorized_keys
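From inside the Jenkins container, test the connection; the first attempt will ask you to accept the host key (the IP is the master in this setup):

ssh root@192.168.199.128 "kubectl get nodes"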

VIII. Install Harbor (master node)

1. Download Harbor

#if wget cannot download it directly, download it elsewhere and upload it to the server
wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-offline-installer-v2.10.0.tgz

2. Extract the archive

tar -xzvf harbor-offline-installer-v2.10.0.tgz
#enter the directory
cd ./harbor

3. Edit harbor.yml

vim harbor.yml

Set the hostname, port, and admin password.

Comment out the https block, otherwise the installer fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
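For reference, a minimal sketch of the relevant harbor.yml settings, based on the stock template and the 192.168.199.128:5000 address used later in the pipeline; treat the values as placeholders for your own environment (the wuxiong user and harbor project referenced by the pipeline are created later in the Harbor UI):

hostname: 192.168.199.128
http:
  port: 5000
# https:                          # commented out: no TLS certificate configured
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
harbor_admin_password: Harbor12345   # change this default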

4. Run the installer

./prepare
./install.sh

Installation complete.
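The installer brings Harbor up via Docker Compose, so a quick check from the harbor directory looks like this; the curl simply confirms the web UI answers on the configured port:

docker compose ps
curl -I http://192.168.199.128:5000/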

IX. Configure the pipeline

1. Create a new pipeline

Open the Pipeline Syntax generator to help build the snippets below.

2. Pull the code from the Git repository (I use an Alibaba Cloud Codeup repository)

3. Build the project with Maven (use the full path if no environment variable is configured)

4. Build the image with Docker

5. Push the custom image to Harbor

#add the private registry to /etc/docker/daemon.json on the Jenkins server

{
      "registry-mirrors" : ["https://q5bf287q.mirror.aliyuncs.com", "https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "insecure-registries": ["192.168.199.128"]
}

# reload and restart
systemctl daemon-reload
systemctl restart docker
docker swarm init

Push the image to Harbor.

6. Push the YAML files to the server (split into two files because I plan to modify the YAML with yq)

Configure the Deployment and the Service.

7. Run kubectl commands on the Kubernetes master remotely

8. Final pipeline version

def TAG = 'v1.0.0'
// all build steps live inside the pipeline block
pipeline{
	// which agent/node the job runs on
	agent any
	// declare global variables for convenience
	environment {
	    harborUser = 'wuxiong'
        harborPassword = 'Wuxiong1028'
        harborAddress = '192.168.199.128:5000'
        harborRepo = 'harbor'
	}
    stages {
        stage('Pull code from the Git repository') {
            steps {
                checkout scmGit(branches: [[name: '*/master']], extensions: [], userRemoteConfigs: [[credentialsId: '2900e7f7-50ed-4b5e-9bc3-758386c5aaa9', url: 'https://codeup.aliyun.com/658a941385f0966eafd09946/springbootdemo.git']])
                script{
                    echo 'Pull code from the Git repository - SUCCESS'
                    TAG = sh(script: "git tag --sort=-creatordate |head -n1", returnStdout:true).trim()
                    echo TAG
                }
                checkout scmGit(branches: [[name: TAG]], extensions: [], userRemoteConfigs: [[credentialsId: '2900e7f7-50ed-4b5e-9bc3-758386c5aaa9', url: 'https://codeup.aliyun.com/658a941385f0966eafd09946/springbootdemo.git']])
            }
        }
        stage('Build the project with Maven') {
            steps {
                sh '/var/jenkins_home/apache-maven-3.9.6/bin/mvn clean package -DskipTests'
            }
        }
        stage('Build the custom image with Docker') {
            steps {
                sh '''cp ./target/*.jar ./
                    docker build -t ${JOB_NAME}:'''+TAG+''' ./'''
            }
        }
        stage('Push the custom image to Harbor') {
            steps {
                sh '''docker login -u ${harborUser} -p ${harborPassword} ${harborAddress}
                    docker tag ${JOB_NAME}:'''+TAG+''' ${harborAddress}/${harborRepo}/${JOB_NAME}:'''+TAG+'''
                    docker push ${harborAddress}/${harborRepo}/${JOB_NAME}:'''+TAG
            }
        }
        stage('Modify the Kubernetes YAML file') {
            steps {
                sh 'ssh root@192.168.199.128 "yq -i \'.spec.template.spec.containers[0].image = \\"${harborAddress}/${harborRepo}/${JOB_NAME}:'+TAG+'\\"\' /usr/local/k8s/k8s-deployment.yml"'
            }
        }
        stage('Copy the YAML files to the k8s master') {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: 'k8s', transfers: [sshTransfer(cleanRemote: false, excludes: '', execCommand: '', execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: '', remoteDirectorySDF: false, removePrefix: '', sourceFiles: 'k8s-deployment.yml')], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: false), sshPublisherDesc(configName: 'k8s', transfers: [sshTransfer(cleanRemote: false, excludes: '', execCommand: '', execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: '', remoteDirectorySDF: false, removePrefix: '', sourceFiles: 'k8s-service.yml')], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: false)])
            }
        }
        stage('Run kubectl on the k8s master remotely') {
            steps {
                sh '''ssh root@192.168.199.128 kubectl apply -f /usr/local/k8s/k8s-deployment.yml
                    ssh root@192.168.199.128 kubectl apply -f /usr/local/k8s/k8s-service.yml'''
            }
        }
    }
}

X. Miscellaneous

1. My Dockerfile

FROM java:8

MAINTAINER wuxiong_sc@163.com

ENV TZ=Asia/Shanghai

RUN ln -sf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

ENV JAVA_OPTS="-Xms128m -Xmx256m -Djava.security.egd=file:/dev/./urandom"

WORKDIR /workdir

ADD ./target/*.jar ./app.jar

EXPOSE 9111

CMD ["sh", "-c","java $JAVA_OPTS -jar app.jar"]

2. k8s-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: springbootdemo-pipe
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springbootdemo-pipe-pod
  template:
    metadata:
      labels:
        app: springbootdemo-pipe-pod
    spec:
      containers:
        - name: springbootdemo-pipe
          image: 192.168.199.128:5000/harbor/springbootdemo-pipe:v40
          ports:
            - containerPort: 9111

3. k8s-service.yml

apiVersion: v1
kind: Service
metadata:
  name: springbootdemo-pipe
  namespace: test
spec:
  selector:
    app: springbootdemo-pipe-pod
  type: NodePort
  ports:
    - port: 31111 # Service port
      protocol: TCP
      targetPort: 9111 # Pod (container) port
      nodePort: 31111
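After a pipeline run, a quick check that the Deployment rolled out and the Service answers on its NodePort (namespace and ports taken from the manifests above; the curl may return 404 depending on the application's routes, which still proves connectivity):

kubectl get pods -n test -o wide
kubectl get svc -n test
curl http://192.168.199.128:31111/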