Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7

Contents
  • 1. System Environment
  • 2. Introduction
  • 3. Kubernetes
    • 3.1 Overview
    • 3.2 Kubernetes Components
      • 3.2.1 Control Plane Components
      • 3.2.2 Node Components
  • 4. Installing and Deploying the Kubernetes Cluster
    • 4.1 Environment
    • 4.2 Configure the Basic Environment of Each Node
    • 4.3 Install and Configure docker on Every Node
    • 4.4 Install kubelet, kubeadm and kubectl
    • 4.5 kubeadm Initialization
    • 4.6 Add the Worker Nodes to the k8s Cluster
    • 4.7 Deploy the calico CNI Network Plugin
    • 4.8 Configure kubectl Tab Completion

1. System Environment

Server OS version                      Docker version             CPU architecture
CentOS Linux release 7.4.1708 (Core)   Docker version 20.10.12    x86_64

2. Introduction

The figure below illustrates how software deployment has evolved: the traditional deployment era, the virtualized deployment era, and the container deployment era.

[Figure: traditional deployment era vs. virtualized deployment era vs. container deployment era]

Traditional deployment era:

Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, which caused resource allocation issues. For example, if multiple applications run on the same physical server, one application may take up most of the resources and the others will underperform as a result. One solution is to run each application on its own physical server, but then resources are underutilized whenever an application is idle, and maintaining many physical servers is expensive.

Virtualized deployment era:

Virtualization was introduced as a solution. It lets you run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications from one another inside separate VMs and provides a degree of security, because the information of one application cannot be freely accessed by another.

Virtualization makes better use of a physical server's resources and, because applications can easily be added or updated, brings better scalability, lower hardware costs, and other benefits. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era:

Containers are similar to VMs, but they have relaxed isolation properties and share the operating system (OS) among applications. Containers are therefore considered more lightweight than VMs. Like a VM, each container has its own filesystem, share of CPU, memory, process space, and so on. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide many benefits, for example:

• Agile application creation and deployment: greater ease and efficiency of container image creation compared to VM image use.
• Continuous development, integration, and deployment: reliable and frequent container image builds and deployments with quick and easy rollbacks (thanks to image immutability).
• Dev and Ops separation of concerns: application container images are created at build/release time rather than deployment time, decoupling applications from infrastructure.
• Observability: surfaces not only OS-level information and metrics, but also application health and other signals.
• Environmental consistency across development, testing, and production: an application runs the same on a laptop as it does in the cloud.
• Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.
• Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
• Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than running as a monolith on one large machine.
• Resource isolation: predictable application performance.
• Resource utilization: high efficiency and density.

3. Kubernetes

3.1 Overview

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large and rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

The name Kubernetes comes from Greek, meaning helmsman or pilot. The abbreviation k8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

Kubernetes provides you with:

• Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
• Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
• Automated rollouts and rollbacks: you describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
• Automatic bin packing: you give Kubernetes a cluster of nodes on which to run containerized tasks and tell it how much CPU and memory (RAM) each container needs; Kubernetes fits the containers onto your nodes to make the best use of your resources.
• Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.
• Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.

3.2 Kubernetes Components

The Kubernetes cluster architecture looks like this:

[Figure: Kubernetes cluster architecture]

The Kubernetes cluster components are as follows:

[Figure: Kubernetes cluster components]

Kubernetes has two kinds of nodes: master nodes and worker nodes. A master node is also called the control plane. The control plane consists of several components that make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is not satisfied).

Control plane components can run on any node in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine.

3.2.1 Control Plane Components

The control plane components are:

• kube-apiserver: the API server is the control plane component that exposes the Kubernetes API and handles incoming requests; it is the front end of the control plane. The main implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances; you can run several instances and balance traffic between them.
• etcd: a consistent and highly available key-value store used as the backing database for all Kubernetes cluster data. Your Kubernetes cluster's etcd database generally needs a backup plan.
• kube-scheduler: the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. Scheduling decisions take into account individual and collective Pod resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
• kube-controller-manager: the control plane component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. These controllers include: the node controller (notices and responds when nodes go down); the job controller (watches Job objects that represent one-off tasks and creates Pods to run them to completion); the endpoints controller (populates Endpoints objects, i.e. joins Services and Pods); and the service account and token controllers (create default accounts and API access tokens for new namespaces).
• cloud-controller-manager: a control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster to your cloud provider's API and separates the components that interact with the cloud platform from those that only interact with your cluster. It runs only controllers that are specific to your cloud provider, so if you run Kubernetes on your own premises, or in a learning environment on your own PC, the cluster has no cloud controller manager. Like kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that runs as one process, and it can be scaled horizontally (more than one replica) for performance or fault tolerance. The following controllers all have cloud provider dependencies: the node controller (checks the cloud provider to determine whether a node has been deleted after it stops responding); the route controller (sets up routes in the underlying cloud infrastructure); and the service controller (creates, updates, and deletes cloud provider load balancers).

3.2.2 Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

The node components are:

• kubelet: an agent that runs on each node in the cluster and makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in them are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
• kube-proxy: a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on the node that allow network communication to your Pods from sessions inside or outside the cluster. It uses the operating system's packet filtering layer if one is available; otherwise it forwards the traffic itself.

4. Installing and Deploying the Kubernetes Cluster

4.1 Environment

Cluster layout: k8scloude1 is the master node; k8scloude2 and k8scloude3 are the worker nodes.

• k8scloude1/192.168.110.130: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico; role: k8s master node
• k8scloude2/192.168.110.129: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kubelet, kube-proxy, calico; role: k8s worker node
• k8scloude3/192.168.110.128: CentOS Linux release 7.4.1708 (Core), x86_64; processes: docker, kubelet, kube-proxy, calico; role: k8s worker node

4.2 Configure the Basic Environment of Each Node

First configure the basic environment. All three nodes need the same settings; k8scloude1 is used as the example.

Start by setting the hostname:

                  [root@localhost ~]# vim /etc/hostname
                  [root@localhost ~]# cat /etc/hostname
                  k8scloude1
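
The other two nodes need their own hostnames as well. One quick way (a sketch; run the matching command on each node, with the names following the plan in section 4.1) is hostnamectl:

#on the second node
hostnamectl set-hostname k8scloude2
#on the third node
hostnamectl set-hostname k8scloude3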

Configure the node IP address (optional):

                  [root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
                  [root@k8scloude1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
                  TYPE=Ethernet
                  BOOTPROTO=static
                  NAME=ens32
                  DEVICE=ens32
                  ONBOOT=yes
                  DNS1=114.114.114.114
                  IPADDR=192.168.110.130
                  NETMASK=255.255.255.0
                  GATEWAY=192.168.110.2
                  ZONE=trusted

Restart the network:

[root@localhost ~]# service network restart
Restarting network (via systemctl): [ OK ]
                  [root@localhost ~]# systemctl restart NetworkManager

After rebooting, the hostname becomes k8scloude1. Check that the machine can reach the network:

                  [root@k8scloude1 ~]# ping www.baidu.com
                  PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
                  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
                  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
                  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
                  ^C
                  — www.a.shifen.com ping statistics —
                  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
                  rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms

Map IPs to hostnames:

                  [root@k8scloude1 ~]# vim /etc/hosts
                  [root@k8scloude1 ~]# cat /etc/hosts
                  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
                  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
                  192.168.110.130 k8scloude1
                  192.168.110.129 k8scloude2
                  192.168.110.128 k8scloude3
#Copy /etc/hosts to the other two nodes
                  [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
                  [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts
#If the other two nodes can be pinged by hostname, the mapping works
                  [root@k8scloude1 ~]# ping k8scloude1
                  PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
                  64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
                  64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
                  ^C
                  — k8scloude1 ping statistics —
                  2 packets transmitted, 2 received, 0% packet loss, time 999ms
                  rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
                  [root@k8scloude1 ~]# ping k8scloude2
                  PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
                  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
                  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
                  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
                  ^C
                  — k8scloude2 ping statistics —
                  3 packets transmitted, 3 received, 0% packet loss, time 2001ms
                  rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
                  [root@k8scloude1 ~]# ping k8scloude3
                  PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
                  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
                  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
                  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
                  ^C
                  — k8scloude3 ping statistics —
                  3 packets transmitted, 3 received, 0% packet loss, time 2002ms
                  rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms

Turn off screen blanking (optional):

[root@k8scloude1 ~]# setterm -blank 0

Download new yum repo files:

[root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
--2022-01-07 17:07:28--  ftp://ftp.rhce.cc/k8s/*
=> "/etc/yum.repos.d/.listing"
Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
......
100%[=======================================================================================================================================================================>] 276 --.-K/s in 0s
2022-01-07 17:07:29 (81.9 MB/s) - "/etc/yum.repos.d/k8s.repo" saved [276]
#The new repo files are:
[root@k8scloude1 ~]# ls /etc/yum.repos.d/
CentOS-Base.repo docker-ce.repo epel.repo k8s.repo
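
If the ftp.rhce.cc mirror is not reachable, an equivalent Kubernetes repo file can be written by hand. The snippet below is only a sketch that assumes the Aliyun mirror layout; adjust the baseurl for your environment:

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF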

Disable SELinux by setting SELINUX=disabled:

[root@k8scloude1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
                  [root@k8scloude1 ~]# getenforce
                  Disabled
                  [root@k8scloude1 ~]# setenforce 0
                  setenforce: SELinux is disabled
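
If SELinux has not been disabled yet, the config file can also be changed non-interactively; the one-liner below is a sketch (the change only takes full effect after a reboot):

#switch SELINUX to disabled in /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config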

Configure the firewall to let all packets through:

[root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
Warning: ZONE_ALREADY_SET: trusted
success
[root@k8scloude1 ~]# firewall-cmd --get-default-zone
trusted
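
As an alternative (not used in this article), firewalld can simply be stopped and disabled instead of switching the default zone to trusted:

systemctl disable firewalld --now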

The Linux swapoff command turns off the system swap area.

Note: if swap is not disabled, kubeadm init fails with: "[ERROR Swap]: running with swap on is not supported. Please disable swap"

[root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
                  [root@k8scloude1 ~]# cat /etc/fstab
                  # /etc/fstab
                  # Created by anaconda on Thu Oct 18 23:09:54 2018
                  #
# Accessible filesystems, by reference, are maintained under '/dev/disk'
                  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
                  #
                  UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 / xfs defaults 0 0
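
A quick sanity check that swap really is off; the Swap line should show 0 everywhere:

free -m | grep -i swap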

4.3 Install and Configure docker on Every Node

k8s is a container orchestration tool and needs a container engine underneath it, so install docker on all three nodes; k8scloude1 is again used as the example.

Install docker:

[root@k8scloude1 ~]# yum -y install docker-ce
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
......
Installed:
docker-ce.x86_64 3:20.10.12-3.el7
......
Complete!

Enable docker at boot and start it now:

[root@k8scloude1 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2022-01-08 22:10:38 CST; 18s ago
Docs: https://docs.docker.com
Main PID: 1377 (dockerd)
Memory: 30.8M
CGroup: /system.slice/docker.service
└─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Check the docker version:

[root@k8scloude1 ~]# docker --version
                  Docker version 20.10.12, build e91ed57

Configure a docker registry mirror:

[root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
> {
> "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
> }
> EOF
[root@k8scloude1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
}
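
The kubeadm preflight check later warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. Optionally (a sketch, not done in this article), daemon.json can set the mirror and the systemd cgroup driver at the same time:

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF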

Restart docker:

[root@k8scloude1 ~]# systemctl restart docker
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2022-01-08 22:17:45 CST; 8s ago
Docs: https://docs.docker.com
Main PID: 1529 (dockerd)
Memory: 32.4M
CGroup: /system.slice/docker.service
└─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Configure the kernel so that bridged traffic is passed to iptables and IP forwarding is enabled:

                  [root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
                  > net.bridge.bridge-nf-call-ip6tables = 1
                  > net.bridge.bridge-nf-call-iptables = 1
                  > net.ipv4.ip_forward = 1
                  > EOF
#Apply the configuration
                  [root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
                  net.bridge.bridge-nf-call-ip6tables = 1
                  net.bridge.bridge-nf-call-iptables = 1
                  net.ipv4.ip_forward = 1
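
If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. Loading it, and making that persistent across reboots, is a small extra step (a sketch):

#load the module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf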

4.4 Install kubelet, kubeadm and kubectl

Install kubelet, kubeadm and kubectl on all three nodes:

• kubelet: the agent component of Kubernetes that runs on every node.
• kubeadm: a tool for quickly bootstrapping a Kubernetes (k8s) cluster. It provides the kubeadm init and kubeadm join commands to create a cluster quickly, performing the necessary steps to bring up a minimal viable cluster.
• kubectl: the command-line tool for Kubernetes clusters. With kubectl you can manage the cluster itself and deploy containerized applications onto it.

#--disableexcludes=kubernetes: ignore any exclude= directives defined for the kubernetes repo
#so that the kubelet/kubeadm/kubectl packages can be installed from it
[root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be installed
......
Installed:
kubeadm.x86_64 0:1.21.0-0 kubectl.x86_64 0:1.21.0-0 kubelet.x86_64 0:1.21.0-0
......
Complete!

Enable kubelet at boot and start it now:

[root@k8scloude1 ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#kubelet cannot start successfully yet at this point
[root@k8scloude1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Sat 2022-01-08 22:35:33 CST; 3s ago
Docs: https://kubernetes.io/docs/
Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 1722 (code=exited, status=1/FAILURE)
Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jan 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.

4.5 kubeadm Initialization

Check which kubeadm versions are available:

[root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
kubeadm.x86_64 1.21.0-0 @kubernetes
Available Packages
                  kubeadm.x86_64 1.6.0-0 kubernetes
                  kubeadm.x86_64 1.6.1-0 kubernetes
                  kubeadm.x86_64 1.6.2-0 kubernetes
                  ……
                  kubeadm.x86_64 1.23.0-0 kubernetes
                  kubeadm.x86_64 1.23.1-0

kubeadm init: initialize the Kubernetes control plane on the master node k8scloude1.

#Run kubeadm init
#--image-repository registry.aliyuncs.com/google_containers: use the Aliyun image registry, otherwise some of the images cannot be pulled
#--kubernetes-version=v1.21.0: specify the k8s version
#--pod-network-cidr=10.244.0.0/16: specify the pod network CIDR
#The error below means registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled; the image was renamed to coredns/coredns, so pull coredns manually instead
#coredns is an open-source DNS server written in Go
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
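
As the preflight output suggests, the images can also be pulled ahead of time. A possible invocation, using the same mirror and version as above (shown as a sketch; with v1.21 the coredns image may still need the manual retag described next):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0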

Pull the coredns image manually:

                  [root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
                  1.8.0: Pulling from coredns/coredns
                  c6568d217a00: Pull complete
                  5984b6d55edf: Pull complete
                  Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
                  Status: Downloaded newer image for coredns/coredns:1.8.0
                  docker.io/coredns/coredns:1.8.0

The coredns image needs to be retagged, otherwise kubeadm does not recognize it:

[root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
#Remove the original coredns/coredns:1.8.0 tag
[root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0

There are now 7 images on k8scloude1; if even one of them is missing, kubeadm init cannot succeed:

                  [root@k8scloude1 ~]# docker images
                  REPOSITORY TAG IMAGE ID CREATED SIZE
                  registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 9 months ago 126MB
                  registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
                  registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 9 months ago 120MB
                  registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 9 months ago 50.6MB
                  registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
                  registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 14 months ago 42.5MB
                  registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 16 months ago 253MB

Run kubeadm init again:

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.002757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
--discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8

Create the directory and config file as the output suggests:

                  [root@k8scloude1 ~]# mkdir -p $HOME/.kube
                  [root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                  [root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

The master node is now visible:

                  [root@k8scloude1 ~]# kubectl get node
                  NAME STATUS ROLES AGE VERSION
                  k8scloude1 NotReady control-plane,master 5m54s v1.21.0

4.6 Add the Worker Nodes to the k8s Cluster

Next, add the other two worker nodes to the k8s cluster.

kubeadm init printed the following line:

kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8

Running this command on the two worker nodes joins them to the k8s cluster.

If the join token has been lost, a fresh join command can be generated with:

[root@k8scloude1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8

Run the join command on the other two nodes:

[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the node status on k8scloude1: both worker nodes have joined the k8s cluster.

                  [root@k8scloude1 ~]# kubectl get nodes
                  NAME STATUS ROLES AGE VERSION
                  k8scloude1 NotReady control-plane,master 8m43s v1.21.0
                  k8scloude2 NotReady <none> 28s v1.21.0
                  k8scloude3 NotReady <none> 25s v1.21.0

After joining the cluster, each worker node has pulled two additional images:

                  [root@k8scloude2 ~]# docker images
                  REPOSITORY TAG IMAGE ID CREATED SIZE
                  registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
                  registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
                  [root@k8scloude3 ~]# docker images
                  REPOSITORY TAG IMAGE ID CREATED SIZE
                  registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
                  registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB

4.7 Deploy the calico CNI Network Plugin

The k8s cluster now has 1 master node and 2 worker nodes, but all three nodes are still NotReady because no CNI network plugin is installed. A CNI plugin is required for inter-node pod networking. The common CNI plugins are calico and flannel; the difference is that flannel does not support complex network policies, while calico does. Since Kubernetes network policies (NetworkPolicy) will be configured later, this article uses calico.

Download the calico.yaml file from the official site:

Official site: https://projectcalico.docs.tigera.io/about/about-calico

Search for calico.yaml directly in the search box.

Find the command that downloads calico.yaml.

Download the calico.yaml file:

                  [root@k8scloude1 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
                  % Total % Received % Xferd Average Speed Time Time Time Current
                  Dload Upload Total Spent Left Speed
100 212k 100 212k 0 0 44222 0 0:00:04 0:00:04 --:--:-- 55704
                  [root@k8scloude1 ~]# ls
                  calico.yaml

Check which calico images are needed. These four images must be pulled on all nodes; k8scloude1 is shown as the example:

                  [root@k8scloude1 ~]# grep image calico.yaml
                  image: docker.io/calico/cni:v3.21.2
                  image: docker.io/calico/cni:v3.21.2
                  image: docker.io/calico/pod2daemon-flexvol:v3.21.2
                  image: docker.io/calico/node:v3.21.2
                  image: docker.io/calico/kube-controllers:v3.21.2
                  [root@k8scloude1 ~]# docker pull docker.io/calico/cni:v3.21.2
                  v3.21.2: Pulling from calico/cni
                  Digest: sha256:ce618d26e7976c40958ea92d40666946d5c997cd2f084b6a794916dc9e28061b
                  Status: Image is up to date for calico/cni:v3.21.2
                  docker.io/calico/cni:v3.21.2
                  [root@k8scloude1 ~]# docker pull docker.io/calico/pod2daemon-flexvol:v3.21.2
                  v3.21.2: Pulling from calico/pod2daemon-flexvol
                  Digest: sha256:b034c7c886e697735a5f24e52940d6d19e5f0cb5bf7caafd92ddbc7745cfd01e
                  Status: Image is up to date for calico/pod2daemon-flexvol:v3.21.2
                  docker.io/calico/pod2daemon-flexvol:v3.21.2
                  [root@k8scloude1 ~]# docker pull docker.io/calico/node:v3.21.2
                  v3.21.2: Pulling from calico/node
                  Digest: sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0
                  Status: Image is up to date for calico/node:v3.21.2
                  docker.io/calico/node:v3.21.2
                  [root@k8scloude1 ~]# docker pull docker.io/calico/kube-controllers:v3.21.2
                  v3.21.2: Pulling from calico/kube-controllers
                  d6a693444ed1: Pull complete
                  a5399680e995: Pull complete
                  8f0eb4c2bcba: Pull complete
                  52fe18e41b06: Pull complete
                  2f8d3f9f1a40: Pull complete
                  bc94a7e3e934: Pull complete
                  55bf7cf53020: Pull complete
                  Digest: sha256:1f4fcdcd9d295342775977b574c3124530a4b8adf4782f3603a46272125f01bf
                  Status: Downloaded newer image for calico/kube-controllers:v3.21.2
                  docker.io/calico/kube-controllers:v3.21.2
#The important ones are the following 4 images
                  [root@k8scloude1 ~]# docker images
                  REPOSITORY TAG IMAGE ID CREATED SIZE
                  calico/node v3.21.2 f1bca4d4ced2 4 weeks ago 214MB
                  calico/pod2daemon-flexvol v3.21.2 7778dd57e506 5 weeks ago 21.3MB
                  calico/cni v3.21.2 4c5c32530391 5 weeks ago 239MB
                  calico/kube-controllers v3.21.2 b20652406028 5 weeks ago 132MB

Edit calico.yaml so that CALICO_IPV4POOL_CIDR matches the pod network CIDR used during kubeadm init; keep the YAML indentation aligned, otherwise an error is reported:

[root@k8scloude1 ~]# vim calico.yaml
[root@k8scloude1 ~]# cat calico.yaml | egrep "CALICO_IPV4POOL_CIDR|10.244"
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

If that is hard to picture, the figure below shows the edited section of calico.yaml:

[Figure: calico.yaml after editing, with CALICO_IPV4POOL_CIDR set to 10.244.0.0/16]

Apply the calico.yaml file:

                  [root@k8scloude1 ~]# kubectl apply -f calico.yaml
                  configmap/calico-config unchanged
                  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
                  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
                  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
                  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
                  clusterrole.rbac.authorization.k8s.io/calico-node unchanged
                  clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
                  daemonset.apps/calico-node created
                  serviceaccount/calico-node created
                  deployment.apps/calico-kube-controllers created
                  serviceaccount/calico-kube-controllers created
                  Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
                  poddisruptionbudget.policy/calico-kube-controllers created

All three nodes are now Ready:

[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 53m v1.21.0
k8scloude2 Ready <none> 45m v1.21.0
k8scloude3 Ready <none> 45m v1.21.0
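
To confirm that the cluster is actually healthy, the calico and coredns Pods in the kube-system namespace can be checked as well (output omitted here; the calico-node, calico-kube-controllers and coredns Pods should all be Running):

kubectl get pods -n kube-system -o wide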

4.8 Configure kubectl Tab Completion

Check the kubectl completion command:

[root@k8scloude1 ~]# kubectl --help | grep bash
completion Output shell completion code for the specified shell (bash or zsh)

Add source <(kubectl completion bash) to /etc/profile and reload the file:

[root@k8scloude1 ~]# cat /etc/profile | head -2
# /etc/profile
source <(kubectl completion bash)
                  [root@k8scloude1 ~]# source /etc/profile

kubectl commands can now be tab-completed:

[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 59m v1.21.0
k8scloude2 Ready <none> 51m v1.21.0
k8scloude3 Ready <none> 51m v1.21.0
#Note: the bash-completion-2.1-6.el7.noarch package is required, otherwise tab completion does not work
                  [root@k8scloude1 ~]# rpm -qa | grep bash
                  bash-completion-2.1-6.el7.noarch
                  bash-4.2.46-30.el7.x86_64
                  bash-doc-4.2.46-30.el7.x86_64
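
Instead of /etc/profile, the completion line can also go into a single user's ~/.bashrc, and bash-completion can be installed if it is missing (a sketch):

yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc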

With that, the Kubernetes (k8s) cluster is fully deployed!
