Deploying a Kubernetes v1.20.4 Cluster with kubeadm

所念皆星河

I. Environment Preparation

Host         IP               Spec
kube-master  192.168.200.12   2 CPU / 4 GB RAM / 40 GB disk
kube-node1   192.168.200.13   2 CPU / 4 GB RAM / 40 GB disk
kube-node2   192.168.200.14   2 CPU / 4 GB RAM / 40 GB disk

1. Disable the firewall and SELinux, configure passwordless SSH, set hostnames, and configure /etc/hosts (run on all three nodes)

[root@localhost ~]# systemctl disable --now firewalld
[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# ssh-keygen -t rsa    (press Enter at every prompt)
[root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.200.13
[root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.200.14
[root@localhost ~]# hostnamectl set-hostname kube-master
[root@kube-master ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.12 kube-master
192.168.200.13 kube-node1
192.168.200.14 kube-node2
[root@kube-master ~]# scp /etc/hosts root@kube-node1:/etc/hosts
[root@kube-master ~]# scp /etc/hosts root@kube-node2:/etc/hosts
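
SELinux stays enforcing until the reboot at the end of this section; to switch it to permissive immediately (optional, a convenience the steps above omit):

[root@kube-master ~]# setenforce 0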

2. Disable swap, ensure the required kernel modules load at boot, and make bridged traffic visible to iptables

  Disable swap:
[root@kube-master ~]# swapoff -a
[root@kube-master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
  Ensure the kernel modules load at boot:
[root@kube-master ~]# cat > /etc/modules-load.d/docker.conf <<EOF
> overlay
> br_netfilter
> EOF
[root@kube-master ~]# modprobe overlay
[root@kube-master ~]# modprobe br_netfilter
  Make bridged traffic visible to iptables:
[root@kube-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@kube-master ~]# sysctl --system
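
An optional quick check confirms the modules are loaded and the sysctls applied:

[root@kube-master ~]# lsmod | grep -E 'overlay|br_netfilter'
[root@kube-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables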

3. Configure the Docker and Kubernetes yum repositories
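
yum-config-manager is provided by the yum-utils package; if the command is missing, install it first:

[root@kube-master ~]# yum install -y yum-utils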

[root@kube-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kube-master ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@kube-master ~]# yum makecache fast

Reboot the hosts so the SELinux change takes effect.

II. Deploy Docker and Kubernetes (run on all three nodes)

1. Install kubeadm, kubelet, kubectl, and docker

[root@kube-master ~]# yum install kubeadm kubelet kubectl docker -y
[root@kube-master ~]# systemctl enable docker
[root@kube-master ~]# systemctl start docker
[root@kube-master ~]# systemctl enable kubelet.service
[root@kube-master ~]# systemctl start kubelet.service
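
Note that a bare yum install pulls the newest packages in the repo, which may be newer than the v1.20.4 this article targets. To pin the versions explicitly (a sketch, assuming the Aliyun el7 repo's package naming):

[root@kube-master ~]# yum install -y kubeadm-1.20.4 kubelet-1.20.4 kubectl-1.20.4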

2. Check which images the target k8s version requires

[root@kube-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.4
k8s.gcr.io/kube-controller-manager:v1.20.4
k8s.gcr.io/kube-scheduler:v1.20.4
k8s.gcr.io/kube-proxy:v1.20.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
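
The versions shown follow the locally installed kubeadm; to list the images for a specific release explicitly, kubeadm accepts a version flag:

[root@kube-master ~]# kubeadm config images list --kubernetes-version v1.20.4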

3. Pull the images via the Aliyun mirror, since the defaults come from Google's k8s.gcr.io, which is often unreachable

  First, generate the default kubeadm.conf:
[root@kube-master ~]# kubeadm config print init-defaults > kubeadm.conf
  Then switch the image repository to the Aliyun mirror:
[root@kube-master ~]# sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf
  Pin the Kubernetes version to download:
[root@kube-master ~]# sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.20.4/g" kubeadm.conf
  Finally, pull the images locally:
[root@kube-master ~]# kubeadm config images pull --config kubeadm.conf
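
If the pull fails, double-check that both sed substitutions took effect; the two lines should read registry.aliyuncs.com/google_containers and v1.20.4:

[root@kube-master ~]# grep -E 'imageRepository|kubernetesVersion' kubeadm.conf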
[root@kube-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.4             c29e6c583067        5 days ago          118 MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.4             ae5eb22e4a9d        5 days ago          122 MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.4             0a41a1414c53        5 days ago          116 MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.4             5f8cb769bd73        5 days ago          47.3 MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0            0369cf4303ff        6 months ago        253 MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        8 months ago        45.2 MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        12 months ago       683 kB

4. Retag the downloaded images back to k8s.gcr.io with a script

[root@kube-master ~]# vi tagimage.sh
#!/bin/bash

# Retag every local image from registry.aliyuncs.com/google_containers
# to the k8s.gcr.io name kubeadm expects, then drop the original tag.
newtag=k8s.gcr.io
for i in $(docker images | grep -v TAG | awk '{print $1 ":" $2}')
do
   # keep only the "name:tag" part after the registry and namespace
   image=$(echo "$i" | awk -F '/' '{print $3}')
   docker tag "$i" "$newtag/$image"
   docker rmi "$i"
done
[root@kube-master ~]# chmod +x tagimage.sh
[root@kube-master ~]# source tagimage.sh

[root@kube-master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.20.4             c29e6c583067        5 days ago          118 MB
k8s.gcr.io/kube-apiserver            v1.20.4             ae5eb22e4a9d        5 days ago          122 MB
k8s.gcr.io/kube-controller-manager   v1.20.4             0a41a1414c53        5 days ago          116 MB
k8s.gcr.io/kube-scheduler            v1.20.4             5f8cb769bd73        5 days ago          47.3 MB
k8s.gcr.io/etcd                      3.4.13-0            0369cf4303ff        6 months ago        253 MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        8 months ago        45.2 MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        12 months ago       683 kB
[root@kube-master ~]# scp tagimage.sh root@kube-node1:/root/
[root@kube-master ~]# scp tagimage.sh root@kube-node2:/root/
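
Note that the worker nodes have not pulled any images yet, so the script has nothing to retag there. One way to prepare them (an assumption; the original steps do not show this) is to copy kubeadm.conf over and pull before retagging:

[root@kube-master ~]# scp kubeadm.conf root@kube-node1:/root/
[root@kube-node1 ~]# kubeadm config images pull --config kubeadm.conf
[root@kube-node1 ~]# bash tagimage.sh
(repeat the same on kube-node2)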

III. Deploy the master

  Run:
[root@kube-master ~]# kubeadm init --apiserver-advertise-address 192.168.200.12 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.20.4
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.12]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.200.12 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.200.12 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 58.005367 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node kube-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4hkti9.l165g632jkdl1r8n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.12:6443 --token 4hkti9.l165g632jkdl1r8n \
    --discovery-token-ca-cert-hash sha256:9d123892cc43a816a7f4734c7d56eb33ad27384bd05df8a3de6008c2b010444d

Grant kubectl access to the regular user who administers the cluster day to day

[root@kube-master ~]# su qgx
[qgx@kube-master root]$ mkdir -p $HOME/.kube
[qgx@kube-master root]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
[sudo] password for qgx:
[qgx@kube-master root]$ sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
[qgx@kube-master root]$ echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
[qgx@kube-master root]$ exit

Install the bash-completion package

[root@kube-master ~]# yum install bash-completion -y

Set up kubectl and kubeadm command completion (takes effect at the next login)

[root@kube-master ~]# kubectl completion bash >/etc/bash_completion.d/kubectl
[root@kube-master ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
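
To enable completion in the current shell without logging out first:

[root@kube-master ~]# source <(kubectl completion bash)
[root@kube-master ~]# source <(kubeadm completion bash)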

IV. Install the Pod Network

For the Kubernetes cluster to work, a Pod network must be installed; without one, Pods cannot communicate with each other. Here we use flannel. Deploy it with the following command:

[root@kube-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution: point KUBECONFIG at the admin config:

[root@kube-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@kube-master ~]# source ~/.bash_profile

Even with KUBECONFIG fixed, the apply may still fail: raw.githubusercontent.com is often unreachable due to network restrictions. Fetch the manifest through a proxy instead:

[root@kube-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or create kube-flannel.yml by hand with the following content:

[root@kube-master ~]# vi kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Apply the manifest:

[root@kube-master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@kube-master ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
kube-master   Ready    control-plane,master   30m   v1.20.4
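
Before joining the workers, it is worth confirming the flannel DaemonSet rolled out; its pods carry the app: flannel label defined in the manifest above:

[root@kube-master ~]# kubectl -n kube-system get pods -l app=flannel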

V. Join the Nodes to the Cluster

  Run the following on kube-node1 and kube-node2 to register them with the cluster. The kubeadm init output on the master already included a join command with the token; if you have lost it, run kubeadm token list. If the token has expired, run kubeadm token create, then kubeadm token list | awk -F" " '{print $1}' | tail -n 1 to read the new token (or use the one-liner below). Then join each node:
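
A simpler alternative (a standard kubeadm option, though not used in the original steps) prints a complete, ready-to-paste join command with a fresh token:

[root@kube-master ~]# kubeadm token create --print-join-command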

[root@kube-node1 ~]# kubeadm join 192.168.200.12:6443 --token 4hkti9.l165g632jkdl1r8n \
>     --discovery-token-ca-cert-hash sha256:9d123892cc43a816a7f4734c7d56eb33ad27384bd05df8a3de6008c2b010444d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@kube-master ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
kube-master   Ready    control-plane,master   46m   v1.20.4
kube-node1    Ready    <none>                 25s   v1.20.4
kube-node2    Ready    <none>                 22s   v1.20.4
[root@kube-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-757r6               1/1     Running   0          102m
kube-system   coredns-74ff55c5b-ks744               1/1     Running   0          102m
kube-system   etcd-kube-master                      1/1     Running   0          102m
kube-system   kube-apiserver-kube-master            1/1     Running   0          102m
kube-system   kube-controller-manager-kube-master   1/1     Running   0          102m
kube-system   kube-flannel-ds-gc7h5                 1/1     Running   0          72m
kube-system   kube-flannel-ds-r56vg                 1/1     Running   0          57m
kube-system   kube-flannel-ds-vwtv4                 1/1     Running   0          57m
kube-system   kube-proxy-58n9k                      1/1     Running   0          102m
kube-system   kube-proxy-rrcc5                      1/1     Running   0          57m
kube-system   kube-proxy-w552s                      1/1     Running   0          57m
kube-system   kube-scheduler-kube-master            1/1     Running   0          102m
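
As a final smoke test (an extra step beyond the original walkthrough, assuming Docker Hub is reachable), a throwaway pod verifies that scheduling and the flannel network work end to end:

[root@kube-master ~]# kubectl run test-nginx --image=nginx --restart=Never
[root@kube-master ~]# kubectl get pod test-nginx -o wide     # expect Running with a 10.244.x.x Pod IP
[root@kube-master ~]# kubectl delete pod test-nginx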