An open-source container orchestration tool built on the Docker container engine.

k8s runs and deploys applications in exactly the state the user declares.

Kubernetes architecture diagram

Kubernetes Structure

10. Bootstrapping a Cluster with kubeadm

kubeadm is a tool from the official community for quickly deploying a Kubernetes cluster; a cluster can be stood up with just two commands:

1. Create a Master node:
kubeadm init

2. Join a Node to the Master's cluster:
kubeadm join <Master节点的IP和端口>

k8s deployment requirements

  1. One or more machines running CentOS 7.x x86_64;
  2. Hardware: at least 2 GB of memory and at least 2 CPU cores;
  3. All machines in the cluster can communicate with one another;
  4. All machines in the cluster can reach the internet, which is needed to pull images;
  5. The swap partition must be disabled;
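
These requirements can be sanity-checked on each machine with standard commands (a quick sketch; the hostname assumes the /etc/hosts entries added in the next section):

# 2+ CPU cores and 2 GB+ memory
nproc
free -h
# no active swap expected (empty output)
swapon --show
# cluster machines reachable, and the internet reachable for image pulls
ping -c 3 node.k8s
ping -c 3 mirrors.aliyun.com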

Preparing the k8s deployment environment

# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap (k8s refuses swap in the name of performance)
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
swapoff -a # temporary

# Add hosts entries on the master
cat >> /etc/hosts << EOF
10.10.225.89 master.k8s
10.10.225.91 node.k8s
EOF

# Set the bridge parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # make the bridge parameters take effect

# Synchronize the clock
yum install ntpdate -y
ntpdate time.windows.com

Kubernetes deployment, step by step

Install Docker/kubeadm/kubelet/kubectl on every server node.

Docker: Kubernetes' default container runtime is Docker, so Docker must be installed first;

Kubelet: runs on every node of the cluster and is responsible for starting Pods and containers;

Kubeadm: a tool used to initialize the cluster;

Kubectl: the Kubernetes command-line tool; with kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components;

# Step 1: install and start Docker (docker-ce-19.03.13); see section 50、Docker安装配置

# Step 2: install kubeadm, kubelet, kubectl
# Add the Aliyun YUM repo for k8s so the k8s components below have a download source
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, kubectl 1.19.4 (or 1.20.0)
yum install -y kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4
# Start the kubelet service at boot
systemctl enable kubelet
# Verify the installation and versions
yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl
kubelet --version

# Step 3: reboot CentOS so the boot-time services configured above take effect
reboot
# Step 4: initialize the Kubernetes master node (run this command on the master machine)
# --apiserver-advertise-address=10.10.225.89   address of the master node
# --image-repository   use the Aliyun registry for images
# service-cidr and pod-network-cidr must not overlap or conflict with each other or with the host network; in general, pick ranges that neither the host network nor the Pod CIDR uses
kubeadm init \
--apiserver-advertise-address=10.10.225.89 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

# On success, the output ends with a kubeadm join command for adding nodes (run it on the node machines)
kubeadm join 10.10.225.89:6443 \
--token x3mj5w.kcs9lnwg8g06r5ry \
--discovery-token-ca-cert-hash sha256:c6fdd8331fe21a0dc17a3dd9f3df5d4aa73a9715654dbfca3682407e2ca5d6fc

# If initializing the master node fails, undo the initialization with the reset command
kubeadm reset -f

# Commands kubeadm asks you to run (run them on the master machine)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Step 5: deploy the network plugin (pods stay NotReady until a network plugin brings them to readiness)
# Download the kube-flannel.yml file: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
mkdir -p ~/software/k8s/
cat > ~/software/k8s/kube-flannel.yml << EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF

# Apply the kube-flannel.yml file (run on the master machine)
kubectl apply -f ~/software/k8s/kube-flannel.yml 

# Check the node/pod status again; NotReady means not ready yet — wait a while and the nodes become Ready. For example:
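kubectl get nodes
kubectl get pods -n kube-system -o wide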

Step 6: with that, our k8s environment is up

# kubectl help
kubectl --help

# List namespaces; other commands accept -n to specify a namespace
kubectl get namespace
# List nodes
kubectl get node/nodes
# List services
kubectl get service/services/svc
# List deployment controllers
kubectl get deployment/deployments/deploy
# List running pods (one pod can run several docker containers)
kubectl get pod/pods

# Delete a service, deployment, or pod
kubectl delete service/deployment/pod nginx

# Show detailed information about a pod
kubectl describe pod podName
# Follow a pod's logs
kubectl logs -f podName

Step 7: deploying a "containerized application" on Kubernetes

# Deploy Nginx as a deployment controller
kubectl create deployment nginx --image=nginx
# Expose port 80 of the container
kubectl expose deployment nginx --port=80 --type=NodePort

# Access URL: http://NodeIP:Port
http://10.10.225.89:30187
# Deploying a SpringBoot application on the Kubernetes cluster
# 1. Build a JDK image; see section 55、Docker制作镜像-10、JDK8
# 2. Build the SpringBoot image; see section 55、Docker制作镜像-20、SpringBoot
# 3. Deploy the SpringBoot image on k8s

# Dry run to generate the config file
# --dry-run   dry run, nothing is actually created (newer kubectl expects --dry-run=client)
# -o   output format [yaml, json]; can be followed by > filename.yaml
kubectl create deployment service --image=service --dry-run -o yaml > service-deploy.yaml

# Note: the image in deploy.yaml is taken from the local machine, so change the pull policy imagePullPolicy to Never;
containers:
- image: service
  name: service
  imagePullPolicy: Never
# Deploy from the yml file
kubectl apply -f deploy.yaml
# Equivalent to deploying directly with the command below, though the yaml-file approach is the usual choice
kubectl create deployment service --image=service

# Expose the service port:
kubectl expose deployment service --port=8080 --type=NodePort
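
For reference, after the edit the service-deploy.yaml produced by the dry run looks roughly like this (a minimal sketch; the dry-run output on your own machine is authoritative and includes a few more generated fields):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: service
  name: service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service
  template:
    metadata:
      labels:
        app: service
    spec:
      containers:
      - image: service
        name: service
        imagePullPolicy: Never  # use the local image only, never pull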

Step 8: deploying the Kubernetes Dashboard

A general-purpose web-based UI for k8s clusters

# Download the yaml manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
# Edit recommended.yaml as follows
-------------------------------------------
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
-------------------------------------------

# If the download fails, create a kubernetes-dashboard.yaml file with the following content
--------------------------------------------------------
cat > kubernetes-dashboard.yaml << EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF
----------------------------------------------------------------
# Apply the yaml manifest
kubectl apply -f kubernetes-dashboard.yaml


# Check whether the pods came up
kubectl get pod -n kubernetes-dashboard
# Open over https in a browser:
https://kxy89.cn:30001/

# A token is required; generate it with the commands below (boilerplate — just follow along, no need to memorize them):
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the token; simply rerun this command whenever you need it again
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Step 9: exposing the containerized application

There are three ways to expose an application; production environments generally use Ingress-Nginx.

  • NodePort

A NodePort Service is the most primitive way to let external requests reach a service directly: NodePort opens the specified port on every node (VM), and every request sent to that port is forwarded straight to a pod behind the service. The "nodePort" field chooses which port to open on the nodes; if no port is given, a random one is picked, and most of the time Kubernetes should be left to choose it. The shortcomings of this approach:

1. A port can only serve one service;

2. Only ports in the 30000–32767 range can be used;

3. If the node/VM IP address changes, manual intervention is required;

For these reasons, publishing services this way directly is not recommended in production; it is fine for demos, for temporarily running an application, or when the service need not be available at all times.

  • Ingress

    Ingress is an API resource object in the k8s cluster — effectively a cluster gateway. Custom routing rules can be defined to forward, manage, and expose services (groups of pods). It is flexible, and it is the recommended approach for production;

  • LoadBalancer

    LoadBalancer can also expose a service, but it requires requesting a load balancer from a cloud platform. Many cloud platforms support it, but the approach is deeply coupled to the platform (it effectively means buying a service). External traffic is forwarded by the LoadBalancer to the backend Pods; how that is implemented depends on the cloud provider.

The three ports explained

  • nodePort

    The port external machines (for example, a browser on Windows) can reach. If a web application needs to be accessible to other users, set type=NodePort and, say, nodePort=30001; other machines can then open scheme://node:30001 in a browser to reach the service;

  • targetPort

    The container's port, matching the port exposed when the image was built (EXPOSE in the Dockerfile); for example, the official docker.io nginx exposes port 80;

  • port

    The port services inside the Kubernetes cluster use to reach one another. Even though a mysql container exposes 3306, external machines cannot reach the mysql service because no NodePort type is configured; that 3306 is the port other containers inside the cluster use to reach the service;

Example: kubectl expose deployment springboot-k8s --port=8080 --target-port=8080 --type=NodePort
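
The Service manifest equivalent to this expose command makes the three ports explicit (a sketch; the app label matches what kubectl create deployment generates by default, and the nodePort value is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: springboot-k8s
spec:
  type: NodePort
  selector:
    app: springboot-k8s
  ports:
    - port: 8080        # in-cluster port other services use
      targetPort: 8080  # container port (Dockerfile EXPOSE)
      nodePort: 30001   # external port, 30000-32767; omit to let k8s pick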

Exposing the containerized application with Ingress

Ingress translates as entrance or entry — the gate every external request must pass through to enter the k8s cluster. Ingress is not built into kubernetes; it must be installed separately, and there are multiple implementations: Google Cloud Load Balancer, Nginx, Contour, Istio, and so on. Here we choose the officially maintained Ingress Nginx;

ingress-nginx is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer

Steps for using Ingress Nginx

1. Deploy Ingress Nginx;

2. Configure Ingress Nginx rules;

# 1. Deploy Ingress Nginx
# Project page: https://github.com/kubernetes/ingress-nginx
# Option 1: apply directly (requires unrestricted access to GitHub; a proxy may be needed)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/baremetal/deploy.yaml
# Option 2: deploy from a local yaml file
# Around line 328, add the setting: hostNetwork: true
# Around line 332, switch to the Aliyun mirror image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.33.0
scp ingress-nginx.yaml root@10.10.225.89:/root/software/k8s
# Apply it
kubectl apply -f /root/software/k8s/ingress-nginx.yaml
# Check the state of the Ingress controller
kubectl get services,deployments,pods -n ingress-nginx
# 2. Configure the Ingress Nginx rules
scp ingress-nginx-rule.yaml root@10.10.225.89:/root/software/k8s
# Apply it
kubectl apply -f /root/software/k8s/ingress-nginx-rule.yaml
# If an x509 certificate error is reported, run: kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
# Check the rules
kubectl get ingress   # "ing" works as a short form

Deploying Spring Cloud microservices on k8s (RuoYi as the example)

Step 1: build the images locally and push them to the private registry kxy.cn

Step 2: dry-run to generate the yaml config file, then append imagePullSecrets under spec→template→spec (alongside containers), as in the snippet below

# Deploy ruoyi-gateway, ruoyi-auth, ruoyi-modules-system; the gateway module is the example here
kubectl create deployment ruoyi-gateway --image=kxy.cn/ruoyi-gateway:2.5.0 --dry-run -o yaml > ruoyi-gateway.yaml
kubectl apply -f ruoyi-gateway.yaml
kubectl expose deployment ruoyi-gateway --port=8080 --target-port=8080 --type=NodePort
# At this point the gateway is reachable through the port NodePort exposed
# Expose ruoyi-gateway through ingress-nginx instead
kubectl delete service ruoyi-gateway
kubectl expose deployment ruoyi-gateway --port=8090 --target-port=8080
kubectl apply -f ingress-nginx-rule.yaml
--------------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-ingress
spec:
  rules:
  - host: kxy91.cn
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: ruoyi-gateway
            port:
              number: 8090
--------------------------------------------
# The gateway is now reachable on port 80
# Change the RuoYi front-end proxy target to: target: `http://kxy91.cn:80`
containers:
- image: kxy.cn/ruoyi-gateway:2.5.0
  name: ruoyi-gateway
  resources: {}
  # imagePullPolicy: Never  # use this no-pull policy when the worker node already holds the image locally
imagePullSecrets:
- name: test
# Where "test" comes from (reference: https://blog.csdn.net/u012586326/article/details/112343690):
# run the command below to create the Secret object — this is how k8s uses a private registry that requires authentication
kubectl create secret docker-registry test --docker-server=https://kxy.cn --docker-username=stallonely --docker-password=xiangyun74 --docker-email=stallonely@163.com
# List all local imagePullSecrets
kubectl get secret

Step 3: serve the front-end project statically with nginx

server {
    listen       80;
    server_name  kxy.cn;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root /data/app/ruoyi-ui;
        try_files $uri $uri/ /index.html;
        index index.html index.htm;
    }

    location /prod-api/ {
        #proxy_set_header Host $http_host;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://kxy91.cn:80/;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Kubernetes architecture and core components

![](/Users/KXY/work/70.ITAssets/imgs/Kubernetes Structure.png)

1. Master components

API Server

  • The single entry point for all requests;

  • The cluster's unified entry point and the coordinator between all components, exposing its services as a restful api;

  • Manages all transactions: every create/delete/update/query and every watch on object resources is handled by the apiserver and then committed to etcd for storage;

scheduler

  • The scheduler allocates resources: it checks resource usage on the worker nodes, decides which node a pod should be created on, and informs the api server; the api server then hands the task to the kubelet on that node to execute;
  • Chooses a node for each newly created pod according to the scheduling algorithm; pods can be placed freely — on the same node or on different nodes;

controller-manager

  • The controller manager manages pods;
  • Handles the cluster's routine background tasks; each resource type has its own controller, and controller-manager is responsible for managing these controllers;

ETCD

  • The k8s database;
  • A distributed key-value store used to hold cluster state, such as pod and service object data;
  • etcd has a built-in service-discovery mechanism; etcd is typically set up as a three-node cluster, giving three replicas

2. Node components

kubelet

  • The kubelet is the master's agent on each node; it manages the lifecycle of the containers running on its node — creating containers, mounting volumes into pods, downloading secrets, reporting container and node status, and so on; the kubelet turns each pod into a group of containers;

kube-proxy

  • Implements the pod network proxy on the node, maintaining network rules and layer-4 load balancing;
  • Client traffic reaches pods through kube-proxy;

docker

  • The container engine that runs the containers;

pod

  • The smallest deployable unit;

  • A set of containers; a pod may hold one or more containers, and ideally just one — the classic exception being ELK, where the pod carries an extra logstash container to collect logs;

  • Containers within one pod share a network namespace;

  • Pods are ephemeral;

  • Pods can be stateful or stateless;

  • The containers under a pod are not necessarily docker; other container runtimes exist;

controllers

  • Controllers start, stop, and delete pods;

  • replicaset: maintains the expected number of pod replicas;

  • deployment: stateless application deployment, e.g. nginx or apache; scaling within reason does not affect the user experience;

  • statefulset: stateful application deployment — each instance is unique, so changes affect the user experience;

  • daemonset: ensures every node runs a copy of the same pod, in the same namespace;

  • job: one-off tasks;

  • cronjob: scheduled tasks (see the example after this list);
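
As a quick illustration of the job/cronjob controllers, a CronJob can be created imperatively and inspected (the name and schedule here are made up for the example):

kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo hello
kubectl get cronjob hello
kubectl delete cronjob hello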

service

  • Groups a set of pods behind a single entry point, so the pods cannot get "lost";
  • Defines the access policy for a group of pods;
  • Preserves the independence and security of each pod;

storage

volumes

persistent volumes

policies

resource quotas

label

  • A label attaches to a resource and is used to associate objects and to query and filter them;
  • A group of pods carries one shared label;
  • A service manages a group of pods through their label;
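
For example, labels can be attached ad hoc and used as filters (the disktype key is illustrative):

kubectl label node node.k8s disktype=ssd
kubectl get nodes -l disktype=ssd
# list pods together with their labels
kubectl get pods --show-labels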

namespaces

  • Isolate objects from one another logically;
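
For example (the dev namespace is illustrative):

kubectl create namespace dev
kubectl get pods -n dev
kubectl delete namespace dev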

annotations: free-form notes attached to objects;

Kubectl: the terminal control command provided by k8s;

Kubeadm: initializes a k8s cluster or joins nodes to one;



11. What is Kubernetes?

The word Kubernetes comes from Greek and means helmsman or pilot;

Production-Grade Container Orchestration

Automated container deployment, scaling, and management

Kubernetes, also written K8S (the 8 stands for the eight letters of "ubernete" in the middle), is a container orchestration engine Google open-sourced in 2014. It automates the deployment, scaling, and management of containerized applications, grouping the containers that make up an application into logical units for easy management and discovery, and it manages containerized applications across multiple hosts on a cloud platform. Kubernetes aims to make deploying containerized applications simple and efficient, freeing operators from a great deal of complex manual configuration and handling;

Kubernetes carries Google's 15 years of production experience, combined with the community's best practices;

K8S is a CNCF-graduated project: Kubernetes began as a Google-internal project, was later open-sourced, and was then donated to the CNCF so it could thrive;

CNCF stands for Cloud Native Computing Foundation

Website: https://kubernetes.io/

Code: https://github.com/kubernetes/kubernetes

Kubernetes is written in Go, an open-source programming language Google released in 2009;

Kubernetes is production-grade container orchestration — so what does "orchestration" mean?

  1. Arranging things in sequence toward a goal;

  2. Allocating and arranging;

12. Kubernetes Administrator Certification (CKA)

CKA, short for Certified Kubernetes Administrator, is the official global Kubernetes administrator certification launched by the Linux Foundation and the Cloud Native Computing Foundation (CNCF). For a technical team, the CKA can serve as a yardstick for a member's technical ability, and as solid proof of the whole team's ability to operate a Kubernetes platform;

Only the official documentation may be consulted during the exam; while taking it you may only visit

https://kubernetes.io/

https://github.com

Visiting any other website is treated as cheating;

  • The exam lasts 3 hours

  • The CKA is scored out of 100; 66 is a pass

  • Exam fee: USD $300 (RMB ¥2088)

  • One free retake, which expires after one year

13. Kubernetes Architecture Diagram

(k8s overall architecture diagram)

Master

The control node of the k8s cluster; it schedules and manages the cluster and accepts cluster-operation requests from users outside the cluster;

The Master Node consists of the API Server, Scheduler, ClusterState Store (the ETCD database), and the Controller Manager Server;

Nodes

The cluster's worker nodes, which run the user's business application containers;

Nodes are also called Worker Nodes and contain the kubelet, kube-proxy, and Pods (Container Runtime);

14. Ways to Deploy a Kubernetes Environment

There are several main ways to deploy a Kubernetes environment (cluster):

(1) minikube

minikube is a tool that runs Kubernetes locally: it runs a single-node Kubernetes cluster on a personal computer (Windows, macOS, or Linux PC) so you can try out Kubernetes or use it for day-to-day development;

https://kubernetes.io/docs/tutorials/hello-minikube/
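
A typical minikube session, assuming minikube is already installed:

# start a local single-node cluster
minikube start
# verify with kubectl
kubectl get nodes
# tear the cluster down again
minikube delete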

(2) kind

kind is a tool similar to minikube that lets you run Kubernetes on your local computer; it requires Docker to be installed and configured;

https://kind.sigs.k8s.io/
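
A typical kind session, assuming kind and Docker are installed (the cluster name is illustrative):

# create a cluster (each "node" runs as a Docker container)
kind create cluster --name demo
kubectl get nodes
kind delete cluster --name demo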

(3) kubeadm

kubeadm is a K8s deployment tool that provides the two commands kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster;

Official pages:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

(4) Binary packages

Download the release binaries from GitHub and manually install each component to assemble a Kubernetes cluster. The steps are fairly tedious, but they give you a much clearer understanding of each component;

(5) yum install

Install each Kubernetes component via yum to form a Kubernetes cluster; the k8s version in the yum repositories is quite old by now, though, so this approach is rarely used anymore;

(6) Third-party tools

Some community experts have wrapped up tools that can be used to install a k8s environment;

(7) Buy it

Simply buy a managed k8s from a public cloud platform such as Alibaba Cloud — one click and it's done;
