1. Introduction
The Dashboard is a graphical management UI for Kubernetes. Because my Kubernetes v1.5.2 is fairly old, installing it the way the official site describes did not work, so here I install it manually using the two YAML files below.
2. The two YAML files
# cat dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - args:
        - --apiserver-host=http://192.168.114.3:8080
        image: docker.io/googlecontainer/kubernetes-dashboard-amd64:v1.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubernetes-dashboard
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
      dnsPolicy: ClusterFirst
      restartPolicy: Always


# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

 

kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml
But checking the pods shows the pod failed to start:
# kubectl get pods --namespace kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
kubernetes-dashboard-1302489372-51k19   0/1       ImagePullBackOff   0          9m
A READY column of 0/1 means the container never came up properly.
Check the details of the failure:
# kubectl describe pod kubernetes-dashboard-1302489372-51k19 --namespace kube-system
The obvious error is that pulling the image docker.io/googlecontainer/kubernetes-dashboard-amd64:v1.6.1 failed, so let's first push this image manually to my private registry.
Here I pull the mirrored copy of the image (the one boxed in my screenshot):
# docker pull docker.io/siriuszg/kubernetes-dashboard-amd64:v1.6.1
Tag it:
# docker tag docker.io/siriuszg/kubernetes-dashboard-amd64:v1.6.1 192.168.114.3:5000/kubernetes-dashboard-amd64:v1.6.1
Push 192.168.114.3:5000/kubernetes-dashboard-amd64:v1.6.1 to the local private registry so the nodes can pull it from there:
# docker push 192.168.114.3:5000/kubernetes-dashboard-amd64:v1.6.1
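The pull/tag/push sequence above only rewrites the image reference: drop the source registry and namespace, keep the final name:tag component, and prepend the private registry address. A small helper sketching that rewrite (hypothetical, for illustration only):

```shell
#!/bin/sh
# retag_for_private: print the private-registry name for a public image
# reference, keeping only the last path component ("name:tag").
# Hypothetical helper; the registry address matches this post's setup.
PRIVATE_REGISTRY="192.168.114.3:5000"

retag_for_private() {
    src="$1"
    # strip everything up to and including the last '/'
    name_tag="${src##*/}"
    echo "${PRIVATE_REGISTRY}/${name_tag}"
}

retag_for_private docker.io/siriuszg/kubernetes-dashboard-amd64:v1.6.1
# prints: 192.168.114.3:5000/kubernetes-dashboard-amd64:v1.6.1
```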
Now edit the dashboard-controller.yaml file from above.
The main change is the image source.
Replace the Google image with the local one:
image: 192.168.114.3:5000/kubernetes-dashboard-amd64:v1.6.1
Don't go to Google to pull the image:
imagePullPolicy: IfNotPresent
Configure the apiserver IP and port:
- --apiserver-host=http://192.168.114.3:8080
Re-apply the dashboard-controller.yaml file:
# kubectl apply -f dashboard-controller.yaml
deployment "kubernetes-dashboard" configured
Checking again, the image-pull error is gone, but one more error remains.
Check the pod's logs to see what error is causing it:
# kubectl logs kubernetes-dashboard-1444176572-nlkkt -n kube-system
Using HTTP port: 8443
Using apiserver-host location: http://192.168.114.3:8080
Creating API server client for http://192.168.114.3:8080
E0717 09:42:40.498613 1 config.go:322] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
E0717 09:42:40.498613 1 config.go:322] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
log: exiting because of error: log: cannot create log: open /tmp/dashboard.kubernetes-dashboard-1444176572-nlkkt.unknownuser.log.ERROR.20200717-094240.1: no such file or directory
Successful initial request to the apiserver, version: v1.5.2
Creating in-cluster Heapster client
As the logs show, the failure is caused by the missing /var/run/secrets/kubernetes.io/serviceaccount/ca.crt file.
This error occurs because the secrets Kubernetes creates by default do not contain the root CA certificate needed to access kube-apiserver.
The fix is as follows:
Download easyrsa3:
# cd /usr/local/src
# curl -L -O https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
# tar xzf easy-rsa.tar.gz
# cd easy-rsa-master/easyrsa3
# ./easyrsa init-pki
Create the root CA certificate:
# ./easyrsa --batch "--req-cn=192.168.114.3@`date +%s`" build-ca nopass
Create the server certificate and key:
# ./easyrsa --subject-alt-name="IP:192.168.114.3" build-server-full server nopass
Copy pki/ca.crt, pki/issued/server.crt and pki/private/server.key into the target directory:
# mkdir /etc/kubernetes/pki
# cp pki/ca.crt pki/issued/server.crt pki/private/server.key /etc/kubernetes/pki/
# chmod 644 /etc/kubernetes/pki/*
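easyrsa is a wrapper around openssl, so the resulting chain can be sanity-checked with openssl directly before touching the apiserver config. The sketch below builds a throwaway CA and server certificate under a temp directory purely to demonstrate the verification command; on the real host you would point the final `openssl verify` at /etc/kubernetes/pki/ca.crt and server.crt:

```shell
#!/bin/sh
# Demonstrates verifying a server certificate against its CA with openssl.
# Uses a throwaway CA/cert pair; paths and CNs are stand-ins.
set -e
dir=$(mktemp -d)

# throwaway self-signed CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=test-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# server key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=192.168.114.3" \
    -keyout "$dir/server.key" -out "$dir/server.csr" 2>/dev/null
openssl x509 -req -in "$dir/server.csr" -days 1 \
    -CA "$dir/ca.crt" -CAkey "$dir/ca.key" -CAcreateserial \
    -out "$dir/server.crt" 2>/dev/null

# the actual sanity check: does server.crt chain to ca.crt?
openssl verify -CAfile "$dir/ca.crt" "$dir/server.crt"
# expected output: <tmpdir>/server.crt: OK
```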
Edit /etc/kubernetes/apiserver:
# Add your own!
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/server.crt --tls-private-key-file=/etc/kubernetes/pki/server.key"
Edit /etc/kubernetes/controller-manager:
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/pki/server.key --root-ca-file=/etc/kubernetes/pki/ca.crt"
Delete the old secrets:
# kubectl get secrets --all-namespaces
# systemctl stop kube-controller-manager
# kubectl delete secret default-token-s1vfh
# kubectl delete secret default-token-jct68 --namespace=kube-system
Restart kube-apiserver and start kube-controller-manager again:
# systemctl restart kube-apiserver
# systemctl start kube-controller-manager
Check whether the newly created secrets contain the root certificate:
# kubectl get secrets --all-namespaces
# kubectl describe secret default-token-27w5m --namespace=kube-system
The newly created secrets now include ca.crt.
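`kubectl describe secret` only lists the data keys; to confirm the ca.crt payload is a real certificate you can extract and base64-decode it. The sketch below runs that pipeline against a canned `kubectl get secret ... -o yaml`-style snippet (the token name and certificate content are stand-ins; on the cluster you would feed it the real command's output):

```shell
#!/bin/sh
# Extract and decode the ca.crt field of a service-account token secret.
# sample_yaml stands in for:
#   kubectl get secret default-token-27w5m --namespace=kube-system -o yaml
set -e

sample_yaml() {
cat <<EOF
apiVersion: v1
kind: Secret
data:
  ca.crt: $(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
  token: c2VjcmV0LXRva2Vu
EOF
}

# pull the ca.crt value out of the YAML and base64-decode it
sample_yaml | awk '/ca.crt:/ {print $2}' | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```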
Recreate the Dashboard pod:
kubectl delete -f dashboard-controller.yaml
kubectl delete -f dashboard-service.yaml
kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml
Now visit http://192.168.114.3:8080/ui/.
Note: if systemctl restart kube-apiserver fails to start, check the files under /etc/kubernetes/pki/ and change their permissions to 644.
If visiting 192.168.114.3:8080/ui/ returns the error:
Error: 'dial tcp 172.16.43.2:9090: getsockopt: connection timed out' Trying to reach: 'http://172.16.43.2:9090/'
check that flanneld (systemctl start flanneld.service) started correctly on every node.
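That timeout means the host serving /ui/ cannot reach the pod network (172.16.43.2 here). Besides checking flanneld, a quick TCP probe run from each node against the pod IP and port from the error message narrows down which hop is broken. A small helper (hypothetical, using bash's /dev/tcp):

```shell
#!/bin/bash
# check_tcp HOST PORT: report whether a TCP connection can be opened.
# Run from each node against the pod IP/port shown in the error.
check_tcp() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
        && echo "$1:$2 reachable" \
        || echo "$1:$2 UNREACHABLE"
}

# example: probe the dashboard pod from this node
check_tcp 172.16.43.2 9090
```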
