
Deploying a kubernetes + docker + flannel + dashboard cluster on CentOS 7


I. Introduction to Kubernetes

Kubernetes (K8S for short) is an open-source container cluster management system that automates the deployment, scaling, and maintenance of container clusters. It is both a container orchestration tool and a leading container-based approach to distributed architectures. Building on Docker, it gives containerized applications deployment, resource scheduling, service discovery, and dynamic scaling, which makes managing containers at scale far more convenient.

A K8S cluster has two kinds of nodes: management (master) nodes and worker nodes. The master is responsible for managing the cluster, exchanging information between nodes and scheduling tasks, and also manages the lifecycle of containers, Pods, Namespaces, PersistentVolumes, and so on. Worker nodes provide the compute resources for containers and Pods; all Pods and containers run on the workers, which communicate with the master through the kubelet service to manage container lifecycles and talk to the other nodes in the cluster.

II. Environment Preparation (perform the following steps on every host)

1. Before installing, make the following preparations on three CentOS 7 hosts.

Set a hostname on each host:

hostnamectl set-hostname k8s-1 # use k8s-2 and k8s-3 on the other two hosts

Alternatively, make the name permanent by writing the configuration file directly. Note that on CentOS 7 the hostname lives in /etc/hostname (the CentOS 6-style entry in /etc/sysconfig/network is no longer consulted), and hostnamectl above already persists the change:

echo "k8s-1" > /etc/hostname

2. Edit the /etc/hosts file and add name resolution for the nodes.

cat <<EOF >>/etc/hosts
192.168.2.87 k8s-1
192.168.2.91 k8s-2
192.168.2.92 k8s-3
EOF
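A quick way to confirm the entries landed on every host is a small check function (a sketch; the `check_hosts` helper name is invented here, and the IP/hostname pairs are this guide's example values):

```shell
# Sketch: succeed only if every expected "IP hostname" pair is present in a
# hosts file. Pass an alternate path as $1 (defaults to /etc/hosts).
check_hosts() {
  local hosts_file="${1:-/etc/hosts}"
  local rc=0 entry
  for entry in "192.168.2.87 k8s-1" "192.168.2.91 k8s-2" "192.168.2.92 k8s-3"; do
    if ! grep -q "^${entry}\$" "$hosts_file"; then
      echo "missing: ${entry}"
      rc=1
    fi
  done
  return $rc
}
```

Run `check_hosts && echo ok` on each node; `ping -c1 k8s-2` is also a fair end-to-end test.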

3. Disable the firewall, SELinux, and swap.

systemctl stop firewalld
systemctl disable firewalld
setenforce 0 # turn SELinux off for the current boot
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config # ...and across reboots
swapoff -a # turn swap off now
sed -i 's/.*swap.*/#&/' /etc/fstab # ...and across reboots
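To be sure the sed really neutralized every swap entry (so swap stays off after a reboot), a one-line check helps (a sketch; `swap_off_in_fstab` is a helper name invented here):

```shell
# Sketch: succeed only if no uncommented line in an fstab file still
# mentions swap. Pass an alternate path as $1 (defaults to /etc/fstab).
swap_off_in_fstab() {
  local fstab="${1:-/etc/fstab}"
  ! grep -vE '^[[:space:]]*#' "$fstab" | grep -q 'swap'
}
```

Also confirm the runtime state: `free -m` should report 0 swap, and `getenforce` should print Permissive (Disabled after a reboot).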

4. Enable IP forwarding. Many guides omit this, but my first installation genuinely failed without it. Note that writing to /proc only lasts until reboot; to persist the setting, also add net.ipv4.ip_forward = 1 via sysctl (for example in the k8s.conf file created in the next step).

echo "1" > /proc/sys/net/ipv4/ip_forward

5. Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the settings:

sysctl --system

6. Configure China-mirror yum repositories:

yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache

Configure a China-mirror Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Configure the docker repository:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

III. Deploying the Master

The master node needs the following components installed:

etcd
flannel
kubernetes

1. Installing etcd

Install it:

yum install etcd -y

Edit etcd's default configuration file:

vi /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" # change 127.0.0.1 to 0.0.0.0
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" # change 127.0.0.1 to 0.0.0.0 (advertising the master's real IP would be cleaner, since this URL is handed out to clients)

Note: etcd.conf contains many more options that are commented out with "#"; they are not listed one by one here. etcd can also be run as a multi-machine cluster, and plenty of references cover that setup; a single instance is enough for the basic functionality in this guide.

Start etcd (and enable it at boot):

systemctl start etcd
systemctl enable etcd

Verify that etcd is healthy:

etcdctl -C http://192.168.2.87:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

2. Installing flannel

Install it:

yum install -y flannel

Configure flannel:

vi /etc/sysconfig/flanneld

# Flanneld configuration options 

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.2.87:2379" # change the IP to the master server's IP

# etcd config key. This is the configuration key that flannel queries 
# For address range assignment 
FLANNEL_ETCD_KEY="/coreos.com/network" # note: change both the key path (coreos.com) and the variable name to FLANNEL_ETCD_KEY (not FLANNEL_ETCD_PREFIX)
#FLANNEL_ETCD_PREFIX="/atomic.io/network" 
#FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.2.87:2379 --ip-masq=true" 

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Write flannel's network configuration key into etcd:

etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'

The command echoes back the value it stored:

{ "Network": "172.17.0.0/16" }
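Before starting flannel it is worth reading the key back (a command fragment to run against the live etcd; adjust the endpoint to your master's IP):

```shell
etcdctl -C http://192.168.2.87:2379 get /coreos.com/network/config
```

It should print the same { "Network": "172.17.0.0/16" } document that was stored above.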

Start flannel and enable it at boot:

systemctl start flanneld.service
systemctl enable flanneld.service

3. Installing kubernetes

Installing k8s itself is a single command:

yum install -y kubernetes

The master needs to run the following components:

kube-apiserver
kube-scheduler
kube-controller-manager

The configuration in detail:

Edit the /etc/kubernetes/apiserver file:

vi /etc/kubernetes/apiserver 

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" # change the address to 0.0.0.0

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080" # uncomment this line

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250" # uncomment this line

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.87:2379" # change the IP to the master server's IP

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceExists,LimitRanger,ResourceQuota" # a trimmed version of the default list above; keep the --admission-control= prefix, and note this variable must stay present

# Add your own!
KUBE_API_ARGS=""

Edit the /etc/kubernetes/config file:

vi /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.2.87:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.2.87:8080" # change the IP to the master server's IP

Start the k8s components:

systemctl start kube-apiserver.service 
systemctl start kube-controller-manager.service 
systemctl start kube-scheduler.service

Enable them at boot:

systemctl enable kube-apiserver.service 
systemctl enable kube-controller-manager.service 
systemctl enable kube-scheduler.service
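With the three services up, the apiserver's componentstatuses query is a quick health check (a command fragment for the master; kubectl talks to the local apiserver on port 8080 here):

```shell
kubectl get componentstatuses
```

scheduler, controller-manager, and etcd-0 should all report Healthy.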

IV. Deploying the Slave Nodes

Each slave node needs the following components installed:

flannel
docker
kubernetes

In order:

1. Installing flannel

Install it:

yum install -y flannel

Configure flannel:

vi /etc/sysconfig/flanneld

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.2.87:2379" # change the IP to the master server's IP

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network" # note: change both the key path (coreos.com) and the variable name to FLANNEL_ETCD_KEY (not FLANNEL_ETCD_PREFIX)
#FLANNEL_ETCD_PREFIX="/atomic.io/network"
#FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.2.87:2379  --ip-masq=true"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Start flannel and enable it at boot:

systemctl start flanneld.service 
systemctl enable flanneld.service
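Once flanneld is running, it should have leased a subnet from the range stored in etcd; the lease is written to /run/flannel/subnet.env at startup (verification fragments for the live node; the exact subnet differs per node):

```shell
cat /run/flannel/subnet.env # FLANNEL_SUBNET should be a /24 inside 172.17.0.0/16
ip -4 addr show # a flannel interface should carry an address from that subnet
```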

2. Installing kubernetes

Install it:

yum install -y kubernetes

Unlike the master, the slave nodes need to run these kubernetes components:

kubelet
kube-proxy

The files to configure, in detail:

Edit the /etc/kubernetes/config file:

vi /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.2.87:8080" # change the IP to the master server's IP

Edit the /etc/kubernetes/kubelet file:

vi /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0" # change the address to 0.0.0.0

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.2.91" # change the IP to this node's own IP

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.2.87:8080" # change the IP to the master server's IP

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest" # after deployment the nodes pull this image; change the address here if it cannot be pulled

# Add your own!
KUBELET_ARGS="--cluster-dns=192.168.2.87 --cluster-domain=playcrab-inc.com" # --cluster-dns sets the DNS server IP handed to pods and --cluster-domain the cluster's DNS suffix; they only take effect once a DNS add-on runs, so adjust (or omit) them for your environment

Start the kube services:

systemctl start kubelet.service 
systemctl start kube-proxy.service

Enable them at boot:

systemctl enable kubelet.service 
systemctl enable kube-proxy.service

With that, the k8s cluster setup is complete; next, verify that it actually works.

Verify the cluster state.

List the endpoints:

kubectl get endpoints

NAME            ENDPOINTS           AGE
kubernetes      192.168.2.87:6443   8d

Show the cluster info:

kubectl cluster-info

Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Get the status of the nodes in the cluster:

kubectl get nodes

NAME           STATUS    AGE
192.168.2.91   Ready     8d
192.168.2.92   Ready     8d

V. Deploying the Dashboard Service

The dashboard is essentially a web UI that connects to the master's API, fetches cluster information through it, and renders it in the browser. It is friendlier for users, though its practical value is limited.

1. The YAML files

Create dashboard.yaml; pay attention to the image and --apiserver-host values (the notes after the file explain what to change):

vi dashboard.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: docker.io/siriuszg/kubernetes-dashboard-amd64:v1.5.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.2.87:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
# The image line above names the kubernetes-dashboard container image; if it cannot be pulled, change the address here.
# The --apiserver-host value must be changed to the master server's IP.

Create the dashboardsvc.yaml file:

vi dashboardsvc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
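The Service above is ClusterIP-only, so the dashboard is reached through the apiserver proxy (the /ui URL used in the verification step below). If you prefer to expose it directly on every node's IP, a NodePort spec is one alternative (a sketch; 30090 is an arbitrary choice from the default 30000-32767 NodePort range):

```yaml
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090   # the dashboard then answers on http://<any-node-ip>:30090/
```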

2. Launch

On the master, run:

kubectl create -f dashboard.yaml
kubectl create -f dashboardsvc.yaml

If something goes wrong, delete the resources, fix the YAML, and create them again:

kubectl delete -f dashboard.yaml
kubectl delete -f dashboardsvc.yaml

With that, the dashboard is deployed.

3. Verify

To verify from the command line, run the following on the master:

kubectl get deployment --all-namespaces

NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard-latest   1         1         1            1           1h

kubectl get svc --all-namespaces 

NAMESPACE     NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)            AGE
default       frontend               10.254.190.16     80:30001/TCP                     1d
default       kubernetes             10.254.0.1        443/TCP                          10d
default       redis-master           10.254.166.81     6379/TCP                         1d
default       redis-slave            10.254.140.100    6379/TCP                         1d
kube-system   kubernetes-dashboard   10.254.84.123     80/TCP                           1h

To verify in the browser, visit:

http://192.168.2.87:8080/ui
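From any machine that can reach the master, the UI endpoint can also be probed from the shell (a command fragment; expect 200 once the dashboard pod is Running):

```shell
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.2.87:8080/ui
```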

4. Tearing the dashboard down

On the master, run:

kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system
kubectl delete svc  kubernetes-dashboard --namespace=kube-system

VI. Errors Encountered

1. The dashboard was unreachable and requests timed out; the master server could not ping the container IPs on the nodes.

This was an iptables problem on the node servers. Checking with iptables -nL showed that, sure enough, the FORWARD chain policy was still DROP. Running:

iptables -P FORWARD ACCEPT

solved the problem.
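Note that Docker 1.13+ resets the FORWARD chain policy to DROP every time the daemon starts, so the one-off fix above is lost on reboot. One way to make it stick (a sketch; the drop-in file name and helper are arbitrary choices) is a systemd drop-in that re-applies the rule after docker.service starts:

```shell
# Emit a systemd drop-in that re-applies the FORWARD policy after docker starts.
forward_accept_dropin() {
  cat <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
}

# On each node (as root):
#   mkdir -p /etc/systemd/system/docker.service.d
#   forward_accept_dropin > /etc/systemd/system/docker.service.d/forward-accept.conf
#   systemctl daemon-reload && systemctl restart docker
```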

2. Error: details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)

yum install -y '*rhsm*'
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

Once done, run docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest and confirm the image can now be pulled.

3. When creating the dashboard, the pod status stayed at ContainerCreating:

kubectl get pod --namespace=kube-system

NAME                                    READY     STATUS              RESTARTS   AGE
kubernetes-dashboard-2094756401-kzhnx   0/1       ContainerCreating   0          10m

Solution:

vi dashboard.yaml

image: docker.io/siriuszg/kubernetes-dashboard-amd64:v1.5.1 # this image address must be one your servers can actually pull
