Deploying an EFK Stack on a Production Kubernetes Cluster (Dedicated Host, with Authentication)

Environment

  • Kubernetes 1.16.4 cluster
  • 200 GB of disk space on the Node that will host the ES cluster

Choose a Node and Create Directories

  1. Our approach is to place all members of the ES cluster on a single dedicated Node; here that Node is iz2zeiaaq1cifk1tfxu7z9z. SSH into it for the following operations.
  2. Create three directories to serve as Kubernetes local volumes:
mkdir -p /mnt/localpv/es7-0 /mnt/localpv/es7-1 /mnt/localpv/es7-2
  3. Grant permissions on the directories:
chmod -R 777 /mnt/localpv/

Create the Local Volume Resources

  1. Return to the Kubernetes master (or use your container PaaS platform) for the next steps.
  2. Create the namespace file logging-ns.yml:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
  3. Apply the file:
kubectl create -f logging-ns.yml
  4. Create the StorageClass file localstorage-storageclass.yml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
  5. Apply the file:
kubectl create -f localstorage-storageclass.yml
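The provisioner kubernetes.io/no-provisioner means no volumes are ever created dynamically (we pre-create them by hand below), and volumeBindingMode: WaitForFirstConsumer delays binding a PVC until a pod that uses it is scheduled. You can optionally confirm the class was registered:
kubectl get storageclass local-storage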
  6. Create three pairs of PersistentVolume and PersistentVolumeClaim resource files. First, localstorage-pv0.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-0
  labels:
    name: local-storage-pv-0
spec:
  capacity:
    storage: 60Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - iz2zeiaaq1cifk1tfxu7z9z # replace with the Node name chosen at the start of this document
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-storage-pv-es7-cluster-0
  namespace: logging
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  selector:
    matchLabels:
      name: local-storage-pv-0
  resources:
    requests:
      storage: 60Gi

localstorage-pv1.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-1
  labels:
    name: local-storage-pv-1
spec:
  capacity:
    storage: 60Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - iz2zeiaaq1cifk1tfxu7z9z # replace with the Node name chosen at the start of this document
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-storage-pv-es7-cluster-1
  namespace: logging
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  selector:
    matchLabels:
      name: local-storage-pv-1
  resources:
    requests:
      storage: 60Gi

localstorage-pv2.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-2
  labels:
    name: local-storage-pv-2
spec:
  capacity:
    storage: 60Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - iz2zeiaaq1cifk1tfxu7z9z # replace with the Node name chosen at the start of this document
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-storage-pv-es7-cluster-2
  namespace: logging
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  selector:
    matchLabels:
      name: local-storage-pv-2
  resources:
    requests:
      storage: 60Gi
  7. Apply the files:
kubectl create -f localstorage-pv0.yml
kubectl create -f localstorage-pv1.yml
kubectl create -f localstorage-pv2.yml
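At this point kubectl get pv should show the three volumes as Available, while the PVCs in the logging namespace remain Pending: with WaitForFirstConsumer binding they are bound only once the Elasticsearch pods below are scheduled. To check:
kubectl get pv
kubectl get pvc -n logging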

Create the Elasticsearch 7 Cluster

  1. ES7's official authentication method requires certificates, so we bake ready-made certificates into the image. The certificates are generated with the certificate tool that ships with ES; we can supply them, or you can generate your own (a sketch follows the Dockerfile). Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.1
LABEL maintainer="zhuangzf1989@126.com"
COPY elastic-stack-ca.p12 config/certs/elastic-stack-ca.p12
COPY elastic-certificates.p12 config/certs/elastic-certificates.p12
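If you generate the two .p12 files yourself, they can be produced with the elasticsearch-certutil tool bundled with ES. A minimal sketch, run in a throwaway container of the stock image (the empty --pass keeps the keystores password-less, which matches the StatefulSet configuration below, where no keystore password is set):

docker run --rm -v "$PWD":/certs docker.elastic.co/elasticsearch/elasticsearch:7.6.1 \
  bash -c 'bin/elasticsearch-certutil ca --out /certs/elastic-stack-ca.p12 --pass "" && \
           bin/elasticsearch-certutil cert --ca /certs/elastic-stack-ca.p12 --ca-pass "" \
               --out /certs/elastic-certificates.p12 --pass ""'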
  2. Build the new image (replace the registry and image name with your own):
docker build -t registry.cn-beijing.aliyuncs.com/jenphyjohn/elasticsearch-auth:7.6.1 .
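Then push the image so the cluster nodes can pull it (this assumes you are logged in to the registry with docker login):
docker push registry.cn-beijing.aliyuncs.com/jenphyjohn/elasticsearch-auth:7.6.1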
  3. Create the headless Service file elasticsearch7-svc.yml:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch7
  namespace: logging
  labels:
    app: elasticsearch7
spec:
  selector:
    app: elasticsearch7
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
  4. Apply the file:
kubectl create -f elasticsearch7-svc.yml
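Because clusterIP is None, this headless Service gives every StatefulSet pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local; the discovery.seed_hosts list in the StatefulSet below relies on exactly these names. Once the pods exist, resolution can be checked from any pod in the cluster, for example:
nslookup es7-cluster-0.elasticsearch7.logging.svc.cluster.local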
  5. Create the StatefulSet file elasticsearch7-statefulset.yml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es7-cluster
  namespace: logging
spec:
  serviceName: elasticsearch7
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch7
  template:
    metadata:
      labels:
        app: elasticsearch7
    spec:
      containers:
      - name: elasticsearch7
        image: registry.cn-beijing.aliyuncs.com/jenphyjohn/elasticsearch-auth:7.6.1 # replace with the image built above
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: local-storage-pv
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: xpack.security.enabled
            value: "true"
          - name: xpack.security.transport.ssl.enabled
            value: "true"
          - name: xpack.security.transport.ssl.verification_mode
            value: "certificate"
          - name: xpack.security.transport.ssl.keystore.path
            value: "certs/elastic-certificates.p12"
          - name: xpack.security.transport.ssl.truststore.path
            value: "certs/elastic-certificates.p12"
          - name: discovery.zen.minimum_master_nodes # legacy Zen setting, deprecated and ignored by ES7; kept for reference
            value: "2"
          - name: discovery.seed_hosts # hosts used for cluster discovery; see the official Elasticsearch docs
            value: "es7-cluster-0.elasticsearch7,es7-cluster-1.elasticsearch7,es7-cluster-2.elasticsearch7"
          - name: cluster.initial_master_nodes # master nodes for initial bootstrap; in ES7 this replaces the legacy discovery.zen.minimum_master_nodes
            value: "es7-cluster-0,es7-cluster-1,es7-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx4g" # tune to your resources; note Elastic recommends setting Xms and Xmx to the same value
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: local-storage-pv
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        # note: this raises the limit only within this init container's own shell;
        # the ES container's fd limit is governed by the container runtime / node defaults
  volumeClaimTemplates:
  - metadata:
      name: local-storage-pv
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 60Gi
  6. Apply the file:
kubectl create -f elasticsearch7-statefulset.yml
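Watch the pods start one by one (a StatefulSet brings them up in order) and confirm the three PVCs are now Bound:
kubectl get pods -n logging -l app=elasticsearch7 -w
kubectl get pvc -n logging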
  7. Exec into a container to set the passwords (this can be done only once):
kubectl exec -it es7-cluster-0 -n logging -- bash

Inside the container, either set each password interactively:

bin/elasticsearch-setup-passwords interactive

or let ES generate them automatically:

bin/elasticsearch-setup-passwords auto

On success, record the password generated for each built-in user.

Note: do not use purely numeric custom passwords, or the Kibana connection will fail.
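
If you ever need to change one of these passwords later (elasticsearch-setup-passwords can be run only once), the security API can do it. A sketch, assuming port 9200 is reachable (e.g. via the port-forward shown below); replace the credentials as appropriate:

curl --user elastic:YourPassword -H 'Content-Type: application/json' \
  -X POST "http://localhost:9200/_security/user/kibana/_password" \
  -d '{"password":"NewKibanaPassword"}'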

Verify the Installation

  1. Set up a port forward:
kubectl port-forward es7-cluster-0 9200:9200 --namespace=logging
  2. In another terminal, send a test request:
curl "http://localhost:9200/_cluster/health?pretty"

Because authentication is enabled, the request is rejected:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication credentials for REST request [/_cluster/health?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication credentials for REST request [/_cluster/health?pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}

Add Basic auth credentials to the request and try again:

curl --user elastic:YourPassword "http://localhost:9200/_cluster/health?pretty"

This time it returns:

{
  "cluster_name" : "k8s-logs",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

At this point the Elasticsearch cluster is deployed successfully.

Create Kibana 7

  1. Create the Kibana 7 resource file kibana7.yml:
apiVersion: v1
kind: Service
metadata:
  name: kibana7
  namespace: logging
  labels:
    app: kibana7
spec:
  ports:
  - port: 5601
    nodePort: 32209 # change the external port as needed
  type: NodePort
  selector:
    app: kibana7

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana7
  namespace: logging
  labels:
    app: kibana7
spec:
  selector:
    matchLabels:
      app: kibana7
  template:
    metadata:
      labels:
        app: kibana7
    spec:
      containers:
      - name: kibana7
        image: docker.elastic.co/kibana/kibana:7.6.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch7:9200
          - name: ELASTICSEARCH_USERNAME
            value: kibana
          - name: ELASTICSEARCH_PASSWORD
            value: xxxxxx # the kibana password recorded when setting passwords earlier
        ports:
        - containerPort: 5601
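
Rather than hard-coding the password in the manifest, you could keep it in a Secret. A sketch, with a hypothetical Secret named kibana7-es-auth:

kubectl create secret generic kibana7-es-auth -n logging --from-literal=password='xxxxxx'

and reference it from the container spec in place of the literal value:

          - name: ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                name: kibana7-es-auth # hypothetical Secret name
                key: password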
  2. Apply the file:
kubectl create -f kibana7.yml
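Optionally wait for the Deployment to become ready before opening the UI:
kubectl rollout status deployment/kibana7 -n logging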
  3. Open http://<NodeIP>:32209 in a browser and log in to Kibana with the elastic account and its password.

[Figure: Kibana login page (kibanalogin.png)]

Create Fluentd

  1. Create the ConfigMap resource file fluentd-configmap.yml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/xxx-*.log,/var/log/containers/yyy-*.log # set to the application components whose logs you want to collect
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      localtime
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @type forward
    </source>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch7
      port 9200
      user elastic
      password xxxxxx # the elastic account's password
      logstash_format true
      request_timeout    30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
  2. Apply the file:
kubectl create -f fluentd-configmap.yml
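The DaemonSet in the next step references a custom fluentd image that must be built in advance. A minimal Dockerfile sketch (the base image and tag are assumptions; pick whatever suits you) that installs the plugins the ConfigMap above relies on:

FROM fluent/fluentd:v1.9-debian-1
USER root
# plugins used by fluentd-configmap.yml: elasticsearch output,
# kubernetes_metadata filter and detect_exceptions match
RUN gem install fluent-plugin-elasticsearch \
                fluent-plugin-kubernetes-metadata-filter \
                fluent-plugin-detect-exceptions \
                --no-document
USER fluent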
  3. Create the Fluentd DaemonSet resource file fluentd-daemonset.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: registry.cn-beijing.aliyuncs.com/jenphyjohn/fluentd-elasticsearch # custom fluentd-es image, built in advance (see the sketch above)
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      nodeSelector: # node selector: pods are created only on nodes that carry this label
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config

To control flexibly which nodes' logs are collected, we added a nodeSelector attribute:

nodeSelector: # node selector: pods are created only on nodes that carry this label
  beta.kubernetes.io/fluentd-ds-ready: "true"

To collect logs from a particular Node, simply label that Node with beta.kubernetes.io/fluentd-ds-ready=true (see the commands below); adjust this setting to suit your needs.
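
For example, to enable collection on a Node (the second command removes the label again):

kubectl label node <node-name> beta.kubernetes.io/fluentd-ds-ready=true
kubectl label node <node-name> beta.kubernetes.io/fluentd-ds-ready-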

  4. Apply the file:
kubectl create -f fluentd-daemonset.yml
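Verify that a fluentd pod is running on each labeled Node:
kubectl get daemonset fluentd-es -n logging
kubectl get pods -n logging -l k8s-app=fluentd-es -o wide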
  5. Deployment is complete. Once logs are produced, open the Discover panel in the Kibana console and create an index pattern of logstash-* to match all log data in the Elasticsearch cluster; when prompted for the time filter field, choose @timestamp. You can then search and manage logs by any indexed field, for example kubernetes.pod_name:xxx.
    This completes the deployment of the entire EFK stack.

Title: Deploying an EFK Stack on a Production Kubernetes Cluster (Dedicated Host, with Authentication)
Author: jenphyjohn
URL: http://blog.join-e.tech/articles/2020/04/13/1586741965713.html
