Kubernetes in practice: GlusterFS


System environment

  • CentOS Linux release 7.5.1804 (Core)
  • Docker version 1.13.1, build 8633870/1.13.1
  • Kubernetes v1.10.0

  • Servers

    Server       IP              Role
    master-192   172.30.81.192   k8s-master, gluster-node
    node-193     172.30.81.193   k8s-node, gluster-node
    node-194     172.30.81.194   k8s-node, gluster-client

    GlusterFS deployment


    Deploying the GlusterFS cluster


    Install the software:
    yum install centos-release-gluster -y
    yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
    Copy the /etc/hosts file to all nodes:
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.30.81.192 master-192
    172.30.81.193 node-193
    172.30.81.194 node-194
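
    One way to get the same /etc/hosts onto the other machines is scp; this is only a sketch and assumes root SSH access from master-192 to the other two nodes:
    # run on master-192: push the hosts file to the other cluster members
    scp /etc/hosts node-193:/etc/hosts
    scp /etc/hosts node-194:/etc/hosts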
    
    

    Start glusterd and join the peer group:
    systemctl enable glusterd
    systemctl start glusterd
    gluster peer probe node-193
    [root@master-192 glusterfs]# gluster peer status
    Number of Peers: 1
    
    Hostname: node-193
    Uuid: c9114119-3601-4b20-ba42-7272e4bf72f5
    State: Peer in Cluster (Connected)
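
    It can also be worth confirming the peering from the other side; a quick sanity check, assuming glusterd is already running on node-193:
    # on node-193: master-192 should show up as a connected peer
    gluster peer status
    gluster pool list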
    

    Create the k8s-volume:
    gluster volume create k8s-volume replica 2 master-192:/data/ node-193:/data1/
    gluster volume start k8s-volume
    [root@master-192 glusterfs]# gluster volume info 
     
    Volume Name: k8s-volume
    Type: Replicate
    Volume ID: e61f74c7-9f69-40b5-9211-fc1446493009
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: master-192:/data
    Brick2: node-193:/data1
    Options Reconfigured:
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
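
    Before mounting anything, it is worth confirming that both bricks are online; a quick check with the standard status command:
    # shows per-brick process state, TCP ports and self-heal daemons for the volume
    gluster volume status k8s-volume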
    

    GlusterFS client mount test

    yum install centos-release-gluster -y
    yum install -y glusterfs-fuse glusterfs
    [root@node-194 /]# mount -t glusterfs 172.30.81.192:k8s-volume /mnt
    [root@node-194 /]# ls /mnt/
    index.html  lost+found
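
    A simple way to confirm that writes are replicated to both bricks (a sketch; the brick paths are the ones used when the volume was created):
    # on the client (node-194)
    echo "hello gluster" > /mnt/test.txt
    # on the gluster nodes the file should appear directly in each brick directory
    ls /data/    # master-192
    ls /data1/   # node-193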
    

    Using GlusterFS in Kubernetes


    1. Create glusterfs-endpoints.json
    {
      "kind": "Endpoints",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-cluster"
      },
      "subsets": [
        {
          "addresses": [
            {
              "ip": "172.30.81.192"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        },
        {
          "addresses": [
            {
              "ip": "172.30.81.193"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        }
      ]
    }
    
    kubectl create -f glusterfs-endpoints.json
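    The Endpoints object should now list both GlusterFS node addresses; a quick verification:
    kubectl get endpoints glusterfs-cluster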
    2. Create glusterfs-service.json
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-cluster"
      },
      "spec": {
        "ports": [
          {"port": 1}
        ]
      }
    }
    
    kubectl create -f glusterfs-service.json
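    The Service deliberately has no selector: it is paired with the hand-created Endpoints object of the same name (the usual pattern so that the endpoints persist), and the port value 1 is only a placeholder. To confirm both objects are present:
    kubectl get svc glusterfs-cluster
    kubectl describe endpoints glusterfs-cluster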
    3. Create the PV and PVC
    glusterfs-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: "glusterfs-cluster"
        path: "k8s-volume"
        readOnly: false
    

    glusterfs-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
    
    kubectl create -f glusterfs-pv.yaml
    kubectl create -f glusterfs-pvc.yaml
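    If the sizes and access modes are compatible, the claim should bind to the volume; this can be checked with:
    kubectl get pv
    kubectl get pvc
    # both should eventually report STATUS as Bound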
    4. Create a Pod that uses the PVC
    test.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: test1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: test1
        spec:
          containers:
          - name: test1
            image: nginx
            volumeMounts:
            - name: gs
              mountPath: /usr/share/nginx/html
          volumes:
          - name: gs
            persistentVolumeClaim:
              claimName: pvc
            
          nodeSelector:
            kubernetes.io/hostname: node-194
    
    --- 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: test1
      name: test1
      namespace: default
    spec:
      selector:
        app: test1
      ports:
        - port: 80
      type: NodePort
    
    kubectl create -f test.yaml
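    To check that the pod landed on node-194 and really mounted the GlusterFS volume (the pod name and NodePort below are placeholders; substitute the values kubectl reports):
    kubectl get pods -o wide -l app=test1
    # inside the pod, /usr/share/nginx/html should be a fuse.glusterfs mount
    kubectl exec <test1-pod-name> -- df -hT /usr/share/nginx/html
    # the nginx service is exposed via NodePort and serves the files stored on the volume
    kubectl get svc test1
    curl http://172.30.81.194:<nodeport>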
