Accessing Amazon RDS from AWS EKS

Contents


  • Premise
  • Setup the MySQL Database - Amazon RDS
  • Create the VPC
  • Create the subnets
  • Create the DB subnet group
  • Create the VPC security group
  • Create a DB instance in the VPC
  • Amazon RDS setup diagram

  • Setup the EKS cluster
  • Let's build the bridge!
  • Create and Accept a VPC Peering Connection
  • Update the EKS cluster VPC's route table
  • Update the RDS VPC's route table
  • Update the RDS instance's security group

  • Test the connection

    1. Premise

    When moving your services to the Kubernetes ecosystem for the first time, it is best practice to port only the stateless parts to begin with.

    Here's the problem I had to solve: our service uses Amazon RDS for MySQL. Both the RDS instance and the EKS cluster reside within their own dedicated VPCs. How do resources running inside AWS EKS communicate with the database?



    Let's dive right in!

    2. Setup the MySQL Database (Amazon RDS)

    We will be using the AWS CLI to set up the MySQL database.

    2.1 Create the VPC

    We will first create a VPC with the CIDR block 10.0.0.0/24, which accommodates 254 hosts in all. This is more than enough to host our RDS instance.

    $ aws ec2 create-vpc --cidr-block 10.0.0.0/24 | jq '{VpcId:.Vpc.VpcId,CidrBlock:.Vpc.CidrBlock}'
    {
        "VpcId": "vpc-0cf40a5f6db5eb3cd",
        "CidrBlock": "10.0.0.0/24"
    }
    
    # Export the RDS VPC ID for easy reference in the subsequent commands
    $ export RDS_VPC_ID=vpc-0cf40a5f6db5eb3cd
    

    2.2 Create the subnets

    RDS instances launched in a VPC must have a DB subnet group. A DB subnet group is a collection of subnets within a VPC. Each DB subnet group must have subnets in at least two Availability Zones within a given AWS Region.

    We will divide the RDS VPC (RDS_VPC_ID) into two equal subnets: 10.0.0.0/25 and 10.0.0.128/25.
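
    The subnet math above can be double-checked offline. Here is a minimal sketch using Python's standard ipaddress module, with the CIDR blocks from this walkthrough:

```python
import ipaddress

# The RDS VPC CIDR block from section 2.1
vpc = ipaddress.ip_network("10.0.0.0/24")
print(vpc.num_addresses)  # 256 addresses in all (254 usable hosts)

# Split the /24 into two equal /25 subnets
halves = list(vpc.subnets(prefixlen_diff=1))
for subnet in halves:
    print(subnet)
# 10.0.0.0/25
# 10.0.0.128/25
```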

    Let's create the first subnet in the availability zone ap-south-1b:

    $ aws ec2 create-subnet --availability-zone "ap-south-1b" --vpc-id ${RDS_VPC_ID} --cidr-block 10.0.0.0/25 | jq '{SubnetId:.Subnet.SubnetId,AvailabilityZone:.Subnet.AvailabilityZone,CidrBlock:.Subnet.CidrBlock,VpcId:.Subnet.VpcId}'
    # Response:
    {
      "SubnetId": "subnet-042a4bee8e92287e8",
      "AvailabilityZone": "ap-south-1b",
      "CidrBlock": "10.0.0.0/25",
      "VpcId": "vpc-0cf40a5f6db5eb3cd"
    }
    
    


    And the second subnet in the availability zone ap-south-1a:

    $ aws ec2 create-subnet --availability-zone "ap-south-1a" --vpc-id ${RDS_VPC_ID} --cidr-block 10.0.0.128/25 | jq '{SubnetId:.Subnet.SubnetId,AvailabilityZone:.Subnet.AvailabilityZone,CidrBlock:.Subnet.CidrBlock,VpcId:.Subnet.VpcId}'
    # Response:
    {
      "SubnetId": "subnet-0c01a5ba480b930f4",
      "AvailabilityZone": "ap-south-1a",
      "CidrBlock": "10.0.0.128/25",
      "VpcId": "vpc-0cf40a5f6db5eb3cd"
    }
    


    Each VPC has an implicit router that controls where network traffic is directed. Each subnet in a VPC must be explicitly associated with a route table, which controls the routing for that subnet.
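
    To build intuition for what that implicit router does, here is a hedged sketch of longest-prefix-match route selection in Python. The route entries mirror the ones we configure for the RDS VPC later in section 4; the targets are illustrative labels, not how AWS represents them internally:

```python
import ipaddress

def select_route(route_table, dest_ip):
    """Pick the most specific (longest prefix) route matching dest_ip."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, target) for net, target in route_table
               if dest in ipaddress.ip_network(net)]
    if not matches:
        return None
    # The longest prefix wins, just like a VPC route table
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

# Illustrative route table for the RDS VPC
routes = [
    ("10.0.0.0/24", "local"),                      # traffic within the VPC
    ("192.168.0.0/16", "vpc-peering-connection"),  # traffic to the EKS VPC
]

print(select_route(routes, "10.0.0.42"))       # -> local
print(select_route(routes, "192.168.30.190"))  # -> vpc-peering-connection
```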

    Let's go ahead and associate the two subnets we just created with the VPC's route table.

    # Fetch the route table information
    $ aws ec2 describe-route-tables --filters Name=vpc-id,Values=${RDS_VPC_ID} | jq '.RouteTables[0].RouteTableId'
    "rtb-0e680357de97595b1"
    
    # For easy reference
    $ export RDS_ROUTE_TABLE_ID=rtb-0e680357de97595b1
    
    # Associate the first subnet with the route table
    $ aws ec2 associate-route-table --route-table-id rtb-0e680357de97595b1 --subnet-id subnet-042a4bee8e92287e8
    {
        "AssociationId": "rtbassoc-02198db22b2d36c97"
    }
    
    # Associate the second subnet with the route table
    $ aws ec2 associate-route-table --route-table-id rtb-0e680357de97595b1 --subnet-id subnet-0c01a5ba480b930f4
    {
        "AssociationId": "rtbassoc-0e5c3959d360c92ab"
    }
    
    


    2.3 Create the DB subnet group

    Now that we have two subnets spanning two availability zones, we can go ahead and create the DB subnet group.

    $ aws rds create-db-subnet-group --db-subnet-group-name  "DemoDBSubnetGroup" --db-subnet-group-description "Demo DB Subnet Group" --subnet-ids "subnet-042a4bee8e92287e8" "subnet-0c01a5ba480b930f4" | jq '{DBSubnetGroupName:.DBSubnetGroup.DBSubnetGroupName,VpcId:.DBSubnetGroup.VpcId,Subnets:.DBSubnetGroup.Subnets[].SubnetIdentifier}'
    # Response:
    {
      "DBSubnetGroupName": "demodbsubnetgroup",
      "VpcId": "vpc-0cf40a5f6db5eb3cd",
      "Subnets": "subnet-0c01a5ba480b930f4"
    }
    {
      "DBSubnetGroupName": "demodbsubnetgroup",
      "VpcId": "vpc-0cf40a5f6db5eb3cd",
      "Subnets": "subnet-042a4bee8e92287e8"
    }
    

    2.4 Create the VPC security group

    The penultimate step to creating the DB instance is creating a VPC security group, an instance-level virtual firewall with rules to control inbound and outbound traffic.

    $ aws ec2 create-security-group --group-name DemoRDSSecurityGroup --description "Demo RDS security group" --vpc-id ${RDS_VPC_ID}
    {
        "GroupId": "sg-06800acf8d6279971"
    }
    
    # Export the RDS VPC Security Group ID for easy reference in the subsequent commands
    $ export RDS_VPC_SECURITY_GROUP_ID=sg-06800acf8d6279971
    

    We will use this security group at a later point, to set an inbound rule to allow all traffic from the EKS cluster to the RDS instance.

    2.5 Create a DB instance in the VPC

    $ aws rds create-db-instance \
      --db-name demordsmyqldb \
      --db-instance-identifier demordsmyqldbinstance \
      --allocated-storage 10 \
      --db-instance-class db.t2.micro \
      --engine mysql \
      --engine-version "5.7.26" \
      --master-username demoappuser \
      --master-user-password demoappuserpassword \
      --no-publicly-accessible \
      --vpc-security-group-ids ${RDS_VPC_SECURITY_GROUP_ID} \
      --db-subnet-group-name "demodbsubnetgroup" \
      --availability-zone ap-south-1b \
      --port 3306 | jq '{DBInstanceIdentifier:.DBInstance.DBInstanceIdentifier,Engine:.DBInstance.Engine,DBName:.DBInstance.DBName,VpcSecurityGroups:.DBInstance.VpcSecurityGroups,EngineVersion:.DBInstance.EngineVersion,PubliclyAccessible:.DBInstance.PubliclyAccessible}'
    
    # Response:
    {
      "DBInstanceIdentifier": "demordsmyqldbinstance",
      "Engine": "mysql",
      "DBName": "demordsmyqldb",
      "VpcSecurityGroups": [
        {
          "VpcSecurityGroupId": "sg-06800acf8d6279971",
          "Status": "active"
        }
      ],
      "EngineVersion": "5.7.26",
      "PubliclyAccessible": false
    }
    

    We can verify that the DB instance has been created in the UI as well:



    2.6 Amazon RDS setup diagram



    3. Setup the EKS cluster

    Spinning up an EKS cluster on AWS is as simple as:

    $ eksctl create cluster --name=demo-eks-cluster --nodes=2 --region=ap-south-1
    [ℹ]  using region ap-south-1
    [ℹ]  setting availability zones to [ap-south-1a ap-south-1c ap-south-1b]
    [ℹ]  subnets for ap-south-1a - public:192.168.0.0/19 private:192.168.96.0/19
    [ℹ]  subnets for ap-south-1c - public:192.168.32.0/19 private:192.168.128.0/19
    [ℹ]  subnets for ap-south-1b - public:192.168.64.0/19 private:192.168.160.0/19
    [ℹ]  nodegroup "ng-ae09882f" will use "ami-09c3eb35bb3be46a4" [AmazonLinux2/1.12]
    [ℹ]  creating EKS cluster "demo-eks-cluster" in "ap-south-1" region
    [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
    [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --name=demo-eks-cluster'
    [ℹ]  2 sequential tasks: { create cluster control plane "demo-eks-cluster", create nodegroup "ng-ae09882f" }
    [ℹ]  building cluster stack "eksctl-demo-eks-cluster-cluster"
    [ℹ]  deploying stack "eksctl-demo-eks-cluster-cluster"
    [ℹ]  building nodegroup stack "eksctl-demo-eks-cluster-nodegroup-ng-ae09882f"
    [ℹ]  --nodes-min=2 was set automatically for nodegroup ng-ae09882f
    [ℹ]  --nodes-max=2 was set automatically for nodegroup ng-ae09882f
    [ℹ]  deploying stack "eksctl-demo-eks-cluster-nodegroup-ng-ae09882f"
    [✔]  all EKS cluster resource for "demo-eks-cluster" had been created
    [✔]  saved kubeconfig as "/Users/Bensooraj/.kube/config"
    [ℹ]  adding role "arn:aws:iam::account_number:role/eksctl-demo-eks-cluster-nodegroup-NodeInstanceRole-1631FNZJZTDSK" to auth ConfigMap
    [ℹ]  nodegroup "ng-ae09882f" has 0 node(s)
    [ℹ]  waiting for at least 2 node(s) to become ready in "ng-ae09882f"
    [ℹ]  nodegroup "ng-ae09882f" has 2 node(s)
    [ℹ]  node "ip-192-168-30-190.ap-south-1.compute.internal" is ready
    [ℹ]  node "ip-192-168-92-207.ap-south-1.compute.internal" is ready
    [ℹ]  kubectl command should work with "/Users/Bensooraj/.kube/config", try 'kubectl get nodes'
    [✔]  EKS cluster "demo-eks-cluster" in "ap-south-1" region is ready
    
    

    We will create a Kubernetes Service named mysql-service of type ExternalName, aliasing the RDS endpoint demordsmyqldbinstance.cimllxgykuy3.ap-south-1.rds.amazonaws.com.

    Run kubectl apply -f mysql-service.yaml to create the service.

    # mysql-service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: mysql-service
      name: mysql-service
    spec:
      externalName: demordsmyqldbinstance.cimllxgykuy3.ap-south-1.rds.amazonaws.com
      type: ExternalName
    

    Now, clients running inside the pods within the cluster can connect to the RDS instance using mysql-service.

    Let's test the connection using a throwaway busybox pod:

    $ kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh
    If you don't see a command prompt, try pressing enter.
    / # nc mysql-service 3306
    ^Cpunt!
    
    

    It is evident that the pod is unable to get through! Let's solve the problem now.
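
    For completeness, the same reachability probe that nc performs can be sketched in Python with a plain TCP socket. The host name mysql-service and port 3306 are the ones from this setup, and the check only resolves from inside the cluster:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Inside a pod, this resolves through the ExternalName service
    print(can_connect("mysql-service", 3306))
```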

    4. Let's build the bridge!

    We are going to create a VPC Peering Connection to facilitate communication between resources in the two VPCs. According to the documentation:

    A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).
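
    One prerequisite worth calling out: VPC peering only works between VPCs whose CIDR blocks do not overlap. A quick sanity check for the two CIDRs in this setup:

```python
import ipaddress

rds_vpc = ipaddress.ip_network("10.0.0.0/24")     # RDS VPC
eks_vpc = ipaddress.ip_network("192.168.0.0/16")  # EKS cluster VPC (eksctl default)

# Peering can only be established between non-overlapping VPCs
print(rds_vpc.overlaps(eks_vpc))  # False, so we are good to peer
```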



    4.1 Create and Accept a VPC Peering Connection

    To create a VPC peering connection, navigate to:

  • VPC console: https://console.aws.amazon.com/vpc/
  • Choose Peering Connections, then click Create Peering Connection.
  • Configure the details as follows (select the EKS VPC as the Requester and the RDS VPC as the Accepter).

  • Click Create Peering Connection.

  • Select the Peering Connection you just created, then click Actions => Accept. In the confirmation dialog, click Yes, Accept.


  • Don't forget to export the VPC peering connection ID:

    $ export VPC_PEERING_CONNECTION_ID=pcx-0cc408e65493fe197
    


    4.2 Update the EKS cluster VPC's route table

    # Fetch the route table associated with the 3 public subnets of the VPC created by `eksctl`:
    $ aws ec2 describe-route-tables --filters Name="tag:aws:cloudformation:logical-id",Values="PublicRouteTable" | jq '.RouteTables[0].RouteTableId'
    "rtb-06103bd0704b3a9ee"
    
    # For easy reference
    export EKS_ROUTE_TABLE_ID=rtb-06103bd0704b3a9ee
    
    # Add route: All traffic to (destination) the RDS VPC CIDR block is via the VPC Peering Connection (target)
    $ aws ec2 create-route --route-table-id ${EKS_ROUTE_TABLE_ID} --destination-cidr-block 10.0.0.0/24 --vpc-peering-connection-id ${VPC_PEERING_CONNECTION_ID}
    {
        "Return": true
    }
    

    4.3 Update the RDS VPC's route table

    # Add route: All traffic to (destination) the EKS cluster CIDR block is via the VPC Peering Connection (target)
    $ aws ec2 create-route --route-table-id ${RDS_ROUTE_TABLE_ID} --destination-cidr-block 192.168.0.0/16 --vpc-peering-connection-id ${VPC_PEERING_CONNECTION_ID}
    {
        "Return": true
    }
    

    4.4 Update the RDS instance's security group

    Allow all ingress traffic from the EKS cluster to the RDS instance on port 3306:

    $ aws ec2 authorize-security-group-ingress --group-id ${RDS_VPC_SECURITY_GROUP_ID} --protocol tcp --port 3306 --cidr 192.168.0.0/16
    

    5. Test the connection

    $ kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh
    If you don't see a command prompt, try pressing enter.
    / # nc mysql-service 3306
    N
    5.7.26-logR&=lk`xTH???mj    _5#K)>mysql_native_password
    

    We can see that busybox can now successfully talk to the RDS instance using the mysql-service service.
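
    The garbled bytes nc printed are the MySQL server's initial handshake packet: after a 4-byte packet header come a protocol version byte (10) and a NUL-terminated server version string, which is why "5.7.26-log" is visible in the noise. A hedged sketch of pulling the version out of such a packet, using a synthetic payload in place of real wire bytes:

```python
def parse_handshake(packet: bytes):
    """Parse protocol and server version from a MySQL initial handshake.

    Expects the raw packet: 3-byte little-endian payload length, 1-byte
    sequence id, then the payload (protocol version byte followed by a
    NUL-terminated server version string).
    """
    payload = packet[4:]               # skip the 4-byte packet header
    protocol_version = payload[0]      # 0x0a for handshake v10
    end = payload.index(b"\x00", 1)    # server version is NUL-terminated
    server_version = payload[1:end].decode("ascii")
    return protocol_version, server_version

# Synthetic packet mimicking the greeting we saw from the RDS instance;
# the fields after the version string are truncated for brevity
body = b"\x0a" + b"5.7.26-log\x00" + b"\x00" * 10
packet = len(body).to_bytes(3, "little") + b"\x00" + body
print(parse_handshake(packet))  # (10, '5.7.26-log')
```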

    That said, this is what our final setup looks like (a lot of hard work, guys):


    Note:
    With this setup, every pod in the EKS cluster can access the RDS instance. Depending on your use case, this may or may not be ideal for your architecture. To implement more fine-grained access control, consider setting up NetworkPolicy resources.

    Useful resources:
  • Visual Subnet Calculator
  • jq - Command-line JSON processor
  • AWS CLI Command Reference
  • AWS VPC Peering