Creating a Cluster for EKS on AWS

These are my own study notes.

Creating an EKS cluster from AWS EC2



1. Log in to the EC2 instance over SSH and configure the AWS CLI

First, log in over SSH to a bastion (or similar) EC2 server.
After logging in, run the following command to configure the CLI:
aws configure
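Running `aws configure` prompts for an access key, secret key, default region, and output format, and writes them to two files under ~/.aws. A sketch of what those files end up containing (all values below are placeholders, not real credentials):

```
# ~/.aws/credentials  (placeholder values)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```

eksctl and kubectl pick up the same `default` profile, so once this is in place no further credential setup is needed for the steps below.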

Official documentation
https://docs.aws.amazon.com/ja_jp/cli/latest/userguide/cli-chap-configure.html

2. Display your credentials

The following command returns your AWS account, UserId, and Arn:
aws sts get-caller-identity

A Qiita article was also used as a reference, but its URL is garbled in the source and could not be recovered.
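The command returns a small JSON document, so individual fields can be pulled out with a one-liner. A minimal sketch using a made-up example payload (the account ID, UserId, and ARN below are placeholders):

```shell
# Example payload in the shape returned by `aws sts get-caller-identity`
# (all values are placeholders):
identity='{
  "UserId": "AIDAEXAMPLEUSERID",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/example-user"
}'

# Extract just the account ID, without depending on jq being installed:
echo "$identity" | python3 -c 'import json, sys; print(json.load(sys.stdin)["Account"])'
```

In a real session you would pipe the live command instead: `aws sts get-caller-identity | python3 -c '...'`.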

3. Create the cluster
eksctl create cluster --name test --region=us-east-1 --node-type t3.medium --nodes 3

Example terminal output:
[ℹ]  eksctl version 0.11.1
[ℹ]  using region us-east-1
[ℹ]  setting availability zones to [us-east-1c us-east-1f]
[ℹ]  subnets for us-east-1c - public:***.***.0.0/19 private:***.***.64.0/19
[ℹ]  subnets for us-east-1f - public:***.***.32.0/19 private:***.***.96.0/19
[ℹ]  nodegroup "ng-07a533c2" will use "ami-*****" [AmazonLinux2/1.14]
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "test" in "us-east-1" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=test'
[ℹ]  CloudWatch logging will not be enabled for cluster "test" in "us-east-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --cluster=test'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test" in "us-east-1"
[ℹ]  2 sequential tasks: { create cluster control plane "test", create nodegroup "ng-07a533c2" }
[ℹ]  building cluster stack "eksctl-test-cluster"
[ℹ]  deploying stack "eksctl-test-cluster"
[ℹ]  building nodegroup stack "eksctl-test-nodegroup-ng-07a533c2"
[ℹ]  --nodes-min=3 was set automatically for nodegroup ng-07a533c2
[ℹ]  --nodes-max=3 was set automatically for nodegroup ng-07a533c2
[ℹ]  deploying stack "eksctl-test-nodegroup-ng-07a533c2"
[✔]  all EKS cluster resources for "test" have been created
[✔]  saved kubeconfig as "/home/user/.kube/config"
[ℹ]  adding identity "arn:aws:iam::*****:role/eksctl-test-nodegroup-ng-07a533c2-NodeInstanceRole-1DJPUV1THLN54" to auth ConfigMap
[ℹ]  nodegroup "ng-07a533c2" has 0 node(s)
[ℹ]  waiting for at least 3 node(s) to become ready in "ng-07a533c2"
[ℹ]  nodegroup "ng-07a533c2" has 3 node(s)
[ℹ]  node "**126.ec2.internal" is ready
[ℹ]  node "**152.ec2.internal" is ready
[ℹ]  node "**111.ec2.internal" is ready
[ℹ]  kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "test" in "us-east-1" region is ready

The --node-type flag specifies the instance type.
Instance types:
https://aws.amazon.com/jp/ec2/instance-types/

If you do not specify a node count, two nodes are created (the default is 2).

Official documentation
https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/create-cluster.html
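The same cluster can also be described declaratively and passed to eksctl with the --config-file flag that appears in the --help output below. A sketch of an equivalent config file (the nodegroup name ng-1 is an arbitrary choice; the field names follow the eksctl ClusterConfig schema):

```
# cluster.yaml -- declarative equivalent of the command-line flags used above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test
  region: us-east-1

nodeGroups:
  - name: ng-1            # arbitrary; eksctl generates a name if omitted
    instanceType: t3.medium
    desiredCapacity: 3
```

It would then be applied with: eksctl create cluster -f cluster.yaml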

Partway through, the terminal looked like it had frozen (yikes), but it completed in about 15 minutes.

Let's check the nodes.
[user@**** ~]$ kubectl get nodes
NAME                 STATUS   ROLES    AGE    VERSION
**126.ec2.internal   Ready    <none>   104s   v1.14.7-eks-1861c5
**152.ec2.internal   Ready    <none>   105s   v1.14.7-eks-1861c5
**111.ec2.internal   Ready    <none>   105s   v1.14.7-eks-1861c5
[user@**** ~]$ 

Let's check the cluster.
[user@**** ~]$ kubectl config get-clusters
NAME
test.us-east-1.eksctl.io
[user@**** ~]$ 

4. Other parameters
The following is the output of --help:

[userxxx@******** ~]$ eksctl create cluster --help
Create a cluster

Usage: eksctl create cluster [flags]

General flags:
  -n, --name string               EKS cluster name (generated if unspecified, e.g. "hilarious-wardrobe-1577715578")
      --tags stringToString       A list of KV pairs used to tag the AWS resources (e.g. "Owner=John Doe,Team=Some Team") (default [])
  -r, --region string             AWS region
      --zones strings             (auto-select if unspecified)
      --version string            Kubernetes version (valid options: 1.12, 1.13, 1.14) (default "1.14")
  -f, --config-file string        load configuration from a file (or stdin if set to '-')
      --timeout duration          maximum waiting time for any long-running operation (default 25m0s)
      --install-vpc-controllers   Install VPC controller that's required for Windows workloads
      --managed                   Create EKS-managed nodegroup
      --fargate                   Create a Fargate profile scheduling pods in the default and kube-system namespaces onto Fargate

Initial nodegroup flags:
      --nodegroup-name string          name of the nodegroup (generated if unspecified, e.g. "ng-1a88f5a7")
      --without-nodegroup              if set, initial nodegroup will not be created
  -t, --node-type string               node instance type (default "m5.large")
  -N, --nodes int                      total number of nodes (for a static ASG) (default 2)
  -m, --nodes-min int                  minimum nodes in ASG (default 2)
  -M, --nodes-max int                  maximum nodes in ASG (default 2)
      --node-volume-size int           node volume size in GB
      --node-volume-type string        node volume type (valid options: gp2, io1, sc1, st1) (default "gp2")
      --max-pods-per-node int          maximum number of pods per node (set automatically if unspecified)
      --ssh-access                     control SSH access for nodes. Uses ~/.ssh/id_rsa.pub as default key path if enabled
      --ssh-public-key string          SSH public key to use for nodes (import from local path, or use existing EC2 key pair)
      --node-ami string                Advanced use cases only. If 'static' is supplied (default) then eksctl will use static AMIs; if 'auto' is supplied then eksctl will automatically set the AMI based on version/region/instance type; if any other value is supplied it will override the AMI to use for the nodes. Use with extreme care. (default "static")
      --node-ami-family string         Advanced use cases only. If 'AmazonLinux2' is supplied (default), then eksctl will use the official AWS EKS AMIs (Amazon Linux 2); if 'Ubuntu1804' is supplied, then eksctl will use the official Canonical EKS AMIs (Ubuntu 18.04). (default "AmazonLinux2")
  -P, --node-private-networking        whether to make nodegroup networking private
      --node-security-groups strings   Attach additional security groups to nodes, so that it can be used to allow extra ingress/egress access from/to pods
      --node-labels stringToString     Extra labels to add when registering the nodes in the nodegroup, e.g. "partition=backend,nodeclass=hugememory" (default [])
      --node-zones strings             (inherited from the cluster if unspecified)

Cluster and nodegroup add-ons flags:
      --asg-access            enable IAM policy for cluster-autoscaler
      --external-dns-access   enable IAM policy for external-dns
      --full-ecr-access       enable full access to ECR
      --appmesh-access        enable full access to AppMesh
      --alb-ingress-access    enable full access for alb-ingress-controller

VPC networking flags:
      --vpc-cidr ipNet                 global CIDR to use for VPC (default 192.168.0.0/16)
      --vpc-private-subnets strings    re-use private subnets of an existing VPC
      --vpc-public-subnets strings     re-use public subnets of an existing VPC
      --vpc-from-kops-cluster string   re-use VPC from a given kops cluster
      --vpc-nat-mode string            VPC NAT mode, valid options: HighlyAvailable, Single, Disable (default "Single")

AWS client flags:
  -p, --profile string        AWS credentials profile to use (overrides the AWS_PROFILE environment variable)
      --cfn-role-arn string   IAM role used by CloudFormation to call AWS API on your behalf

Output kubeconfig flags:
      --kubeconfig string               path to write kubeconfig (incompatible with --auto-kubeconfig) (default "/home/cloud_user/.kube/config")
      --authenticator-role-arn string   AWS IAM role to assume for authenticator
      --set-kubeconfig-context          if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten (default true)
      --auto-kubeconfig                 save kubeconfig file by cluster name, e.g. "/home/cloud_user/.kube/eksctl/clusters/hilarious-wardrobe-1577715578"
      --write-kubeconfig                toggle writing of kubeconfig (default true)

Common flags:
  -C, --color string   toggle colorized logs (valid options: true, false, fabulous) (default "true")
  -h, --help           help for this command
  -v, --verbose int    set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)

Use 'eksctl create cluster [command] --help' for more information about a command.

Where is the EKS master node?



EKS is a fully managed Kubernetes service, so there is no need to create the master node yourself. Makes sense. (my impression at the time)

This article was helpful:
"Getting started with building Amazon EKS using the eksctl command" (dev.classmethod.jp; the full URL is garbled in the source)

My Kubernetes training (studying) continues...
