Provisioning Amazon Elastic Kubernetes Service (Amazon EKS) with Terraform
Tags: eks, terraform, aws, kubernetes
HashiCorp Terraform is an infrastructure-as-code (IaC) tool that lets you define both cloud and on-premises resources in human-readable configuration files that you can version, reuse, and share.
An Amazon EKS cluster with Terraform:
This repository is used to maintain the infrastructure configuration of the EKS cluster.
Prerequisites
To configure the AWS CLI, you need to enter an AWS Access Key ID, Secret Access Key, region, and output format. Appropriate permissions are required to create the EKS cluster resources.
$ aws configure
AWS Access Key ID [None]: AWS_ACCESS_KEY_ID
AWS Secret Access Key [None]: AWS_SECRET_ACCESS_KEY
Default region name [None]: AWS_REGION
Default output format [None]: json
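Before going further, it can be worth confirming that the credentials actually resolve to the expected account, using a standard AWS CLI check:
$ aws sts get-caller-identity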
Initial Terraform configuration
First, declare the AWS provider so Terraform can interact with AWS resources such as VPC, EKS, S3, EC2, and so on.
providers.tf
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region     = var.region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  # other options for authentication
}
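Passing keys through Terraform variables works, but it risks leaking credentials into state files and version control. As an alternative, the AWS provider can also read the standard AWS environment variables or ~/.aws/credentials; a minimal sketch:
# Alternative providers.tf: no credentials in code.
# Export them in the shell before running Terraform instead:
#   export AWS_ACCESS_KEY_ID=...
#   export AWS_SECRET_ACCESS_KEY=...
provider "aws" {
  region = var.region
}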
Terraform state setup
Next, create a Terraform backend to specify the location of the backend Terraform state file in S3.
Remote state stores that state file remotely instead of on your local filesystem.
backend.tf
terraform {
  backend "s3" {
    bucket  = "mondev-terraform-states"
    key     = "terraform-aws-eks-mondev.tfstate"
    region  = "ap-southeast-1"
    encrypt = true
  }
}
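If several people (or CI jobs) run Terraform against the same state, the S3 backend can also lock state through a DynamoDB table. A sketch, assuming a pre-created table named terraform-locks with a LockID partition key:
terraform {
  backend "s3" {
    bucket         = "mondev-terraform-states"
    key            = "terraform-aws-eks-mondev.tfstate"
    region         = "ap-southeast-1"
    encrypt        = true
    dynamodb_table = "terraform-locks" # hypothetical lock table
  }
}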
Network infrastructure setup
Set up the VPC, subnets, security groups, and so on.
Amazon EKS requires subnets in at least two different Availability Zones.
vpc.tf
vpc.tf
# VPC
resource "aws_vpc" "mondev" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name                                           = "${var.project}-vpc",
    "kubernetes.io/cluster/${var.project}-cluster" = "shared"
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = var.availability_zones_count
  vpc_id                  = aws_vpc.mondev.id
  cidr_block              = cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name                                           = "${var.project}-public-subnet"
    "kubernetes.io/cluster/${var.project}-cluster" = "shared"
    "kubernetes.io/role/elb"                       = 1
  }
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = var.availability_zones_count
  vpc_id            = aws_vpc.mondev.id
  cidr_block        = cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, count.index + var.availability_zones_count)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name                                           = "${var.project}-private-subnet"
    "kubernetes.io/cluster/${var.project}-cluster" = "shared"
    "kubernetes.io/role/internal-elb"              = 1
  }
}
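# A worked example of the subnet math above (with the defaults
# vpc_cidr = "10.0.0.0/16" and subnet_cidr_bits = 8, so /16 + 8 = /24):
#   cidrsubnet("10.0.0.0/16", 8, 0) = "10.0.0.0/24"  -> public,  AZ 0
#   cidrsubnet("10.0.0.0/16", 8, 1) = "10.0.1.0/24"  -> public,  AZ 1
#   cidrsubnet("10.0.0.0/16", 8, 2) = "10.0.2.0/24"  -> private, AZ 0
#   cidrsubnet("10.0.0.0/16", 8, 3) = "10.0.3.0/24"  -> private, AZ 1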
# Internet Gateway
resource "aws_internet_gateway" "mondev" {
  vpc_id = aws_vpc.mondev.id

  tags = {
    "Name" = "${var.project}-igw"
  }

  depends_on = [aws_vpc.mondev]
}

# Route Table(s)
# Route the public subnet traffic through the IGW
resource "aws_route_table" "main" {
  vpc_id = aws_vpc.mondev.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.mondev.id
  }

  tags = {
    Name = "${var.project}-Default-rt"
  }
}

# Route table and subnet associations
resource "aws_route_table_association" "internet_access" {
  count          = var.availability_zones_count
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.main.id
}

# NAT Elastic IP
resource "aws_eip" "main" {
  vpc = true

  tags = {
    Name = "${var.project}-ngw-ip"
  }
}

# NAT Gateway
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.main.id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name = "${var.project}-ngw"
  }
}

# Default route for the VPC's main route table (used implicitly by the
# private subnets, which have no explicit association): send outbound
# traffic through the NAT gateway
resource "aws_route" "main" {
  route_table_id         = aws_vpc.mondev.default_route_table_id
  nat_gateway_id         = aws_nat_gateway.main.id
  destination_cidr_block = "0.0.0.0/0"
}

# Security group for public subnet
resource "aws_security_group" "public_sg" {
  name   = "${var.project}-Public-sg"
  vpc_id = aws_vpc.mondev.id

  tags = {
    Name = "${var.project}-Public-sg"
  }
}

# Security group traffic rules
resource "aws_security_group_rule" "sg_ingress_public_443" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "sg_ingress_public_80" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "sg_egress_public" {
  security_group_id = aws_security_group.public_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Security group for data plane
resource "aws_security_group" "data_plane_sg" {
  name   = "${var.project}-Worker-sg"
  vpc_id = aws_vpc.mondev.id

  tags = {
    Name = "${var.project}-Worker-sg"
  }
}

# Security group traffic rules
resource "aws_security_group_rule" "nodes" {
  description       = "Allow nodes to communicate with each other"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks = flatten([
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 0),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 1),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3),
  ])
}

resource "aws_security_group_rule" "nodes_inbound" {
  description       = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 1025
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks = flatten([
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3),
  ])
  # cidr_blocks = flatten([var.private_subnet_cidr_blocks])
}

resource "aws_security_group_rule" "node_outbound" {
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Security group for control plane
resource "aws_security_group" "control_plane_sg" {
  name   = "${var.project}-ControlPlane-sg"
  vpc_id = aws_vpc.mondev.id

  tags = {
    Name = "${var.project}-ControlPlane-sg"
  }
}

# Security group traffic rules
resource "aws_security_group_rule" "control_plane_inbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks = flatten([
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 0),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 1),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2),
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3),
  ])
  # cidr_blocks = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])
}

resource "aws_security_group_rule" "control_plane_outbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}
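One caveat: the flatten([cidrsubnet(...), ...]) lists above hard-code subnet indices 0 through 3, which only matches availability_zones_count = 2. A sketch of an equivalent expression that scales with the variable (the local name all_subnet_cidrs is illustrative):
locals {
  # All subnet CIDRs: indices 0..(2 * AZ count - 1), public first, then private
  all_subnet_cidrs = [
    for i in range(var.availability_zones_count * 2) :
    cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, i)
  ]
}
# e.g. cidr_blocks = local.all_subnet_cidrs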
EKS cluster setup
Create the EKS cluster. A Kubernetes cluster managed by Amazon EKS calls other AWS services on your behalf to manage the resources the service uses. For example, when you use managed nodes, EKS creates an Auto Scaling group for each instance group.
Set up the IAM roles and policies for EKS: for EKS to work properly, it needs a few IAM roles with the relevant policies predefined.
IAM role: create a role with the permissions Amazon EKS needs to create AWS resources for the Kubernetes cluster and to interact with AWS APIs.
IAM policy: attach a trust policy that allows Amazon EKS to assume and use the role, then attach the AmazonEKSClusterPolicy managed policy to it.
eks-cluster.tf
# EKS Cluster
resource "aws_eks_cluster" "mondev" {
  name     = "${var.project}-cluster"
  role_arn = aws_iam_role.cluster.arn
  version  = "1.22"

  vpc_config {
    # security_group_ids = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id] # already applied to subnet
    subnet_ids              = flatten([aws_subnet.public[*].id, aws_subnet.private[*].id])
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = ["0.0.0.0/0"]
  }

  tags = merge(
    var.tags
  )

  depends_on = [
    aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
  ]
}

# EKS Cluster IAM Role
resource "aws_iam_role" "cluster" {
  name = "${var.project}-Cluster-Role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.cluster.name
}

# EKS Cluster Security Group
resource "aws_security_group" "eks_cluster" {
  name        = "${var.project}-cluster-sg"
  description = "Cluster communication with worker nodes"
  vpc_id      = aws_vpc.mondev.id

  tags = {
    Name = "${var.project}-cluster-sg"
  }
}

resource "aws_security_group_rule" "cluster_inbound" {
  description              = "Allow worker nodes to communicate with the cluster API Server"
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "cluster_outbound" {
  description              = "Allow cluster API Server to communicate with the worker nodes"
  from_port                = 1024
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 65535
  type                     = "egress"
}
Node group (managed) setup
Create the node group that will run the application workloads.
IAM role: as with the EKS cluster, before creating the worker node group you must create an IAM role with the permissions the node group needs to communicate with other AWS services.
IAM policy: attach a trust policy that allows Amazon EC2 to assume and use the role, then attach the AWS managed policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.
node-groups.tf
# EKS Node Groups
resource "aws_eks_node_group" "mondev" {
  cluster_name    = aws_eks_cluster.mondev.name
  node_group_name = var.project
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = aws_subnet.private[*].id

  scaling_config {
    desired_size = 2
    max_size     = 5
    min_size     = 1
  }

  ami_type       = "AL2_x86_64" # AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM
  capacity_type  = "ON_DEMAND"  # ON_DEMAND, SPOT
  disk_size      = 20
  instance_types = ["t2.medium"]

  tags = merge(
    var.tags
  )

  depends_on = [
    aws_iam_role_policy_attachment.node_AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.node_AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.node_AmazonEC2ContainerRegistryReadOnly,
  ]
}

# EKS Node IAM Role
resource "aws_iam_role" "node" {
  name = "${var.project}-Worker-Role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role_policy_attachment" "node_AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.node.name
}

# EKS Node Security Group
resource "aws_security_group" "eks_nodes" {
  name        = "${var.project}-node-sg"
  description = "Security group for all nodes in the cluster"
  vpc_id      = aws_vpc.mondev.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name                                           = "${var.project}-node-sg"
    "kubernetes.io/cluster/${var.project}-cluster" = "owned"
  }
}

resource "aws_security_group_rule" "nodes_internal" {
  description              = "Allow nodes to communicate with each other"
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.eks_nodes.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "nodes_cluster_inbound" {
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_nodes.id
  source_security_group_id = aws_security_group.eks_cluster.id
  to_port                  = 65535
  type                     = "ingress"
}
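If you need SSH access to the workers or want to pin Kubernetes labels on them, aws_eks_node_group also supports optional labels and remote_access blocks. A sketch of such a node group, where the key pair name my-key-pair is a hypothetical, pre-existing EC2 key pair:
resource "aws_eks_node_group" "mondev_ssh" {
  cluster_name    = aws_eks_cluster.mondev.name
  node_group_name = "${var.project}-ssh"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = aws_subnet.private[*].id

  scaling_config {
    desired_size = 2
    max_size     = 5
    min_size     = 1
  }

  # Kubernetes labels applied to every node in this group
  labels = {
    "environment" = "dev"
  }

  remote_access {
    ec2_ssh_key               = "my-key-pair" # hypothetical EC2 key pair
    source_security_group_ids = [aws_security_group.eks_nodes.id]
  }
}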
Terraform variables
Create an IAM user with administrator access to the AWS account and obtain an access key and secret key for authentication.
variables.tf
variable "aws_access_key" {
description = "AWS access key"
type = string
}
variable "aws_secret_key" {
description = "AWS secret key"
type = string
}
variable "region" {
description = "The aws region. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html"
type = string
default = "ap-southeast-1"
}
variable "availability_zones_count" {
description = "The number of AZs."
type = number
default = 2
}
variable "project" {
description = "MonirulProject"
type = string
}
variable "vpc_cidr" {
description = "The CIDR block for the VPC. Default value is a valid CIDR, but not acceptable by AWS and should be overridden"
type = string
default = "10.0.0.0/16"
}
variable "subnet_cidr_bits" {
description = "The number of subnet bits for the CIDR. For example, specifying a value 8 for this parameter will create a CIDR with a mask of /24."
type = number
default = 8
}
variable "tags" {
description = "A map of tags to add to all resources"
type = map(string)
default = {
"Project" = "MonirulProject"
"Environment" = "Development"
"Owner" = "Monirul"
}
}
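As a refinement, Terraform variable blocks also accept validation rules. For instance, since Amazon EKS requires subnets in at least two AZs, the AZ-count variable could guard against an unusable value; a sketch:
variable "availability_zones_count" {
  description = "The number of AZs."
  type        = number
  default     = 2

  validation {
    condition     = var.availability_zones_count >= 2
    error_message = "Amazon EKS requires subnets in at least two Availability Zones."
  }
}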
Set the Terraform variable values according to your requirements (and keep real credentials out of version control).
terraform.tfvars
aws_access_key           = "aaaaaaaaaaaaaa"
aws_secret_key           = "bbbbbbbbbbbbbbbbbbbbb"
region                   = "ap-southeast-1"
availability_zones_count = 2
project                  = "MonirulProject"
vpc_cidr                 = "10.0.0.0/16"
subnet_cidr_bits         = 8
And likewise declare the Terraform data sources; this one discovers the Availability Zones available in the region.
data-sources.tf
data "aws_availability_zones" "available" {
state = "available"
}
Running the EKS infrastructure
Once the resource declarations are complete, you can deploy all the resources:
$ terraform init
$ terraform plan
$ terraform apply
Output
$ terraform output
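The project layout below lists an outputs.tf that this walkthrough never shows. A minimal sketch of what it might expose (the output names are illustrative; the attributes are standard on aws_eks_cluster):
outputs.tf
output "cluster_name" {
  value = aws_eks_cluster.mondev.name
}

output "cluster_endpoint" {
  value = aws_eks_cluster.mondev.endpoint
}

output "cluster_ca_certificate" {
  value = aws_eks_cluster.mondev.certificate_authority[0].data
}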
Project structure
Cluster
|-- README.md
|-- backend.tf
|-- data-sources.tf
|-- eks-cluster.tf
|-- node-groups.tf
|-- outputs.tf
|-- providers.tf
|-- terraform.tfvars
|-- variables.tf
|-- vpc.tf
Access the cluster and create other namespaces if needed. The update-kubeconfig command writes the cluster's connection details into your local kubeconfig:
$ aws eks --region ap-southeast-1 update-kubeconfig --name MonirulProject-cluster
$ kubectl create ns dev && kubectl create ns stg && kubectl create ns prd
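To confirm that the worker nodes have joined the cluster, a quick check with standard kubectl:
$ kubectl get nodes -o wide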
Availability Zones (Pod topology)
Because the node group spans private subnets in two different Availability Zones, the Kubernetes scheduler can spread pods across zones using the topology.kubernetes.io/zone node label.
Cleaning up the workspace
$ terraform destroy
Workspaces for multiple environments
Workspaces let you manage multiple distinct sets of infrastructure resources/environments.
Instead of creating a new directory for each environment you manage, you can simply create and use workspaces:
$ terraform workspace new dev
$ terraform workspace new stg
$ terraform workspace new prd
$ terraform workspace list
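Within the configuration, the active workspace is exposed as terraform.workspace, so per-environment naming can key off it; a brief sketch:
locals {
  env = terraform.workspace # "dev", "stg", or "prd"
}

# e.g. name = "${var.project}-${local.env}-cluster"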
Reference
https://dev.to/monirul87/provisioning-amazon-elastic-kubernetes-service-amazon-eks-with-terraform-5cd6