Combining IAM Roles for Service Accounts with Pod-Level Security Groups for a Defense-in-Depth Strategy
Tags: security, aws, postgres, kubernetes
Enabling IAM roles for service accounts
To assign an IAM role to a pod, we first create an OIDC identity provider and a role that pods can assume through web identity federation. Later we will attach an IAM policy with the rds-db:connect permission to this role. In infra/plan/eks-cluster.tf:
data "tls_certificate" "cert" {
url = aws_eks_cluster.eks.identity[0].oidc[0].issuer
}
resource "aws_iam_openid_connect_provider" "openid" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.cert.certificates[0].sha1_fingerprint]
url = aws_eks_cluster.eks.identity[0].oidc[0].issuer
}
data "aws_iam_policy_document" "web_identity_assume_role_policy" {
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.openid.url, "https://", "")}:sub"
values = ["system:serviceaccount:metabase:metabase"]
}
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.openid.url, "https://", "")}:aud"
values = ["sts.amazonaws.com"]
}
principals {
identifiers = [aws_iam_openid_connect_provider.openid.arn]
type = "Federated"
}
}
}
resource "aws_iam_role" "web_identity_role" {
assume_role_policy = data.aws_iam_policy_document.web_identity_assume_role_policy.json
name = "web-identity-role-${var.env}"
}
Combining an OpenID Connect (OIDC) identity provider with Kubernetes service account annotations lets you use IAM roles at the pod level. Inside EKS, an admission controller injects AWS session credentials into the pod based on the annotation of the service account the pod uses. The credentials are exposed through the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables.[3] For a detailed explanation of this feature, see [Introducing fine-grained IAM roles for service accounts][aws-7].
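As a hedged illustration of what this looks like from inside such a pod (the role ARN below is a made-up example value, not one from this setup; the token path is the standard projected-token mount point), the injected environment is roughly:

```shell
# Example values only: EKS injects these automatically into pods whose
# service account carries the eks.amazonaws.com/role-arn annotation.
AWS_ROLE_ARN="arn:aws:iam::123456789012:role/web-identity-role-dev"
AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"

# An AWS SDK (or the CLI) exchanges the projected token for temporary
# credentials with a call equivalent to:
#   aws sts assume-role-with-web-identity \
#     --role-arn "$AWS_ROLE_ARN" \
#     --web-identity-token "$(cat "$AWS_WEB_IDENTITY_TOKEN_FILE")" \
#     --role-session-name my-session
echo "role: $AWS_ROLE_ARN"
```

No application change is needed: recent AWS SDKs pick these two variables up automatically.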
We can now add an IAM role policy that allows access to the RDS instance from the Kubernetes pods.
Full infra/plan/eks-cluster.tf:
resource "aws_iam_role_policy" "rds_access_from_k8s_pods" {
name = "rds-access-from-k8s-pods-${var.env}"
role = aws_iam_role.web_identity_role.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"rds-db:connect",
]
Effect = "Allow"
Resource = "arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${aws_db_instance.postgresql.resource_id}/metabase"
}
]
})
}
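The Resource ARN above follows the rds-db format arn:aws:rds-db:{region}:{account-id}:dbuser:{DbiResourceId}/{db-user-name}. A small sketch with placeholder values (the region, account id and resource id below are made up) shows how the pieces assemble:

```shell
# Placeholder inputs; in the Terraform code these come from var.region,
# data.aws_caller_identity.current and aws_db_instance.postgresql.resource_id.
REGION="eu-west-1"
ACCOUNT_ID="123456789012"
DB_RESOURCE_ID="db-ABCDEFGHIJKL01234"
DB_USER="metabase"

# Note: the last segment is the *database* user name, and the resource id is
# the instance's DbiResourceId, not its instance identifier.
ARN="arn:aws:rds-db:${REGION}:${ACCOUNT_ID}:dbuser:${DB_RESOURCE_ID}/${DB_USER}"
echo "$ARN"
```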
Security groups for pods
To enable security groups for pods, we must attach the managed policy AmazonEKSVPCResourceController to the cluster role. This allows the role to manage network interfaces, their private IP addresses, and their attachment to and detachment from instances.
Full infra/plan/eks-cluster.tf:
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.eks.name
}
Now let's create our pod security group.
Full infra/plan/eks-node-group.tf:
resource "aws_security_group" "rds_access" {
name = "rds-access-from-pod-${var.env}"
description = "Allow RDS Access from Kubernetes Pods"
vpc_id = aws_vpc.main.id
ingress {
from_port = 0
to_port = 0
protocol = "-1"
self = true
}
ingress {
from_port = 53
to_port = 53
protocol = "tcp"
security_groups = [aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id]
}
ingress {
from_port = 53
to_port = 53
protocol = "udp"
security_groups = [aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "rds-access-from-pod-${var.env}"
Environment = var.env
}
}
To let the pods reach Amazon RDS, we must allow the pod security group as a source of ingress/egress traffic on the RDS port.
Update the VPC security group aws_security_group.sg in infra/plan/rds.tf with the following ingress/egress rules:
ingress {
from_port = var.rds_port
to_port = var.rds_port
protocol = "tcp"
security_groups = [aws_security_group.rds_access.id]
}
egress {
from_port = 1025
to_port = 65535
protocol = "tcp"
security_groups = [aws_security_group.rds_access.id]
}
Add the following outputs:
output "sg-eks-cluster" {
value = aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id
}
output "sg-rds-access" {
value = aws_security_group.rds_access.id
}
Deploy the changes:
cd infra/envs/dev
terraform apply ../../plan/
Kubernetes configuration
Connect to the EKS cluster:
aws eks --region $REGION update-kubeconfig --name $EKS_CLUSTER_NAME
Now we need to allow pods to receive their own network interfaces. Before doing this, print the cluster's CNI version with the following command:
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Your Amazon EKS cluster must be running Kubernetes version 1.17 and Amazon EKS platform version eks.3 or later.
Upgrade the CNI version [1]:
curl -o aws-k8s-cni.yaml https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.9/config/v1.7/aws-k8s-cni.yaml
sed -i "s/us-west-2/$REGION/g" aws-k8s-cni.yaml
kubectl apply -f aws-k8s-cni.yaml
Set the ENABLE_POD_ENI variable to true in the aws-node daemonset to let the CNI plugin manage network interfaces for pods. When this setting is true, the plugin adds the label vpc.amazonaws.com/has-trunk-attached=true to every eligible node in the cluster. The VPC resource controller creates and attaches a special network interface, called a trunk network interface, with the description aws-k8s-trunk-eni [2].
kubectl set env daemonset -n kube-system aws-node ENABLE_POD_ENI=true
With the following command, you can see which nodes have the label vpc.amazonaws.com/has-trunk-attached set to true:
$ kubectl get nodes -o wide -l vpc.amazonaws.com/has-trunk-attached=true
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-3-109.eu-west-1.compute.internal Ready <none> 56m v1.18.9-eks-d1db3c 10.0.3.109 <none> Amazon Linux 2 4.14.219-164.354.amzn2.x86_64 docker://19.3.13
ip-10-0-7-157.eu-west-1.compute.internal Ready <none> 56m v1.18.9-eks-d1db3c 10.0.7.157 34.253.89.183 Amazon Linux 2 4.14.219-164.354.amzn2.x86_64 docker://19.3.13
Testing the Metabase connection to the RDS instance
We deploy the k8s manifests using Kustomize. Add the following manifests in the folder config/base.
config/base/service-account.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: metabase
name: metabase
config/base/security-group-policy.yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
name: metabase
spec:
serviceAccountSelector:
matchLabels:
app: metabase
config/base/database-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: metabase
type: Opaque
data:
password: metabase
config/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: metabase
labels:
app: metabase
spec:
selector:
matchLabels:
app: metabase
replicas: 1
template:
metadata:
labels:
app: metabase
spec:
containers:
- name: metabase
image: metabase/metabase
imagePullPolicy: IfNotPresent
config/base/service.yaml
apiVersion: v1
kind: Service
metadata:
name: metabase
labels:
app: metabase
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 3000
protocol: TCP
selector:
app: metabase
Finally, our config/base/kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: metabase
resources:
- security-group-policy.yaml
- service-account.yaml
- deployment.yaml
- service.yaml
- database-secret.yaml
Now that we have the kustomize base, we can patch the manifests with the values provided by the terraform outputs.
Create config/envs/$ENV/service-account.patch.yaml. We annotate the service account with the IAM role we created earlier for RDS access:
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: <RDS_ACCESS_ROLE_ARN>
labels:
app: metabase
name: metabase
Create config/envs/$ENV/security-group-policy.patch.yaml. The SecurityGroupPolicy CRD specifies which security groups to assign to pods. Within a namespace, pods can be selected by pod labels, or by the labels of the service account associated with a pod. Define the security group IDs to apply:
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
name: metabase
spec:
serviceAccountSelector:
matchLabels:
app: metabase
securityGroups:
groupIds:
- <POD_SECURITY_GROUP_ID>
- <EKS_CLUSTER_SECURITY_GROUP_ID>
Create config/envs/$ENV/database-secret.patch.yaml:
apiVersion: v1
kind: Secret
metadata:
name: metabase
type: Opaque
data:
password: <MB_DB_PASS>
Create config/envs/$ENV/deployment.patch.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: metabase
labels:
app: metabase
spec:
selector:
matchLabels:
app: metabase
replicas: 1
template:
metadata:
labels:
app: metabase
spec:
serviceAccountName: metabase
containers:
- name: metabase
image: metabase/metabase
imagePullPolicy: IfNotPresent
env:
- name: MB_DB_TYPE
value: postgres
- name: MB_DB_HOST
value: <MB_DB_HOST>
- name: MB_DB_PORT
value: "5432"
- name: MB_DB_DBNAME
value: metabase
- name: MB_DB_USER
value: metabase
- name: MB_DB_PASS
valueFrom:
secretKeyRef:
name: metabase
key: password
nodeSelector:
type: private
And the config/envs/$ENV/kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: metabase
resources:
- ../../base
patchesStrategicMerge:
- security-group-policy.patch.yaml
- service-account.patch.yaml
- database-secret.patch.yaml
- deployment.patch.yaml
Let's replace the placeholders with real values:
cd config/envs/dev
# Generate DB auth token
METABASE_PWD=$(aws rds generate-db-auth-token --hostname $(terraform output private-rds-endpoint) --port 5432 --username metabase --region $REGION)
METABASE_PWD=$(echo -n $METABASE_PWD | base64 -w 0 )
sed -i "s/<MB_DB_PASS>/$METABASE_PWD/g" database-secret.patch.yaml
sed -i "s/<POD_SECURITY_GROUP_ID>/$(terraform output sg-rds-access)/g; s/<EKS_CLUSTER_SECURITY_GROUP_ID>/$(terraform output sg-eks-cluster)/g" security-group-policy.patch.yaml
sed -i "s,<RDS_ACCESS_ROLE_ARN>,$(terraform output rds-access-role-arn),g" service-account.patch.yaml
sed -i "s/<MB_DB_HOST>/$(terraform output private-rds-endpoint)/g" deployment.patch.yaml
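Note that Kubernetes Secret data must be base64-encoded, which is why the generated token is piped through base64 before being substituted into database-secret.patch.yaml. A minimal sketch of that step with a hypothetical token value (a real token from generate-db-auth-token is a long, signed query string):

```shell
# Hypothetical token; a real one comes from `aws rds generate-db-auth-token`.
METABASE_PWD="example-token"

# -n avoids encoding a trailing newline; -w 0 disables line wrapping
# (GNU coreutils base64), so the value fits on one line in the Secret.
ENCODED=$(echo -n "$METABASE_PWD" | base64 -w 0)
echo "$ENCODED"
```

Also keep in mind that these tokens expire after 15 minutes, so the Secret has to be regenerated if the pod starts later than that.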
Apply the manifests:
kubectl create namespace metabase
kubectl config set-context --current --namespace=metabase
kustomize build . | kubectl apply -f -
Let's check that it works:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
metabase-6d47d7b94b-796sx 1/1 Running 2 98s
$ kubectl describe pods metabase-6d47d7b94b-796sx
Name: metabase-6d47d7b94b-796sx
Namespace: metabase
Priority: 0
Node: ip-10-0-3-109.eu-west-1.compute.internal/10.0.3.109
[..]
Labels: app=metabase
pod-template-hash=6d47d7b94b
Annotations: kubernetes.io/psp: eks.privileged
vpc.amazonaws.com/pod-eni:
[{"eniId":"eni-054df22ad2b1b89c3","ifAddress":"02:3b:a8:a7:9c:f5","privateIp":"10.0.3.128","vlanId":1,"subnetCidr":"10.0.2.0/23"}]
Status: Running
IP: 10.0.3.128
IPs:
IP: 10.0.3.128
[..]
Node-Selectors: type=private
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned metabase/metabase-6d47d7b94b-796sx to ip-10-0-3-109.eu-west-1.compute.internal
Normal SecurityGroupRequested 32s vpc-resource-controller Pod will get the following Security Groups [sg-0c0195a69b1b8bdc3 sg-0d4b509bad15ec963]
Normal ResourceAllocated 31s vpc-resource-controller Allocated [{"eniId":"eni-054df22ad2b1b89c3","ifAddress":"02:3b:a8:a7:9c:f5","privateIp":"10.0.3.128","vlanId":1,"subnetCidr":"10.0.2.0/23"}] to the pod
Normal Pulled 31s kubelet Container image "metabase/metabase" already present on machine
Normal Created 31s kubelet Created container metabase
Normal Started 31s kubelet Started container metabase
As we can see, the security groups have been attached to the pod.
$ kubectl logs metabase-6d47d7b94b-796sx
[..]
2021-03-20 13:22:35,660 INFO metabase.core :: Setting up and migrating Metabase DB. Please sit tight, this may take a minute...
2021-03-20 13:22:35,663 INFO db.setup :: Verifying postgres Database Connection ...
2021-03-20 13:22:40,245 INFO db.setup :: Successfully verified PostgreSQL 12.5 application database connection. ✅
2021-03-20 13:22:40,246 INFO db.setup :: Running Database Migrations...
2021-03-20 13:22:40,387 INFO db.setup :: Setting up Liquibase...
2021-03-20 13:22:40,502 INFO db.setup :: Liquibase is ready.
2021-03-20 13:22:40,503 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-03-20 13:22:42,900 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2021-03-20 13:22:42,980 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-03-20 13:22:48,068 INFO db.setup :: Database Migrations Current ... ✅
[..]
2021-03-20 13:23:13,054 INFO metabase.core :: Metabase Initialization COMPLETE
If the deployment is created before the SecurityGroupPolicy, you will get a connect timed out error. Delete and recreate the deployment.
To verify that the connection fails without it, let's delete the security group policy and recreate the deployment:
$ kubectl delete -f security-group-policy.patch.yaml
$ kubectl delete -f deployment.patch.yaml
$ kubectl apply -f deployment.patch.yaml
$ kubectl logs metabase-6d47d7b94b-wbn4r
2021-03-20 13:31:32,993 INFO db.setup :: Verifying postgres Database Connection ...
2021-03-20 13:31:43,052 ERROR metabase.core :: Metabase Initialization FAILED
clojure.lang.ExceptionInfo: Unable to connect to Metabase postgres DB.
[..]
Caused by: java.net.SocketTimeoutException: connect timed out
[..]
2021-03-20 13:31:43,072 INFO metabase.core :: Metabase Shutting Down ...
2021-03-20 13:31:43,077 INFO metabase.server :: Shutting Down Embedded Jetty Webserver
2021-03-20 13:31:43,088 INFO metabase.core :: Metabase Shutdown COMPLETE
As you can see, Metabase is no longer authorized to access the RDS instance.
Finally, let's re-add the security group policy and remove the service account annotation that attaches the IAM role to the pod:
$ kubectl annotate sa metabase eks.amazonaws.com/role-arn-
$ kubectl apply -f security-group-policy.patch.yaml
$ kubectl delete -f deployment.patch.yaml
$ kubectl apply -f deployment.patch.yaml
2021-03-20 13:43:42,329 INFO db.setup :: Verifying postgres Database Connection ...
2021-03-20 13:43:42,710 ERROR metabase.core :: Metabase Initialization FAILED
clojure.lang.ExceptionInfo: Unable to connect to Metabase postgres DB.
[..]
Caused by: org.postgresql.util.PSQLException: FATAL: PAM authentication failed for user "metabase"
[..]
As you can see, without the IAM role Metabase fails to authenticate as the user "metabase".
Conclusion
Throughout this long workshop, we enabled IAM roles for service accounts through an OIDC provider, granted a web identity role the rds-db:connect permission, enabled security groups for pods with the AmazonEKSVPCResourceController policy and a trunk network interface, and deployed Metabase with Kustomize to verify that only a pod carrying both the IAM role and the pod security group can reach the RDS instance.
Cleaning up
kustomize build . | kubectl delete -f -
cd ../../../infra/envs/$ENV
terraform destroy ../../plan/
Final words
The source code is available on GitLab.
Feel free to leave a comment if you have any questions or feedback.
Otherwise, I hope this helped answer some of the challenges of connecting Amazon EKS to Amazon RDS while enforcing a pod-level defense-in-depth security policy at the network and authentication layers.
And by the way, don't hesitate to share it with your peers 😊
Thanks for reading!
Documentation
[1] https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
[3] https://eksctl.io/usage/iamserviceaccounts/#how-it-works
Reference
This article (Combining IAM Roles for Service Accounts with Pod-Level Security Groups for a Defense-in-Depth Strategy) was originally published at
https://dev.to/stack-labs/securing-the-connectivity-between-amazon-eks-and-amazon-rds-part-5-1coh
Feel free to share or copy the text, but please keep the article's URL as the reference URL.
(Collection and Share based on the CC Protocol.)