Building Docker images securely with AWS CodeBuild and Gitlab CI
There are three ways to enable the docker build command in Gitlab CI [1]. With IRSA (IAM Roles for Service Accounts), we can run an AWS CodeBuild job from a Gitlab CI pipeline without storing credentials or logging in to a Docker registry from the runner: we let AWS CodeBuild do that work for us.
Architecture
In this post, we configure a Gitlab runner to launch AWS CodeBuild jobs. The following architecture illustrates the process:
If you want to understand how an IAM role can be attached to a Gitlab runner, please refer to my previous post on
Configuring EKS
First, create an EKS cluster:
export AWS_PROFILE=<AWS_PROFILE>
export AWS_REGION=eu-west-1
export EKS_CLUSTER_NAME=devops
export EKS_VERSION=1.19
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $AWS_REGION \
--managed \
--node-labels "nodepool=dev"
eksctl utils associate-iam-oidc-provider --cluster=$EKS_CLUSTER_NAME --approve
ISSUER_URL=$(aws eks describe-cluster \
--name $EKS_CLUSTER_NAME \
--query cluster.identity.oidc.issuer \
--output text)
Communicate with the cluster using kubectl:
aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME
kubectl create namespace dev
kubectl create serviceaccount -n dev app-deployer
ISSUER_HOSTPATH=$(echo $ISSUER_URL | cut -f 3- -d'/')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
PROVIDER_ARN="arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/$ISSUER_HOSTPATH"
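As a quick sanity check, the cut extraction above keeps everything after the URL scheme. Here is what it yields for a made-up issuer URL (the ID below is hypothetical):

```shell
# Hypothetical issuer URL, for illustration only
ISSUER_URL="https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE1234567890"
# Fields are split on '/', so field 3 onward is host + path
ISSUER_HOSTPATH=$(echo $ISSUER_URL | cut -f 3- -d'/')
echo $ISSUER_HOSTPATH
# → oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE1234567890
```

This host/path pair is exactly what appears in the oidc-provider ARN and in the trust policy conditions.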
cat > oidc-trust-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "$PROVIDER_ARN"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${ISSUER_HOSTPATH}:sub": "system:serviceaccount:dev:app-deployer",
"${ISSUER_HOSTPATH}:aud": "sts.amazonaws.com"
}
}
}
]
}
EOF
GITLAB_ROLE_NAME=gitlab-runner-role
aws iam create-role \
--role-name $GITLAB_ROLE_NAME \
--assume-role-policy-document file://oidc-trust-policy.json
GITLAB_ROLE_ARN=$(aws iam get-role \
--role-name $GITLAB_ROLE_NAME \
--query Role.Arn --output text)
Add the eks.amazonaws.com/role-arn annotation to the Kubernetes service account:
kubectl annotate serviceaccount app-deployer -n dev eks.amazonaws.com/role-arn=$GITLAB_ROLE_ARN
You could also use eksctl create iamserviceaccount [..] instead [2].
Configuring CodeBuild
Create a service role for CodeBuild:
IMAGE_REPO_NAME="app"
cat > codebuild-trust-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
CODE_BUILD_ROLE_NAME="app-code-build-service-role"
aws iam create-role \
--role-name $CODE_BUILD_ROLE_NAME \
--assume-role-policy-document file://codebuild-trust-policy.json
CODEBUILD_ROLE_ARN=$(aws iam get-role \
--role-name $CODE_BUILD_ROLE_NAME \
--query Role.Arn --output text)
cat > code-build-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudWatchLogsPolicy",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "S3GetObjectPolicy",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket/*"
]
},
{
"Sid": "ECRPullPolicy",
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
],
"Resource": [
"arn:aws:ecr:$AWS_REGION:$AWS_ACCOUNT_ID:repository/$IMAGE_REPO_NAME"
]
},
{
"Sid": "ECRAuthPolicy",
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken"
],
"Resource": [
"*"
]
},
{
"Sid": "ECRPushPolicy",
"Effect": "Allow",
"Action": [
"ecr:CompleteLayerUpload",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": [
"arn:aws:ecr:$AWS_REGION:$AWS_ACCOUNT_ID:repository/$IMAGE_REPO_NAME"
]
},
{
"Sid": "S3BucketIdentity",
"Effect": "Allow",
"Action": [
"s3:GetBucketAcl",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket"
}
]
}
EOF
aws iam put-role-policy \
--role-name $CODE_BUILD_ROLE_NAME \
--policy-name app-code-build-policy \
--policy-document file://code-build-policy.json
Create the ECR repository:
aws ecr create-repository --repository-name $IMAGE_REPO_NAME
Create the bucket associated with the CodeBuild project to store its input artifacts:
CODEBUILD_BUCKET=codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket
aws s3api create-bucket \
--bucket $CODEBUILD_BUCKET \
--region $AWS_REGION \
--create-bucket-configuration LocationConstraint=$AWS_REGION
aws s3api put-public-access-block \
--bucket $CODEBUILD_BUCKET \
--public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
cat > project.json << EOF
{
"name": "build-app-docker-image",
"source": {
"type": "S3",
"location": "codebuild-<AWS_REGION>-<AWS_ACCOUNT_ID>-input-bucket/app/<IMAGE_TAG>/image.zip"
},
"artifacts": {
"type": "NO_ARTIFACTS"
},
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL",
"environmentVariables": [
{
"name": "AWS_REGION",
"value": "<AWS_REGION>"
},
{
"name": "AWS_ACCOUNT_ID",
"value": "<AWS_ACCOUNT_ID>"
},
{
"name": "IMAGE_REPO_NAME",
"value": "<IMAGE_REPO_NAME>"
},
{
"name": "IMAGE_TAG",
"value": "<IMAGE_TAG>"
}
],
"privilegedMode": true
},
"serviceRole": "<ROLE_ARN>"
}
EOF
IMAGE_TAG="latest"
sed -i "s/<IMAGE_REPO_NAME>/$IMAGE_REPO_NAME/g; s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g; s,<ROLE_ARN>,$CODEBUILD_ROLE_ARN,g;" project.json
aws codebuild create-project --cli-input-json file://project.json &
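The sed call above simply fills the <...> placeholders in project.json in place. The mechanism can be tried on a toy template (demo.json and its contents are illustrative only; sed -i is used here as on GNU sed):

```shell
# Toy template with placeholders, illustrative only
cat > demo.json << 'EOF'
{"image": "<IMAGE_REPO_NAME>:<IMAGE_TAG>"}
EOF
IMAGE_REPO_NAME="app"
IMAGE_TAG="latest"
# Substitute the placeholders in place (GNU sed)
sed -i "s/<IMAGE_REPO_NAME>/$IMAGE_REPO_NAME/g; s/<IMAGE_TAG>/$IMAGE_TAG/g" demo.json
cat demo.json
# → {"image": "app:latest"}
```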
Finally, for testing purposes, create build.json, buildspec.yml and a Dockerfile.
cat > build.json << EOF
{
"projectName": "app",
"sourceLocationOverride": "codebuild-<AWS_REGION>-<AWS_ACCOUNT_ID>-input-bucket/app/<IMAGE_TAG>/image.zip",
"environmentVariablesOverride": [
{
"name": "IMAGE_TAG",
"value": "<IMAGE_TAG>",
"type": "PLAINTEXT"
}
]
}
EOF
buildspec.yml:
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
- docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
cat > Dockerfile << EOF
FROM nginx
ENV AUTHOR=Dev.to
WORKDIR /usr/share/nginx/html
COPY hello_docker.html /usr/share/nginx/html
CMD cd /usr/share/nginx/html && sed -e s/Docker/"$AUTHOR"/ hello_docker.html > index.html ; nginx -g 'daemon off;'
EOF
cat > hello_docker.html << EOF
<!DOCTYPE html><html>
<head>
<meta charset="utf-8">
</head>
<body>
<h1 id="toc_0">Hello Docker!</h1>
<p>This is being served from a <b>docker</b><br>
container running Nginx.</p>
</body>
</html>
EOF
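The CMD in the Dockerfile above rewrites the page at container startup by substituting "Docker" with the AUTHOR environment variable. The substitution itself can be checked outside the container (hello_docker_demo.html below is a throwaway file for the demo):

```shell
# Same substitution as the container CMD, run locally on a one-line page
AUTHOR=Dev.to
printf 'Hello Docker!\n' > hello_docker_demo.html
RESULT=$(sed -e "s/Docker/$AUTHOR/" hello_docker_demo.html)
echo "$RESULT"
# → Hello Dev.to!
```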
Assigning the KSA to the Gitlab runner
The next step is to assign the KSA (Kubernetes service account) to the Gitlab runner. Install Helm and add the Gitlab charts repository:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add gitlab https://charts.gitlab.io
Create a values.yaml file for the runner:
cat > values.yaml << EOF
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<REGISTRATION_TOKEN>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
create: true
metrics:
enabled: true
runners:
image: ubuntu:18.04
locked: true
pollTimeout: 360
protected: true
serviceAccountName: app-deployer
privileged: false
namespace: dev
builds:
cpuRequests: 100m
memoryRequests: 128Mi
services:
cpuRequests: 100m
memoryRequests: 128Mi
helpers:
cpuRequests: 100m
memoryRequests: 128Mi
tags: "k8s-dev-runner"
nodeSelector:
nodepool: dev
EOF
You can find the description of each attribute in the Gitlab runner charts repository [3].
Get the Gitlab registration token from Project -> Settings -> CI/CD -> Runners, in the Set up a specific Runner manually section, then install the runner:
helm install -n dev docker-image-dev-runner -f values.yaml gitlab/gitlab-runner
Using the specific runner in Gitlab CI
Before running our first pipeline in Gitlab CI, we add the permissions needed to start CodeBuild builds to the IAM role created earlier:
cat > build-docker-image-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CodeBuildStartPolicy",
"Effect": "Allow",
"Action": [
"codebuild:StartBuild",
"codebuild:BatchGet*"
],
"Resource": [
"arn:aws:codebuild:$AWS_REGION:$AWS_ACCOUNT_ID:project/build-app-docker-image"
]
},
{
"Sid": "LogsAccessPolicy",
"Effect": "Allow",
"Action": [
"logs:FilterLogEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "S3ObjectCodeBuildPolicy",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket",
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket/app/*"
]
}
]
}
EOF
aws iam put-role-policy \
--role-name $GITLAB_ROLE_NAME \
--policy-name build-docker-image-policy \
--policy-document file://build-docker-image-policy.json
Note: This policy is used as an example. AWS recommends using fine-grained permissions.
We can now run our pipeline. .gitlab-ci.yml:
stages:
- dev
before_script:
- yum install -y zip jq
publish image:
stage: dev
image:
name: amazon/aws-cli
script:
- IMAGE_TAG=$CI_COMMIT_TAG-$CI_COMMIT_SHORT_SHA
- sed -i "s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g;" build.json
- zip -r image.zip buildspec.yml Dockerfile hello_docker.html
- aws s3api put-object --bucket $CODEBUILD_BUCKET --key app/$IMAGE_TAG/image.zip --body image.zip
- CODEBUILD_ID=$(aws codebuild start-build --project-name "build-app-docker-image" --cli-input-json file://build.json | jq -r '.build.id')
- sleep 5
- CODEBUILD_JOB=$(aws codebuild batch-get-builds --ids $CODEBUILD_ID)
- LOG_GROUP_NAME=$(jq -r '.builds[0].logs.groupName' <<< "$CODEBUILD_JOB")
- |
if [[ ${CODEBUILD_ID} != "" ]];
then
while true
do
sleep 10
aws logs tail $LOG_GROUP_NAME --since 10s
CODE_BUILD_STATUS=$(aws codebuild batch-get-builds --ids "$CODEBUILD_ID" | jq '.builds[].phases[] | select (.phaseType=="BUILD") | .phaseStatus' | tr -d '"')
if [[ ${CODE_BUILD_STATUS} = "FAILED" ]];
then
exit 1
elif [[ ${CODE_BUILD_STATUS} = "SUCCEEDED" ]];
then
break
fi
done
else
echo "Build initialization has failed"
exit 1
fi
tags:
- k8s-dev-runner
only:
refs:
- tags
This job zips the build sources (buildspec.yml, Dockerfile, hello_docker.html), uploads the archive to the CodeBuild S3 bucket, starts the CodeBuild project, then tails the build logs until the BUILD phase succeeds or fails.
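The status check inside the polling loop can be exercised against a canned batch-get-builds payload (truncated and hypothetical), assuming jq is available:

```shell
# Minimal, hypothetical extract of an 'aws codebuild batch-get-builds' response
CODEBUILD_JOB='{"builds":[{"phases":[{"phaseType":"PRE_BUILD","phaseStatus":"SUCCEEDED"},{"phaseType":"BUILD","phaseStatus":"SUCCEEDED"}]}]}'
# Same jq filter as the pipeline: pick the BUILD phase status, strip the quotes
CODE_BUILD_STATUS=$(echo "$CODEBUILD_JOB" | jq '.builds[].phases[] | select (.phaseType=="BUILD") | .phaseStatus' | tr -d '"')
echo $CODE_BUILD_STATUS
# → SUCCEEDED
```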
Push the following files to your Gitlab project, and don't forget to tag your commit. Also make sure these variables are defined for the pipeline:
AWS_ACCOUNT_ID=$AWS_ACCOUNT_ID
AWS_REGION=$AWS_REGION
CODEBUILD_BUCKET=$CODEBUILD_BUCKET
You can now run the pipeline in Gitlab. Output:
Running with gitlab-runner 13.9.0 (2ebc4dc4)
on docker-image-dev-runner-gitlab-runner-5d98965dc9-2tr44 LEPNxeEr
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: dev
WARNING: Pulling GitLab Runner helper image from Docker Hub. Helper image is migrating to registry.gitlab.com, for more information see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#migrating-helper-image-to-registrygitlabcom
Using Kubernetes executor with image amazon/aws-cli ...
Preparing environment
00:06
Waiting for pod dev/runner-lepnxeer-project-25941042-concurrent-0lj8mk to be running, status is Pending
Waiting for pod dev/runner-lepnxeer-project-25941042-concurrent-0lj8mk to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Running on runner-lepnxeer-project-25941042-concurrent-0lj8mk via docker-image-dev-runner-gitlab-runner-5d98965dc9-2tr44...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/stack-labs/internal/sandbox/chabanerefes/code-build-gitlab-ci/.git/
Created fresh repository.
Checking out 4b3e0451 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:58
$ yum install -y zip jq
Loaded plugins: ovl, priorities
Resolving Dependencies
--> Running transaction check
---> Package jq.x86_64 0:1.5-1.amzn2.0.2 will be installed
--> Processing Dependency: libonig.so.2()(64bit) for package: jq-1.5-1.amzn2.0.2.x86_64
---> Package zip.x86_64 0:3.0-11.amzn2.0.2 will be installed
--> Running transaction check
---> Package oniguruma.x86_64 0:5.9.6-1.amzn2.0.4 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
jq x86_64 1.5-1.amzn2.0.2 amzn2-core 154 k
zip x86_64 3.0-11.amzn2.0.2 amzn2-core 263 k
Installing for dependencies:
oniguruma x86_64 5.9.6-1.amzn2.0.4 amzn2-core 127 k
Transaction Summary
================================================================================
Install 2 Packages (+1 Dependent package)
Total download size: 543 k
Installed size: 1.6 M
Downloading packages:
--------------------------------------------------------------------------------
Total 2.8 MB/s | 543 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : oniguruma-5.9.6-1.amzn2.0.4.x86_64 1/3
Installing : jq-1.5-1.amzn2.0.2.x86_64 2/3
Installing : zip-3.0-11.amzn2.0.2.x86_64 3/3
Verifying : zip-3.0-11.amzn2.0.2.x86_64 1/3
Verifying : oniguruma-5.9.6-1.amzn2.0.4.x86_64 2/3
Verifying : jq-1.5-1.amzn2.0.2.x86_64 3/3
Installed:
jq.x86_64 0:1.5-1.amzn2.0.2 zip.x86_64 0:3.0-11.amzn2.0.2
Dependency Installed:
oniguruma.x86_64 0:5.9.6-1.amzn2.0.4
Complete!
$ IMAGE_TAG="v0.1.0-$CI_COMMIT_SHORT_SHA"
$ sed -i "s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g;" build.json
$ zip -r image.zip buildspec.yml Dockerfile hello_docker.html
updating: Dockerfile (deflated 34%)
updating: hello_docker.html (deflated 22%)
updating: buildspec.yml (deflated 61%)
$ aws s3api put-object --bucket $CODEBUILD_BUCKET --key app/$IMAGE_TAG/image.zip --body image.zip
{
"ETag": "\"adaa387a8c8186972f83cc03ef85c0d9\""
}
$ CODEBUILD_ID=$(aws codebuild start-build --project-name "build-app-docker-image" --cli-input-json file://build.json | jq -r '.build.id')
$ sleep 5
$ CODEBUILD_JOB=$(aws codebuild batch-get-builds --ids $CODEBUILD_ID)
$ LOG_GROUP_NAME=$(jq -r '.builds[0].logs.groupName' <<< "$CODEBUILD_JOB")
$ if [[ ${CODEBUILD_ID} != "" ]]; # collapsed multi-line command
2021/04/16 16:40:28 Waiting for agent ping
2021/04/16 16:40:30 Waiting for DOWNLOAD_SOURCE
2021/04/16 16:40:31 Phase is DOWNLOAD_SOURCE
2021/04/16 16:40:31 CODEBUILD_SRC_DIR=/codebuild/output/src352441152/src
2021/04/16 16:40:31 YAML location is /codebuild/output/src352441152/src/buildspec.yml
2021/04/16 16:40:31 Processing environment variables
2021/04/16 16:40:31 No runtime version selected in buildspec.
2021/04/16 16:40:31 Moving to directory /codebuild/output/src352441152/src
2021/04/16 16:40:31 Registering with agent
2021/04/16 16:40:31 Phases found in YAML: 3
2021/04/16 16:40:31 PRE_BUILD: 2 commands
2021/04/16 16:40:31 BUILD: 4 commands
2021/04/16 16:40:31 POST_BUILD: 3 commands
2021/04/16 16:40:31 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
2021/04/16 16:40:31 Phase context status code: Message:
2021/04/16 16:40:31 Entering phase INSTALL
2021/04/16 16:40:31 Phase complete: INSTALL State: SUCCEEDED
2021/04/16 16:40:31 Phase context status code: Message:
2021/04/16 16:40:31 Entering phase PRE_BUILD
2021/04/16 16:40:31 Running command echo Logging in to Amazon ECR...
2021-04-16T16:40:35.757000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c Logging in to Amazon ECR...
2021-04-16T16:40:35.757000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
2021/04/16 16:40:31 Running command aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/04/16 16:40:37 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:37 Phase context status code: Message:
[Container] 2021/04/16 16:40:37 Entering phase BUILD
[Container] 2021/04/16 16:40:37 Running command echo Build started on `date`
Build started on Fri Apr 16 16:40:37 UTC 2021
[Container] 2021/04/16 16:40:37 Running command echo Building the Docker image...
Building the Docker image...
[Container] 2021/04/16 16:40:37 Running command docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
Sending build context to Docker daemon 4.608kB
Step 1/5 : FROM nginx
latest: Pulling from library/nginx
f7ec5a41d630: Pulling fs layer
aa1efa14b3bf: Pulling fs layer
b78b95af9b17: Pulling fs layer
c7d6bca2b8dc: Pulling fs layer
cf16cd8e71e0: Pulling fs layer
0241c68333ef: Pulling fs layer
c7d6bca2b8dc: Waiting
cf16cd8e71e0: Waiting
0241c68333ef: Waiting
b78b95af9b17: Verifying Checksum
b78b95af9b17: Download complete
aa1efa14b3bf: Verifying Checksum
aa1efa14b3bf: Download complete
f7ec5a41d630: Download complete
c7d6bca2b8dc: Verifying Checksum
c7d6bca2b8dc: Download complete
0241c68333ef: Verifying Checksum
0241c68333ef: Download complete
cf16cd8e71e0: Verifying Checksum
cf16cd8e71e0: Download complete
f7ec5a41d630: Pull complete
---> Running in 47fe16263fa9
Removing intermediate container 47fe16263fa9
---> f39182f28f46
Step 3/5 : WORKDIR /usr/share/nginx/html
---> Running in ab16c2902110
Removing intermediate container ab16c2902110
---> 1af4cd082179
Step 4/5 : COPY hello_docker.html /usr/share/nginx/html
---> b198e809d3bd
Step 5/5 : CMD cd /usr/share/nginx/html && sed -e s/Docker/""/ hello_docker.html > index.html ; nginx -g 'daemon off;'
---> Running in 7ab6d9888ce7
Removing intermediate container 7ab6d9888ce7
---> 124aa6c81ee7
Successfully built 124aa6c81ee7
Successfully tagged app:v0.1.0-4b3e0451
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:45 Phase context status code: Message:
[Container] 2021/04/16 16:40:45 Entering phase POST_BUILD
[Container] 2021/04/16 16:40:45 Running command echo Build completed on `date`
Build completed on Fri Apr 16 16:40:45 UTC 2021
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command echo Pushing the Docker image...
Pushing the Docker image...
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
The push refers to repository [[MASKED].dkr.ecr.eu-west-1.amazonaws.com/app]
61504854da99: Preparing
64ee8c6d0de0: Preparing
974e9faf62f1: Preparing
15aac1be5f02: Preparing
23c959acc3d0: Preparing
4dc529e519c4: Preparing
7e718b9c0c8c: Preparing
4dc529e519c4: Waiting
7e718b9c0c8c: Waiting
974e9faf62f1: Layer already exists
15aac1be5f02: Layer already exists
23c959acc3d0: Layer already exists
64ee8c6d0de0: Layer already exists
4dc529e519c4: Layer already exists
7e718b9c0c8c: Layer already exists
61504854da99: Pushed
v0.1.0-4b3e0451: digest: sha256:6d12814984b825931f91f33d43962b0442737557bb1a3b3d8399b3e7ef9b71e0 size: 1777
2021-04-16T16:40:48.366000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:46 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:46 Phase context status code: Message:
Cleaning up file based variables
00:00
Job succeeded
That's it!
Conclusion
Building Docker images with AWS CodeBuild means we no longer have to manage Docker tooling and its security implications at the pipeline level.
I hope you enjoyed reading this post.
If you have any questions or feedback, please feel free to leave a comment.
Thanks for reading!
Documentation
[1] https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
[2] https://eksctl.io/usage/iamserviceaccounts
[3] https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/master/values.yaml
Reference
This article, Building Docker images with AWS CodeBuild and Gitlab CI, was originally published at
https://dev.to/stack-labs/building-docker-images-with-aws-code-build-and-gitlab-ci-a49
Feel free to share or copy the text, but please keep the URL above as the reference.
(Collection and Share based on the CC Protocol.)