Let’s deploy a Docker image to AWS ECR and then on to AWS Fargate.
If you don’t have Docker installed, now is the time to do it.
GitHub Action YML
Create a workflow named main-aws.yml, for example:
name: Deploy to Amazon ECS

on:
  workflow_dispatch:

env:
  AWS_REGION: eu-central-1          # set this to your preferred AWS region, e.g. us-west-1
  ECS_SERVICE: test-svc             # set this to your Amazon ECS service name
  ECS_CLUSTER: test-fargate-dev     # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: .aws/td.json # set this to the path of your task definition JSON
  CONTAINER_NAME: test              # set this to the name of the container in the task definition
  ECS_TASK_NAME: test-task

defaults:
  run:
    shell: bash

jobs:
  Build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Get commit hash
        id: get-commit-hash
        run: echo "commit-hash=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Get timestamp
        id: get-timestamp
        run: echo "timestamp=$(date +'%Y%m%d-%H%M')" >> $GITHUB_OUTPUT

      - name: Build, tag, and push the image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.AWS_REPO_NAME }}
          IMAGE_TAG: ${{ steps.get-commit-hash.outputs.commit-hash }}-${{ steps.get-timestamp.outputs.timestamp }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Create task definition in Amazon ECS
        run: aws ecs register-task-definition --cli-input-json file://${{ env.ECS_TASK_DEFINITION }}

      - name: Create svc in Amazon ECS
        run: >
          aws ecs create-service
          --cluster ${{ env.ECS_CLUSTER }}
          --service-name ${{ env.ECS_SERVICE }}
          --task-definition ${{ env.ECS_TASK_NAME }}
          --desired-count 1
          --capacity-provider-strategy '[{"capacityProvider":"FARGATE_SPOT","weight":100}]'
          --network-configuration "awsvpcConfiguration={subnets=[subnet-1,subnet-2,subnet-3],securityGroups=[sg-1,sg-2],assignPublicIp=ENABLED}"
          --region ${{ env.AWS_REGION }}

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          # same as the container name in the task definition
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
Details for every step are below.
Login to AWS
To log in to AWS, set up an AWS access key ID and secret access key and add them to the GitHub Actions secrets as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Now you can reference them from the yml like below:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ env.AWS_REGION }}
Also, configure the AWS region (AWS_REGION) in the environment variables section.
Login to AWS ECR, build and push the image
First, we need to add the ECR repo name to the GitHub Actions secrets as AWS_REPO_NAME.
Next, we need to set the container name in the yml environment variables as CONTAINER_NAME.
The code logs in to AWS ECR using the credentials above. Next, it gets the short hash of the last commit on the current branch and the current timestamp, and combines them into the image tag as a version. Lastly, it builds the Docker image and pushes it to AWS ECR.
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1

- name: Get commit hash
  id: get-commit-hash
  run: echo "commit-hash=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

- name: Get timestamp
  id: get-timestamp
  run: echo "timestamp=$(date +'%Y%m%d-%H%M')" >> $GITHUB_OUTPUT

- name: Build, tag, and push the image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ${{ secrets.AWS_REPO_NAME }}
    IMAGE_TAG: ${{ steps.get-commit-hash.outputs.commit-hash }}-${{ steps.get-timestamp.outputs.timestamp }}
  run: |
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
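As a quick sketch of what the version tag ends up looking like, the same composition can be reproduced locally (the commit hash below is a placeholder; in the workflow it comes from git rev-parse --short HEAD):

```shell
# Compose the image tag the way the workflow does: <short-hash>-<timestamp>
COMMIT_HASH="ab12cd3"              # placeholder; the workflow uses: git rev-parse --short HEAD
TIMESTAMP=$(date +'%Y%m%d-%H%M')   # e.g. 20240101-1200
IMAGE_TAG="$COMMIT_HASH-$TIMESTAMP"
echo "$IMAGE_TAG"
```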
If everything is OK, after this step you should see the image in the AWS Console.
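The pushed image can also be checked from the CLI; a sketch, assuming the AWS CLI is configured and your repository name is substituted for the placeholder:

```shell
# Show the tags of the most recently pushed image in the ECR repository
aws ecr describe-images \
  --repository-name your-repo-name \
  --region eu-central-1 \
  --query 'sort_by(imageDetails, &imagePushedAt)[-1].imageTags'
```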
Fargate Task definition
Now we need to create the task definition.
First we need a task definition JSON, like the one below, saved as .aws/td.json (the path set in ECS_TASK_DEFINITION):
{
  "containerDefinitions": [
    {
      "name": "test",
      "image": "nginx",
      "cpu": 0,
      "portMappings": [
        {
          "name": "test-5000-tcp",
          "containerPort": 5000,
          "hostPort": 5000,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [],
      "environmentFiles": [],
      "mountPoints": [],
      "volumesFrom": [],
      "ulimits": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/cinematch-task",
          "mode": "non-blocking",
          "awslogs-create-group": "true",
          "max-buffer-size": "25m",
          "awslogs-region": "eu-central-1",
          "awslogs-stream-prefix": "ecs"
        },
        "secretOptions": []
      },
      "systemControls": []
    }
  ],
  "family": "test-task",
  "executionRoleArn": "arn:aws:iam::***:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "volumes": [],
  "placementConstraints": [],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "1024",
  "memory": "1024",
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "LINUX"
  }
}
What we need to change: name, portMappings (with your container’s exposed port), executionRoleArn (with the IAM role), awslogs-group (with the log group), and cpu and memory as needed.
Note: do not change image, since we’ll update it later with the correct image and tag.
The yml code that creates the ECS task definition is:
- name: Create task definition in Amazon ECS
  run: aws ecs register-task-definition --cli-input-json file://${{ env.ECS_TASK_DEFINITION }}
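To confirm the registration, you can describe the task definition family from the CLI (test-task here, i.e. the value of ECS_TASK_NAME):

```shell
# Show the latest revision of the registered task definition family
aws ecs describe-task-definition \
  --task-definition test-task \
  --query 'taskDefinition.{family: family, revision: revision, status: status}'
```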
Fargate Service
First, let’s update the cluster name (ECS_CLUSTER), service name (ECS_SERVICE), and task name (ECS_TASK_NAME) in the environment variables section.
Now we need to create the service based on the task definition above.
- name: Create svc in Amazon ECS
  run: >
    aws ecs create-service
    --cluster ${{ env.ECS_CLUSTER }}
    --service-name ${{ env.ECS_SERVICE }}
    --task-definition ${{ env.ECS_TASK_NAME }}
    --desired-count 1
    --capacity-provider-strategy '[{"capacityProvider":"FARGATE_SPOT","weight":100}]'
    --network-configuration "awsvpcConfiguration={subnets=[subnet-1,subnet-2,subnet-3],securityGroups=[sg-1,sg-2],assignPublicIp=ENABLED}"
    --region ${{ env.AWS_REGION }}
We need to replace the subnets and securityGroups with the correct ones.
Note: this service assigns a public IP; this is probably not the case for you!
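Once the service is created, its state can be checked from the CLI (cluster and service names as set in the environment variables section):

```shell
# Compare desired vs running task counts for the new service
aws ecs describe-services \
  --cluster test-fargate-dev \
  --services test-svc \
  --query 'services[0].{status: status, desired: desiredCount, running: runningCount}'
```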
Update the image in the task definition
Now is the time to find the image we pushed to ECR and update the task definition.
This is exactly what the next code does:
- name: Fill in the new image ID in the Amazon ECS task definition
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ env.ECS_TASK_DEFINITION }}
    # same as the container name in the task definition
    container-name: ${{ env.CONTAINER_NAME }}
    image: ${{ steps.build-image.outputs.image }}

- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v2
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    service: ${{ env.ECS_SERVICE }}
    cluster: ${{ env.ECS_CLUSTER }}
    wait-for-service-stability: true
Putting it all together
Putting it all together, we should have the Fargate task definition updated and the service up and running.
Also, you can check the logs in CloudWatch:
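The same logs can also be tailed from the terminal with AWS CLI v2, using the log group and region configured in the task definition:

```shell
# Follow the ECS log group defined in the task definition (requires AWS CLI v2)
aws logs tail /ecs/cinematch-task --follow --region eu-central-1
```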
Documentation and links:
- Manage ECS Fargate Containers with the AWS CLI - https://www.matillion.com/blog/manage-elastic-container-services-ecs-fargate-containers-with-the-aws-command-line-interface-cli-worked-example-for-matillion-redshift-and-databricks-users