
AWS ECS: Fixing agent container not running and exit code 5


I had a great time this week when our instance on Amazon imploded for strange reasons. Sadly, I don't have the logs anymore to post them here, but these were the key problems (or what I remember from the event) and their solutions:

The docker process suddenly stopped responding to some commands, such as docker ps

Solution: I reinstalled Docker and the AWS ECS Container Agent.

Docker is now responding, but the container agent will not run

By running it with the -it flags, we noticed it was exiting with code 5.

Solution: Delete the file /var/lib/ecs/data/ecs-agent-data.json and start the container agent again.
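
For the record, the recovery looked roughly like this - a sketch from memory, assuming the agent runs as the default ecs-agent container managed by the ecs service on the ECS-optimized AMI:

docker rm -f ecs-agent                          # remove the stuck agent container
sudo rm /var/lib/ecs/data/ecs-agent-data.json   # delete the saved state that was causing exit code 5
sudo start ecs                                  # start the agent again (upstart, on the ECS-optimized AMI)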


Making shell scripts for deployment on AWS


Long gone are the days when deploying a website meant simply uploading .php files via FTP to your favourite web host. Now, if you are using any of those new, fancy technologies such as Node.js, Docker, and Cloud Hosting, you will have to perform several tasks in order to put the new version of your website online. Luckily, these services often allow you to do this via the command line - not that it makes things easier at first glance, but it allows us to automate these processes. Our application is built on Node.js, it uses 3 Docker containers (2 APIs and 1 database), and they are all deployed as a single task on Amazon Web Services (AWS); in this post I am going to describe how our deployment is currently done, and what I did to automate it.


1- Logging in

To make any changes on AWS, you first have to log in. For this, we use the command that they provide us:

aws ecr get-login

Run this in the command line and it will give you another command; copy and paste the new command, and you are logged in. Or, if you want to be lazy:

aws ecr get-login | bash -e


2- Building, tagging, and pushing the container repositories

For these tasks, AWS provides you with the required commands - when you use their website to upload a new version of the repository, it will tell you to run commands similar to these:

docker build -t <name> . &&
docker tag <name>:latest ____________.dkr.ecr._________.amazonaws.com/<name>:latest  &&
docker push ____________.dkr.ecr._________.amazonaws.com/<name>:latest
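
The blanks are your account ID and region, which I am leaving out. Just for illustration, with a hypothetical account ID (123456789012), region (us-east-1), and image name (myapi), the sequence would look like this:

docker build -t myapi . &&
docker tag myapi:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapi:latest &&
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapi:latest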


3- Stopping the current tasks

For the new containers to be run, the current tasks will have to be stopped. There is no quick and easy way to do this, as far as I know, so this is what I did:

  • List all the current tasks running on the cluster
    aws ecs list-tasks --cluster default
    

This will give you a list of tasks in JSON format, like this:

{
    "taskArns": [
        "task1",
        "task2",
        "task3"
    ]
}
  • Extract the tasks

For extracting the tasks, I piped the output into two seds:

sed '/\([{}].*\|.*taskArns.*\| *]\)/d' | sed 's/ *"\([^"]*\).*/\1/'

This is the result:

task1
task2
task3
  • For every line (task), stop the task with that name

Now we can use the command provided by AWS: aws ecs stop-task. I just used a while-loop to go through every line and stop the task:

while read -r task; do aws ecs stop-task --cluster default --task "$task"; done
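
As an aside, it should be possible to skip the two seds entirely: the AWS CLI accepts a --query filter (JMESPath) and a plain-text output mode. I have not battle-tested this version, but it should do the same thing:

aws ecs list-tasks --cluster default --query 'taskArns[]' --output text | \
tr '\t' '\n' | \
while read -r task; do aws ecs stop-task --cluster default --task "$task"; done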


4- Wrapping up

With the basic pieces done, I wrapped them in a shell script:

#!/bin/bash


###############################################################################
# LOGGING IN
###############################################################################
login()
{
    aws ecr get-login | bash -e

    deployDatabase
}


###############################################################################
# DEPLOYING DATABASE
###############################################################################
deployDatabase()
{

    echo -e "Ready to deploy the database? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying the database\n"
        cd ../database &&
        docker build -t <name> . &&
        docker tag <name>:latest ____________.dkr.ecr._________.amazonaws.com/<name>:latest  &&
        docker push ____________.dkr.ecr._________.amazonaws.com/<name>:latest
    fi

    deployAPI1
}


###############################################################################
# DEPLOYING API 1
###############################################################################
deployAPI1()
{

    echo -e "Ready to deploy API 1? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying API 1\n"
        cd ../api1 &&
        docker build -t <name> . &&
        docker tag <name>:latest ____________.dkr.ecr._________.amazonaws.com/<name>:latest &&
        docker push ____________.dkr.ecr._________.amazonaws.com/<name>:latest
    fi

    deployAPI2
}


###############################################################################
# DEPLOYING API 2
###############################################################################
deployAPI2()
{

    echo -e "Ready to deploy API 2? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying API 2\n"
        cd ../api2 &&
        docker build -t <name> . &&
        docker tag <name>:latest ____________.dkr.ecr._________.amazonaws.com/<name>:latest &&
        docker push ____________.dkr.ecr._________.amazonaws.com/<name>:latest
    fi

    stopTasks
}


###############################################################################
# STOPPING CURRENT TASKS
###############################################################################
stopTasks()
{
    echo -e "Stop the current tasks? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        aws ecs list-tasks --cluster default | \
        sed '/\([{}].*\|.*taskArns.*\| *]\)/d' | sed 's/ *"\([^"]*\).*/\1/' | \
        while read -r task; do aws ecs stop-task --cluster default --task "$task"; done
    fi

    echo -e "Done"
}


###############################################################################
# STARTING POINT
###############################################################################
clear

echo -e "Are you sure you want to deploy on AWS? This cannot be undone. (Y/n)"
read shouldDeploy

if [ "$shouldDeploy" = "Y" ];then
    login
else
    echo "Not deployed"
fi
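
Assuming it is saved as deploy.sh (use whatever name you prefer), running it is just a matter of:

chmod +x deploy.sh
./deploy.sh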


Deployment of containers on AWS


We spent the past days reading about AWS in order to deploy the 2 containers we developed: one container only has a "Dockerfile" with our database (MongoDB), and the other container has the API to insert data into this database (Node.js). In this post, I'll describe some things I wish I had known before I started the deployment; it was a very frustrating process, but after you learn how everything works, it becomes really easy.

First of all, the deployment is not a linear process: you will have to know some details about your application before you start; these details, however, will not be obvious to you if you haven't used AWS before, which is one of the reasons why it was a slow, painful process for us.

Looking back, I think the first step to deploying these containers is to upload the repositories, even though they are not properly configured yet: you need the repositories there to have a better perspective on what to do. So, first step: push the Docker images to the EC2 Container Registry. The process is simple: it only takes 4 steps (3 steps, after the first push), which are just copying and pasting commands into the command line.

After the containers are uploaded, we choose a machine that will run Docker with the containers, and here is the catch: we need to choose a machine that is already optimized for the Container Service, otherwise it will not be a valid Container Instance and you will have to configure it yourself. To find machines that are optimized for ECS, search for "ecs" among the custom instances. After choosing the machine, we select the other specifications we'll need, such as storage, IPs, and so on - nothing too special here.

With the right machine instance, a default Cluster will be created in the Container Service. Here is the interesting part: the cluster is a set of services, which are responsible for (re)starting a set of tasks, which are groups of Docker containers to be used by the machine. Instead of starting from the service, we should start from the task, adding its containers, and work back to the service - then the deployment will be complete.

Creating a task is simple: we give it a name and a list of the repositories (the ones we uploaded in the beginning), but we also have to set how the containers are going to interact with each other and with the outside world. There were two special settings we had to make:

1- The MongoDB container should be visible to the API. This can be done by linking them together: on the container for the API, we map the name of the database container to an alias (for instance, Mongo:MongoContainer); with this, the container for the API will receive some environment variables, such as MONGOCONTAINER_PORT, with the address and port of the other container. We can use these to make the API connect to the database (the source code will probably have to be modified to use this port); a sketch of what these variables look like is below.
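
To make this concrete, here is roughly what the link injects into the API container - the alias and addresses below are hypothetical, and the format is the one used by classic Docker links:

# With an alias of "mongocontainer", the API container sees variables like:
#   MONGOCONTAINER_PORT=tcp://172.17.0.2:27017
#   MONGOCONTAINER_PORT_27017_TCP_ADDR=172.17.0.2
#   MONGOCONTAINER_PORT_27017_TCP_PORT=27017
# A connection string can then be assembled from them:
MONGO_URL="mongodb://${MONGOCONTAINER_PORT_27017_TCP_ADDR}:${MONGOCONTAINER_PORT_27017_TCP_PORT}/mydb"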

2- The MongoDB container should use an external drive for storage; otherwise, its data will be lost when the container is restarted. For this, we map the external directory (where we want the data to be stored) to an internal directory used by the database (for instance, /usr/mongodb:/mongo/db). Since we wanted to use an external device, we also had to make sure the device would be mounted when the machine started, as shown in the sketch below.
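
Here is a sketch of what that mounting looked like on the instance - I am assuming the external device appears as /dev/xvdf and the host directory is /usr/mongodb; check the actual device name on your instance before running anything like this:

sudo mkfs -t ext4 /dev/xvdf       # format the device (first use only!)
sudo mkdir -p /usr/mongodb        # create the mount point

# Mount it now and on every reboot
echo '/dev/xvdf /usr/mongodb ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount -a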

After the task is set up, we make the service for the cluster: the service, in this case, contains the single task we made. With the service properly configured, it will start and restart the tasks automatically: the deployment should now be ready.
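
The same thing can be done from the command line; a minimal sketch, assuming the task definition was registered under the hypothetical name my-task:

aws ecs create-service \
    --cluster default \
    --service-name my-service \
    --task-definition my-task \
    --desired-count 1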

It's easy to understand why we spent so much time trying to make it work (given the amount of details and steps), but looking back, this modularity makes a lot of sense. The learning curve is very steep, but I am very impressed by how powerful this service is. I am very inclined to start using it for deploying my own projects.
