# How to create service tasks in Amazon ECS

This is the notebook we use to create running services on the Amazon EC2 Container Service (ECS). In the following we assume you have created an ECS cluster; the steps for doing that are simple and described in the book. To make this run you need your IAM role ARN string (see the third box below).

We assume that this notebook is running on a machine with your credentials in your .aws directory and that your ECS cluster is called tutorial-cluster.

In [2]:
import boto3

In [3]:
client = boto3.client('ecs')


# We created a cluster called tutorial-cluster in the ECS portal

If we didn't, go to the portal and do so; instructions are in the book.

Let's see if we can find it.

In [4]:
client.list_clusters()['clusterArns']

Out[4]:
[u'arn:aws:ecs:us-west-2:066301190734:cluster/tutorial-cluster',
u'arn:aws:ecs:us-west-2:066301190734:cluster/default']
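The ARNs above encode the region, account, resource type, and resource name. When you just need the name, a small helper (a sketch, not part of the original notebook) can pull it out of the ARN string:

```python
def cluster_name_from_arn(arn):
    # An ECS cluster ARN looks like
    # arn:aws:ecs:<region>:<account>:cluster/<name>;
    # the name is everything after the final '/'.
    return arn.rsplit('/', 1)[-1]

arn = 'arn:aws:ecs:us-west-2:066301190734:cluster/tutorial-cluster'
print(cluster_name_from_arn(arn))  # tutorial-cluster
```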

Next let's see how many VMs (called container instances in ECS) we have. There should be two.

In [5]:
instance_list = client.list_container_instances(cluster='tutorial-cluster')['containerInstanceArns']

In [6]:
instance_list

Out[6]:
[u'arn:aws:ecs:us-west-2:066301190734:container-instance/2e7c68ae-8e92-40e5-95b9-257cf60b6d3d',
u'arn:aws:ecs:us-west-2:066301190734:container-instance/f1cfbf50-33a3-4e38-b127-4ebe1dfad6ea']

we can even get their IP addresses

In [8]:
ec2instances = [client.describe_container_instances(
                    cluster='tutorial-cluster',
                    containerInstances=[instance]
                )['containerInstances'][0]['ec2InstanceId']
                for instance in instance_list]

In [9]:
ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-id', 'Values': ec2instances}])
for instance in instances:
    # print the id, instance type, AMI, and public IP of each VM
    print((instance.id, instance.instance_type,
           instance.image_id, instance.public_ip_address))
('i-0dea1c5f95972ae9e', 'm4.large', 'ami-022b9262', '54.244.192.186')
('i-0ebf47034775338e9', 'm4.large', 'ami-022b9262', '54.244.194.0')
('i-041c9126db2e95af8', 'm4.large', 'ami-022b9262', '54.191.230.115')


# Now we will create the first task definitions

We will have four services.

• The predictor service. This service reads prediction jobs from the Amazon SQS queue service, invokes the predictor to classify each job, and then sends the classification, the title, the service hostname, the correct answer, and a sequence number to the table service. There will be two versions:

• predictorAWS, which sends requests to port 8050

• predictorAzure, which sends requests to port 8055

• The table service. This is a simple web service that waits for a message from a predictor service and then pushes the result to the AWS DynamoDB table "BookTable". This one will be called tableserviceAWS.

• The table service for Azure is tableserviceAzure; it listens on port 8055 and sends records to escience2 in table BookTable.

We start with the task definition for the table service. We specify the task definition family name, the default network mode, and our IAM role that authorizes the service to use the queue and DynamoDB. It is very important to have this role: go to the IAM portal and create it. Again, this is described in the book. We are going to deploy this as a Docker container which we have saved to Docker Hub (see the build files for this in directory table-service). We also need to specify a port binding.

This first version is the one that uses the Azure table service; we are going to make it listen on port 8055.
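The message a predictor sends to the table service carries the fields listed above. A sketch of how that record might be assembled (the field names here are assumptions; the real ones live in the predictor code in the book's repo):

```python
import socket

def make_record(classification, title, answer, seqno):
    # Hypothetical payload sent from a predictor to the table
    # service: classification, title, the service hostname, the
    # correct answer, and a sequence number.
    return {
        'classification': classification,
        'title': title,
        'host': socket.gethostname(),
        'answer': answer,
        'seqno': seqno,
    }
```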

In [12]:
response = client.register_task_definition(
    family='tableserviceAzure',
    networkMode='bridge',
    containerDefinitions=[
        {
            'name': 'tableserviceAzure',
            'image': 'dbgannon/table-service-bottle-azure',
            'cpu': 20,
            'memory': 400,
            'memoryReservation': 123,
            'portMappings': [
                {
                    'containerPort': 8055,
                    'hostPort': 8055,
                    'protocol': 'tcp'
                },
            ],
            'essential': True,
        },
    ],
)

In [13]:
client.list_task_definitions(familyPrefix='tableserviceAzure')['taskDefinitionArns']

Out[13]:
[u'arn:aws:ecs:us-west-2:066301190734:task-definition/tableserviceAzure:1']

# Now we create the tableservice service

Note that when we create a task definition it is given a sequence number. That is because we often revise the task definition during debugging. We specify that we want at least 50% of our requested instances running at all times, and that we want 3 instances of this service running. Because we have three nodes in our cluster, this will put one on each node, since the port binding will take port 8055 for one container only.
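The two deployment percentages translate into bounds on the running task count during a rolling update; per the ECS deployment documentation, the lower bound rounds up and the upper bound rounds down. A quick sketch of the arithmetic:

```python
import math

def deployment_bounds(desired, maximum_percent, minimum_healthy_percent):
    # ECS keeps the number of running tasks between these two
    # limits while replacing tasks during a deployment.
    lower = math.ceil(desired * minimum_healthy_percent / 100)
    upper = math.floor(desired * maximum_percent / 100)
    return lower, upper

print(deployment_bounds(3, 100, 50))  # (2, 3)
```

With desiredCount=3, maximumPercent=100, and minimumHealthyPercent=50, ECS keeps between 2 and 3 tasks running during an update.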

In [14]:
response = client.create_service(
    cluster='tutorial-cluster',
    serviceName='tableserviceAzure',
    # the bare family name selects the latest active revision
    taskDefinition='tableserviceAzure',
    desiredCount=3,
    deploymentConfiguration={
        'maximumPercent': 100,
        'minimumHealthyPercent': 50
    }
)


## Next we create the task definition for the tableservice for the AWS DynamoDB

In this case we use a different container (because the code for DynamoDB is different from that for Azure tables). Here we map to port 8050.

In [ ]:
response = client.register_task_definition(
    family='tableserviceAWS',
    networkMode='bridge',
    containerDefinitions=[
        {
            'name': 'tableserviceAWS',
            'image': 'dbgannon/table-service-bottle',
            'cpu': 20,
            'memory': 400,
            'memoryReservation': 123,
            'portMappings': [
                {
                    'containerPort': 8050,
                    'hostPort': 8050,
                    'protocol': 'tcp'
                },
            ],
            'essential': True,
        },
    ],
)
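The two table-service task definitions differ only in family name, image, and port, so a small helper (a sketch that mirrors the parameter values used above) can build the keyword arguments for both:

```python
def table_service_task(name, image, port):
    # Build the kwargs for client.register_task_definition,
    # using the same cpu/memory settings as the cells above.
    return {
        'family': name,
        'networkMode': 'bridge',
        'containerDefinitions': [{
            'name': name,
            'image': image,
            'cpu': 20,
            'memory': 400,
            'memoryReservation': 123,
            'portMappings': [{'containerPort': port,
                              'hostPort': port,
                              'protocol': 'tcp'}],
            'essential': True,
        }],
    }

# usage (with a live ECS client):
# response = client.register_task_definition(
#     **table_service_task('tableserviceAWS',
#                          'dbgannon/table-service-bottle', 8050))
```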


## Now create the instances of tableserviceAWS

In [ ]:
response = client.create_service(
    cluster='tutorial-cluster',
    serviceName='tableserviceAWS',
    taskDefinition='tableserviceAWS',
    desiredCount=3,
    deploymentConfiguration={
        'maximumPercent': 100,
        'minimumHealthyPercent': 50
    }
)


# Now the task definition for the predictor

In this case we use the container predictor-new, which takes one argument: the port on which it expects to find the table service. This is the Azure version.

In [15]:
response = client.register_task_definition(
    family='predictorAzure',
    networkMode='bridge',
    containerDefinitions=[
        {
            'name': 'predictorAzure',
            'image': 'dbgannon/predictor-new',
            'cpu': 20,
            'memoryReservation': 400,
            'essential': True,
            'command': ['8055']
        },
    ],
)
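The 'command' list above is appended to the container entrypoint's arguments, so inside the container the port shows up on the command line. A minimal sketch of how the predictor might read it (the argument handling here is an assumption, not the book's actual predictor code):

```python
def table_service_port(argv, default=8050):
    # The ECS task's 'command' list arrives as extra argv entries,
    # so the table-service port is argv[1] when it is supplied.
    return int(argv[1]) if len(argv) > 1 else default

print(table_service_port(['predictor.py', '8055']))  # 8055
```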

In [17]:
client.list_task_definitions(familyPrefix='predictorAzure')['taskDefinitionArns']

Out[17]:
[u'arn:aws:ecs:us-west-2:066301190734:task-definition/predictorAzure:1']

# Create the predictor service

Earlier iterations of the predictor had a few bugs, so your task definition revision number may be higher than :1. The desiredCount below sets how many copies of this service we run.

In [27]:
response = client.create_service(
    cluster='tutorial-cluster',
    serviceName='predictorAzure',
    taskDefinition='predictorAzure',
    desiredCount=1,
    deploymentConfiguration={
        'maximumPercent': 100,
        'minimumHealthyPercent': 50
    }
)


Now check to see how many services we have and how many tasks are running.

In [21]:
client.list_services( cluster='tutorial-cluster')['serviceArns']

Out[21]:
[u'arn:aws:ecs:us-west-2:066301190734:service/predictorAzure',
u'arn:aws:ecs:us-west-2:066301190734:service/tableserviceAzure']

In [22]:
client.list_tasks(cluster='tutorial-cluster')['taskArns']

Out[22]:
[u'arn:aws:ecs:us-west-2:066301190734:task/0130b4e4-402f-4c6f-81b8-91e703732463',
u'arn:aws:ecs:us-west-2:066301190734:task/be7ca21b-2876-4fba-a156-c36123d59744']

Now we create the predictor for the AWS version. Same container, just a different port parameter.

In [25]:
response = client.register_task_definition(
    family='predictorAWS',
    networkMode='bridge',
    containerDefinitions=[
        {
            'name': 'predictorAWS',
            'image': 'dbgannon/predictor-new',
            'cpu': 20,
            'memoryReservation': 400,
            'essential': True,
            'command': ['8050']
        },
    ],
)

In [26]:
response = client.create_service(
    cluster='tutorial-cluster',
    serviceName='predictorAWS',
    taskDefinition='predictorAWS',
    desiredCount=1,
    deploymentConfiguration={
        'maximumPercent': 100,
        'minimumHealthyPercent': 50
    }
)