In this demo we show how to:
1. start, stop, and terminate EC2 instances with boto3
2. create new instances
3. attach an EBS volume to an instance
4. mount the volume on an instance by running commands over ssh
5. create an EFS file system and mount it on an instance
Items 4 and 5 require access to ssh, so they won't run on Windows, but they work fine on macOS and Linux.
There are a number of instance ids and volume ids in this demo, but those resources have since been deleted; you will need to create your own.
import boto3
First you need the aws_access_key_id and aws_secret_access_key that were created when you created your account.
ec2 = boto3.resource('ec2', 'us-west-2',
                     aws_access_key_id='your access key',
                     aws_secret_access_key='your big long secret key')
The following function is a handy way to see the status of your instances
def show_instance(status):
    # List every instance whose state matches status, e.g. 'running' or 'stopped'.
    instances = ec2.instances.filter(
        Filters=[{'Name': 'instance-state-name', 'Values': [status]}])
    for instance in instances:
        print(instance.id, instance.instance_type, instance.image_id, instance.public_ip_address)
show_instance('stopped')
This will start a stopped instance.
ec2.instances.filter(InstanceIds=['i-0ccfc93aaf3e0305c']).start()
show_instance('running')
This is how you can stop and terminate an instance.
stoplist = ['i-0ccfc93aaf3e0305c']
ec2.instances.filter(InstanceIds=stoplist).stop()
If we want to permanently delete an instance we can terminate it, where terminatelist is a list of instance ids just like stoplist above.
terminatelist = ['i-0ccfc93aaf3e0305c']
ec2.instances.filter(InstanceIds=terminatelist).terminate()
Creating an instance is easiest to do from the portal, but it can also be done with boto3. Assuming you have a keypair called escience1, here is the way to create an instance.
In this case we are using MaxCount=1. If we had MaxCount=5 it would try to create 5 instances for us.
ec2.create_instances(ImageId='ami-7172b611', InstanceType='t2.micro', KeyName='escience1', MinCount=1, MaxCount=1)
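create_instances returns a list of Instance objects. A new instance takes a little while to reach the running state, so if we capture the return value we can use boto3's built-in waiter before listing. A minimal sketch (not executed in this demo; running it would launch another instance):
# Sketch: capture the Instance objects returned by create_instances and
# block until the first new instance reaches the 'running' state.
new_instances = ec2.create_instances(ImageId='ami-7172b611', InstanceType='t2.micro',
                                     KeyName='escience1', MinCount=1, MaxCount=1)
new_instances[0].wait_until_running()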
show_instance('running')
Next we turn to EBS volumes. The subprocess and sys modules are imported here for the ssh helper used below. First let's list the volumes in the account.
import subprocess
import sys
vols = ec2.volumes.filter(
    Filters=[])
for vol in vols:
    print(vol.id, vol.size, vol.state)
To attach the volume to an instance we first create a Volume object and then attach it to the instance.
It is important to note that the volume and the instance must be in the same availability zone.
vol = ec2.Volume('vol-0bdd0584d0833e691')
vol.attach_to_instance(
    InstanceId='i-0a184b56b0ebdba98',
    Device='/dev/xvdh'
)
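Since the attach will fail if the two are in different availability zones, we can check both before attaching. A quick sketch using the ids above:
# Sketch: confirm the volume and the instance sit in the same availability zone.
instance = ec2.Instance('i-0a184b56b0ebdba98')
print(vol.availability_zone, instance.placement['AvailabilityZone'])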
Mounting a volume on a file system cannot be done with boto3 directly because the mount commands must be executed by the operating system. However we can use SSH to connect to the instance and execute the commands remotely.
The following function uses Python to create a subprocess that invokes ssh. Unfortunately this will only work on Linux or macOS, because ssh is not a standard shell command on Windows. What follows was executed on a Mac.
def myexec(pathtopem, hostip, commands):
    # Run one or more shell commands on the remote instance over ssh.
    # Returns the stdout lines on success, or "error" if nothing came back.
    ssh = subprocess.Popen(['ssh', '-i', pathtopem, 'ec2-user@%s' % hostip, commands],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
    result = ssh.stdout.readlines()
    if result == []:
        error = ssh.stderr.readlines()
        print("ERROR: %s" % error, file=sys.stderr)
        return "error"
    else:
        return result
Before we can mount the volume we must make sure it has a file system. Then we can create a directory at the root called "/data" as the mount point. Then we do the mount and run "df" to see that it is there. We will need our private key to accomplish this.
priv_key = 'path-to-your-secret-key.pem'
command = 'sudo mkfs -t ext3 /dev/xvdh\n \
sudo mkdir /data\n \
sudo mount /dev/xvdh /data\n \
df\n'
myexec(priv_key, '54.187.61.12', command)
As you can see, our 20 GB volume is now available as "/data". Let's double check.
vols = ec2.volumes.filter(
    Filters=[])
for vol in vols:
    print(vol.id, vol.size, vol.state)
Now let's make a subdirectory for user ec2-user called mydata and create a file.
command = 'cd /data\n \
sudo mkdir mydata\n \
sudo chown ec2-user mydata\n \
cd mydata\n \
touch filex\n \
ls -l\n'
myexec(priv_key, '54.187.61.12', command)
EBS volumes can only be mounted on one instance at a time, though a volume can be detached from one instance and then attached and mounted on another.
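For example, to move the volume to a different instance we would unmount it on the instance (e.g. with "sudo umount /data") and then detach it with boto3. A sketch, not executed in this demo:
# Sketch: detach the volume so it can be attached to another instance.
# The volume must be unmounted on the instance first.
vol.detach_from_instance(InstanceId='i-0a184b56b0ebdba98', Device='/dev/xvdh')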
The AWS Elastic File System (EFS) provides a brilliant way to attach and mount a filesystem that satisfies the Network File System (NFS) standard.
We first need to make sure that the instance has the right type of security group. A security group defines which network ports and protocols are open. We need to make sure that we have one in which port 2049 (NFS) is open.
We also need to know a few things about the instance i-0a184b56b0ebdba98, which was created from the portal as described in the book text example. When we created this instance we gave it a special security group "default" and we added the NFS 2049 port. We also need to know the subnet id.
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type, instance.subnet_id, instance.security_groups)
To look at the details of the security group we use the lower-level EC2 client.
ec2_client = boto3.client('ec2', 'us-west-2',
                          aws_access_key_id='your access key',
                          aws_secret_access_key='your big long secret key')
ec2_client.describe_security_groups(
    GroupNames=[
        'default',
    ],
    GroupIds=[
        'sg-c67ce2a0',
    ],
)
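In this demo the NFS 2049 rule was added from the portal, but it can also be added with boto3. A sketch; the CIDR range below is an assumption (the default VPC range) and should be replaced with the one for your own network:
# Sketch: open the NFS port (2049) in the security group so the EFS mount
# target can be reached. The CidrIp value is an assumed placeholder.
ec2_client.authorize_security_group_ingress(
    GroupId='sg-c67ce2a0',
    IpProtocol='tcp',
    FromPort=2049,
    ToPort=2049,
    CidrIp='172.31.0.0/16'
)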
We can now start building the EFS file system. This requires an EFS client; then we can create the file system.
client = boto3.client('efs', 'us-west-2',
                      aws_access_key_id='your access key',
                      aws_secret_access_key='your big long secret key')
response = client.create_file_system(
    CreationToken='myefs',
    PerformanceMode='generalPurpose'
)
response
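The response dictionary includes the id of the new file system; that is the FileSystemId used for the mount target below ('fs-69fe00c0' in this run). A small sketch, assuming the call above succeeded:
# Sketch: pull the new file system id out of the create_file_system response.
fsid = response['FileSystemId']
print(fsid)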
As the next step we must create a mount target for the file system.
mtresp = client.create_mount_target(
    FileSystemId='fs-69fe00c0',
    SubnetId='subnet-06e66170',
    SecurityGroups=[
        'sg-c67ce2a0'
    ]
)
mtresp
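The mount target response includes the private IP address assigned to the mount target; that is the address used in the NFS mount command below ('172.31.38.128' in this run). A sketch, assuming the call above succeeded:
# Sketch: the mount target's private IP address is what the instance mounts from.
mount_ip = mtresp['IpAddress']
print(mount_ip)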
To mount the file system on the instance there are three steps: we first must install the NFS utilities, next we create a mount point (we will call it /scidata), and finally we do the mount, using the IP address of the mount target.
command = 'sudo yum install -y nfs-utils'
myexec(priv_key, '54.187.61.12', command)
command = 'sudo mkdir /scidata\n ls -l / | grep scidata \n'
myexec(priv_key, '54.187.61.12', command)
command = 'sudo mount -t nfs4 -o vers=4.1 172.31.38.128:/ /scidata \n \
df \n'
mntcmd = myexec(priv_key, '54.187.61.12', command)
mntcmd