Chapter 12
Building Your Own Cloud with
Eucalyptus (with Rich Wolski)
“Cloud is about how you do computing, not where you do computing.”
—Paul Maritz
Eucalyptus is an open source software infrastructure for implementing private
clouds. It is structured as a set of cooperating web services that can be deployed in
a large array of configurations, using standard, commodity data center hardware.
It is packaged using the Red Hat package manager [46] and tested extensively
by Hewlett Packard, which serves as the primary curator of the open source project.
Hewlett Packard also offers commercial extensions to Eucalyptus that enable it to
incorporate non-commodity network and storage offerings from other vendors.
Eucalyptus has two key features that differentiate it from other private cloud
IaaS platforms, such as OpenStack, CloudStack, and OpenNebula. The first is that
it is API compatible with Amazon. As a result, code, configuration scripts, VM
images, and data can move between any Eucalyptus cloud and Amazon without
modification. In particular, the large collection of freely available open source
software (and all necessary configurations) on Amazon can be downloaded and run
in a Eucalyptus cloud.
The second differentiating feature of Eucalyptus is that it is packaged to enable
easy deployment onto the types of compute resources typically found in a data
center (e.g., 10 gigabit Ethernet, commodity servers or blade chassis, storage area
network devices, JBOD arrays); but once deployed, it operates in exactly the same
way as every other Eucalyptus cloud—and, for that matter, in the same way as
Amazon. This design feature is useful for enterprises that wish to deploy a cloud
to use but are not interested in developing their own new cloud technologies or
creating a locally unique customized cloud.
The implementations of the cloud abstractions within Eucalyptus are designed
end to end so that they work consistently across deployment architectures, are
performance optimized, and are reliable. These features make Eucalyptus inex-
pensive to maintain as scalable data-center infrastructure, often requiring only
a small fraction of the system administration support typically needed for other
technologies and platforms. The disadvantage, however, is that Eucalyptus is more
difficult to modify than other cloud platforms. Thus, enterprises wishing to develop
their own proprietary cloud technologies often find it an inappropriate choice. In
short, Eucalyptus is designed for those who wish to run a production private cloud,
but not for those who wish to use it as a toolkit to build other technologies.
12.1 Implementing Cloud Infrastructure Abstractions
The cloud IaaS abstractions provide user access to virtualized resources that, once
provisioned, behave in the same way as their data-center counterparts. Using
cloud-hosted resources is different from using native bare-metal resources, however,
because in the cloud resource provisioning and decommissioning are self-service and
each resource is characterized by a service level agreement (SLA) or service level
objective (SLO). That is, cloud users are expected to operate a provisioning API
that allocates resources for their exclusive use (and similarly a decommissioning
API when they have finished using the resources). Also, rather than acquiring
access to a particular make and model of resource, users must consider the quality
of service (described by an SLA or SLO) associated with that resource. The cloud
is free to implement each resource request (using virtualization) with whatever
resources are capable of meeting the SLA terms or the SLO.
For private clouds, self-service is implemented by using distributed, replicated,
and tiered web services to automate resource provisioning. In Eucalyptus, requests
are decomposed into subrequests that are routed to different services and handled
asynchronously to improve scale and throughput. A request is ready for its user
once all its subrequests complete successfully.
For example, to provision a VM, Eucalyptus decomposes the provisioning
request (which specifies a VM type) into subrequests for:

- a VM image containing either a Linux or Windows distribution;
- a fixed number of virtual CPUs and a fixed-size memory partition;
- one or more ephemeral disk partitions attached to the VM when it boots;
- a public and a private IP address for the VM;
- a MAC address to be used by the VM to make a DHCP request for other network information; and
- a set of firewall rules associated with the security group in which the VM is allocated.
Once the request is authenticated and any user-specific access control policies
are applied, these resource-provisioning subrequests are initiated separately and
handled asynchronously by one or more internal web services. The same decom-
position approach is used for the other IaaS abstractions, such as disk volumes,
objects in the object store, firewall rules, load balancers, and autoscaling groups.
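To make this concrete, here is a hypothetical user-level request, issued with the Amazon-compatible euca2ools command line client, that would trigger the entire decomposition above; the image identifier emi-1e78481f, keypair mykey, and security group default are illustrative assumptions, not values any particular cloud will have.

# Hypothetical example: one run-instances call that Eucalyptus decomposes
# into image, CPU/memory, disk, address, MAC, and firewall subrequests
# handled asynchronously by its internal services.
euca-run-instances emi-1e78481f -t m1.small -k mykey -g default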
Eucalyptus allows cloud administrators to determine the SLAs and SLOs
that are to be supported via a site-specific deployment architecture that the
administrator must define. Further, administrators are responsible for publishing
the resulting SLAs and SLOs to their user community.
For example, in one installation at UCSB, the cloud is divided into two separate
Availability Zones (AZs)—one containing newer, faster computing servers than the
other. Thus a user requesting a specific VM type from one AZ gets a different
speed processor (different cache size, different memory-bus speed, etc.) than the
same VM type would deliver from the other AZ. It is up to the cloud administrator
to publish what SLO a user should expect from each AZ, so that users of this
cloud can reason about the VMs that they are provisioning.
Thus, in a private cloud, the deployment architecture determines the SLAs
and SLOs that can be satisfied. This feature is often attractive in private data-
center contexts, in which different organizations purchase hardware scoped to
meet differing specific needs. Eucalyptus allows this hardware to be accessed in a
uniform (and Amazon-compatible) manner, while also allowing the administrator
to specify the SLAs and SLOs that the users can expect.
12.2 Deployment Planning
The Eucalyptus documentation describes the deployment planning process [16]
and provides sample reference architectures that are suitable for different private
cloud use cases. In this section, we describe some of the high-level trade-offs that
typically arise when planning a Eucalyptus deployment.
12.2.1 Control Plane Deployment
The Eucalyptus control plane consists of the following cooperating web services,
which communicate via authenticated messages:
- Cloud Controller (CLC), which manages the internal object request lifecycle and cloud bootstrapping
- Cluster Controller (CC), which manages a cluster or partition of compute nodes
- Storage Controller (SC), which implements the network-attached block-level storage abstraction (e.g., Amazon EBS)
- Walrus, which implements the cloud object store (e.g., Amazon S3)
- Node Controller (NC), which actuates VMs on a compute node
- User Facing Services (UFS) component, which fields and routes all user requests to the appropriate internal services
- Eucalyptus Management Console, which implements the graphical user console and cloud administration console
In addition, the CC uses a separate component called Eucanetd to handle the
various networking modes that Eucalyptus supports. In some networking modes,
Eucanetd must be co-located with the CC; in others, it is co-located with the NC.
These services can be deployed in a variety of ways: all together on the same
node (as we describe in section 12.3 on page 267), on separate nodes (one service
per node), or in any combination. Separating the services so that they run on
different nodes improves availability. Eucalyptus continues to function (possibly
with a degradation of service) when one or more of its internal services become
unavailable. Thus, separating services increases the chance that a node failure
can be masked by the cloud. On the other hand, co-located services involve less
installation time and require less hardware dedicated to the control plane.
The cloud administrator must also decide how many AZs to configure. Euca-
lyptus treats each AZ as a separate cluster. That is, a compute node hosting a
VM can be in only one AZ. Each AZ requires its own CC service and SC service.
Neither the CC nor the SC services for two different AZs can be on the same host.
However, the CC-SC pair for each of two AZs can be co-located if desired.
Each node that hosts a VM must run an NC, which acts as an agent for
Eucalyptus. VM provisioning requests ultimately translate into commands to the
NC, causing it to assemble and boot a VM on the local hypervisor and to attach
the new VM to the Eucalyptus-provisioned network.
Beyond simple TCP connectivity, there are some additional connectivity re-
quirements between the components when they are hosted on separate machines.
The Eucalyptus installation documentation provides specifics [18].
12.2.2 Networking
Perhaps the most complex set of choices to make when planning a Eucalyptus
deployment relates to the provisioning of virtual networks. To implement cloud
connectivity and network security, Eucalyptus must be able to set up and tear
down virtual networks within the data center. Typically, the network architecture
in each data center is unique. Furthermore, the network infrastructure is often the
vehicle for implementing security policies, which often prescribe a limited number of
feasible network control options for a particular deployment. Eucalyptus supports
several networking modes [19] (including Software Defined Networking), allowing
the administrator to decide on the best approach for a specific data center.
The two most popular modes are MANAGED-NOVLAN, which uses the node that
is hosting the CC service as a “soft” Layer 3 IP router that the CC can program
dynamically, and EDGE, in which each node hosting a VM also acts as a router
to implement network virtualization. The former has the advantage of being
simple to configure and troubleshoot. However, it does not implement full Layer 2
network isolation, and thus a VM can snoop Ethernet network packets from the
network to which the node hosting it is attached. Also, if the node hosting the CC
goes down, VMs lose their external connectivity until it is restored. In contrast,
EDGE mode implements both Layer 2 and Layer 3 network isolation and does not
route network traffic through the node hosting the CC. However, changes to cloud
network abstractions, such as those employed by security groups, take longer to
propagate, due to the use of eventual consistency mechanisms.
12.2.3 Storage
A number of deployment options are available for the various cloud storage abstrac-
tions supported by Eucalyptus. For object storage, the cloud can use a Linux
file system on the node that is hosting the Walrus service, RIAK CS [43], or
Ceph (ceph.com); each has its own installation and maintenance complexity, failure
resilience properties, and performance profile. Similarly, for network-attached
volume storage, Eucalyptus can use the local Linux file system on the node hosting
the SC, Ceph via its RADOS interface [42], or one of several storage area network
offerings from various vendors.
In general, the deployments that use the local Linux file system are simple
to configure and maintain and are relatively performant. They do not replicate
data, however, so an unmitigated storage failure can cause data loss. Some cloud
administrators use the software RAID capability of Linux [28] to implement the file
system that backs Walrus and the SC in this deployment configuration. When data
loss is a strong concern, however, one of the other replicating storage technologies
is usually less complex to maintain.
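As an illustrative sketch of the software RAID option (the device names and mount point are assumptions for this example, not prescribed by Eucalyptus), a mirrored array backing the Walrus and SC directories could be assembled as follows:

# Mirror two spare disks with Linux software RAID (RAID-1)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Put a file system on the array and mount it where Walrus and the SC
# keep their backing files (see section 12.3.5 for the directory layout)
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/eucalyptus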
12.2.4 Compute Servers
On each compute server hosting an NC, the cloud administrator gets to specify
VM sizing and virtual CPU speed. When a VM is initiated, it is allocated some
number of cores, a fixed memory size, and ephemeral disk storage that appears
as separate attached disk devices in the VM. These requirements are carried in
the VM type, and the cloud administrator determines what VM types a specific
cloud supports. Eucalyptus uses the core count, allocated disk space, and available
memory to determine the maximum size VM type that can be hosted on each
node. Additionally, Eucalyptus can use hypervisor multiplexing to overprovision
the servers in terms of core counts, in which case the cores are time sliced.
These configuration operations mean that when planning a deployment, the
cloud administrator must typically determine how much local disk storage (and
from what disk partition) is to be used for VM ephemeral storage; whether to enable
hardware hyperthreading (if it is available); and the degree to which hypervisor
timeslicing of cores should be employed. Each of these parameters controls, in
some measure, the SLO that a hosted VM can achieve.
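As a sketch of how such VM types are managed, an administrator might inspect and adjust a type as follows. This assumes the Eucalyptus-specific instance-type commands shipped with euca2ools (euca-describe-instance-types and euca-modify-instance-type-attribute); the exact flag names are an assumption to check against your version's documentation.

# List the VM types the cloud currently offers, with their core,
# memory, and disk allocations
euca-describe-instance-types
# Resize one type (assumed flags: CPUs, memory in MB, disk in GB)
euca-modify-instance-type-attribute --cpus 2 --memory 2048 --disk 20 m1.small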
12.2.5 Identity Management
Eucalyptus supports the same role-based identity management and request au-
thentication mechanisms and APIs as Amazon does. This feature is particularly
important, both for security reasons (Amazon is generally considered secure)
and for API compatibility reasons. However, the deployment dictates how user
credentials and role definitions are managed. In particular, the cloud can operate
standalone, in which case the cloud administrator is responsible for credential
administration (e.g., credential distribution, revocation, role-definition policies),
or it can be integrated with the data center’s existing Active Directory or LDAP
installation.
12.3 Single-cluster Eucalyptus Cloud
We illustrate the process of deploying a Eucalyptus private cloud in a single
computational cluster. One node (machine) in the cluster acts as the head node,
which hosts all of the web services that compose the Eucalyptus control plane.
In this configuration, all nodes except the head node host VMs. We call the
nodes that host VMs worker nodes. Cloud requests (made via HTTPS or the
Management Console) are fielded by the various services on the head node and,
once authenticated and determined to be feasible, are forwarded to one or more NCs
running on worker nodes for actuation. Similarly, when a request is terminated,
the head node sends notice of the termination to all NCs that must deallocate
resources associated with the request. The request is fully terminated when all
NCs report successful deallocation.
This configuration is useful for a supported production deployment in many
academic or research settings where a moderate-sized user population (e.g., an
instructional class, research group, or development team) shares a cluster, also
of moderate size (tens to hundreds of nodes). Note that the scalability of this
configuration is typically determined by the number of nodes and not the total
number of cores (separate CPUs) that each node comprises. Also, from a reliability
perspective, all VMs remain active and network reachable in the event that the head
node fails or goes offline. No new cloud requests can be serviced while the head
node is down, and some storage abstractions cease to function; but VM activity,
network connectivity, and access to ephemeral storage (which is local to each VM)
are not interrupted by a head node failure. Further, full functionality is restored
when the head node comes back online. Thus, this configuration, which is
relatively simple to deploy and is portable to a wide variety of hardware
configurations, is capable of long-duration VM hosting.
A single-cluster configuration typically requires little data-center support:
commodity servers connected to a publicly routable subnet are sufficient to support
a cloud. The cloud administration effort required for such an installation is also low:
once the cloud is deployed, the cloud administrator is responsible for issuing user
credentials, managing resource quotas, and setting instance type configurations.
In an academic setting, this burden is usually budgeted as a small fraction of a
local system administrator’s available time.
12.3.1 Hardware Configuration
We consider for the following example installation a hardware configuration com-
prising four x86_64 servers, each with a single gigabit Ethernet interface attached
to a publicly visible IPv4 subnet. Each server has four cores, 8 GB of memory,
and 1 TB of attached storage. Eucalyptus is designed to function properly on
a wide variety of server configurations. For the head node, generally 8 GB is
necessary, but the worker nodes can have almost any configuration. However,
the core counts, memory sizes, and available local storage on the worker nodes
determine the maximum size for any instance type that an administrator can
configure for the cloud.
12.3.2 Deployment
All services running on the head node (except the CC and the Management Console)
share a single Java Virtual Machine (JVM). The available disk space on the head
node is split between Walrus and the SC, and no software RAID is present. Here
we use the EDGE networking mode, which requires each worker node to run both
an NC service and a Eucanetd process.
In addition, the cloud requires a pool of available IP addresses from the same
subnet to which the head node and worker nodes are attached, for assignment to
hosted VMs. This configuration allows all VMs within the cloud to be reachable
as if they were hosts on the same publicly visible subnet as the head and worker
nodes. Further, these IP addresses cannot be in use by other hosts on the subnet:
they must be available, but unassigned, within the subnet address space.
In this example, we assume a publicly routable subnet of 128.111.49.0/24,
with 254 usable IP addresses, and with 128.111.49.1 as the gateway for the
subnet. We also assume that the nodes have been assigned 128.111.49.10 through
128.111.49.13 by the local network administrator but that all addresses between
128.111.49.14 and 128.111.49.254 on the subnet are available to the cloud
for VMs. We further assume that the head node has the public IP address
128.111.49.10.
In addition, Eucalyptus assigns each VM both an internal private IP address
and an externally routable public IP address (as does Amazon EC2). Thus, the
cloud needs a private IP address range to use for VM private addresses. Because
this network is private to the cloud, it can be any private network address range.
We use 10.1.0.0/16 as the private address space for the cloud in this example. To
allow each worker node to implement a firewall and router for the VMs it hosts, it
needs a network address on this private subnet as well. We assume that the worker
nodes get 10.1.0.2 through 10.1.0.4 and that the others are available for VMs.
Note that the head node does not need an address from the private address range.
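For reference, the address plan assumed in this example can be summarized as follows (the private VM pool matches the network topology file shown in figure 12.2):

# Example address plan (summarizing the assumptions above)
# Public subnet:       128.111.49.0/24   (gateway 128.111.49.1)
# Head node:           128.111.49.10
# Worker nodes:        128.111.49.11 - 128.111.49.13
# Public IPs for VMs:  128.111.49.14 - 128.111.49.254
# Private subnet:      10.1.0.0/16       (cloud-internal)
# Worker private IPs:  10.1.0.2 - 10.1.0.4
# Private IPs for VMs: 10.1.0.5 - 10.1.0.254 (see figure 12.2)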
To keep the integration points between the cloud and the existing data center
administration to a minimum, we assume that the cloud administrator manages
user accounts either via scripts or via the Eucalyptus Management Console.
Figure 12.1 shows the single-cluster deployment that we use in this chapter as
an example Eucalyptus deployment.
Figure 12.1: Example single-cluster Eucalyptus deployment, showing the one head node
running management, storage, and networking services and three worker nodes each
running NC software.
Eucalyptus also assigns internal and externally resolvable DNS names to
each VM that it hosts. To do so, it requires a cloud-local subdomain for the
private cloud that it is to manage. In the example, we use the subdomain name
testcloud.ucsb.edu. The Domain Name System (DNS) server that the nodes use
for DNS service must be configured to forward DNS name requests for externally
resolvable instance names to the head node on port 53 (the standard DNS port).
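How this forwarding is configured depends on the site's DNS software. As an illustrative sketch only, assuming the site DNS runs BIND, a forward zone such as the following would send lookups for the cloud subdomain to the head node:

zone "testcloud.ucsb.edu" {
    type forward;
    forward only;
    forwarders { 128.111.49.10; };
};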
12.3.3 Software Dependencies and Configuration
The current version of Eucalyptus requires that nodes run the latest version of
either Red Hat Enterprise Linux (RHEL) version 7 or the community-supported
version (CentOS) version 7. All nodes must run the Network Time Protocol daemon
ntp [86] so that the Secure Sockets Layer (SSL) can protect against request replay
attacks [30].
Eucalyptus, in EDGE mode, requires a number of network ports to be opened
so that the internal services can communicate with each other, with software
dependencies within Linux, and with users and administrators. Information about
these ports and their functions is available online [17]. Ports can be opened by the
root user individually [10]—or, if the subnet that the nodes are using is protected
by another firewall, the root user can make all ports accessible by executing the
following command. (Note that this command opens all ports, so care must be
taken to ensure that the system is otherwise secure.)
systemctl stop firewalld
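Alternatively, to open only the required ports rather than disabling the firewall entirely, you could run commands such as the following. This is a sketch assuming firewalld is in use; it covers only the ports named later in this chapter, and the complete list for your deployment is in the online documentation [17].

# Open the Eucalyptus web service ports (8773/8774 on the head node,
# 8775 on worker nodes) and DNS on the head node; consult [17] for
# the complete list required by your deployment.
firewall-cmd --permanent --add-port=8773/tcp
firewall-cmd --permanent --add-port=8774/tcp
firewall-cmd --permanent --add-port=8775/tcp
firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp
firewall-cmd --reload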
For EDGE mode, the worker nodes must attach a Linux network bridge to the
network interface [53]. Below, we show the bridge configuration steps for the first
worker node, which has the IP address 128.111.49.11. You must first install the
bridge utilities software with this command:
bridge utilities software with this command:
yum -y install bridge-utils
In the directory /etc/sysconfig/network-scripts, create the file ifcfg-br0,
and place in that file the following text:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
DELAY=0
GATEWAY=128.111.49.1
IPADDR=128.111.49.11
NETMASK=255.255.255.0
BROADCAST=128.111.49.255
IPADDR1=10.1.0.2
NETMASK1=255.255.0.0
BROADCAST1=10.1.255.255
NM_CONTROLLED=no
DNS1=128.111.49.2
Note that the bridge needs both the public IP address for the node and an
address on the private subnet that is to be used by VMs. In this example, we have
chosen 10.1.0.2. Also, you need to specify the IP address of the DNS service that
this node should use: in this case, 128.111.49.2.
Next, run the command ip addr show to determine the system-chosen name
for the Ethernet interface. RHEL/CentOS 7 uses a dynamic naming scheme
for network interfaces. To determine what name it has chosen for the Ethernet
interface, look for the IP address attached to a device that is marked as UP.
To bridge this device, find in the directory /etc/sysconfig/network-scripts
the file that begins with the string ifcfg- and ends with the device name. For
example, if Linux has named the Ethernet device enp3s0, then you want the file
named ifcfg-enp3s0 in that directory. You need the Ethernet MAC address
for this device as well. To get it, run the command ip link and look for the
link/ether field for the device. The MAC address is the colon-separated address,
which in our example is 00:19:b9:17:91:73. Edit the device's ifcfg- file so that
it contains the following:
DEVICE=enp3s0
# Change hardware address to that used by your NIC
HWADDR=00:19:b9:17:91:73
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no
The network must be restarted in order for this change to take effect. To do so,
issue the following command as the root user, and then check with ip addr show
that the bridge now carries the public and private IP addresses.
systemctl restart network
12.3.4 Installation
You must first temporarily disable the firewall on each node by running the following
command. (This command and all other commands in this section need to be run
as the root user.)
systemctl stop firewalld.service
Note that the firewall will be re-enabled (if it was already enabled) when the
system reboots. If you reboot the system for any reason before the installation is
complete, you must repeat this step. After the installation is complete, you can
re-enable the firewall service.
Next, install the Eucalyptus release packages from downloads.eucalyptus.com. At
the time of writing, the latest release of Eucalyptus is version 4.3 and of Euca2ools
is version 3.4. Thus you run the following commands:
PREFIX=" http :// downloads .eucalyptus.com/ software "
yum -y install ${ PREFIX }/ eu calyp tus /4.3/ rhel /7/ x86_64 / eucalyptus - release -4.3 -1. el7 . noarch .rpm
yum -y install ${ PREFIX }/ euca2ools /3.4/ rhel /7/ x86_64 / euca2ools - release -3.4 -1. el7 .noarch .rpm
yum -y install http :// dl. fedorap roject . org/ pub /epel /epel - release - latest -7. noarch .rpm
You then install the node controllers on the worker nodes. Each worker node
needs to run a Eucalyptus NC. To install the NC code and the network virtualization
daemon (Eucanetd), run the following command sequence on each worker node.
The last two virsh commands remove the default libvirt network so that Eucanetd
can start a DHCP server.
yum -y install eucalyptus-node
yum -y install eucanetd
systemctl start libvirtd.service
virsh net-destroy default
virsh net-autostart default --disable
You need to check that KVM did not have the virtual network busy, by running
ls -l /dev/kvm. If the output does not show the KVM device as being accessible,
as follows, then reboot the machine to clear the libvirt lock on the device.
crw-rw-rw-+ 1 root kvm 10, 232 Jan 24 13:03 /dev/kvm
The final step installs the cloud on the head node. On the head node, run the
following commands to install the Eucalyptus control plane services plus an imaging
service (that runs as a VM) for converting different image formats automatically
within the cloud.
yum -y install eucalyptus-cloud
yum -y install eucalyptus-cluster
yum -y install eucalyptus-sc
yum -y install eucalyptus-walrus
yum -y install eucalyptus-service-image
yum -y install eucaconsole
12.3.5 Head Node Configuration
You must configure the Security-Enhanced Linux (SELinux) kernel security module
to allow Eucalyptus to function as a trusted service. On the head node, run the
following commands as the root user:
setsebool -P eucalyptus_storage_controller 1
setsebool -P httpd_can_network_connect 1
Next, you configure EDGE networking on the head node. You do this by editing
the file /etc/eucalyptus/eucalyptus.conf and setting VNET_MODE="EDGE". (Note
that the # character serves as a comment.) Also on the head node, you need to create
a JSON file that contains the network topology that the cloud is to use for each
VM that it hosts. The file format is documented in the Eucalyptus EDGE Network
Configuration manual [15]. For the example hardware configuration described in
{
"InstanceDnsDomain": "eucalyptus.internal",
"InstanceDnsServers": ["128.111.49.2"],
"MacPrefix": "d0:0d",
"PublicIps": [
"128.111.49.14-128.111.49.254"
],
"Subnets": [
],
"Clusters": [
{
"Name": "az1",
"Subnet": {
"Name": "10.1.0.0",
"Subnet": "10.1.0.0",
"Netmask": "255.255.0.0",
"Gateway": "10.1.0.1"
},
"PrivateIps": [
"10.1.0.5-10.1.0.254"
]
}
]
}
Figure 12.2: Example JSON configuration file for a Eucalyptus virtual network.
section 12.3.1 on page 268, you create the file /etc/eucalyptus/network.json
with the contents shown in figure 12.2.
Note that the public and private IP addresses used by the head node and
worker nodes are not in the list of addresses. Eucalyptus uses addresses from the
list in this file for VMs; thus, these addresses must not conflict with the addresses
used by the nodes hosting the cloud. Note also that the Cluster has a name (az1
in the example). This is the name of the availability zone (you can set it to any
string you like). You need this name when you go to register the availability zone
with the cloud.
In this example, the head node uses the file system as backing store for Walrus
and the SC. By default, the directory /var/lib/eucalyptus/bukkits is used
by Walrus and /var/lib/eucalyptus/volumes by the SC as top-level directories
for the backing store files. A typical CentOS installation on a machine with a
single disk creates two separate partitions for the root file system and for home
directories and puts the bulk of the available disk space in the home directory
partition. If you leave the bukkits and volumes directories on the root partition,
it may run out of disk space.
Eucalyptus follows symbolic links for these two directories. If the root partition
is small compared with your home partition, then create symbolic links by running
the following commands as the root user. (Alternatively, you can reconfigure your
root partition to have more space and avoid the use of these symbolic links.)
rm -rf /var/lib/eucalyptus/bukkits
rm -rf /var/lib/eucalyptus/volumes
mkdir -p /home/bukkits
mkdir -p /home/volumes
chown eucalyptus:eucalyptus /home/bukkits
chown eucalyptus:eucalyptus /home/volumes
ln -s /home/bukkits /var/lib/eucalyptus/bukkits
ln -s /home/volumes /var/lib/eucalyptus/volumes
chmod 770 /home/bukkits
chmod 770 /home/volumes
12.3.6 Worker Node Configuration
On each worker node, edit the file /etc/eucalyptus/eucalyptus.conf, and set
the following keyword parameters:
VNET_MODE="EDGE"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="br0"
VNET_BRIDGE="br0"
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
Each NC maintains a cache of VM images as well as a backing store for
running instances in the local file system. The path to the top-level directory
for these storage requirements is given by the INSTANCE_PATH keyword
parameter in the file /etc/eucalyptus/eucalyptus.conf, and its default value
is /var/lib/eucalyptus/instances. To move this to the home disk partition
(see the discussion of disk space in the preceding subsection), run the following
commands as the root user on each worker node:
mkdir -p /home/instances
chown eucalyptus:eucalyptus /home/instances
chmod 771 /home/instances
Next, edit the file /etc/eucalyptus/eucalyptus.conf, and set the keyword
parameter INSTANCE_PATH to /home/instances. Installation is complete.
12.3.7 Bootstrapping
Eucalyptus needs to go through a one-time bootstrapping step after a clean install.
Note, however, that Eucalyptus also supports upgrades between versions; the
bootstrapping process described here is needed only after the packages are installed
for the first time. The bootstrapping process uses a set of command-line tools
that are installed on the head node, as shown in figure 12.3 on the next page.
Some of these tools are specific to head node operation, while others are part of
the standard Amazon-compatible euca2ools [14]
command line interface. In what follows, you run all the commands from the Linux
shell as a root user for the node.
12.3.7.1 Registering the Eucalyptus Services
The next step is to register the various service components with each other.
Registration requires that the head node use the Linux command rsync to transfer
configuration state. As such, it is easiest if the head node can use rsync without a
passphrase, both with each worker node and with itself [170]. Otherwise,
each registration step prompts the user to enter the root password either on
the head node or on the specific worker node that is being registered, possibly
several times for each. Passphrase-less rsync can subsequently be disabled once
registration is complete.
Eucalyptus works best if it uses the public IP addresses rather than the DNS
names of the nodes for registration. Also, you need the name of the AZ specified
in the network topology JSON file (az1 in this example).
Once all services are running (the registration step cannot take place when
the services are down or not yet ready), run this command on the head node to
generate a set of bootstrapping credentials.
eval clcadmin-assume-system-credentials
This command sets shell environment variables containing a temporary set of
credentials that allow subsequent commands to stitch the cloud services together
securely. Thus, you must use this shell for the remaining registration steps.
To register the user-facing services, determine the public IP address of the
head node, and choose human-readable names for the services. In this example,
# First run this command on the head node.
clcadmin-initialize-cloud

# Run these commands to have the CentOS 7 bootstrapper restart the cloud
# automatically when the head node reboots. (The tgtd service is needed
# for the SC to be able to export volumes for VMs.)
systemctl enable eucalyptus-cloud.service
systemctl enable eucalyptus-cluster.service
systemctl enable tgtd.service
systemctl enable eucaconsole.service

# Start the control plane services on the head node
systemctl start eucalyptus-cloud.service
systemctl start eucalyptus-cluster.service
systemctl start tgtd.service
systemctl start eucaconsole.service

# (Optionally) enable the node controller to restart after a reboot
systemctl enable eucalyptus-node.service
systemctl enable eucanetd.service

# Start the node controller
systemctl start eucalyptus-node.service
systemctl start eucanetd.service

# Check that all components are running by running:
netstat -plnt
# and verifying that there are processes listening on ports 8773 and
# 8774 on the head node and 8775 on the worker nodes.
# (Note that it may take a few minutes for services to be visible.)
Figure 12.3: Commands used to bootstrap a Eucalyptus cloud.
the public IP address for the head node is 128.111.49.10, and we use the name
ufs_49.10 as the service name. To register the example user-facing services, you
run the following command on the head node.
euserv-register-service -t user-api -h 128.111.49.10 ufs_49.10
Next, register the backend service for the Walrus object store. Again, using the
public IP address for the head node and a service name, the registration command
for the example is as follows.
euserv-register-service -t walrusbackend -h 128.111.49.10 walrus_49.10
The registration procedure for the CC and the SC is similar, but it requires
the AZ name from the network topology JSON file for the -z parameter. The
registration commands for the example are as follows. The third command installs
security keys in the appropriate place in the head node file system.
euserv-register-service -t cluster -h 128.111.49.10 -z az1 cc_49.10
euserv-register-service -t storage -h 128.111.49.10 -z az1 sc_49.10
clcadmin-copy-keys -z az1 128.111.49.10
To register the NC services running on each worker node, you must run the
node registration commands on the head node, giving the IP address for each
worker node. In the example, these commands are as follows.
clusteradmin-register-nodes 128.111.49.11 128.111.49.12 128.111.49.13
clusteradmin-copy-keys 128.111.49.11 128.111.49.12 128.111.49.13
12.3.7.2 Runtime Bootstrap Configuration
With the services running and securely registered, the last bootstrapping step is to
configure the runtime system.
To configure DNS name resolution for cloud instances, you need the name of
the subdomain that is to be forwarded by the site DNS service to the head node.
In this example, we use the name testcloud.ucsb.edu. Thus, on the head node
you run the following commands:
euctl system.dns.dnsdomain=testcloud.ucsb.edu
euctl bootstrap.webservices.use_instance_dns=true
The head node acts as the authoritative DNS service for names in the associated
subdomain: testcloud.ucsb.edu in our example. To test the linkage, run this
command on the head node:
host compute.testcloud.ucsb.edu
It should resolve to the head node’s public IP address. If it does not, check the
configuration of the site DNS to ensure that it is forwarding name requests for the
cloud subdomain to the head node.
Next, create permanent administrative credentials for the cloud. These creden-
tials allow the cloud administrator full access to all resources (i.e., they are the
super-user credentials for the cloud). For security purposes, the SSL used internally
uses DNS resolution as part of its antispoofing authentication tests. Thus you need
to use the cloud subdomain specified in the previous command when generating
the administrator credentials. For the example, you run the following commands
on the head node.
cd /root
mkdir -p .euca
euare-usercreate -wld testcloud.ucsb.edu adminuser > \
    /root/.euca/adminuser.ini
You also need to tell the local command line tools that you wish to contact this
cloud (as opposed to other clouds or Amazon itself) by setting the region to the
cloud's local subdomain. For the example cloud, you run the following commands
in the shell where you were using the temporary credentials.
eval euare-releaserole
export AWS_DEFAULT_REGION=testcloud.ucsb.edu
If you want the root user always to contact the cloud running on the head
node, add these commands to the file /root/.bashrc; they are then set when the
root user logs in.
The next step is to upload the network topology JSON file to the cloud. With
the permanent administrative credentials installed as described previously, run the
following command.
euctl cloud.network.network_configuration=@/etc/eucalyptus/network.json
Next, the storage options for the cloud must be configured. In this example,
we are using the local file system on the head node for both object storage and
volume storage. For the volume storage configuration, you need to specify the
name of the AZ (az1 in this example). You use the following commands to enable
this storage configuration.
euctl objectstorage.providerclient=walrus
euctl az1.storage.blockstoragemanager=overlay
To enable the imaging service, run the following command using the local cloud
subdomain as the region.
esi-install-image --region testcloud.ucsb.edu --install-default
Eucalyptus then installs a VM that can import raw disk images for use as
volume-backed instances with this service.
12.3.7.3 Quick Health and Status Checks
At this point, your cloud should be up and functional. To ensure that it is working,
run the following status command.
euserv-describe-services
All services should report in the state enabled. To determine available instance
capacities, execute the following command with administrator credentials.
euca-describe-availability-zones verbose
If all NCs are properly registered, the sum of their capacities should be displayed.
In this example, each worker node supports four cores. Thus the cloud should be
able to run 12 m1.small instances when all three NCs are registered and no
other VMs are running.
12.3.8 Image Installation
Eucalyptus maintains a repository of network-accessible curated images that can
be installed automatically. To install from the image repository, as the root user
with the administrator credentials enabled, run the following commands.
yum install -y xz
bash <(curl -Ls eucalyptus.com/install-emis)
The installation script then prompts you for the images to install. This script
also checks to make sure that all of the dependencies needed to install images are
present. If any are missing, the script asks whether it should install them using
the yum utility.
Note that the image installation uses the cloud administrator credentials. As a
result, the image is accessible only by the cloud administrator. To make it available
to all users, first list the installed images by running the following command, and
note the image identifier in the output that begins with the string emi-.
euca-describe-images -a
You will also see in the output another installed image with an emi- identifier:
the image that hosts the imaging service. Choose the identifier for the image that
you just installed.
Then, run the euca-modify-image-attribute command with the emi- identifier,
set the -a flag to all, and add the -l flag. For example, if the image installation
installed emi-1e78481f, then run the following command to set the launch
permissions so that all accounts may launch an instance from the image.
euca-modify-image-attribute -a all emi-1e78481f -l
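As an optional end-to-end check (a suggested smoke test, not part of the installation procedure; the keypair name mykey is an assumption), you can launch an instance from the newly public image:

# Create a keypair and save the private key locally
euca-create-keypair mykey > mykey.pem
chmod 600 mykey.pem
# Launch a small instance from the image and watch it reach "running"
euca-run-instances emi-1e78481f -t m1.small -k mykey
euca-describe-instances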
12.3.9 User Credentials
The cloud administrator can create accounts for users other than the administrative
user. Each account has its own administrative user that can create other users in
the account. Unlike the cloud administrator, however, these account administrators
do not have access to resources outside of their specific accounts.
To create a user account, you need a unique account name. For example, to
create an account for user1, run the following command.
euare-accountcreate -wl user1 -d testcloud.ucsb.edu > user1.ini
This command outputs a credentials file that this user can install in the
user's .euca directory for use with the cloud. The user must also set the
AWS_DEFAULT_REGION environment variable to the name of the local cloud DNS
subdomain (testcloud.ucsb.edu in this example). These credentials allow the
user to access the cloud via the command line interface.
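On the user's side, installing these credentials might look like the following (a sketch; the .euca location and region name follow the conventions used in this chapter):

mkdir -p ~/.euca
cp user1.ini ~/.euca/
export AWS_DEFAULT_REGION=testcloud.ucsb.edu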
To enable the user to access the cloud from the Eucalyptus Management Console,
run the following command, with initialpassword being the password used to
gain initial access (which the user should change).
euare-useraddloginprofile --as-account user1 -u admin -p initialpassword
With this password, you can point a web browser at the head node and attempt
to log in. In the test example, you would use the URL https://128.111.49.10
to contact the Management Console. The certificate is self-signed, so most browsers
will ask you to confirm that you wish to make a security exception. At the login
screen, you then need to enter user1 as the account name, admin as the user in
that account, and the password. Once logged in, you can change the password.
Note that this password is only for the Management Console. The command line
tools use Amazon-style credentials that are embedded in the .ini file generated
when the account was created.
12.4 Summary
We have described how to build a private cloud using Eucalyptus. We first explained
how private clouds implement the cloud abstractions. Eucalyptus supports API
compatibility with Amazon and implements the same cloud abstractions so that
workloads and data can move seamlessly between any Eucalyptus clouds, regardless
of deployment architecture, and also between Eucalyptus and Amazon. We also
discussed the effect that deployment architecture has on SLAs and SLOs in a private
cloud. We concluded the chapter with a step-by-step description of how to deploy
a production-ready Eucalyptus private cloud using commodity servers connected
to a local-area network. The deployment steps comprise cloud software installation
and configuration, secure bootstrapping of the cloud, and initial administrative
actions necessary to make the cloud available for users.
12.5 Resources
The Eucalyptus website (www.eucalyptus.com) provides access to a wide range of
documentation and examples, as well as the Eucalyptus code.