stances. Consequently, virtualized networking can have an adverse impact on the
performance of tightly coupled workloads.
The overheads introduced by virtualized networking can be bypassed through use of Single-Root I/O Virtualization (SR-IOV), although this feature is not supported by all NICs. This PCI hardware capability specifies how a PCI device can be shared, through the creation of a number of shadow devices referred to as virtual functions. A hypervisor that supports SR-IOV enables the passthrough of a virtual function device into a compute instance, giving the virtual instance direct access to the underlying physical network device's hardware interface. The resulting direct path from compute instances to physical networks circumvents the software-defined networking layer implemented by the hypervisor. While this approach delivers high levels of performance, it also bypasses the security group firewall rules that OpenStack applies to an instance. Consequently, SR-IOV networking should only be used on internal (trusted) networks and should be configured in conjunction with conventional network configurations for managing connectivity with untrusted networks.
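As an illustrative sketch rather than a definitive recipe, the fragments below show one way an SR-IOV virtual function might be attached to an instance: a PCI whitelist on the compute node, a physical-network mapping for the SR-IOV agent, and a Neutron port created with VNIC type "direct". The device name enp4s0f1, the physical network label physnet2, and the network, image, and flavor names are placeholders, and the exact option names vary between OpenStack releases.

    # nova.conf on the compute node: expose the NIC's virtual functions
    # (option naming varies by release)
    [pci]
    passthrough_whitelist = { "devname": "enp4s0f1", "physical_network": "physnet2" }

    # sriov_agent.ini: map the physical network to the SR-IOV-capable NIC
    [sriov_nic]
    physical_device_mappings = physnet2:enp4s0f1

    # Create a port with VNIC type "direct" and boot an instance using it
    $ openstack port create --network sriov-net --vnic-type direct sriov-port
    $ openstack server create --flavor m1.large --image centos7 \
          --nic port-id=<sriov-port UUID> hpc-instance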
The performance of latency-sensitive workloads can be improved further through smarter process scheduling. Pinning virtual processor cores to physical cores improves cache locality, and memory access performance is improved by exploiting the affinity between physical processor cores and memory regions. We briefly explain these concepts. Modern system architectures tend to incorporate multiple processors, each with an integrated memory controller and directly attached memory. A single coherent memory system is constructed, with all memory accessible from all CPU cores and hardware interconnects between processors ensuring consistency. A consequence of this design is that memory and CPUs are unevenly coupled: what is referred to as non-uniform memory access (NUMA). Making virtual compute instances aware of the NUMA topology of the physical host enables better scheduling and placement decisions by the guest kernel. The compute hypervisors and OpenStack services can be configured to enable these optimizations.
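As a sketch of how such optimizations are requested, the flavor properties below ask Nova to pin each virtual CPU to a dedicated physical core and to present a single-NUMA-node topology to the guest. The flavor name and sizes are illustrative, and the compute nodes must additionally be configured to reserve physical cores for pinned instances.

    # Create a flavor for pinned, NUMA-aware instances (name and sizes are examples)
    $ openstack flavor create --vcpus 8 --ram 16384 --disk 40 hpc.pinned
    $ openstack flavor set hpc.pinned \
          --property hw:cpu_policy=dedicated \
          --property hw:numa_nodes=1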
13.3.3 Hierarchical Storage and Parallel File Systems
A workload may require high-performance coupling with a data source rather than between compute hosts. OpenStack can support storage services of multiple types concurrently, including types suitable for different tiers in a storage hierarchy. However, OpenStack itself does not provide a native implementation of hierarchical storage management. The HPC data-movement protocol iSCSI Extensions for RDMA (iSER) is supported for serving data for OpenStack block storage (Cinder). iSER-enabled Cinder storage requires an RDMA-capable NIC in both the compute