rackspace.com/cloud/private

Installation Guide: Rackspace Private Cloud Powered By OpenStack


RPCO v11 (2016-02-19)
Copyright © 2015 Rackspace All rights reserved.

This documentation is intended for Rackspace customers who are interested in installing an
OpenStack-powered private cloud according to the recommendations of Rackspace.


Table of Contents
1. Preface ........................................................................................................................ 1
1.1. About Rackspace Private Cloud Powered By OpenStack ..................................... 1
1.2. RPCO configuration .......................................................................................... 1
1.3. Rackspace Private Cloud support ...................................................................... 1
2. Overview ..................................................................................................................... 3
2.1. Ansible ............................................................................................................. 3
2.2. Linux Containers (LXC) ..................................................................................... 3
2.3. Reference architecture ...................................................................................... 4
2.4. Host layout ...................................................................................................... 5
2.5. Host networking .............................................................................................. 7
2.6. OpenStack Networking ................................................................................... 12
2.7. Installation requirements ................................................................................ 15
2.8. Installation workflow ...................................................................................... 15
3. Deployment host ....................................................................................................... 17
3.1. Installing the operating system ....................................................................... 17
3.2. Configuring the operating system ................................................................... 17
3.3. Installing source and dependencies ................................................................. 17
3.4. Configuring Secure Shell (SSH) keys ................................................................ 18
4. Target hosts .............................................................................................................. 19
4.1. Installing the operating system ....................................................................... 19
4.2. Configuring Secure Shell (SSH) keys ................................................................ 19
4.3. Configuring the operating system ................................................................... 20
4.4. Configuring LVM ............................................................................................ 20
4.5. Configuring the network ................................................................................ 20
4.5.1. Reference architecture ......................................................................... 21
4.5.2. Configuring the network on a target host ............................................ 24
5. Deployment configuration ......................................................................................... 30
5.1. Prerequisites ................................................................................................... 30
5.2. Configuring target host networking ................................................................ 31
5.3. Configuring target hosts ................................................................................. 34
5.4. Configuring service credentials ........................................................................ 36
5.5. Configuring proxy environment variables (optional) ........................................ 37
5.6. Configuring the hypervisor (optional) ............................................................. 37
5.7. Configuring the Image service (optional) ........................................................ 38
5.8. Configuring the Block Storage service (optional) ............................................. 40
5.8.1. Configuring the Block Storage service to use LVM ................................ 40
5.8.2. Configuring the Block Storage service to use NetApp ............................ 41
5.8.3. Configuring the Block Storage service with NFS protocols ..................... 43
5.8.4. Configuring Block Storage backups to Object Storage ........................... 43
5.8.5. Configuring Block Storage backups to external Cloud Files .................... 44
5.8.6. Creating Block Storage availability zones .............................................. 45
5.9. Configuring the Object Storage service (optional) ........................................... 46
5.9.1. Enabling the trusty-backports repository .............................................. 46
5.9.2. Configure and mount storage devices .................................................. 46
5.9.3. Configure an Object Storage deployment ............................................. 48
5.9.4. Deploying Object Storage on an existing Rackspace Private Cloud Powered By OpenStack v11 Software ......... 54
5.9.5. Object Storage monitoring ................................................................... 55


5.9.6. Integrate Object Storage with the Image Service .................................. 56


5.10. Configuring HAProxy (optional) .................................................................... 57
5.11. Configuring Dashboard SSL (optional) ........................................................... 57
5.11.1. Self-signed SSL certificates .................................................................. 57
5.11.2. User-provided SSL certificates ............................................................. 58
6. Ceph ......................................................................................................................... 59
6.1. Deploying Ceph .............................................................................................. 59
6.2. Adding new Ceph storage servers to cluster .................................................... 63
6.3. Replacing a failed drive in a Ceph cluster ........................................................ 63
6.4. RAW Linux Images ......................................................................................... 65
7. Foundation playbooks ............................................................................................... 67
7.1. Running the foundation playbook .................................................................. 67
7.2. Troubleshooting ............................................................................................. 68
8. Infrastructure playbooks ............................................................................................ 69
8.1. Running the infrastructure playbook ............................................................... 69
8.2. Verifying infrastructure operation ................................................................... 70
9. OpenStack playbooks ................................................................................................ 71
9.1. Utility Container Overview .............................................................................. 71
9.2. Running the OpenStack playbook ................................................................... 72
9.3. Verifying OpenStack operation ....................................................................... 73
10. Operations .............................................................................................................. 75
10.1. Monitoring ................................................................................................... 75
10.1.1. Service and response .......................................................................... 75
10.1.2. Hardware monitoring ........................................................................ 75
10.1.3. Software monitoring .......................................................................... 75
10.1.4. CDM monitoring ................................................................................ 76
10.1.5. Network monitoring .......................................................................... 77
10.2. Adding a compute host ................................................................................ 77
10.3. Galera cluster maintenance ........................................................................... 77
10.3.1. Removing nodes ................................................................................ 78
10.3.2. Starting a cluster ................................................................................ 78
10.4. Galera cluster recovery ................................................................................. 80
10.4.1. Single-node failure ............................................................................. 80
10.4.2. Multi-node failure .............................................................................. 80
10.4.3. Complete failure ................................................................................ 82
10.4.4. Rebuilding a container ....................................................................... 82
A. OSAD configuration files ........................................................................................... 85
A.1. openstack_user_config.yml example configuration file ......................... 85
A.2. user_secrets.yml configuration file ......................................................... 96
A.3. user_variables.yml configuration file ..................................................... 98
A.4. swift.yml example configuration file ........................................................ 102
A.5. extra_container.yml configuration file ................................................. 111
A.6. Environment configuration files .................................................................... 111
11. Document history and additional information ........................................................ 113
11.1. Document change history ........................................................................... 113
11.2. Additional resources ................................................................................... 113


List of Figures
2.1. Host Layout Overview .............................................................................................. 7
2.2. Network components ............................................................................................... 8
2.3. Container network architecture .............................................................................. 10
2.4. Compute host network architecture ....................................................................... 11
2.5. Block Storage host network architecture ................................................................. 12
2.6. Networking agents containers ................................................................................ 13
2.7. Compute hosts ....................................................................................................... 14
2.8. Installation workflow .............................................................................................. 16
3.1. Installation workflow ............................................................................................. 17
4.1. Installation workflow .............................................................................................. 19
4.2. Infrastructure services target hosts ......................................................................... 25
4.3. Compute target hosts ............................................................................................. 26
4.4. Block Storage target hosts ...................................................................................... 27
5.1. Installation workflow ............................................................................................. 30
6.1. Ceph service layout ................................................................................................ 59
6.2. Ceph networking configuration .............................................................................. 62
7.1. Installation workflow ............................................................................................. 67
8.1. Installation workflow .............................................................................................. 69
9.1. Installation workflow ............................................................................................. 71


List of Tables
5.1. Mounted devices .................................................................................................... 48


1. Preface
Rackspace Private Cloud Powered By OpenStack (RPCO) uses openstack-ansible to quickly
install an OpenStack private cloud, configured as recommended by Rackspace OpenStack
specialists.

1.1. About Rackspace Private Cloud Powered By OpenStack

RPCO uses the Ansible IT automation framework to create an OpenStack cluster on Ubuntu
Linux. OpenStack components are installed into Linux Containers (LXC) for isolation,
scalability, and ease of maintenance.

1.2. RPCO configuration


RPCO installs and manages OpenStack Kilo with the following services:

• Compute (nova)

• Object Storage (swift)

• Block Storage (cinder)

• Networking (neutron)

• Dashboard (horizon)

• Identity (keystone)

• Image service (glance)

• Orchestration (heat)

RPCO also provides the following infrastructure, monitoring, and logging services to sup-
port OpenStack:

• Galera with MariaDB

• RabbitMQ

• Memcached

• Rsyslog

• Logstash

• Elasticsearch with Kibana

1.3. Rackspace Private Cloud support


Rackspace offers 365x24x7 support for RPCO. To learn about support for
your cloud, or to take advantage of our training offerings, contact us at:
<opencloudinfo@rackspace.com>.


You can also visit the RPCO community forums, which are open to all Rackspace Private
Cloud users. They are moderated and maintained by Rackspace personnel and OpenStack
specialists. See https://community.rackspace.com/products/f/45

For more information about RPCO, visit the Rackspace Private Cloud pages:

• Software and Reference Architecture

• Support

• Resources

For the very latest information about RPCO, refer to the Rackspace Private Cloud v11 Re-
lease Notes.

Rackspace® and Fanatical Support® are service marks of Rackspace US, Inc. and are regis-
tered in the United States and other countries. Rackspace does not claim trademark rights
in abbreviations of its service or product names, unless noted otherwise. All other trade-
marks, service marks, images, products and brands remain the sole property of their respec-
tive holders and do not imply endorsement or sponsorship.


2. Overview
RPCO Powered By OpenStack (RPCO) uses a combination of Ansible and Linux Containers
(LXC) to install and manage OpenStack Kilo. This chapter discusses the following topics:

• The technology used by RPCO

• The environment and network architecture

• Requirements to install RPCO

• The installation process workflow

2.1. Ansible
RPCO is based on openstack-ansible, which uses a combination of Ansible and Linux Con-
tainers (LXC) to install and manage OpenStack Kilo. Ansible provides an automation plat-
form to simplify system and application deployment. Ansible manages systems using Secure
Shell (SSH) instead of unique protocols that require remote daemons or agents.

Ansible uses playbooks written in the YAML language for orchestration. For more informa-
tion, see Ansible - Intro to Playbooks.
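As a brief illustration of this model (a generic Ansible example rather than an RPCO-specific
command), an ad-hoc run of the ping module against a small, hand-written inventory file
confirms that Ansible can reach a host over SSH. The inventory file and hostname below are
hypothetical; the address comes from the reference examples later in this guide:

$ cat hosts.ini
[targets]
infra01 ansible_ssh_host=10.240.0.80

$ ansible all -i hosts.ini -m ping

The RPCO playbooks use a generated inventory rather than a hand-written file; this sketch
only demonstrates the agentless, SSH-based approach described above.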

In this guide, Rackspace refers to the host running Ansible playbooks as the deployment
host and the hosts on which Ansible installs RPCO as the target hosts.

A recommended minimal layout for installing RPCO involves five target hosts in total: three
infrastructure hosts, one compute host, and one logging host. RPCO also supports one or
more optional storage hosts. All hosts require at least four 10 Gbps network interfaces. In
Rackspace datacenters, hosts can use an additional 1 Gbps network interface for service
network access. More information on setting up target hosts can be found in Section 2.4,
“Host layout” [5].

For more information on physical, logical, and virtual network interfaces within hosts see
Section 2.5, “Host networking” [7].

2.2. Linux Containers (LXC)


Containers provide operating-system level virtualization by enhancing the concept of ch-
root environments, which isolate resources and file systems for a particular group of pro-
cesses without the overhead and complexity of virtual machines. They access the same ker-
nel, devices, and file systems on the underlying host and provide a thin operational layer
built around a set of rules.

The Linux Containers (LXC) project implements operating system level virtualization on Lin-
ux using kernel namespaces and includes the following features:

• Resource isolation including CPU, memory, block I/O, and network using cgroups.

• Selective connectivity to physical and virtual network devices on the underlying physical
host.

• Support for a variety of backing stores including LVM.


• Built on a foundation of stable Linux technologies with an active development and sup-
port community.

Useful commands:

• List containers and summary information such as operational state and network configu-
ration:

# lxc-ls --fancy

• Show container details including operational state, resource utilization, and veth pairs:

# lxc-info --name container_name

• Start a container:

# lxc-start --name container_name

• Attach to a container:

# lxc-attach --name container_name

• Stop a container:

# lxc-stop --name container_name

2.3. Reference architecture


The RPCO reference architecture enables the delivery of a stable and scalable produc-
tion-ready private cloud powered by OpenStack. RPCO is designed and built by the ex-
perts who co-founded OpenStack and run one of the world’s largest OpenStack-pow-
ered clouds. RPCO v11 is built on the Kilo release of OpenStack. For more information, see
www.rackspace.com/cloud/private/openstack/.

The RPCO reference architecture is a recommended set of software and infrastructure com-
ponents designed to provide the scalability, stability, and high availability you need to sup-
port enterprise production workloads. Additionally, every RPCO customer has access to our
team of architecture advisors who provide workload-specific guidance for planning, design-
ing, and architecting a private cloud environment to help meet your unique needs.

RPCO v11 is composed of OpenStack services, automation, and tooling. Services are
grouped into logical layers, each providing key aspects of the overall solution. The follow-
ing are the layers and their contents:

• Rackspace Fanatical Support and training

• Operations tooling layer

• Ansible

• Capacity planning


• Cloud monitoring (MaaS)

• Presentation layer - Dashboard (horizon)

• Orchestration layer (heat)

• Heat-API

• Heat-API-CFN

• Heat-Engine

• Heat templates

• CloudFormation (CFN) template

• Infrastructure as a service layer

• Block Storage (cinder)

• Compute (nova)

• Identity (keystone)

• Image service (glance)

• Networking (neutron)

• Object Storage (swift)

• Deployment automation layer

• Ansible

• LXC

• OpenStack source

• Infrastructure database

• MariaDB

• Galera

• Infrastructure message queue

• RabbitMQ

• RabbitMQ clustering

2.4. Host layout


The recommended layout contains a minimum of five hosts (or servers).

• Three control plane infrastructure hosts


• One logging infrastructure host

• One compute host

To use the optional Block Storage (cinder) service, a sixth host is required. Block Storage
hosts require an LVM volume group named cinder-volumes. See Section 2.7, “Installation re-
quirements” [15] and Section 4.4, “Configuring LVM” [20] for more information.

The hosts are called target hosts because Ansible deploys the RPCO environment within
these hosts. The RPCO environment also requires a deployment host from which Ansible or-
chestrates the deployment process. One of the target hosts can function as the deployment
host.

At least one hardware load balancer must be included to manage the traffic among the
target hosts.

Infrastructure Control Plane target hosts contain the following services:

• Infrastructure:

• Galera

• RabbitMQ

• Memcached

• Logging

• OpenStack:

• Identity (keystone)

• Image service (glance)

• Compute management (nova)

• Networking (neutron)

• Orchestration (heat)

• Dashboard (horizon)

Infrastructure Logging target hosts contain the following services:

• Rsyslog

• Logstash

• Elasticsearch with Kibana

Compute target hosts contain the following services:

• Compute virtualization

• Logging


(Optional) Storage target hosts contain the following services:

• Block Storage scheduler

• Block Storage volumes

Figure 2.1. Host Layout Overview

2.5. Host networking


The combination of containers and flexible deployment options requires implementation of
advanced Linux networking features such as bridges and namespaces.

Bridges provide layer 2 connectivity (similar to switches) among physical, logical, and virtual
network interfaces within a host. After creating a bridge, the network interfaces are virtual-
ly "plugged in" to it.

RPCO uses bridges to connect physical and logical network interfaces on the host to virtual
network interfaces within containers on the host.

Namespaces provide logically separate layer 3 environments (similar to routers) within a
host. Namespaces use virtual interfaces to connect with other namespaces including the
host namespace. These interfaces, often called veth pairs, are virtually "plugged in" be-
tween namespaces similar to patch cables connecting physical devices such as switches and
routers.

Each container has a namespace that connects to the host namespace with one or more
veth pairs. Unless specified, the system generates random names for veth pairs.

The relationship between physical interfaces, logical interfaces, bridges, and virtual inter-
faces within containers is shown in Figure 2.2, “Network components” [8].


Figure 2.2. Network components

Target hosts can contain the following network bridges:

• LXC internal lxcbr0:

• Mandatory (automatic).

• Provides external (typically internet) connectivity to containers.

• Automatically created and managed by LXC. Does not directly attach to any physical or
logical interfaces on the host because iptables handle connectivity. Attaches to eth0 in
each container.

• Container management br-mgmt:

• Mandatory.

• Provides management of and communication among infrastructure and OpenStack
services.

• Manually created and attaches to a physical or logical interface, typically a bond0
VLAN subinterface. Also attaches to eth1 in each container.

• Storage br-storage:

• Optional.


• Provides segregated access to block storage devices between Compute and Block Stor-
age hosts.

• Manually created and attaches to a physical or logical interface, typically a bond0
VLAN subinterface. Also attaches to eth2 in each associated container.

• OpenStack Networking tunnel/overlay br-vxlan:

• Mandatory.

• Provides infrastructure for VXLAN tunnel/overlay networks.

• Manually created and attaches to a physical or logical interface, typically a bond1
VLAN subinterface. Also attaches to eth10 in each associated container.

• OpenStack Networking provider br-vlan:

• Mandatory.

• Provides infrastructure for VLAN and flat networks.

• Manually created and attaches to a physical or logical interface, typically bond1. Al-
so attaches to eth11 in each associated container. Does not contain an IP address be-
cause it only handles layer 2 connectivity.
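On a configured target host, standard bridge and interface tools show which interfaces are
attached to each bridge. A minimal check, assuming the bridge names listed above (the
bridge-utils package that provides brctl is installed during target host preparation):

# brctl show
# ip addr show br-mgmt

The brctl output should list the appropriate bond or VLAN subinterface, plus one veth
interface per attached container, under each bridge.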

Figure 2.3, “Container network architecture” [10] provides a visual representation of


network components for services in containers.


Figure 2.3. Container network architecture

The RPCO architecture uses bare metal rather than a container for compute hosts. Fig-
ure 2.4, “Compute host network architecture” [11] provides a visual representation of
the network architecture on compute hosts.


Figure 2.4. Compute host network architecture

As of v11, the RPCO architecture uses bare metal rather than a container for Block Storage
hosts. The Block Storage service lacks interaction with the OpenStack Networking service
and therefore only requires one pair of network interfaces in a bond for the management
and storage networks. However, implementing the same network interfaces on all hosts
provides greater flexibility for future growth of the deployment. Figure 2.5, "Block Storage
host network architecture" [12] provides a visual representation of the network architecture
on Block Storage hosts. For more information on how this change impacts upgrades from
earlier releases, see the upgrade content in the Operations Guide.

Figure 2.5. Block Storage host network architecture

2.6. OpenStack Networking


OpenStack Networking (neutron) is configured to use a DHCP agent, L3 Agent and Linux
Bridge agent within a networking agents container. Figure 2.6, “Networking agents con-
tainers” [13] shows the interaction of these agents, network components, and connec-
tion to a physical network.


Figure 2.6. Networking agents containers


The Compute service uses the KVM hypervisor. Figure 2.7, “Compute hosts” [14] shows
the interaction of instances, Linux Bridge agent, network components, and connection to a
physical network.

Figure 2.7. Compute hosts


2.7. Installation requirements


Deployment host:

• Required items:

• Ubuntu 14.04 LTS (Trusty Tahr) or compatible operating system that meets all other re-
quirements.

• Secure Shell (SSH) client supporting public key authentication.

• Synchronized network time (NTP) client.

• Python 2.7 or later.

Target hosts:

• Required items:

• Ubuntu Server 14.04 LTS (Trusty Tahr) 64-bit operating system, with Linux kernel ver-
sion 3.13.0-34-generic or later.

• SSH server supporting public key authentication.

• Synchronized NTP client.

• Optional items:

• For hosts providing Block Storage (cinder) service volumes, a Logical Volume Manager
(LVM) volume group named cinder-volumes.

• LVM volume group named lxc to store container file systems. If the lxc volume group
does not exist, containers will be automatically installed in the root file system of the
host.

Note
By default, Ansible creates a 5 GB logical volume. Plan storage accordingly
to support the quantity of containers on each target host.

2.8. Installation workflow


This diagram shows the general workflow associated with RPCO installation.


Figure 2.8. Installation workflow


3. Deployment host
Figure 3.1. Installation workflow

The RPCO installation process requires one deployment host. The deployment host contains
Ansible and orchestrates the RPCO installation on the target hosts. One of the target hosts,
preferably one of the infrastructure variants, can be used as the deployment host. To use a
deployment host as a target host, follow the steps in Chapter 4, “Target hosts” [19] on
the deployment host. This guide assumes separate deployment and target hosts.

3.1. Installing the operating system


Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating system on the deploy-
ment host with at least one network interface configured to access the Internet or suitable
local repositories.

3.2. Configuring the operating system


Install additional software packages and configure NTP.

1. Install additional software packages if not already installed during operating system in-
stallation:

# apt-get install aptitude build-essential git ntp ntpdate \
  openssh-server python-dev sudo

2. Configure NTP to synchronize with a suitable time source.
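For example, one minimal approach with the ntp package installed above is to point
/etc/ntp.conf at your preferred time sources (the server name below is a placeholder;
use your own) and then restart the service and confirm peers:

server ntp.example.com iburst

# service ntp restart
# ntpq -p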

3.3. Installing source and dependencies


Install the source and dependencies for the deployment host.


1. Clone the OSAD repository into the /opt/openstack-ansible directory:

# git clone -b TAG https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible

Replace TAG with the current stable release tag.

2. Change to the /opt/openstack-ansible directory, and run the Ansible bootstrap
script:

# scripts/bootstrap-ansible.sh

3.4. Configuring Secure Shell (SSH) keys


Ansible uses Secure Shell (SSH) with public key authentication for connectivity between the
deployment and target hosts. To reduce user interaction during Ansible operations, key
pairs should not include passphrases. However, if a passphrase is required, consider using
the ssh-agent and ssh-add commands to temporarily store the passphrase before perform-
ing Ansible operations.
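For example, the standard OpenSSH commands below generate a key pair without a
passphrase on the deployment host, or cache a passphrase-protected key for the current
session:

# ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
# eval $(ssh-agent -s)
# ssh-add /root/.ssh/id_rsa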


4. Target hosts
Figure 4.1. Installation workflow

The RPCO installation process requires at least five target hosts that will contain the Open-
Stack environment and supporting infrastructure. On each target host, perform the follow-
ing tasks:

• Name the target hosts.

• Install the operating system.

• Generate and set up security measures.

• Update the operating system and install additional software packages.

• Create LVM volume groups.

• Configure networking devices.

4.1. Installing the operating system


Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating system on the target
host with at least one network interface configured to access the Internet or suitable local
repositories.

Note
On target hosts without local (console) access, Rackspace recommends adding
the Secure Shell (SSH) server packages to the installation.

4.2. Configuring Secure Shell (SSH) keys


Ansible uses Secure Shell (SSH) for connectivity between the deployment and target hosts.


1. Copy the contents of the public key file on the deployment host to the
/root/.ssh/authorized_keys file on each target host.

2. Test public key authentication from the deployment host to each target host. SSH
should provide a shell without asking for a password.
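For example, assuming the key pair created on the deployment host and a target host
reachable at 10.240.0.80 (an address from the reference examples in this guide), the
following commands copy the key and verify password-less access:

# ssh-copy-id root@10.240.0.80
# ssh root@10.240.0.80 hostname

If password authentication is unavailable for ssh-copy-id, append the public key to
/root/.ssh/authorized_keys on the target host manually, as described in step 1.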

4.3. Configuring the operating system


Check the kernel version, install additional software packages, and configure NTP.

1. Check the kernel version. It should be 3.13.0-34-generic or later.

2. Install additional software packages if not already installed during operating system in-
stallation:

# apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
  lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

Note
During the installation of RPCO, unattended upgrades are disabled. For
long-running systems, periodically check for and apply security updates.

3. Add the appropriate kernel modules to the /etc/modules file to enable VLAN and
bond interfaces:
# echo 'bonding' >> /etc/modules
# echo '8021q' >> /etc/modules

4. Configure NTP to synchronize with a suitable time source.

5. Reboot the host to activate the changes.
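After the reboot, you can confirm the kernel version from step 1 and the modules added in
step 3, for example:

# uname -r
# lsmod | grep -e bonding -e 8021q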

4.4. Configuring LVM


1. To use the optional Block Storage (cinder) service, create an LVM volume group
named cinder-volumes on the Block Storage host. A metadata size of 2048 must be
specified during physical volume creation. For example:

# pvcreate --metadatasize 2048 physical_volume_device_path
# vgcreate cinder-volumes physical_volume_device_path

2. Optionally, create an LVM volume group named lxc for container file systems. If the lxc
volume group does not exist, containers will be automatically installed into the file sys-
tem under /var/lib/lxc by default.
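To confirm the volume groups after creating them, for example:

# pvs
# vgs

The output should list cinder-volumes on Block Storage hosts and, if created, the lxc
volume group.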

4.5. Configuring the network


Although Ansible automates most deployment operations, networking on target hosts re-
quires manual configuration because it can vary dramatically per environment. For demon-
stration purposes, these instructions use a reference architecture with example network
interface names, networks, and IP addresses. Modify these values as needed for the
particular environment.

The reference architecture for target hosts contains the following components:

• A bond0 interface using two physical interfaces. For redundancy purposes, avoid using
more than one port on network interface cards containing multiple ports. The exam-
ple configuration uses eth0 and eth2. Actual interface names can vary depending on
hardware and drivers. Configure the bond0 interface with a static IP address on the host
management network.

• A bond1 interface using two physical interfaces. For redundancy purposes, avoid using
more than one port on network interface cards containing multiple ports. The example
configuration uses eth1 and eth3. Actual interface names can vary depending on hard-
ware and drivers. Configure the bond1 interface without an IP address.

Note
Recommended but not required for Block Storage target hosts.

• Container management network subinterface on the bond0 interface and br-mgmt
bridge with a static IP address.

• The OpenStack Networking VXLAN subinterface on the bond1 interface and br-vxlan
bridge with a static IP address.

Note
Recommended but not required for Block Storage target hosts.

• The OpenStack Networking VLAN br-vlan bridge on the bond1 interface without an IP
address.

Note
Recommended but not required for Block Storage target hosts.

The reference architecture for target hosts can also contain the following optional compo-
nents:

• Storage network subinterface on the bond0 interface and br-storage bridge with a
static IP address.

For more information, see OpenStack Ansible Networking.

4.5.1. Reference architecture


After establishing initial host management network connectivity using the bond0 interface,
modify the /etc/network/interfaces file as described in the following procedure.

Note
For simplicity, the reference architecture assumes that all target hosts contain
the same network interfaces.


Procedure 4.1. Modifying the network interfaces file

1. Physical interfaces:

# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0

# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1

# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0

# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1

2. Bonding interfaces:

# Bond interface 0 (physical interfaces 1 and 3)
auto bond0
iface bond0 inet static
bond-slaves eth0 eth2
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
address HOST_IP_ADDRESS
netmask HOST_NETMASK
gateway HOST_GATEWAY
dns-nameservers HOST_DNS_SERVERS

# Bond interface 1 (physical interfaces 2 and 4)
auto bond1
iface bond1 inet manual
bond-slaves eth1 eth3
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250

If not already complete, replace HOST_IP_ADDRESS, HOST_NETMASK,
HOST_GATEWAY, and HOST_DNS_SERVERS with the appropriate configuration for
the host management network.

3. Logical (VLAN) interfaces:


# Container management VLAN interface
iface bond0.CONTAINER_MGMT_VLAN_ID inet manual
vlan-raw-device bond0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
iface bond1.TUNNEL_VLAN_ID inet manual
vlan-raw-device bond1

# Storage network VLAN interface (optional)
iface bond0.STORAGE_VLAN_ID inet manual
vlan-raw-device bond0

Replace *_VLAN_ID with the appropriate configuration for the environment.

4. Bridge devices:

# Container management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond0.CONTAINER_MGMT_VLAN_ID
address CONTAINER_MGMT_BRIDGE_IP_ADDRESS
netmask CONTAINER_MGMT_BRIDGE_NETMASK
dns-nameservers CONTAINER_MGMT_BRIDGE_DNS_SERVERS

# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond1.TUNNEL_VLAN_ID
address TUNNEL_BRIDGE_IP_ADDRESS
netmask TUNNEL_BRIDGE_NETMASK

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references untagged interface
bridge_ports bond1

# Storage bridge (optional)
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port reference tagged interface
bridge_ports bond0.STORAGE_VLAN_ID
address STORAGE_BRIDGE_IP_ADDRESS
netmask STORAGE_BRIDGE_NETMASK


Replace *_VLAN_ID, *_BRIDGE_IP_ADDRESS, *_BRIDGE_NETMASK, and
*_BRIDGE_DNS_SERVERS with the appropriate configuration for the environment.

4.5.2. Configuring the network on a target host


This example uses the following parameters to configure networking on a single target
host. See Figure 4.2, “Infrastructure services target hosts” [25], Figure 4.3, “Compute
target hosts” [26], and Figure 4.4, “Block Storage target hosts” [27] for a visual rep-
resentation of these parameters in the architecture.

Note
For simplicity, this example assumes that all target hosts contain the same net-
work interfaces.

• VLANs:

• Host management: Untagged/Native

• Container management: 10

• Tunnels: 30

• Storage: 20

• Networks:

• Host management: 10.240.0.0/22

• Container management: 172.29.236.0/22

• Tunnel: 172.29.240.0/22

• Storage: 172.29.244.0/22

• Addresses:

• Host management: 10.240.0.11

• Host management gateway: 10.240.0.1

• DNS servers: 69.20.0.164 69.20.0.196

• Container management: 172.29.236.11

• Tunnel: 172.29.240.11

• Storage: 172.29.244.11


Figure 4.2. Infrastructure services target hosts


Figure 4.3. Compute target hosts


Figure 4.4. Block Storage target hosts


Contents of the /etc/network/interfaces file:

# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0

# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1

# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0

# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1

# Bond interface 0 (physical interfaces 1 and 3)
auto bond0
iface bond0 inet static
bond-slaves eth0 eth2
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
address 10.240.0.11
netmask 255.255.252.0
gateway 10.240.0.1
dns-nameservers 69.20.0.164 69.20.0.196

# Bond interface 1 (physical interfaces 2 and 4)
auto bond1
iface bond1 inet manual
bond-slaves eth1 eth3
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250

# Container management VLAN interface
iface bond0.10 inet manual
vlan-raw-device bond0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
iface bond1.30 inet manual
vlan-raw-device bond1

# Storage network VLAN interface (optional)
iface bond0.20 inet manual
vlan-raw-device bond0

# Container management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond0.10
address 172.29.236.11
netmask 255.255.252.0
dns-nameservers 69.20.0.164 69.20.0.196

# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond1.30
address 172.29.240.11
netmask 255.255.252.0

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references untagged interface
bridge_ports bond1

# Storage bridge (optional)
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond0.20
address 172.29.244.11
netmask 255.255.252.0
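After the interfaces file is in place and the interfaces have been brought up (or the host
rebooted), a quick sanity check is to list the bridges and ping the host management
gateway used in this example; adjust the address for your environment:

# brctl show
# ping -c 4 10.240.0.1

Each configured bridge (br-mgmt, br-vxlan, br-vlan, and br-storage if used) should appear
in the brctl output with its expected bond or VLAN subinterface attached.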

In non-Rackspace data centers, the service network configuration should be commented
out of /etc/openstack_deploy/openstack_user_config.yml. The commented-out
section should appear as follows:
# Cidr used in the Service network
# snet: 172.29.248.0/22

#- network:
# group_binds:
# - glance_api
# - nova_compute
# - neutron_linuxbridge_agent
# type: "raw"
# container_bridge: "br-snet"
# container_interface: "eth3"
# ip_from_q: "snet"


5. Deployment configuration
Figure 5.1. Installation workflow

Ansible references a handful of files containing mandatory and optional configuration di-
rectives. These files must be modified to define the target environment before running the
Ansible playbooks. Perform the following tasks:

• Configure Target host networking to define bridge interfaces and networks

• Configure a list of target hosts on which to install the software

• Configure virtual and physical network relationships for OpenStack Networking (neu-
tron)

• Configure service credentials

• (Optional) Set proxy environment variables

• (Optional) Configure the hypervisor

• (Optional) Configure Block Storage (cinder) to use the NetApp back end

• (Optional) Configure Block Storage (cinder) backups.

• (Optional) Configure Block Storage availability zones

• Configure passwords for all services

• (Optional) Configure Dashboard SSL settings

5.1. Prerequisites
1. Recursively copy the openstack-ansible deployment configuration files into the /etc/ directory:
cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/


2. Change into the /etc/openstack_deploy directory and copy
openstack_user_config.yml.example to openstack_user_config.yml:

cd /etc/openstack_deploy
cp openstack_user_config.yml.example openstack_user_config.yml

5.2. Configuring target host networking


Edit the /etc/openstack_deploy/openstack_user_config.yml file to configure
target host networking.

1. Configure the IP address ranges associated with each network in the cidr_networks
section:

cidr_networks:
# Management (same range as br-mgmt on the target hosts)
management: CONTAINER_MGMT_CIDR
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: TUNNEL_CIDR
# Storage (same range as br-storage on the target hosts)
storage: STORAGE_CIDR

Replace *_CIDR with the appropriate IP address range in CIDR notation. For example,
203.0.113.0/24.

Note
Use the same IP address ranges as the underlying physical network in-
terfaces or bridges configured in Section 4.5, “Configuring the net-
work” [20]. For example, if the container network uses 203.0.113.0/24, the
CONTAINER_MGMT_CIDR should also use 203.0.113.0/24.

The default configuration includes the optional storage and service net-
works. To remove one or both of them, comment out the appropriate net-
work name.

2. Configure the existing IP addresses in the used_ips section:

used_ips:
- EXISTING_IP_ADDRESSES

Replace EXISTING_IP_ADDRESSES with a list of existing IP addresses in the ranges
defined in the previous step. This list should include all IP addresses manually config-
ured on target hosts in the Section 4.5, “Configuring the network” [20], internal load
balancers, service network bridge, and any other devices to avoid conflicts during the
automatic IP address generation process.

Note
Add individual IP addresses on separate lines. For example, to prevent use
of 203.0.113.101 and 201:


used_ips:
- 203.0.113.101
- 203.0.113.201

Add a range of IP addresses using a comma. For example, to prevent use of
203.0.113.101-201:

used_ips:
- 203.0.113.101, 203.0.113.201

3. Configure load balancing in the global_overrides section:

global_overrides:
# Internal load balancer VIP address
internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
# External (DMZ) load balancer VIP address
external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
# Container network bridge device
management_bridge: "MGMT_BRIDGE"
# Tunnel network bridge device
tunnel_bridge: "TUNNEL_BRIDGE"

Replace INTERNAL_LB_VIP_ADDRESS with the internal IP address of the load
balancer. Infrastructure and OpenStack services use this IP address for internal
communication.

Replace EXTERNAL_LB_VIP_ADDRESS with the external, public, or DMZ IP address
of the load balancer. Users primarily use this IP address for external API and web
interface access.

Replace MGMT_BRIDGE with the container bridge device name, typically br-mgmt.

Replace TUNNEL_BRIDGE with the tunnel/overlay bridge device name, typically
br-vxlan.

4. Configure the management network in the provider_networks subsection:

provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
container_type: "veth"
ip_from_q: "management"
is_container_address: true
is_ssh_address: true

5. Configure optional networks in the provider_networks subsection. For example, a
storage network:


provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"

Note
The default configuration includes one or more optional networks. To re-
move any of them, comment out the entire associated stanza beginning
with the - network: line.

6. Configure OpenStack Networking tunnel/overlay VXLAN networks in the
provider_networks subsection:

provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "TUNNEL_ID_RANGE"
net_name: "vxlan"

Replace TUNNEL_ID_RANGE with the tunnel ID range. For example, 1:1000.

7. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks in the
provider_networks subsection:

provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: VLAN_ID_RANGE
net_name: "vlan"


Replace VLAN_ID_RANGE with the VLAN ID range for each VLAN provider network.
For example, 1:1000. Supports more than one range of VLANs on a particular network.
For example, 1:1000,2001:3000. Create a similar stanza for each additional network.

Note
Optionally, you can add one or more static routes to interfaces within contain-
ers. Each route requires a destination network in CIDR notation and a gateway.
For example:

provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1

This example adds the following content to the
/etc/network/interfaces.d/eth2.cfg file in the appropriate containers:

post-up ip route add 10.176.0.0/12 via 172.29.248.1 || true

5.3. Configuring target hosts


Modify the /etc/openstack_deploy/openstack_user_config.yml file to config-
ure the target hosts.

Warning
Do not assign the same IP address to different target hostnames. Unexpected
results may occur. Each IP address and hostname must be a matching pair. To
use the same host in multiple roles, for example infrastructure and networking,
specify the same hostname and IP in each section.

Use short hostnames rather than fully-qualified domain names (FQDN) to pre-
vent length limitation issues with LXC and SSH. For example, a suitable short
hostname for a compute host might be: 123456-Compute001.

1. Configure a list containing at least three infrastructure target hosts in the
infra_hosts section:


infra_hosts:
603975-infra01:
ip: INFRA01_IP_ADDRESS
603989-infra02:
ip: INFRA02_IP_ADDRESS
627116-infra03:
ip: INFRA03_IP_ADDRESS
628771-infra04: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each infrastructure target host. Use the same net block as bond0 on the
nodes, for example:

infra_hosts:
603975-infra01:
ip: 10.240.0.80
603989-infra02:
ip: 10.240.0.81
627116-infra03:
ip: 10.240.0.184

2. Configure a list containing at least one network target host in the network_hosts
section:

network_hosts:
602117-network01:
ip: NETWORK01_IP_ADDRESS
602534-network02: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each network target host.

3. Configure a list containing at least one compute target host in the compute_hosts
section:

compute_hosts:
900089-compute001:
ip: COMPUTE001_IP_ADDRESS
900090-compute002: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each compute target host.

4. Configure a list containing at least one logging target host in the log_hosts section:

log_hosts:
900088-logging01:
ip: LOGGER1_IP_ADDRESS
903877-logging02: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each logging target host.

5. Configure a list containing at least one repository target host in the
repo-infra_hosts section:


repo-infra_hosts:
903939-repo01:
ip: REPO1_IP_ADDRESS
907963-repo02: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each repository target host.

The repository typically resides on one or more infrastructure hosts. Alternatively,
specify a value for the openstack_repo_url variable in the /etc/
openstack_deploy/user_group_vars.yml file. The value should contain a URL
for a host with the appropriate layout. For example:

openstack_repo_url: "https://rpc-repo.rackspace.com/"

Using repo-infra_hosts configures a local repository with the appropriate layout
and sets openstack_repo_url for you.

6. Configure a list containing at least one optional storage host in the storage_hosts
section:

storage_hosts:
100338-storage01:
ip: STORAGE01_IP_ADDRESS
100392-storage02: ...

Replace *_IP_ADDRESS with the IP address of the br-mgmt container management
bridge on each storage target host. Each storage host also requires additional configu-
ration to define the back end driver.

Note
The default configuration includes an optional storage host. To install
without storage hosts, comment out the stanza beginning with the
storage_hosts: line.

5.4. Configuring service credentials


Configure credentials for each service in the /etc/openstack_deploy/
*_secrets.yml files. Consider using Ansible Vault to increase security by encrypting any
files containing credentials.

Warning
Adjust permissions on these files to restrict access by non-privileged users.
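For example, the following commands tighten file permissions and optionally encrypt the
secrets file with Ansible Vault (you are prompted for a vault password, which is then
required for subsequent playbook runs):

# chmod 0600 /etc/openstack_deploy/user_secrets.yml
# ansible-vault encrypt /etc/openstack_deploy/user_secrets.yml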

Note that the following options configure passwords for the web interfaces:

• keystone_auth_admin_password configures the admin tenant password for both
the OpenStack API and dashboard access.

• kibana_password configures the password for Kibana web interface access.

36
RPCO Installation Guide February 19, 2016 RPCO v11

Recommended: Use the pw-token-gen.py script to generate random values for the vari-
ables in each file that contains service credentials:

$ cd /opt/openstack-ansible/scripts
$ python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

To regenerate existing passwords, add the --regen flag.

5.5. Configuring proxy environment variables (optional)
To run the deployment framework from behind a proxy, set environment variables so that
pip and other services work as expected.

The /etc/openstack_deploy/user_variables.yml file provides the following
commented examples:
## Example environment variable setup:
# proxy_env_url: http://username:pa$$w0rd@10.10.10.9:9000/
# no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers']
%}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}
{% endfor %}"
# global_environment_variables:
# HTTP_PROXY: "{{ proxy_env_url }}"
# HTTPS_PROXY: "{{ proxy_env_url }}"
# NO_PROXY: "{{ no_proxy_env }}"

To use a proxy, uncomment this section and define variables as appropriate for your envi-
ronment.

• Set the proxy_env_url environment variable to define the proxy so that pip and oth-
er programs can reach hosts beyond the proxy. The preferred format for this variable is
http://, since Python HTTPS proxy support is limited. To use https://, be sure to ver-
ify that Python programs connect as expected before continuing the installation.

• The no_proxy_env variable defines addresses that should skip the proxy. For example
localhost, container IPs, and so on do not attempt to use the proxy setting. If neces-
sary, prepend items to this list.

• Items appended to global_environment_variables set environment variables on
every host and container during the container creation process. Generally, it is not
necessary to modify this variable.
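For example, an uncommented configuration with a placeholder proxy URL might look like
the following (the address, port, and credentials are illustrative only):

proxy_env_url: http://username:password@proxy.example.com:3128/
no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
global_environment_variables:
  HTTP_PROXY: "{{ proxy_env_url }}"
  HTTPS_PROXY: "{{ proxy_env_url }}"
  NO_PROXY: "{{ no_proxy_env }}"

This mirrors the commented example above with the comment markers removed.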

5.6. Configuring the hypervisor (optional)


By default, the KVM hypervisor is used. If you are deploying to a host that does not support
KVM hardware acceleration extensions, select a suitable hypervisor type such as qemu or
lxc. To change the hypervisor type, uncomment and edit the following line in the /etc/
openstack_deploy/user_variables.yml file:
# nova_virt_type: kvm
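
For example, to use software emulation on a host without KVM hardware acceleration,
uncomment the line and set the value to qemu:

nova_virt_type: qemu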


5.7. Configuring the Image service (optional)


In an all-in-one deployment with a single infrastructure node, the Image service uses the lo-
cal file system on the target host to store images. In a Rackspace deployment with multiple
infrastructure nodes, the Image service must use Cloud Files or NetApp.

The following procedure describes how to modify the /etc/openstack_deploy/
user_variables.yml file to enable Cloud Files usage.

1. Change the default store to use Object Storage (swift), the underlying architecture of
Cloud Files:
glance_default_store: swift

2. Set the appropriate authentication URL:

For US Rackspace cloud accounts:


rackspace_cloud_auth_url: https://identity.api.rackspacecloud.com/v2.0

For UK Rackspace cloud accounts:


rackspace_cloud_auth_url: https://lon.identity.api.rackspacecloud.com/v2.0

3. Set the Rackspace cloud account credentials:


rackspace_cloud_tenant_id: RAX_CLOUD_TENANT_ID
rackspace_cloud_username: RAX_CLOUD_USER_NAME
rackspace_cloud_password: RAX_CLOUD_PASSWORD

The RAX_CLOUD_TENANT_ID can be found by logging into mycloud.rackspace.com
and clicking the Account: $USERNAME link in the upper right section of the screen.

Replace RAX_CLOUD_USER_NAME and RAX_CLOUD_PASSWORD with the appropriate
account credentials.

4. Change the glance_swift_store_endpoint_type from the default internalURL
setting to publicURL if needed. Glance services are typically backed by Rackspace
Cloud Files in the Rackspace data center. If the OpenStack environment must run
outside the data center, adjust the key value:

glance_swift_store_endpoint_type: publicURL

5. If glance_swift_store_endpoint_type is set to internalURL, replace
RAX_CLOUD_* with the appropriate Rackspace cloud account credential components.

6. Define the store name:


glance_swift_store_container: STORE_NAME

Replace STORE_NAME with the container name in Cloud Files to be used for storing im-
ages. If the container doesn't exist, it will be automatically created.

7. Define the store region:


glance_swift_store_region: STORE_REGION

Replace STORE_REGION with one of the following region codes: DFW, HKG, IAD,
LON, ORD, SYD.

Note
UK Rackspace cloud accounts must use the LON region. US Rackspace cloud
accounts can use any region except LON.

8. (Optional) Set the paste deploy flavor:

glance_flavor: GLANCE_FLAVOR

By default, the Image service uses caching and authenticates with the Identity service.
The default maximum size of the image cache is 10 GB. The default Image service con-
tainer size is 12 GB. In some configurations, the Image service might attempt to cache
an image which exceeds the available disk space. If necessary, you can disable caching.
For example, to use Identity without caching, replace GLANCE_FLAVOR with key-
stone:

glance_flavor: keystone

Or, to disable both authentication and caching, set GLANCE_FLAVOR to no value:

glance_flavor:

Note
This option is set by default to use authentication and cache management
in the playbooks/roles/os_glance/defaults/main.yml file. To
override the default behavior, set glance_flavor to a different value in
/etc/openstack_deploy/user_variables.yml.

The possible values for GLANCE_FLAVOR are:

• (Nothing)

• caching

• cachemanagement

• keystone

• keystone+caching

• keystone+cachemanagement (default)

• trusted-auth

• trusted-auth+cachemanagement
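
Taken together, a Cloud Files configuration in the /etc/openstack_deploy/
user_variables.yml file might resemble the following sketch. The credentials,
container name, and region are placeholders for your own values, and the publicURL
endpoint type is shown for an environment that runs outside the Rackspace data
center:

glance_default_store: swift
rackspace_cloud_auth_url: https://identity.api.rackspacecloud.com/v2.0
rackspace_cloud_tenant_id: RAX_CLOUD_TENANT_ID
rackspace_cloud_username: RAX_CLOUD_USER_NAME
rackspace_cloud_password: RAX_CLOUD_PASSWORD
glance_swift_store_endpoint_type: publicURL
glance_swift_store_container: glance_images
glance_swift_store_region: IAD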


5.8. Configuring the Block Storage service (optional)
Block Storage (cinder) provides persistent storage to guest instances. The service manages
volumes and snapshots.

5.8.1. Configuring the Block Storage service to use LVM


Logical Volume Management (LVM) is the default back end for Block Storage. To configure
an LVM back end, edit the /etc/openstack_deploy/openstack_user_config.yml
file and configure each storage node that uses LVM.

Important
When using LVM, the cinder-volumes service is closely tied to the underly-
ing disks of the physical host, and does not benefit greatly from containeriza-
tion. We recommend setting up cinder-volumes on the physical host and us-
ing the is_metal flag to avoid issues.

1. Add the lvm stanza under the cinder_backends stanza for each storage node:

cinder_backends:
lvm:

The options in subsequent steps fit under the lvm stanza.

2. Provide a volume_backend_name. This name is arbitrary and becomes a volume type
within the Block Storage service.

volume_backend_name: LVM_iSCSI

3. Configure the volume_driver:

volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver

4. Configure the volume_group:

volume_group: cinder-volumes

5. Enable cinder on metal deployment in the /etc/openstack_deploy/env.d/
cinder.yml file by setting is_metal to true:

is_metal: true

The cinder-volume service does not run in an LXC container.

6. Confirm that the openstack_user_config.yml configuration is accurate:


storage_hosts:
xxxxxx-Infra01:
ip: 172.29.236.16
container_vars:
cinder_storage_availability_zone: cinderAZ_1
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
lvm:
volume_backend_name: LVM_iSCSI
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes

5.8.2. Configuring the Block Storage service to use NetApp


By default, the Block Storage service uses the LVM back end. To use a NetApp storage ap-
pliance back end, edit the /etc/openstack_deploy/openstack_user_config.yml
file and configure each storage node that will use it:

1. Add the netapp stanza under the cinder_backends stanza for each storage node:
cinder_backends:
netapp:

The options in subsequent steps fit under the netapp stanza.

Note
The back end name is arbitrary and becomes a volume type within the
Block Storage service.

2. Configure the storage family:


netapp_storage_family: STORAGE_FAMILY

Replace STORAGE_FAMILY with ontap_7mode for Data ONTAP operating in 7-mode
or ontap_cluster for Data ONTAP operating as a cluster.

3. Configure the storage protocol:


netapp_storage_protocol: STORAGE_PROTOCOL

Replace STORAGE_PROTOCOL with iscsi for iSCSI or nfs for NFS.

For the NFS protocol, you must also specify the location of the configuration file that
lists the shares available to the Block Storage service:
nfs_shares_config: SHARE_CONFIG

Replace SHARE_CONFIG with the location of the share configuration file. For example,
/etc/cinder/nfs_shares.
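
The share configuration file lists one NFS share per line in HOST:/EXPORT form. For
example, a minimal /etc/cinder/nfs_shares (the address and export path are
illustrative) might contain:

203.0.113.50:/openstack_cinder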

4. Configure the server:


netapp_server_hostname: SERVER_HOSTNAME

Replace SERVER_HOSTNAME with the hostnames of the NetApp controllers.


5. Configure the server API port:

netapp_server_port: PORT_NUMBER

Replace PORT_NUMBER with 80 for HTTP or 443 for HTTPS.

6. Configure the server credentials:

netapp_login: USER_NAME
netapp_password: PASSWORD

Replace USER_NAME and PASSWORD with the appropriate values.

7. Select the NetApp driver:

volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver

8. Configure the volume back end name:

volume_backend_name: BACKEND_NAME

Replace BACKEND_NAME with a suitable value that provides a hint for the Block Stor-
age scheduler. For example, NETAPP_iSCSI.

9. Disable cinder on metal deployment in the /etc/openstack_deploy/env.d/
cinder.yml file by setting is_metal to false:

...
# leave is_metal off, alternatively you will have to migrate your volumes once
# deployed on metal.
is_metal: false

The cinder-volume service will run in an LXC container.

10. Check that the openstack_user_config.yml configuration is accurate:

storage_hosts:
xxxxxx-Infra01:
ip: 172.29.236.16
container_vars:
cinder_backends:
limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: nfs
netapp_server_hostname: 111.222.333.444
netapp_server_port: 80
netapp_login: openstack_cinder
netapp_password: password
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_NFS

For netapp_server_hostname, specify the IP address of the Data ONTAP server.
Include iscsi or nfs for netapp_storage_protocol, depending on the configuration.
Specify 80 for netapp_server_port if using HTTP, or 443 if using HTTPS.


The cinder-volume.yml playbook automatically installs the nfs-common package
across the hosts, transitioning from an LVM to a NetApp back end.

5.8.3. Configuring the Block Storage service with NFS protocols
If the NetApp back end is configured to use an NFS storage protocol, edit /etc/
openstack_deploy/openstack_user_config.yml, and configure the NFS client on
each storage node that will use it.

1. Add the cinder_backends stanza (which includes cinder_nfs_client) under
the container_vars stanza for each storage node:
container_vars:
cinder_backends:
cinder_nfs_client:

2. Configure the location of the file that lists shares available to the block storage service.
This configuration file must include nfs_shares_config:
nfs_shares_config: SHARE_CONFIG

Replace SHARE_CONFIG with the location of the share configuration file. For example,
/etc/cinder/nfs_shares.

3. Configure one or more NFS shares:


shares:
- { ip: "NFS_HOST", share: "NFS_SHARE" }

Replace NFS_HOST with the IP address or hostname of the NFS server, and the
NFS_SHARE with the absolute path to an existing and accessible NFS share.

4. Disable cinder on metal deployment in the /etc/openstack_deploy/env.d/
cinder.yml file by setting is_metal to false:
...
# leave is_metal off, alternatively you will have to migrate your volumes once
# deployed on metal.
is_metal: false

The cinder-volume service will run in an LXC container.
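
Combined, the NFS client definition for a storage node might resemble the following
sketch (the share address and export path are illustrative):

container_vars:
  cinder_backends:
    cinder_nfs_client:
      nfs_shares_config: /etc/cinder/nfs_shares
      shares:
        - { ip: "203.0.113.50", share: "/openstack_cinder" }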

5.8.4. Configuring Block Storage backups to Object Storage


You can configure Block Storage (cinder) to back up volumes to Object Storage (swift)
by setting variables. If enabled, the default configuration backs up volumes to an Ob-
ject Storage installation accessible within your environment. Alternatively, you can set
cinder_service_backup_swift_url and other variables listed below to back up to
an external Object Storage installation.

1. Add or edit the following line in the /etc/openstack_deploy/
user_variables.yml file and set the value to True:


cinder_service_backup_program_enabled: True

2. By default, Block Storage will use the access credentials of the user initiating the back-
up. Default values are set in the /opt/openstack-ansible/playbooks/roles/
os_cinder/defaults/main.yml file. You can override those defaults by setting
variables in /etc/openstack_deploy/user_variables.yml to change how
Block Storage performs backups. As needed, add and edit any of the following vari-
ables to the /etc/openstack_deploy/user_variables.yml file:
...
# cinder_service_backup_swift_auth: Options include 'per_user' or 'single_user',
#   we default to 'per_user' so that backups are saved to a user's swift account.
cinder_service_backup_swift_auth: per_user
# cinder_service_backup_swift_url: This is your swift storage url when using
#   'per_user', or keystone endpoint when using 'single_user'. When using
#   'per_user', you can leave this as empty or as None to allow cinder-backup to
#   obtain storage url from environment.
cinder_service_backup_swift_url:
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2

During installation of Block Storage, the backup service is configured. For more information
about swift, refer to the Standalone Object Storage Deployment guide.

5.8.5. Configuring Block Storage backups to external Cloud Files
You can configure Block Storage (cinder) to back up volumes to external Cloud Files by set-
ting variables.

1. Add or edit the following line in the /etc/openstack_deploy/
user_variables.yml file and set the value to True:
cinder_service_backup_program_enabled: True

2. By default, Block Storage will use the access credentials of the user initiating the back-
up. Default values are set in the /opt/openstack-ansible/playbooks/roles/
os_cinder/defaults/main.yml file. You can override those defaults by setting
variables in /etc/openstack_deploy/user_variables.yml to change how
Block Storage performs backups. As needed, add and edit any of the following vari-
ables to the /etc/openstack_deploy/user_variables.yml file:


...
cinder_service_backup_swift_auth: single_user
cinder_service_backup_swift_url: https://identity.api.rackspacecloud.com/
v2.0
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2

For cinder_service_backup_swift_user, specify the Cloud Files user.

For cinder_service_backup_swift_tenant, specify the public cloud account number.

For cinder_service_backup_swift_key, specify the Cloud Files password. The
Cloud Files password is NOT the API key.

During installation of Block Storage, the backup service is configured.

5.8.6. Creating Block Storage availability zones


You can create multiple availability zones to manage Block Storage storage hosts. Ed-
it the /etc/openstack_deploy/openstack_user_config.yml and /etc/
openstack_deploy/user_variables.yml files to set up availability zones.

1. For each cinder storage host, configure the availability zone un-
der the container_vars stanza in /etc/openstack_deploy/
openstack_user_config.yml:

cinder_storage_availability_zone: CINDERAZ

Replace CINDERAZ with a suitable name. For example, cinderAZ_2.

2. When creating more than one availability zone, configure the default avail-
ability zone for all hosts by adding a default availability zone value to /etc/
openstack_deploy/user_variables.yml:

cinder_default_availability_zone: CINDERAZ_DEFAULT

Replace CINDERAZ_DEFAULT with a suitable name. For example, cinderAZ_1. The
default availability zone should be the same for all cinder storage hosts.

Important
If you do not define cinder_default_availability_zone, the default
value (nova) is used. This may cause volume creation through horizon to fail.
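
For example, two storage hosts placed in separate availability zones might be defined
as follows (host names and addresses are illustrative):

storage_hosts:
  xxxxxx-storage01:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
  xxxxxx-storage02:
    ip: 172.29.236.17
    container_vars:
      cinder_storage_availability_zone: cinderAZ_2

In /etc/openstack_deploy/user_variables.yml, set the shared default zone:

cinder_default_availability_zone: cinderAZ_1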


5.9. Configuring the Object Storage service (optional)
Object Storage (swift) is a multi-tenant object storage system. It is highly scalable, can man-
age large amounts of unstructured data, and provides a RESTful HTTP API.

The following procedure describes how to set up storage devices and modify the Object
Storage configuration files to enable Object Storage usage.

1. Enable the trusty-backports repository. The trusty-backports repository is required to
install Object Storage. Add repository details in /etc/apt/sources.list, and up-
date the package list:

$ cd /opt/openstack-ansible/playbooks

# ansible hosts -m shell -a "sed -r -i 's/^# \
(deb.*trusty-backports.*)$/\1/' /etc/apt/sources.list; apt-get update"

2. Section 5.9.2, “Configure and mount storage devices” [46]

3. Section 5.9.3, “Configure an Object Storage deployment” [48]

4. Optionally, allow all Identity users to use Object Storage by setting
swift_allow_all_users in the user_variables.yml file to True. Any users
with the _member_ role (all authorized Identity (keystone) users) can create contain-
ers and upload objects to Object Storage.

Note
If this value is False, then by default, only users with the admin or swift-
operator role are allowed to create containers or manage tenants.

When the backend type for the Image Service (glance) is set to swift, the
Image Service can access the Object Storage cluster regardless of whether
this value is True or False.
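
For example, to enable this option, add the following line to the /etc/
openstack_deploy/user_variables.yml file:

swift_allow_all_users: True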

5.9.1. Enabling the trusty-backports repository


The trusty-backports repository is required to install or upgrade Object Storage. Add reposi-
tory details in /etc/apt/sources.list, and update the package list:

$ cd /opt/openstack-ansible/playbooks

# ansible hosts -m shell -a "sed -r -i 's/^# \
(deb.*trusty-backports.*)$/\1/' /etc/apt/sources.list; apt-get update"

5.9.2. Configure and mount storage devices


This section offers a set of prerequisite instructions for setting up Object Storage storage
devices. The storage devices must be set up before installing Object Storage.


Procedure 5.1. Configuring and mounting storage devices


RPCO Object Storage requires a minimum of three Object Storage devices with mounted
storage drives. The example commands in this procedure assume the storage devices for
Object Storage are devices sdc through sdg.

1. Determine the storage devices on the node to be used for Object Storage.

2. Format each device on the node used for storage with XFS. While formatting the de-
vices, add a unique label for each device.

Note
Without labels, a failed drive can cause mount points to shift and data to
become inaccessible.

For example, create the file systems on the devices using the mkfs command:

$ apt-get install xfsprogs
$ mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
$ mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
$ mkfs.xfs -f -i size=1024 -L sde /dev/sde
$ mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
$ mkfs.xfs -f -i size=1024 -L sdg /dev/sdg

3. Add the mount locations to the /etc/fstab file so that the storage devices are re-
mounted on boot. The following example mount options are recommended when us-
ing XFS.

Finish all modifications to the /etc/fstab file before mounting the new filesystems
created within the storage devices.
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0

4. Create the mount points for the devices using the mkdir command.
$ mkdir -p /srv/node/sdc
$ mkdir -p /srv/node/sdd
$ mkdir -p /srv/node/sde
$ mkdir -p /srv/node/sdf
$ mkdir -p /srv/node/sdg

The mount point is referenced as the mount_point parameter in the swift.yml file
(/etc/openstack_deploy/conf.d/swift.yml).
$ mount /srv/node/sdc
$ mount /srv/node/sdd
$ mount /srv/node/sde
$ mount /srv/node/sdf
$ mount /srv/node/sdg
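
As a quick check, list the mounted file systems to confirm that each device mounted at
the expected location:

$ mount | grep /srv/node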

To view an annotated example of the swift.yml file, see Appendix A, OSAD configura-
tion files [85].

For the following mounted devices:

Table 5.1. Mounted devices


Device Mount location
/dev/sdc /srv/node/sdc
/dev/sdd /srv/node/sdd
/dev/sde /srv/node/sde
/dev/sdf /srv/node/sdf
/dev/sdg /srv/node/sdg

The entry in the swift.yml would be:


# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# - name: sdg
# mount_point: /srv/node

5.9.3. Configure an Object Storage deployment


Object Storage is configured using the /etc/openstack_deploy/conf.d/
swift.yml.example file and the /etc/openstack_deploy/user_variables.yml
file.

The group variables in the /etc/openstack_deploy/conf.d/swift.yml.example
file are used by the Ansible playbooks when installing Object Storage. Some variables can-
not be changed after they are set, while some changes require re-running the playbooks.
The values in the swift_hosts section supersede values in the swift section.

To view the configuration files, including information about which variables are required
and which are optional, see Appendix A, OSAD configuration files [85].

5.9.3.1. Configuring Object Storage


Procedure 5.2. Updating the Object Storage configuration swift.yml file
1. Copy the /etc/openstack_deploy/conf.d/swift.yml.example file to /etc/
openstack_deploy/conf.d/swift.yml:
# cp /etc/openstack_deploy/conf.d/swift.yml.example \
/etc/openstack_deploy/conf.d/swift.yml

2. Update the global override values:


# global_overrides:
# swift:
# part_power: 8
# weight: 100

# min_part_hours: 1
# repl_number: 3
# storage_network: 'br-storage'
# replication_network: 'br-repl'
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# mount_point: /mnt
# account:
# container:
# storage_policies:
# - policy:
# name: gold
# index: 0
# default: True
# - policy:
# name: silver
# index: 1
# repl_number: 3
# deprecated: True

part_power Set the partition power value based on the total amount
of storage the entire ring will use.

Multiply the maximum number of drives ever used with this Object Storage
installation by 100 and round that value up to the closest power of two. For
example, a maximum of six drives, times 100, equals 600. The nearest power of
two above 600 is 1024, or two to the power of ten, so the partition power is
ten. The partition power cannot be changed after the Object Storage rings are
built.

weight The default weight is 100. If the drives are different sizes,
set the weight value to avoid uneven distribution of da-
ta. For example, a 1 TB disk would have a weight of 100,
while a 2 TB drive would have a weight of 200.

min_part_hours The default value is 1. Set the minimum partition hours
to the amount of time to lock a partition's replicas after a
partition has been moved. Moving multiple replicas at the
same time might make data inaccessible. This value can be
set separately in the swift, container, account, and policy
sections with the value in lower sections superseding the
value in the swift section.

repl_number The default value is 3. Set the replication number to the
number of replicas of each object. This value can be set
separately in the swift, container, account, and policy sec-
tions with the value in the more granular sections super-
seding the value in the swift section.


storage_network By default, the swift services will listen on the default man-
agement IP. Optionally, specify the interface of the stor-
age network.

Note
If the storage_network is not set,
but the storage_ips per host are
set (or the storage_ip is not on the
storage_network interface) the proxy serv-
er will not be able to connect to the storage
services.

replication_network Optionally, specify a dedicated replication network interface,
so dedicated replication can be set up. If this value is not
specified, no dedicated replication_network is set.

Note
As with the storage_network,
if the repl_ip is not set on the
replication_network interface, replica-
tion will not work properly.

drives Set the default drives per host. This is useful when all hosts
have the same drives. These can be overridden on a per
host basis.

mount_point Set the mount_point value to the location where the
swift drives are mounted. For example, with a mount
point of /mnt and a drive of sdc, a drive is mounted at /
mnt/sdc on the swift_host. This can be overridden on
a per-host basis.

storage_policies Storage policies determine on which hardware data is
stored, how the data is stored across that hardware, and
in which region the data resides. Each storage policy must
have a unique name and a unique index. There must be
a storage policy with an index of 0 in the swift.yml file
to use any legacy containers created before storage poli-
cies were instituted.

default Set the default value to yes for at least one policy. This is
the default storage policy for any non-legacy containers
that are created.

deprecated Set the deprecated value to yes to turn off storage poli-
cies.


Note
For account and container rings,
min_part_hours and repl_number are
the only values that can be set. Setting them in
this section overrides the defaults for the spe-
cific ring.

3. Update the Object Storage proxy hosts values:

# swift-proxy_hosts:
# infra-node1:
# ip: 192.0.2.1
# infra-node2:
# ip: 192.0.2.2
# infra-node3:
# ip: 192.0.2.3

swift-proxy_hosts Set the IP address of the hosts that Ansible will connect
to to deploy the swift-proxy containers. The swift-
proxy_hosts value should match the infra nodes.

4. Update the Object Storage hosts values:

# swift_hosts:
# swift-node1:
# ip: 192.0.2.4
# container_vars:
# swift_vars:
# zone: 0
# swift-node2:
# ip: 192.0.2.5
# container_vars:
# swift_vars:
# zone: 1
# swift-node3:
# ip: 192.0.2.6
# container_vars:
# swift_vars:
# zone: 2
# swift-node4:
# ip: 192.0.2.7
# container_vars:
# swift_vars:
# zone: 3
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf

swift_hosts Specify the hosts to be used as the storage nodes.
The ip is the address of the host to which Ansible
connects. Set the name and IP address of each Ob-
ject Storage host. The swift_hosts section is not
required.

swift_vars Contains the Object Storage host specific values.

storage_ip and repl_ip These values are based on the IP addresses
of the host's storage_network or
replication_network. For example, if the
storage_network is br-storage and host1
has an IP address of 1.1.1.1 on br-storage,
then that is the IP address that will be used for
storage_ip. If only the storage_ip is specified
then the repl_ip defaults to the storage_ip. If
neither are specified, both will default to the host
IP address.

Note
Overriding these values on a host
or drive basis can cause prob-
lems if the IP address that the
service listens on is based on a
specified storage_network or
replication_network and the ring
is set to a different IP address.

zone The default is 0. Optionally, set the Object Storage
zone for the ring.

region Optionally, set the Object Storage region for the
ring.

weight The default weight is 100. If the drives are different
sizes, set the weight value to avoid uneven distribu-
tion of data. This value can be specified on a host
or drive basis (if specified at both, the drive setting
takes precedence).


groups Set the groups to list the rings to which a host's
drive belongs. This can be set on a per drive basis
which will override the host setting.

drives Set the names of the drives on this Object Storage
host. At least one name must be specified.

In the following example, swift-node5 shows values in the swift_hosts section
that will override the global values. Groups are set, which overrides the global settings
for drive sdb. The weight is overridden for the host and specifically adjusted on drive
sdb. Also, the storage_ip and repl_ip are set differently for sdb.

# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf

5. Ensure the swift.yml file is in the /etc/openstack_deploy/conf.d/ directory.

5.9.3.2. Storage Policies


Storage Policies allow segmenting the cluster for various purposes through the creation
of multiple object rings. Using policies, different devices can belong to different rings with
varying levels of replication. By supporting multiple object rings, Object Storage can segre-
gate the objects within a single cluster.

Storage policies can be used for the following situations:


• Differing levels of replication: A provider may want to offer 2x replication and 3x repli-
cation, but does not want to maintain two separate clusters. They can set up a 2x policy
and a 3x policy and assign the nodes to their respective rings.

• Improving performance: Just as solid state drives (SSD) can be used as the exclusive mem-
bers of an account or database ring, an SSD-only object ring can be created to implement
a low-latency or high performance policy.

• Collecting nodes into groups: Different object rings can have different physical servers so
that objects in specific storage policies are always placed in a specific data center or geog-
raphy.

• Differing storage implementations: A policy can be used to direct traffic to collected
nodes that use a different disk file (for example, Kinetic, GlusterFS).

Most storage clusters do not require more than one storage policy. The following problems
can occur if using multiple storage policies per cluster:

• Creating a second storage policy without any specified drives (all drives are part of only
the account, container, and default storage policy groups) creates an empty ring for that
storage policy.

• A non-default storage policy is used only if specified when creating a container, using the
X-Storage-Policy: <policy-name> header. After the container is created, it uses
the created storage policy. Other containers continue using the default or another stor-
age policy specified when created.

For more information about storage policies, see: Storage Policies

5.9.4. Deploying Object Storage on an existing Rackspace
Private Cloud Powered By OpenStack v11 Software
Complete the following procedure to deploy Object Storage on a previously installed RPCO.

1. Section 5.9.2, “Configure and mount storage devices” [46]

2. Section 5.9.3, “Configure an Object Storage deployment” [48]

3. Optionally, allow all Identity users to use Object Storage by setting
swift_allow_all_users in the user_variables.yml file to True. Any users
with the _member_ role (all authorized Identity (keystone) users) can create contain-
ers and upload objects to Object Storage.

Note
If this value is False, then by default, only users with the admin or swift-
operator role are allowed to create containers or manage tenants.

When the backend type for the Image Service (glance) is set to swift, the
Image Service can access the Object Storage cluster regardless of whether
this value is True or False.

4. Run the Object Storage play:


$ cd /opt/openstack-ansible/playbooks
$ openstack-ansible os-swift-install.yml

5.9.5. Object Storage monitoring


Rackspace Cloud Monitoring Service allows Rackspace Private Cloud customers to monitor
system performance, and safeguard critical data.

5.9.5.1. Service and response


When a threshold is reached or functionality fails, the Rackspace Cloud Monitoring Ser-
vice generates an alert, which creates a ticket in the Rackspace ticketing system. This ticket
moves into the RPCO support queue. Tickets flagged as monitoring alerts are given highest
priority, and response is delivered according to the Service Level Agreement (SLA). Refer to
the SLA for detailed information about incident severity levels and corresponding response
times.

Specific monitoring alert guidelines can be set for the installation. These details should be
arranged by a Rackspace account manager.

5.9.5.2. Object Storage monitors


Object Storage has its own set of monitors and alerts. For more information about in-
stalling the monitoring tools, see the RPCO Installation Guide.

The following checks are performed on services:

• Health checks on the services on each server.

Object Storage makes a request to /healthcheck for each service to ensure it is re-
sponding appropriately. If this check fails, determine why the service is failing and fix it
accordingly.

• Health check against the proxy service on the virtual IP (VIP).

Object Storage checks the load balancer address for the swift-proxy-server service,
and monitors the proxy servers as a whole rather than individually; individual servers
are covered by the per-service health checks. If this check fails, it suggests that there
is no access to the VIP or that all of the services are failing.

The following checks are performed against the output of the swift-recon middleware:

• md5sum checks on the ring files across all Object Storage nodes.

This check ensures that the ring files are the same on each node. If this check fails, deter-
mine why the md5sum for the ring is different and determine which of the ring files is
correct. Copy the correct ring file onto the node that is causing the md5sum to fail.

• md5sum checks on the swift.conf across all swift nodes.

If this check fails, determine why the swift.conf is different and determine which of
the swift.conf is correct. Copy the correct swift.conf onto the node that is causing
the md5sum to fail.


• Asyncs pending

This check monitors the average number of async pending requests and the percentage
that are put in async pending. This happens when a PUT or DELETE fails (due to, for ex-
ample, timeouts, heavy usage, failed disk). If this check fails, determine why requests are
failing and being put in async pending status and fix accordingly.

• Quarantine

This check monitors the percentage of objects that are quarantined (objects that are
found to have errors and moved to quarantine). An alert is set up against account, con-
tainer, and object servers. If this fails, determine the cause of the corrupted objects and
fix accordingly.

• Replication

This check monitors replication success percentage. An alert is set up against account,
container, and object servers. If this fails, determine why objects are not replicating and
fix accordingly.

5.9.6. Integrate Object Storage with the Image Service


Optionally, the images created by the Image Service (glance) can be stored using Object
Storage.

Note
If there is an existing Image Service (glance) back end (for example, Cloud Files)
and you want to use Object Storage (swift) as the Image Service back end instead,
re-add any images from the Image Service after moving to Object Storage. If
the Image Service variables are changed (as described below) and you begin using
Object Storage, any images in the Image Service will no longer be available.

Procedure 5.3. Integrating Object Storage with Image Service


This procedure requires the following:

• Rackspace Private Cloud Powered By OpenStack v11 Software (Kilo)

• Object Storage v 2.2.0

1. Update the glance options in the /etc/openstack_deploy/
user_variables.yml file:
# Glance Options
glance_default_store: swift
glance_swift_store_auth_address: '{{ auth_identity_uri }}'
glance_swift_store_container: glance_images
glance_swift_store_endpoint_type: internalURL
glance_swift_store_key: '{{ glance_service_password }}'
glance_swift_store_region: RegionOne
glance_swift_store_user: 'service:glance'

• glance_default_store: Set the default store to swift.


• glance_swift_store_auth_address: Set to the local authentication address
using the '{{ auth_identity_uri }}' variable.

• glance_swift_store_container: Set the container name.

• glance_swift_store_endpoint_type: Set the endpoint type to internalURL.

• glance_swift_store_key: Set the Image Service password using the
'{{ glance_service_password }}' variable.

• glance_swift_store_region: Set the region. The default value is RegionOne.

• glance_swift_store_user: Set the tenant and user name to 'service:glance'.

2. Rerun the Image Service (glance) configuration plays.

3. Run the Image Service (glance) playbook:


$ cd /opt/openstack-ansible/playbooks
$ openstack-ansible os-glance-install.yml --tags "glance-config"

5.10. Configuring HAProxy (optional)


For evaluation, testing, and development, HAProxy can temporarily provide load balanc-
ing services in lieu of hardware load balancers. The default HAProxy configuration does
not provide highly-available load balancing services. For production deployments, deploy a
hardware load balancer prior to deploying OSAD.

• In the /etc/openstack_deploy/openstack_user_config.yml file, add the
haproxy_hosts section with one or more infrastructure target hosts, for example:
haproxy_hosts:
123456-infra01:
ip: 172.29.236.51
123457-infra02:
ip: 172.29.236.52
123458-infra03:
ip: 172.29.236.53

5.11. Configuring Dashboard SSL (optional)


Customizing the Dashboard deployment is done within the os-horizon role in play-
books/roles/os_horizon/defaults/main.yml.

There are two options for deploying SSL certificates with Dashboard: self-signed and us-
er-provided certificates. Auto-generated self-signed certificates are currently the default.

5.11.1. Self-signed SSL certificates


For self-signed certificates, users can configure the subject of the certificate us-
ing the horizon_ssl_self_signed_subject variable. By default, the playbook
will not regenerate a self-signed SSL certificate if one already exists on the tar-
get. To force the certificate to be regenerated the next time the playbook runs, set
horizon_ssl_self_signed_regen to true. The playbook then distributes the certifi-
cates and keys to each horizon container.

Note
When self-signed certificates are regenerated, they overwrite any existing cer-
tificates and keys, including ones that were previously user-provided.

5.11.2. User-provided SSL certificates


Users can provide their own trusted certificates. Copy the SSL certificate, key, and CA certifi-
cate to the deployment host. In the /etc/openstack_deploy/user_variables.yml
file, set the following variables:

• horizon_user_ssl_cert - path to the SSL certificate in the container


• horizon_user_ssl_key - path to the key in the container
• horizon_user_ssl_ca_cert - path to the CA certificate in the container

If those three variables are provided, self-signed certificate generation and usage are dis-
abled. However, the user is responsible for deploying those certificates and keys within
each container.
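
For example, the variables might be set as follows (the paths are illustrative):

horizon_user_ssl_cert: /etc/openstack_deploy/ssl/horizon.pem
horizon_user_ssl_key: /etc/openstack_deploy/ssl/horizon.key
horizon_user_ssl_ca_cert: /etc/openstack_deploy/ssl/horizon-ca.pem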


6. Ceph
Ceph is a distributed object store and file system designed to provide performance, reliabil-
ity, and scalability. With Ceph, object and block storage form a single distributed computer
cluster.

Figure 6.1. Ceph service layout

6.1. Deploying Ceph


Generally, these instructions apply to a green field installation and not to an upgrade, since
Ceph requires additional hardware resources that must be planned for in advance beyond
a basic installation. However, if you are attempting a Ceph installation during an upgrade,
ensure that all appropriate Ceph variables are set. Refer to
github.com/rcbops/rpc-openstack/blob/master/rpcd/etc/openstack_deploy/user_extras_variables.yml
and github.com/rcbops/rpc-openstack/blob/master/rpcd/etc/openstack_deploy/user_extras_secrets
for a list of variables.

1. Clone the rpc-openstack repository.

2. Follow the steps for an os-ansible-deployment cluster in preparation for an install.

3. To define the Ceph environment variables, copy ceph.yml to /etc/
openstack_deploy/env.d/:

# cp -a /opt/rpc-openstack/rpcd/etc/openstack_deploy/env.d/ceph.yml \
  /etc/openstack_deploy/env.d

4. To define the Ceph nodes (mons_hosts, osds_hosts, and storage_hosts), cre-
ate a ceph.yml configuration file in the /etc/openstack_deploy/conf.d/ di-
rectory. Set the container_vars variable to define the device and journal layout for
each host in osds_hosts. Create this file based on the following example:
container_vars:
# The devices must be specified in conjunction with raw_journal_devices below.
# If you want one journal device per five drives, specify the same
# journal_device five times.
devices:
- /dev/drive1
- /dev/drive2
- /dev/drive3
- /dev/drive4
- /dev/drive5
- /dev/drive6
- …
# Note: If a device does not have a matching "raw_journal_device",
# it will co-locate the journal on the device.
raw_journal_devices:
- /dev/journaldrive1
- /dev/journaldrive1
- /dev/journaldrive1
- /dev/journaldrive1
- /dev/journaldrive1
- /dev/journaldrive2
- …

Note
If the container_vars for each host in osds_hosts are iden-
tical, specify them only once in /etc/openstack_deploy/
user_extras_variables.yml. However, as the environment grows,
the devices and raw_journal_devices can change. In this case, spec-
ify them individually for each host.

Note
This step defines the storage_hosts in ceph.yml; there is no
need to define storage_hosts in /etc/openstack_deploy/
openstack_user_config.yml.

5. Add the Ceph back end to the storage_hosts section under
"container_vars:" -> "cinder_backends:".

Note
If LVM backends are not used in conjunction with Ceph (for
storage_hosts), edit the /etc/openstack_deploy/
env.d/cinder.yml file and set "is_metal: false" under
cinder_volumes_container. This setting causes cinder_volumes
to run in a container rather than on metal. Refer to /opt/rpc-openstack/
openstack-ansible/etc/openstack_deploy/openstack_user_config.yml.example
for an example.

Ceph backend example configuration:

storage_hosts:
infra01:
ip: 172.24.240.11
container_vars:
cinder_backends:
limit_container_types: cinder_volume
ceph:
volume_driver: cinder.volume.drivers.rbd.RBDDriver
rbd_pool: volumes
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot: 'false'
rbd_max_clone_depth: 5
rbd_store_chunk_size: 4
rados_connect_timeout: -1
glance_api_version: 2
volume_backend_name: ceph
rbd_user: "{{ cinder_ceph_client }}"
rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"

6. Add mons to the list of containers that have the storage network in
openstack_user_config.yml.
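
A sketch of the storage network entry in openstack_user_config.yml with mons added
to group_binds follows; the bridge, interface, and existing group names are taken from
a typical configuration and may differ in your environment:

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - mons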

7. Set at minimum the following Ceph cluster variables in /etc/openstack_deploy/


user_extras_variables.yml:

# The interface within the mon containers for the Ceph mon service to listen on.
# This is usually eth1.
monitor_interface: [device]
# The network CIDR for the network over which clients will access Ceph mons and
# osds. This is usually the br-storage network CIDR.
public_network: [storage_network range]
# The network CIDR for osd to osd replication. This is usually the br-repl network
# CIDR when using dedicated replication, however this can also be the br-storage
# network CIDR.
cluster_network: [repl_network]


Figure 6.2. Ceph networking configuration

monitor_interface: [device]
# This is the network from external -> storage (and mons).
public_network: [storage_network range]
# Can be the same as the public_network range. This is OSD to OSD replication.
cluster_network: [repl_network]
ceph_stable: true
fsid: '{{ fsid_uuid }}'

8. Set the following OpenStack-related variables in /etc/openstack_deploy/


user_variables.yml:

glance_default_store: rbd
nova_libvirt_images_rbd_pool: vms
nova_force_config_drive: False
nova_nova_conf_overrides:
  libvirt:
    live_migration_uri: qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1

Important
Setting nova_force_config_drive to False ensures the deployer is
able to live migrate instances backed by Ceph.

9. Set at minimum the following variables in the /etc/openstack_deploy/
user_extras_secrets.yml file:
cinder_ceph_client_uuid:
fsid_uuid:

10. Run the playbooks:

a. Set the DEPLOY_CEPH environment variable:


export DEPLOY_CEPH=yes

Ceph is installed along with other OpenStack services by the deploy.sh script in
the rpc-openstack/scripts directory.


b. Alternatively, run the relevant playbooks manually:


$ cd /opt/rpc-openstack
$ cd openstack-ansible
$ scripts/bootstrap-ansible.sh
$ scripts/pw-token-gen.py --file \
/etc/openstack_deploy/user_secrets.yml
$ scripts/pw-token-gen.py --file \
/etc/openstack_deploy/user_extras_secrets.yml
$ cd playbooks
$ openstack-ansible setup-hosts.yml
$ openstack-ansible haproxy-install.yml
$ cd ../../rpcd/playbooks
$ openstack-ansible ceph-all.yml
$ openstack-ansible repo-pip-setup.yml

You can also run the following playbooks for logging and MaaS:
$ openstack-ansible setup-logging.yml
$ openstack-ansible setup-maas.yml
$ openstack-ansible test-maas.yml

11. Run the installation for os-ansible-deployment as normal.

6.2. Adding new Ceph storage servers to cluster


1. Drives should be unformatted with no filesystem. If this is not the case, use "ceph-disk
zap /dev/sdb" (as an example for /dev/sdb) on the host. This will prepare the disk.

Note
Drives should not be mounted/formatted or prepared (the plays will do
this).

2. To bootstrap a new Ceph storage server, adjust the /etc/openstack_deploy/conf.d/
ceph.yml file to include the new devices, define container_vars with the correct devices
and raw_journal_devices layout, and then run:

$ cd /opt/rpc-openstack/openstack-ansible/playbooks
$ openstack-ansible setup-hosts.yml --limit NEWHOST
$ cd ../../rpcd/playbooks
$ openstack-ansible ceph-osd.yml

Important
Do not pass --limit to openstack-ansible ceph-osd.yml because
the ceph-osd role requires access to all MONs to properly build the
ceph.conf configuration.

6.3. Replacing a failed drive in a Ceph cluster


As drives fail, Ceph will re-balance data onto other drives in the cluster. Isolated drive fail-
ures should not cause problems, since a failed drive by default removes itself from the clus-
ter. However, it is generally recommended to replace failed drives promptly to maintain ca-
pacity across the cluster. Follow these steps to manually remove a disk for replacement. In
this example, the failed drive we wish to replace is osd.0.

1. In a MON container, mark the OSD as out:

# ceph osd out 0

2. On the storage node hosting osd.0, make a note of the journal associated with the
OSD and then stop / unmount the osd:

# ceph-disk list | grep osd.0 | sed -e 's/.*, journal //g'
/dev/sdd1
# stop ceph-osd id=0
ceph-osd stop/waiting
# umount /var/lib/ceph/osd/ceph-0

Note
The journal in this instance is /dev/sdd1.

3. Log back into the MON and remove the OSD from the CRUSH map, remove the OSD's
key, and finally remove it from the cluster:

# ceph osd crush remove osd.0


# ceph auth del osd.0
# ceph osd rm 0

4. If you are removing the drives/servers permanently you will need to remove them
from /etc/openstack_deploy/conf.d/ceph.yml.

Note
If you are removing a full server, you will need to run the
./inventory-manage.py -r command.

5. Once the disk has been physically replaced, log back into the disk's storage node and
prepare the disk for re-deployment:

# ceph-disk zap /dev/sdg


Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e'
options
on the recovery and transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but
disk
verification and recovery are STRONGLY recommended.


****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk
or
other utilities.
Creating new GPT entries.
The operation has completed successfully.

6. The journal device for an osd exists as a symlink within the osd's directory in /var/lib/
ceph/osd/. To find the journal partition for an osd, run readlink on the journal device. In
this example, the output is /dev/sdd1.

# readlink -e /var/lib/ceph/osd/ceph-0/journal
/dev/sdd1

7. While still logged into the disk's storage node, remove the journal partition from the
disk. In this example, osd.0 is using /dev/sdd1. Replace the arguments "/dev/
sdd" and "-d 1" with the correct drive and partition for the environment.

# sgdisk -d 1 /dev/sdd
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

Note
In the above, ensure you replace arguments /dev/sdd and -d 1 with the
correct drive and partition for your scenario.

8. On the deployment node, re-deploy the new OSD:

$ cd /opt/rpc-openstack/rpcd/playbooks
$ openstack-ansible ceph-osd.yml

9. Validate all OSDs are in/up and that the cluster is in a HEALTH_OK status:

# ceph osd stat


osdmap e536: 9 osds: 9 up, 9 in
# ceph health
HEALTH_OK

6.4. RAW Linux Images


The ceph.com/docs/master/rbd/rbd-openstack/ document states that Ceph does not
support QCOW2 for hosting a virtual machine disk. However, this is incorrect. Note
that with QCOW2 images, Ceph cannot clone copy on write images when booting in-
stances with RBD-backed disks. The benefits of using copy on write cloning are outlined
in specs.openstack.org/openstack/nova-specs/specs/juno/implemented/rbd-clone-image-handler.htm.

To convert a Trusty QCOW2 image to RAW, follow this procedure in a utility or similar con-
tainer:


# apt-get install qemu-utils


# wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-
amd64-disk1.img
# file trusty-server-cloudimg-amd64-disk1.img
trusty-server-cloudimg-amd64-disk1.img: QEMU QCOW Image (v2), 2361393152 bytes
# qemu-img convert -f qcow2 -O raw trusty-server-cloudimg-amd64-disk1.img
trusty-server-cloudimg-amd64-disk1.raw
# file trusty-server-cloudimg-amd64-disk1.raw
trusty-server-cloudimg-amd64-disk1.raw: x86 boot sector

At this point, the RAW image can be uploaded to Glance:

# source /root/openrc
# glance image-create --name trusty --container-format bare --disk-format raw
--file trusty-server-cloudimg-amd64-disk1.raw

Now when you boot an instance from the trusty image, Ceph will snapshot and then clone
the image for use by the VM. Ceph recommends setting a number of image properties on
the Glance image to make optimal use of Ceph as a back end.


7. Foundation playbooks
Figure 7.1. Installation work flow

Note
RPCO by default configures containers with a rootfs directory of /var/lib/
lxc/{container_name}/rootfs. To set a different rootfs directory,
override the lxc_container_rootfs_directory variable in /etc/
openstack_deploy/openstack_user_config.yml.

The main Ansible foundation playbook prepares the target hosts for infrastructure and
OpenStack services and performs the following operations:

• Perform deployment host initial setup

• Build containers on target hosts

• Restart containers on target hosts

• Install common components into containers on target hosts

7.1. Running the foundation playbook


1. Change to the /opt/openstack-ansible/playbooks directory.

2. Run the host setup playbook, which runs a series of sub-playbooks:

$ openstack-ansible setup-hosts.yml

Confirm satisfactory completion with zero items unreachable or failed:


PLAY RECAP
********************************************************************
...
deployment_host : ok=18 changed=11 unreachable=0
failed=0

3. If using HAProxy, run the playbook to deploy it:


$ openstack-ansible haproxy-install.yml

7.2. Troubleshooting
Q: How do I resolve the following error after running a playbook?
failed: [target_host] => (item=target_host_horizon_container-69099e06) =>
{"err": "lxc-attach: No such file or directory - failed to open
'/proc/12440/ns/mnt'\nlxc-attach: failed to enter the namespace\n",
"failed":
true, "item": "target_host_horizon_container-69099e06", "rc": 1}
msg: Failed executing lxc-attach.

A: The lxc-attach sometimes fails to execute properly. This issue can be resolved by run-
ning the playbook again.


8. Infrastructure playbooks
Figure 8.1. Installation workflow

The main Ansible infrastructure playbook installs infrastructure services and performs the
following operations:

• Install Memcached

• Install Galera

• Install RabbitMQ

• Install Rsyslog

• Install Elasticsearch

• Install Logstash

• Install Kibana

• Install Elasticsearch command-line utilities

• Configure Rsyslog

8.1. Running the infrastructure playbook


1. Change to the /opt/openstack-ansible/playbooks directory.

2. Run the infrastructure setup playbook, which runs a series of sub-playbooks:

$ openstack-ansible setup-infrastructure.yml


Confirm satisfactory completion with zero items unreachable or failed:


PLAY RECAP
********************************************************************
...
deployment_host : ok=27 changed=0 unreachable=0
failed=0

8.2. Verifying infrastructure operation


Verify the database cluster and Kibana web interface operation.

Procedure 8.1. Verifying the database cluster


1. Determine the Galera container name:
$ lxc-ls | grep galera
infra1_galera_container-4ed0d84a

2. Access the Galera container:


$ lxc-attach -n infra1_galera_container-4ed0d84a

3. Run the MariaDB client, show cluster status, and exit the client:

$ mysql -u root -p
MariaDB> show status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | bbe3f0f6-3a88-11e4-bd8f-f7c9e138dd07 |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+
MariaDB> exit

The wsrep_cluster_size field should indicate the number of nodes in the cluster
and the wsrep_cluster_status field should indicate Primary.

Procedure 8.2. Verifying the Kibana web interface


1. With a web browser, access the Kibana web interface using the external load bal-
ancer IP address defined by the external_lb_vip_address option in the /etc/
openstack_deploy/openstack_user_config.yml file. The Kibana web inter-
face uses HTTPS on port 8443.

2. Authenticate using the username kibana and password defined by


the kibana_password option in the /etc/openstack_deploy/
user_variables.yml file.


9. OpenStack playbooks
Figure 9.1. Installation work flow

The main Ansible OpenStack playbook installs OpenStack services and performs the follow-
ing operations:

• Install common components

• Create utility container that provides utilities to interact with services in other containers

• Install Identity (keystone)

• Generate service IDs for all services

• Install the Image service (glance)

• Install Orchestration (heat)

• Install Compute (nova)

• Install Networking (neutron)

• Install Block Storage (cinder)

• Install Dashboard (horizon)

• Reconfigure Rsyslog

9.1. Utility Container Overview


The utility container provides a space where miscellaneous tools and other software can be
installed. Tools and objects can be placed in a utility container if they do not require a ded-
icated container or if it is impractical to create a new container for a single tool or object.
Utility containers can also be used when tools cannot be installed directly onto a host.


For example, the tempest playbooks are installed on the utility container since tempest test-
ing does not need a container of its own. For another example of using the utility contain-
er, see Section 9.3, “Verifying OpenStack operation” [73].

9.2. Running the OpenStack playbook


1. Change to the /opt/openstack-ansible/playbooks directory.

2. Run the OpenStack setup playbook, which runs a series of sub-playbooks:

$ openstack-ansible setup-openstack.yml

Note
The openstack-common.yml sub-playbook builds all OpenStack services
from source and takes up to 30 minutes to complete. As the playbook
progresses, the quantity of containers in the "polling" state will approach
zero. If any operations take longer than 30 minutes to complete, the
playbook will terminate with an error.
changed: [target_host_glance_container-f2ebdc06]
changed: [target_host_heat_engine_container-36022446]
changed: [target_host_neutron_agents_container-08ec00cd]
changed: [target_host_heat_apis_container-4e170279]
changed: [target_host_keystone_container-c6501516]
changed: [target_host_neutron_server_container-94d370e5]
changed: [target_host_nova_api_metadata_container-600fe8b3]
changed: [target_host_nova_compute_container-7af962fe]
changed: [target_host_cinder_api_container-df5d5929]
changed: [target_host_cinder_volumes_container-ed58e14c]
changed: [target_host_horizon_container-e68b4f66]
<job 802849856578.7262> finished
on target_host_heat_engine_container-36022446
<job 802849856578.7739> finished
on target_host_keystone_container-c6501516
<job 802849856578.7262> finished
on target_host_heat_apis_container-4e170279
<job 802849856578.7359> finished
on target_host_cinder_api_container-df5d5929
<job 802849856578.7386> finished
on target_host_cinder_volumes_container-ed58e14c
<job 802849856578.7886> finished
on target_host_horizon_container-e68b4f66
<job 802849856578.7582> finished
on target_host_nova_compute_container-7af962fe
<job 802849856578.7604> finished
on target_host_neutron_agents_container-08ec00cd
<job 802849856578.7459> finished
on target_host_neutron_server_container-94d370e5
<job 802849856578.7327> finished
on target_host_nova_api_metadata_container-600fe8b3
<job 802849856578.7363> finished
on target_host_glance_container-f2ebdc06
<job 802849856578.7339> polling, 1675s remaining
<job 802849856578.7338> polling, 1675s remaining

<job 802849856578.7322> polling, 1675s remaining
<job 802849856578.7319> polling, 1675s remaining

Note
Setting up the compute hosts takes up to 30 minutes to complete,
particularly in environments with many compute hosts. As the playbook
progresses, the quantity of containers in the "polling" state will approach
zero. If any operations take longer than 30 minutes to complete, the
playbook will terminate with an error.
ok: [target_host_nova_conductor_container-2b495dc4]
ok: [target_host_nova_api_metadata_container-600fe8b3]
ok: [target_host_nova_api_ec2_container-6c928c30]
ok: [target_host_nova_scheduler_container-c3febca2]
ok: [target_host_nova_api_os_compute_container-9fa0472b]
<job 409029926086.9909> finished
on target_host_nova_api_os_compute_container-9fa0472b
<job 409029926086.9890> finished
on target_host_nova_api_ec2_container-6c928c30
<job 409029926086.9910> finished
on target_host_nova_conductor_container-2b495dc4
<job 409029926086.9882> finished
on target_host_nova_scheduler_container-c3febca2
<job 409029926086.9898> finished
on target_host_nova_api_metadata_container-600fe8b3
<job 409029926086.8330> polling, 1775s remaining

Confirm satisfactory completion with zero items unreachable or failed:


PLAY RECAP
**********************************************************************
...
deployment_host : ok=44 changed=11 unreachable=0 failed=0
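As an additional sanity check, you can count the running containers on an infrastructure
host and compare the totals across hosts. This is a minimal sketch; the exact lxc-ls
output columns vary slightly between LXC versions:

$ lxc-ls --fancy | grep -c RUNNING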

9.3. Verifying OpenStack operation


Verify basic operation of the OpenStack API and dashboard.

Procedure 9.1. Verifying the API


The utility container provides a CLI environment for additional configuration and testing.

1. Determine the utility container name:


$ lxc-ls | grep utility
infra1_utility_container-161a4084

2. Access the utility container:


$ lxc-attach -n infra1_utility_container-161a4084

3. Source the admin tenant credentials:


# source openrc

4. Run an OpenStack command that uses one or more APIs. For example:


# openstack user list


+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 04007b990d9442b59009b98a828aa981 | glance |
| 0ccf5f2020ca4820847e109edd46e324 | keystone |
| 1dc5f638d4d840c690c23d5ea83c3429 | neutron |
| 3073d0fa5ced46f098215d3edb235d00 | cinder |
| 5f3839ee1f044eba921a7e8a23bb212d | admin |
| 61bc8ee7cc9b4530bb18acb740ee752a | stack_domain_admin |
| 77b604b67b79447eac95969aafc81339 | alt_demo |
| 85c5bf07393744dbb034fab788d7973f | nova |
| a86fc12ade404a838e3b08e1c9db376f | swift |
| bbac48963eff4ac79314c42fc3d7f1df | ceilometer |
| c3c9858cbaac4db9914e3695b1825e41 | dispersion |
| cd85ca889c9e480d8ac458f188f16034 | demo |
| efab6dc30c96480b971b3bd5768107ab | heat |
+----------------------------------+--------------------+
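Any other read-only API call works equally well for this check. For example, listing the
registered services also exercises the Identity service catalog:

# openstack service list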

Procedure 9.2. Verifying the dashboard


1. With a web browser, access the dashboard using the external load balancer IP
address defined by the external_lb_vip_address option in the
/etc/openstack_deploy/openstack_user_config.yml file. The dashboard uses
HTTPS on port 443.

2. Authenticate using the username admin and password defined by the
keystone_auth_admin_password option in the /etc/openstack_deploy/user_variables.yml file.

Procedure 9.3. Verifying the MaaS


• Run the verify-maas.yml playbook inside the playbooks directory in rpc-openstack.

Important
This is not included in a MaaS install run by default because there is a brief
period between the agent restarting with new checks, and when the
checks are reflected in MaaS.
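For example, assuming the rpc-openstack repository was cloned to /opt/rpc-openstack and
its playbooks directory is rpcd/playbooks (adjust both to match your deployment):

$ cd /opt/rpc-openstack/rpcd/playbooks
$ openstack-ansible verify-maas.yml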


10. Operations
The following operations apply to environments after initial installation.

10.1. Monitoring
The Rackspace Cloud Monitoring Service allows Rackspace Private Cloud customers to monitor
system performance and safeguard critical data.

10.1.1. Service and response


When a threshold is reached or functionality fails, the Rackspace Cloud Monitoring Service
generates an alert, which creates a ticket in the Rackspace ticketing system. This ticket
moves into the RPCO support queue. Tickets flagged as monitoring alerts are given highest
priority, and response is delivered according to the Service Level Agreement (SLA). Refer to
the SLA for detailed information about incident severity levels and corresponding response
times.

Specific monitoring alert guidelines can be set for the installation. These details should be
arranged by a Rackspace account manager.

10.1.2. Hardware monitoring


Hardware monitoring is available only for customers whose clouds are hosted within a
Rackspace data center. Customers whose clouds are hosted in their own data centers are
responsible for monitoring their own hardware.

For clouds hosted within a Rackspace data center, Rackspace will provision monitoring
support for the customer. Rackspace Support assists in handling functionality failure,
running system health checks, and managing system capacity. Rackspace Cloud Monitoring
Service will notify Support when a host is down, or when hardware fails.

10.1.3. Software monitoring


For software monitoring, polling time is determined by the maas_check_period setting
in /etc/openstack_deploy/user_variables.yml, which defaults to 60 seconds.
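To confirm the effective polling interval, check whether the variable has been overridden;
if the following command returns nothing, the 60-second default applies:

$ grep maas_check_period /etc/openstack_deploy/user_variables.yml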

RPCO Monitoring Service has two kinds of checks:

• Local: These agent.plugin checks are performed against containers. The checks poll
the API and gather lists of metrics.

These checks will generate a critical alert after three consecutive failures.

Local checks are performed on the following services:

• Compute (nova)

• Block Storage (cinder)


• Identity (keystone)

• Networking (neutron)

• Orchestration (heat)

• Image service (glance): The check connects to the glance registry and tests status by
calling an arbitrary URL.

• Dashboard (horizon): The check verifies that the login page is available and uses the
credentials from openrc-maas to log in.

• Galera: The check connects to each member of a Galera cluster and verifies that the
members are fully synchronized and active.

• RabbitMQ: The check connects to each member of a RabbitMQ cluster and gathers
statistics from the API.

• Memcached: The check connects to a Memcached server.

• Global: These remote.http checks poll the load-balanced public endpoints, such as a
public nova API. If a service is marked as administratively down, the check will skip it.

These checks will generate a critical alert after one failure.

Global checks are performed on the following services:

• Compute (nova)

• Block Storage (cinder)

• Identity (keystone)

• Networking (neutron)

• Image service (glance)

• Orchestration (heat)

10.1.4. CDM monitoring


The setup_maas playbook also configures CDM monitoring for the following services and
generates alerts at the specified thresholds.

• CPU Idle: < 10%

• Memory used: > 95%

• Disk space used: > 95%

The playbook also configures Object Storage mount point checks. These checks monitor
disk space on the mount points and generate alerts if they are unmounted or unavailable.
For clouds using OpenStack Object Storage, it is important to re-run the setup playbook
whenever the number of Object Storage nodes changes.
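For example, to re-run the monitoring setup after adding Object Storage nodes, a sketch
that assumes the playbook is named setup-maas.yml and resides in the rpc-openstack
playbooks directory (adjust the path and name to match your deployment):

$ cd /opt/rpc-openstack/rpcd/playbooks
$ openstack-ansible setup-maas.yml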


10.1.5. Network monitoring


The network_checks_list variable consists of the interfaces eth0, eth1, eth2, and eth3
by default. If the physical host does not have those specific interfaces, the variable will need
to be set in /etc/openstack_deploy/user_extras_variables.yml.

For example, if the interfaces were em0, em1, em2, and em3, set the following variable in
/etc/openstack_deploy/user_extras_variables.yml:

{ name: "em0", group: "hosts", max_speed: "{{ net_max_speed }}", rx_pct_warn:


"{{ net_rx_pct_warn }}", rx_pct_crit: "{{ net_rx_pct_crit }}", tx_pct_warn:
"{{ net_tx_pct_warn }}", tx_pct_crit: "{{ net_tx_pct_crit }}"}
{ name: "em1", group: "hosts", max_speed: "{{ net_max_speed }}", rx_pct_warn:
"{{ net_rx_pct_warn }}", rx_pct_crit: "{{ net_rx_pct_crit }}", tx_pct_warn:
"{{ net_tx_pct_warn }}", tx_pct_crit: "{{ net_tx_pct_crit }}"}
{ name: "em2", group: "hosts", max_speed: "{{ net_max_speed }}", rx_pct_warn:
"{{ net_rx_pct_warn }}", rx_pct_crit: "{{ net_rx_pct_crit }}", tx_pct_warn:
"{{ net_tx_pct_warn }}", tx_pct_crit: "{{ net_tx_pct_crit }}"}
{ name: "em3", group: "hosts", max_speed: "{{ net_max_speed }}", rx_pct_warn:
"{{ net_rx_pct_warn }}", rx_pct_crit: "{{ net_rx_pct_crit }}", tx_pct_warn:
"{{ net_tx_pct_warn }}", tx_pct_crit: "{{ net_tx_pct_crit }}"}

10.2. Adding a compute host


Use the following procedure to add a compute host to an operational cluster.

1. Configure the host as a target host. See Chapter 4, “Target hosts” [19] for more
information.

2. Edit the /etc/openstack_deploy/openstack_user_config.yml file and add the host to the
compute_hosts stanza.

Note
If necessary, also modify the used_ips stanza.

3. Run the following commands to add the host. Replace NEW_HOST_NAME with the
name of the new host. Run setup-openstack.yml on all hosts to ensure the
setup-openstack.yml playbook creates the nova_pubkey attribute.
$ cd /opt/openstack-ansible/playbooks
$ openstack-ansible setup-hosts.yml --limit NEW_HOST_NAME
$ openstack-ansible os-nova-install.yml --limit NEW_HOST_NAME --skip-tags nova-key-distribute
$ openstack-ansible os-nova-install.yml --tags nova-key-create,nova-key-distribute
$ openstack-ansible os-neutron-install.yml --limit NEW_HOST_NAME
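After the playbooks complete, you can confirm that the new host registered with the
Compute service. The following sketch runs from the utility container and reuses the
container name from Section 9.3; replace NEW_HOST_NAME with the name of the new host:

$ lxc-attach -n infra1_utility_container-161a4084
# source openrc
# nova service-list | grep NEW_HOST_NAME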

10.3. Galera cluster maintenance


Routine maintenance includes gracefully adding or removing nodes from the cluster without
impacting operation and also starting a cluster after gracefully shutting down all nodes.


10.3.1. Removing nodes


In the following example, all but one node was shut down gracefully:
$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "mysql -h localhost \
  -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)

node2_galera_container-49a47d25 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 7
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

Compare this example output with the output from the multi-node failure scenario where
the remaining operational node is non-primary and stops processing SQL requests.
Gracefully shutting down the MariaDB service on all but one node allows the remaining
operational node to continue processing SQL requests. When gracefully shutting down
multiple nodes, perform the actions sequentially to retain operation.
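For example, to gracefully shut down MariaDB on two of the three nodes, one node at a
time, a minimal sketch that reuses the container names from the output above:

$ ansible node2_galera_container-49a47d25 -m shell -a "/etc/init.d/mysql stop"
$ ansible node3_galera_container-3ea2cbd3 -m shell -a "/etc/init.d/mysql stop"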

10.3.2. Starting a cluster


Gracefully shutting down all nodes destroys the cluster. Starting or restarting a cluster from
zero nodes requires creating a new cluster on one of the nodes.

1. The new cluster should be started on the most advanced node. Run the following
command to check the seqno value in the grastate.dat file on all of the nodes:
$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"


node2_galera_container-49a47d25 | success | rc=0 >>
# GALERA saved state version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31
cert_index:

node3_galera_container-3ea2cbd3 | success | rc=0 >>


# GALERA saved state version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31
cert_index:

node4_galera_container-76275635 | success | rc=0 >>


# GALERA saved state version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31


cert_index:

In this example, all nodes in the cluster contain the same positive seqno values because
they were synchronized just prior to graceful shutdown. If all seqno values are equal, any
node can start the new cluster.

$ /etc/init.d/mysql start --wsrep-new-cluster

This command results in a cluster containing a single node. The wsrep_cluster_size value
shows the number of nodes in the cluster.

node2_galera_container-49a47d25 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)

node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

2. Restart MariaDB on the other nodes and verify that they rejoin the cluster.

node2_galera_container-49a47d25 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node3_galera_container-3ea2cbd3 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary


10.4. Galera cluster recovery


10.4.1. Single-node failure
If a single node fails, the other nodes maintain quorum and continue to process SQL
requests.

1. Run the following Ansible command to determine the failed node:

$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "mysql -h localhost \
  -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server through
socket '/var/run/mysqld/mysqld.sock' (111)

node2_galera_container-49a47d25 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

In this example, node 3 has failed.

2. Restart MariaDB on the failed node and verify that it rejoins the cluster (see the
sketch after this procedure).

3. If MariaDB fails to start, run the mysqld command and perform further analysis on the
output. As a last resort, rebuild the container for the node.
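A minimal sketch of step 2, restarting MariaDB on the failed node with Ansible and then
re-checking the cluster status (the container name is taken from the example above):

$ ansible node3_galera_container-3ea2cbd3 -m shell -a "/etc/init.d/mysql restart"
$ ansible galera_container -m shell -a "mysql \
  -h localhost -e 'show status like \"%wsrep_cluster_%\";'"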

10.4.2. Multi-node failure


When all but one node fails, the remaining node cannot achieve quorum and stops
processing SQL requests. In this situation, failed nodes that recover cannot join the
cluster because it no longer exists.

1. Run the following Ansible command to show the failed nodes:

$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "mysql \
  -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)


node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 18446744073709551615
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status non-Primary

In this example, nodes 2 and 3 have failed. The remaining operational server indicates
non-Primary because it cannot achieve quorum.

2. Run the following command to rebootstrap the operational node into the cluster.

$ mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes';"


node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 15
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)

node2_galera_container-49a47d25 | FAILED | rc=1 >>


ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)

The remaining operational node becomes the primary node and begins processing SQL
requests.

3. Restart MariaDB on the failed nodes and verify that they rejoin the cluster.

$ ansible galera_container -m shell -a "mysql \
  -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node2_galera_container-49a47d25 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 17


wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

4. If MariaDB fails to start on any of the failed nodes, run the mysqld command and
perform further analysis on the output. As a last resort, rebuild the container for the
node.

10.4.3. Complete failure


If all of the nodes in a Galera cluster fail (do not shut down gracefully), the integrity of
the database can no longer be guaranteed and the database should be restored from backup.
Run the following command to determine if all nodes in the cluster have failed:

$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"


node3_galera_container-3ea2cbd3 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:

node2_galera_container-49a47d25 | success | rc=0 >>


# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:

node4_galera_container-76275635 | success | rc=0 >>


# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:

All the nodes have failed if mysqld is not running on any of the nodes and all of the nodes
contain a seqno value of -1.

Note
If any single node has a positive seqno value, then that node can be used to
restart the cluster. However, because there is no guarantee that each node has
an identical copy of the data, it is not recommended to restart the cluster using
the --wsrep-new-cluster command on one node.

10.4.4. Rebuilding a container


Sometimes recovering from a failure requires rebuilding one or more containers.

1. Disable the failed node on the load balancer.


Note
Do not rely on the load balancer health checks to disable the node. If the
node is not disabled, the load balancer will send SQL requests to it before it
rejoins the cluster, causing data inconsistencies.
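For example, if HAProxy fronts the Galera cluster, the node can be disabled through the
HAProxy admin socket. This is only a sketch; the backend name galera-back and the socket
path /var/run/haproxy.stat are assumptions that depend on your load balancer
configuration:

$ echo "disable server galera-back/node3_galera_container-3ea2cbd3" | \
  socat stdio /var/run/haproxy.stat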

2. Use the following commands to destroy the container and remove MariaDB data
stored outside of the container. In this example, node 3 failed.

$ lxc-stop -n node3_galera_container-3ea2cbd3
$ lxc-destroy -n node3_galera_container-3ea2cbd3
$ rm -rf /openstack/node3_galera_container-3ea2cbd3/*

3. Run the host setup playbook to rebuild the container specifically on node 3:

$ openstack-ansible setup-hosts.yml -l node3 \
  -l node3_galera_container-3ea2cbd3

Note
The playbook will also restart all other containers on the node.

4. Run the infrastructure playbook to configure the container specifically on node 3:

$ openstack-ansible setup-infrastructure.yml \
  -l node3_galera_container-3ea2cbd3

Note
The new container runs a single-node Galera cluster, which is a dangerous
state because the environment contains more than one active database
with potentially different data.
$ cd /opt/openstack-ansible/playbooks

$ ansible galera_container -m shell -a "mysql \
  -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid da078d01-29e5-11e4-a051-03d896dbdb2d
wsrep_cluster_status Primary

node2_galera_container-49a47d25 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 4
wsrep_cluster_size 2
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary


node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 4
wsrep_cluster_size 2
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

5. Restart MariaDB in the new container and verify that it rejoins the cluster.

$ ansible galera_container -m shell -a "mysql \
  -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node3_galera_container-3ea2cbd3 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

node4_galera_container-76275635 | success | rc=0 >>


Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary

6. Enable the failed node on the load balancer.


Appendix A. OSAD configuration files

Table of Contents
A.1. openstack_user_config.yml example configuration file ................................. 85
A.2. user_secrets.yml configuration file ................................................................. 96
A.3. user_variables.yml configuration file ............................................................. 98
A.4. swift.yml example configuration file ................................................................ 102
A.5. extra_container.yml configuration file ......................................................... 111
A.6. Environment configuration files ............................................................................ 111

A.1. openstack_user_config.yml example configuration file

The openstack_user_config.yml file contains variables for configuring target host
networking and the Block Storage service. RPCO v11 has new host networking sections which
specify the placement of the containers:

shared-infra_hosts   MariaDB, Memcached, and RabbitMQ containers.

os-infra_hosts       OpenStack API containers other than cinder and keystone.

storage_hosts        cinder volumes in the etc/lvm directory.

repo-infra_hosts     Where Python wheels are stored during the build process, and pip
                     package protection.

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Overview
# ========
#
# This file contains the configuration for OpenStack Ansible Deployment
# (OSA) core services. Optional service configuration resides in the
# conf.d directory.
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/openstack_user_config.yml or create a new

# file containing only necessary options for your environment
# before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
# http://pyyaml.org/wiki/PyYAMLDocumentation
#
# Configuration reference
# =======================
#
# Level: cidr_networks (required)
# Contains an arbitrary list of networks for the deployment. For each network,
# the inventory generator uses the IP address range to create a pool of IP
# addresses for network interfaces inside containers. A deployment requires
# at least one network for management.
#
# Option: <value> (required, string)
# Name of network and IP address range in CIDR notation. This IP address
# range coincides with the IP address range of the bridge for this network
# on the target host.
#
# Example:
#
# Define networks for a typical deployment.
#
# - Management network on 172.29.236.0/22. Control plane for infrastructure
# services, OpenStack APIs, and horizon.
# - Tunnel network on 172.29.240.0/22. Data plane for project (tenant) VXLAN
# networks.
# - Storage network on 172.29.244.0/22. Data plane for storage services such
# as cinder and swift.
#
# cidr_networks:
# container: 172.29.236.0/22
# tunnel: 172.29.240.0/22
# storage: 172.29.244.0/22
#
# Example:
#
# Define additional service network on 172.29.248.0/22 for deployment in a
# Rackspace data center.
#
# snet: 172.29.248.0/22
#
# --------
#
# Level: used_ips (optional)
# For each network in the 'cidr_networks' level, specify a list of IP addresses
# or a range of IP addresses that the inventory generator should exclude from
# the pools of IP addresses for network interfaces inside containers. To use a
# range, specify the lower and upper IP addresses (inclusive) with a comma
# separator.
#
# Example:
#
# The management network includes a router (gateway) on 172.29.236.1 and
# DNS servers on 172.29.236.11-12. The deployment includes seven target

# servers on 172.29.236.101-103, 172.29.236.111, 172.29.236.121, and
# 172.29.236.131. However, the inventory generator automatically excludes
# these IP addresses. The deployment host itself isn't automatically
# excluded. Network policy at this particular example organization
# also reserves 231-254 in the last octet at the high end of the range for
# network device management.
#
# used_ips:
# - 172.29.236.1
# - 172.29.236.11,172.29.236.12
# - 172.29.239.231,172.29.239.254
#
# --------
#
# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, load balancer virtual IP addresses (VIP). This level also provides
# a mechanism to override other options defined in the playbook structure.
#
# Option: internal_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Local package repository
# - Galera SQL database cluster
# - Administrative and internal API endpoints for all OpenStack services
# - Glance registry
# - Nova compute source of images
# - Cinder source of images
# - Instance metadata
#
# Option: external_lb_vip_address (required, string)
# Load balancer VIP for the following items:
#
# - Public API endpoints for all OpenStack services
# - Horizon
#
# Option: management_bridge (required, string)
# Name of management network bridge on target hosts. Typically 'br-mgmt'.
#
# Option: tunnel_bridge (optional, string)
# Name of tunnel network bridge on target hosts. Typically 'br-vxlan'.
#
# Level: provider_networks (required)
# List of container and bare metal networks on target hosts.
#
# Level: network (required)
# Defines a container or bare metal network. Create a level for each
# network.
#
# Option: type (required, string)
# Type of network. Networks other than those for neutron such as
# management and storage typically use 'raw'. Neutron networks use
# 'flat', 'vlan', or 'vxlan'. Coincides with ML2 plug-in configuration
# options.
#
# Option: container_bridge (required, string)
# Name of unique bridge on target hosts to use for this network. Typical
# values include 'br-mgmt', 'br-storage', 'br-vlan', 'br-vxlan', etc.
#
# Option: container_interface (required, string)

# Name of unique interface in containers to use for this network.
# Typical values include 'eth1', 'eth2', etc.
#
# Option: container_type (required, string)
# Name of mechanism that connects interfaces in containers to the bridge
# on target hosts for this network. Typically 'veth'.
#
# Option: container_mtu (optional, string)
# Sets the MTU within LXC for a given network type.
#
# Option: ip_from_q (optional, string)
# Name of network in 'cidr_networks' level to use for IP address pool. Only
# valid for 'raw' and 'vxlan' types.
#
# Option: is_container_address (required, boolean)
# If true, the load balancer uses this IP address to access services
# in the container. Only valid for networks with 'ip_from_q' option.
#
# Option: is_ssh_address (required, boolean)
# If true, Ansible uses this IP address to access the container via SSH.
# Only valid for networks with 'ip_from_q' option.
#
# Option: group_binds (required, string)
# List of one or more Ansible groups that contain this
# network. For more information, see the openstack_environment.yml file.
#
# Option: net_name (optional, string)
# Name of network for 'flat' or 'vlan' types. Only valid for these
# types. Coincides with ML2 plug-in configuration options.
#
# Option: range (optional, string)
# For 'vxlan' type neutron networks, range of VXLAN network identifiers
# (VNI). For 'vlan' type neutron networks, range of VLAN tags. Supports
# more than one range of VLANs on a particular network. Coincides with
# ML2 plug-in configuration options.
#
# Example:
#
# Define a typical network architecture:
#
# - Network of type 'raw' that uses the 'br-mgmt' bridge and 'management'
# IP address pool. Maps to the 'eth1' device in containers. Applies to all
# containers and bare metal hosts. Both the load balancer and Ansible
# use this network to access containers and services.
# - Network of type 'raw' that uses the 'br-storage' bridge and 'storage'
# IP address pool. Maps to the 'eth2' device in containers. Applies to
# nova compute and all storage service containers. Optionally applies to
# to the swift proxy service.
# - Network of type 'vxlan' that contains neutron VXLAN tenant networks
# 1 to 1000 and uses 'br-vxlan' bridge on target hosts. Maps to the
# 'eth10' device in containers. Applies to all neutron agent containers
# and neutron agents on bare metal hosts.
# - Network of type 'vlan' that contains neutron VLAN networks 101 to 200
# and 301 to 400 and uses the 'br-vlan' bridge on target hosts. Maps to
# the 'eth11' device in containers. Applies to all neutron agent
# containers and neutron agents on bare metal hosts.
# - Network of type 'flat' that contains one neutron flat network and uses
# the 'br-vlan' bridge on target hosts. Maps to the 'eth12' device in
# containers. Applies to all neutron agent containers and neutron agents

# on bare metal hosts.
#
# Note: A pair of 'vlan' and 'flat' networks can use the same bridge because
# one only handles tagged frames and the other only handles untagged frames
# (the native VLAN in some parlance). However, additional 'vlan' or 'flat'
# networks require additional bridges.
#
# provider_networks:
# - network:
# group_binds:
# - all_containers
# - hosts
# type: "raw"
# container_bridge: "br-mgmt"
# container_interface: "eth1"
# container_type: "veth"
# ip_from_q: "container"
# is_container_address: true
# is_ssh_address: true
# - network:
# group_binds:
# - glance_api
# - cinder_api
# - cinder_volume
# - nova_compute
# # Uncomment the next line if using swift with a storage network.
# # - swift_proxy
# type: "raw"
# container_bridge: "br-storage"
# container_type: "veth"
# container_interface: "eth2"
# container_mtu: "9000"
# ip_from_q: "storage"
# - network:
# group_binds:
# - neutron_linuxbridge_agent
# container_bridge: "br-vxlan"
# container_type: "veth"
# container_interface: "eth10"
# container_mtu: "9000"
# ip_from_q: "tunnel"
# type: "vxlan"
# range: "1:1000"
# net_name: "vxlan"
# - network:
# group_binds:
# - neutron_linuxbridge_agent
# container_bridge: "br-vlan"
# container_type: "veth"
# container_interface: "eth11"
# type: "vlan"
# range: "101:200,301:400"
# net_name: "vlan"
# - network:
# group_binds:
# - neutron_linuxbridge_agent
# container_bridge: "br-vlan"
# container_type: "veth"
# container_interface: "eth12"
# host_bind_override: "eth12"

# type: "flat"
# net_name: "flat"
#
# --------
#
# Level: shared-infra_hosts (required)
# List of target hosts on which to deploy shared infrastructure services
# including the Galera SQL database cluster, RabbitMQ, and Memcached. Recommend
# three minimum target hosts for these services.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three shared infrastructure hosts:
#
# shared-infra_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: repo-infra_hosts (optional)
# List of target hosts on which to deploy the package repository. Recommend
# minimum three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three package repository hosts:
#
# repo-infra_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# If you choose not to implement repository target hosts, you must configure
# the 'openstack_repo_url' variable in the user_group_vars.yml file to
# contain the URL of a host with an existing repository.
#

# --------
#
# Level: os-infra_hosts (required)
# List of target hosts on which to deploy the glance API, nova API, heat API,
# and horizon. Recommend three minimum target hosts for these services.
# Typically contains the same target hosts as 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack infrastructure hosts:
#
# os-infra_hosts:
# infra1:
# ip: 172.29.236.100
# infra2:
# ip: 172.29.236.101
# infra3:
# ip: 172.29.236.102
#
# --------
#
# Level: identity_hosts (required)
# List of target hosts on which to deploy the keystone service. Recommend
# three minimum target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack identity hosts:
#
# identity_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: network_hosts (required)
# List of target hosts on which to deploy neutron services. Recommend three
# minimum target hosts for this service. Typically contains the same target
# hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)

# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack network hosts:
#
# network_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: compute_hosts (required)
# List of target hosts on which to deploy the nova compute service. Recommend
# one minimum target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define an OpenStack compute host:
#
# compute_hosts:
# compute1:
# ip: 172.29.236.111
#
# --------
#
# Level: storage-infra_hosts (required)
# List of target hosts on which to deploy the cinder API. Recommend three
# minimum target hosts for this service. Typically contains the same target
# hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack storage infrastructure hosts:
#
# storage-infra_hosts:
# infra1:

# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: storage_hosts (required)
# List of target hosts on which to deploy the cinder volume service. Recommend
# one minimum target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Level: container_vars (required)
# Contains storage options for this target host.
#
# Option: cinder_storage_availability_zone (optional, string)
# Cinder availability zone.
#
# Option: cinder_default_availability_zone (optional, string)
# If the deployment contains more than one cinder availability zone,
# specify a default availability zone.
#
# Level: cinder_backends (required)
# Contains cinder backends.
#
# Option: limit_container_types (optional, string)
# Container name string in which to apply these options. Typically
# any container with 'cinder_volume' in the name.
#
# Level: <value> (required, string)
# Arbitrary name of the backend. Each backend contains one or more
# options for the particular backend driver. The template for the
# cinder.conf file can generate configuration for any backend
# providing that it includes the necessary driver options.
#
# Option: volume_backend_name (required, string)
# Name of backend, arbitrary.
#
# The following options apply to the LVM backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver, typically
# 'cinder.volume.drivers.lvm.LVMVolumeDriver'.
#
# Option: volume_group (required, string)
# Name of LVM volume group, typically 'cinder-volumes'.
#
# The following options apply to the NFS backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.nfs.NfsDriver'.

# NB. When using NFS driver you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: nfs_shares_config (optional, string)
# File containing list of NFS shares available to cinder, typically
# '/etc/cinder/nfs_shares'.
#
# Option: nfs_mount_point_base (optional, string)
# Location in which to mount NFS shares, typically
# '$state_path/mnt'.
#
# The following options apply to the NetApp backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.netapp.common.NetAppDriver'.
# NB. When using NetApp drivers you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: netapp_storage_family (required, string)
# Access method, typically 'ontap_7mode' or 'ontap_cluster'.
#
# Option: netapp_storage_protocol (required, string)
# Transport method, typically 'iscsi' or 'nfs'. NFS transport also
# requires the 'nfs_shares_config' option.
#
# Option: nfs_shares_config (required, string)
# For NFS transport, name of the file containing shares. Typically
# '/etc/cinder/nfs_shares'.
#
# Option: netapp_server_hostname (required, string)
# NetApp server hostname.
#
# Option: netapp_server_port (required, integer)
# NetApp server port, typically 80 or 443.
#
# Option: netapp_login (required, string)
# NetApp server username.
#
# Option: netapp_password (required, string)
# NetApp server password.
#
# Level: cinder_nfs_client (optional)
# Automates management of the file that cinder references for a list of
# NFS mounts.
#
# Option: nfs_shares_config (required, string)
# File containing list of NFS shares available to cinder, typically
# typically /etc/cinder/nfs_shares.
#
# Level: shares (required)
# List of shares to populate the 'nfs_shares_config' file. Each share
# uses the following format:
#
# - { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
#
# Example:
#
# Define an OpenStack storage host:
#

# storage_hosts:
# storage1:
# ip: 172.29.236.121
#
# Example:
#
# Use the LVM iSCSI backend in availability zone 'cinderAZ_1':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_1
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# lvm:
# volume_backend_name: LVM_iSCSI
# volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
# volume_group: cinder-volumes
# limit_container_types: cinder_volume
#
# Example:
#
# Use the NetApp iSCSI backend via Data ONTAP 7-mode in availability zone
# 'cinderAZ_2':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_2
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# netapp:
# volume_backend_name: NETAPP_iSCSI
# volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
# netapp_storage_family: ontap_7mode
# netapp_storage_protocol: iscsi
# netapp_server_hostname: hostname
# netapp_server_port: 443
# netapp_login: username
# netapp_password: password
#
#
# Example:
#
# Use the ceph RBD backend in availability zone 'cinderAZ_3':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_3
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# limit_container_types: cinder_volume
# volumes_hdd:
# volume_driver: cinder.volume.drivers.rbd.RBDDriver
# rbd_pool: volumes_hdd
# rbd_ceph_conf: /etc/ceph/ceph.conf
# rbd_flatten_volume_from_snapshot: 'false'
# rbd_max_clone_depth: 5
# rbd_store_chunk_size: 4
# rados_connect_timeout: -1
# glance_api_version: 2
# volume_backend_name: volumes_hdd
# rbd_user: "{{ cinder_ceph_client }}"
# rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
#

#
# --------
#
# Level: log_hosts (required)
# List of target hosts on which to deploy logging services. Recommend
# one minimum target host for this service.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define a logging host:
#
# log_hosts:
# log1:
# ip: 172.29.236.131

A.2. user_secrets.yml configuration file


---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

## Rabbitmq Options
rabbitmq_cookie_token:

## Tokens
memcached_encryption_key:

## Container default user


container_openstack_password:

## Galera Options
galera_root_password:

## Keystone Options
keystone_container_mysql_password:
keystone_auth_admin_token:
keystone_auth_admin_password:

keystone_service_password:
keystone_rabbitmq_password:

## Ceilometer Options:
ceilometer_container_db_password:
ceilometer_service_password:
ceilometer_telemetry_secret:
ceilometer_rabbitmq_password:

## Aodh Options:
aodh_container_db_password:
aodh_service_password:
aodh_rabbitmq_password:

## Cinder Options
cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
cinder_profiler_hmac_key:
cinder_rabbitmq_password:

## Ceph/rbd: a UUID to be used by libvirt to refer to the client.cinder user


cinder_ceph_client_uuid:

## Glance Options
glance_container_mysql_password:
glance_service_password:
glance_profiler_hmac_key:
glance_rabbitmq_password:

## Heat Options
heat_stack_domain_admin_password:
heat_container_mysql_password:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_auth_encryption_key:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_service_password:
heat_cfn_service_password:
heat_profiler_hmac_key:
heat_rabbitmq_password:

## Horizon Options
horizon_container_mysql_password:
horizon_secret_key:

## Neutron Options
neutron_container_mysql_password:
neutron_service_password:
neutron_rabbitmq_password:
neutron_ha_vrrp_auth_password:

## Nova Options
nova_container_mysql_password:
nova_metadata_proxy_secret:
nova_ec2_service_password:
nova_service_password:
nova_v3_service_password:
nova_v21_service_password:
nova_s3_service_password:
nova_rabbitmq_password:

## Swift Options:
swift_service_password:
swift_container_mysql_password:
swift_dispersion_password:
### Once the swift cluster has been setup DO NOT change these hash values!
swift_hash_path_suffix:
swift_hash_path_prefix:

## haproxy stats password


haproxy_stats_password:
haproxy_keepalived_authentication_password:

A.3. user_variables.yml configuration file


The user_variables.yml file contains options to configure the Image service, Compute
service, and Object Storage.

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

## Ceilometer Options
ceilometer_db_type: mongodb
ceilometer_db_ip: localhost
ceilometer_db_port: 27017
swift_ceilometer_enabled: False
heat_ceilometer_enabled: False
cinder_ceilometer_enabled: False
glance_ceilometer_enabled: False
nova_ceilometer_enabled: False
neutron_ceilometer_enabled: False
keystone_ceilometer_enabled: False

## Aodh Options
aodh_db_type: mongodb
aodh_db_ip: localhost
aodh_db_port: 27017

## Glance Options
# Set glance_default_store to "swift" if using Cloud Files or swift backend
# or "rbd" if using ceph backend; the latter will trigger ceph to get
# installed on glance
glance_default_store: file

glance_notification_driver: noop

# `internalURL` will cause glance to speak to swift via ServiceNet, use
# `publicURL` to communicate with swift over the public network
glance_swift_store_endpoint_type: internalURL

# Ceph client user for glance to connect to the ceph cluster


#glance_ceph_client: glance
# Ceph pool name for Glance to use
#glance_rbd_store_pool: images
#glance_rbd_store_chunk_size: 8

## Nova
# When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova
# hosts.
#nova_libvirt_images_rbd_pool: vms
# by default we assume you use rbd for both cinder and nova, and as libvirt
# needs to access both volumes (cinder) and boot disks (nova) we default to
# reuse the cinder_ceph_client
# only need to change this if you'd use ceph for boot disks and not for volumes
#nova_ceph_client:
#nova_ceph_client_uuid:

# This defaults to KVM, if you are deploying on a host that is not KVM capable
# change this to your hypervisor type: IE "qemu", "lxc".
# nova_virt_type: kvm
# nova_cpu_allocation_ratio: 2.0
# nova_ram_allocation_ratio: 1.0

# If you wish to change the dhcp_domain configured for both nova and neutron
# dhcp_domain:

## Glance with Swift


# Extra options when configuring swift as a glance back-end. By default it
# will use the local swift installation. Set these when using a remote swift
# as a glance backend.
#
# NOTE: Ensure that the auth version matches your authentication endpoint.
#
# NOTE: If the password for glance_swift_store_key contains a dollar sign ($),
# it must be escaped with an additional dollar sign ($$), not a backslash. For
# example, a password of "super$ecure" would need to be entered as
# "super$$ecure" below. See Launchpad Bug #1259729 for more details.
#
#glance_swift_store_auth_version: 3
#glance_swift_store_auth_address: "https://some.auth.url.com"
#glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME"
#glance_swift_store_key: "OPENSTACK_USER_PASSWORD"
#glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
#glance_swift_store_region: "NAME_OF_REGION"

## Cinder
# Ceph client user for cinder to connect to the ceph cluster
#cinder_ceph_client: cinder

## Ceph
# Enable these if you use ceph rbd for at least one component (glance, cinder, nova)
#ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror

#ceph_stable_release: hammer
# Ceph Authentication - by default cephx is true
#cephx: true
# Ceph Monitors
# A list of the IP addresses for your Ceph monitors
#ceph_mons:
# - 10.16.5.40
# - 10.16.5.41
# - 10.16.5.42
# Custom Ceph Configuration File (ceph.conf)
# By default, your deployment host will connect to one of the mons defined above to
# obtain a copy of your cluster's ceph.conf. If you prefer, uncomment ceph_conf_file
# and customise to avoid ceph.conf being copied from a mon.
#ceph_conf_file: |
# [global]
# fsid = 00000000-1111-2222-3333-444444444444
# mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
# mon_host = 10.16.5.40,10.16.5.41,10.16.5.42
# # optionally, you can use this construct to avoid defining this list twice:
# # mon_host = {{ ceph_mons|join(',') }}
# auth_cluster_required = cephx
# auth_service_required = cephx

## SSL Settings
# Adjust these settings to change how SSL connectivity is configured for
# various services. For more information, see the openstack-ansible
# documentation section titled "Securing services with SSL certificates".
#
## SSL: Keystone
# These do not need to be configured unless you're creating certificates for
# services running behind Apache (currently, Horizon and Keystone).
ssl_protocol: "ALL -SSLv2 -SSLv3"
# Cipher suite string from https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
# To override for Keystone only:
# - keystone_ssl_protocol
# - keystone_ssl_cipher_suite
# To override for Horizon only:
# - horizon_ssl_protocol
# - horizon_ssl_cipher_suite
#
## SSL: RabbitMQ
# Set these variables if you prefer to use existing SSL certificates, keys and
# CA certificates with the RabbitMQ SSL/TLS Listener
#
#rabbitmq_user_ssl_cert: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_key: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_ca_cert: <path to cert on ansible deployment host>
#
# By default, openstack-ansible configures all OpenStack services to talk to
# RabbitMQ over encrypted connections on port 5671. To opt-out of this default,
# set the rabbitmq_use_ssl variable to 'false'. The default setting of 'true'
# is highly recommended for securing the contents of RabbitMQ messages.

#rabbitmq_use_ssl: true

## Additional pinning generator that will allow for more packages to be pinned as you see fit.
## All pins allow for package and versions to be defined. Be careful using this as versions
## are always subject to change and updates regarding security will become your problem from this
## point on. Pinning can be done based on a package version, release, or origin. Use "*" in the
## package name to indicate that you want to pin all package to a particular constraint.
# apt_pinned_packages:
# - { package: "lxc", version: "1.0.7-0ubuntu0.1" }
# - { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.9" }
# - { package: "rabbitmq-server", origin: "www.rabbitmq.com" }
# - { package: "*", release: "MariaDB" }

## Environment variable settings


# This allows users to specify the additional environment variables to be set
# which is useful in setting where you working behind a proxy. If working behind
# a proxy It's important to always specify the scheme as "http://". This is what
# the underlying python libraries will handle best. This proxy information will be
# placed both on the hosts and inside the containers.

## Example environment variable setup:


# proxy_env_url: http://username:pa$$w0rd@10.10.10.9:9000/
# no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
# global_environment_variables:
# HTTP_PROXY: "{{ proxy_env_url }}"
# HTTPS_PROXY: "{{ proxy_env_url }}"
# NO_PROXY: "{{ no_proxy_env }}"

## Multiple region support in Horizon:


# For multiple regions uncomment this configuration, and
# add the extra endpoints below the first list item.
# horizon_available_regions:
# - { url: "{{ keystone_service_internalurl }}", name: "{{ keystone_service_region }}" }
# - { url: "http://cluster1.example.com:5000/v2.0", name: "RegionTwo" }

## SSH connection wait time


# If an increased delay for the ssh connection check is desired,
# uncomment this variable and set it appropriately.
#ssh_delay: 5

## HAProxy
# Uncomment this to disable keepalived installation (cf. documentation)
#haproxy_use_keepalived: False
#
# HAProxy Keepalived configuration (cf. documentation)
haproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32"
haproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32"

#haproxy_keepalived_external_interface:
#haproxy_keepalived_internal_interface:
# Defines the default VRRP id used for keepalived with haproxy.
# Overwrite it to your value to make sure you don't overlap
# with existing VRRP ids on your network. Default is 10 for the external and 11 for the
# internal VRRPs.
#haproxy_keepalived_external_virtual_router_id:
#haproxy_keepalived_internal_virtual_router_id:
# Defines the VRRP master/backup priority. Defaults respectively to 100 and 20
#haproxy_keepalived_priority_master:
#haproxy_keepalived_priority_backup:
# All the previous variables are used in a var file, fed to the keepalived role.
# To use another file to feed the role, override the following var:
#haproxy_keepalived_vars_file: 'vars/configs/keepalived_haproxy.yml'

## Host security hardening


# The openstack-ansible-security role provides security hardening for hosts
# by applying security configurations from the STIG. Hardening is disabled by
# default, but it can be applied to all hosts by adjusting the following
# variable to 'true'.
#
# Docs: http://docs.openstack.org/developer/openstack-ansible-security/
apply_security_hardening: false
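
Any of the commented variables above can be overridden simply by defining them in this file
before running the playbooks. A minimal sketch, assuming a deployment that pins keepalived to
specific interfaces and VRRP IDs (the interface names and ID values here are illustrative only,
not defaults):

haproxy_keepalived_external_interface: bond1
haproxy_keepalived_internal_interface: br-mgmt
haproxy_keepalived_external_virtual_router_id: 20
haproxy_keepalived_internal_virtual_router_id: 21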

A.4. swift.yml example configuration file


---
# Copyright 2015, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Overview
# ========
#
# This file contains the configuration for the OpenStack Ansible Deployment
# (OSA) Object Storage (swift) service. Only enable these options for
# deployments that contain the Object Storage service. For more information on
# these options, see the documentation at
#
# http://docs.openstack.org/developer/swift/index.html
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/conf.d/swift.yml or create a new
# file containing only necessary options for your environment
# before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
# http://pyyaml.org/wiki/PyYAMLDocumentation
#
# Configuration reference
# =======================
#
# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, the ring structure. This level also provides a mechanism to
# override other options defined in the playbook structure.
#
# Level: swift (required)
# Contains options for swift.
#
# Option: storage_network (required, string)
# Name of the storage network bridge on target hosts. Typically
# 'br-storage'.
#
# Option: repl_network (optional, string)
# Name of the replication network bridge on target hosts. Typically
# 'br-repl'. Defaults to the value of the 'storage_network' option.
#
# Option: part_power (required, integer)
# Partition power. Applies to all rings unless overridden at the 'account'
# or 'container' levels or within a policy in the 'storage_policies' level.
# Immutable without rebuilding the rings.
#
# Option: repl_number (optional, integer)
# Number of replicas for each partition. Applies to all rings unless
# overridden at the 'account' or 'container' levels or within a policy
# in the 'storage_policies' level. Defaults to 3.
#
# Option: min_part_hours (optional, integer)
# Minimum time in hours between multiple moves of the same partition.
# Applies to all rings unless overridden at the 'account' or 'container'
# levels or within a policy in the 'storage_policies' level. Defaults
# to 1.
#
# Option: region (optional, integer)
# Region of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 1.
#
# Option: zone (optional, integer)
# Zone of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 0.
#
# Option: weight (optional, integer)
# Weight of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 100.
#
# Option: reclaim_age (optional, integer, default 604800)
# The amount of time in seconds before items, such as tombstones, are
# reclaimed. The default is 604800 (7 days).
#

# Option: statsd_host (optional, string)
# Swift supports statsd metrics, this option sets the statsd host that will
# receive statsd metrics. Specifying this here will apply to all hosts in
# a cluster. It can be overridden or specified deeper in the structure if you
# want to only catch statsd metrics on certain hosts.
#
# Option: statsd_port (optional, integer, default 8125)
# Statsd port, requires statsd_host set.
#
# Option: statsd_metric_prefix (optional, string, default ansible_host)
# Specify a prefix that will be prepended to all metrics; this should be specified
# deeper in the configuration so different host metrics can be separated.
#
# The following statsd related options are a little more complicated and are
# used to tune how many samples are sent to statsd. If you need to tweak these
# settings then first read: http://docs.openstack.org/developer/swift/admin_guide.html
#
# Option: statsd_default_sample_rate (optional, float, default 1.0)
# Option: statsd_sample_rate_factor (optional, float, default 1.0)
#
# Example:
#
# Define a typical deployment:
#
# - Storage network that uses the 'br-storage' bridge. Proxy containers
# typically use the 'storage' IP address pool. However, storage hosts
# use bare metal and require manual configuration of the 'br-storage'
# bridge on each host.
# - Replication network that uses the 'br-repl' bridge. Only storage hosts
# contain this network. Storage hosts use bare metal and require manual
# configuration of the bridge on each host.
# - Ring configuration with partition power of 8, three replicas of each
# file, and minimum 1 hour between migrations of the same partition. All
# rings use region 1 and zone 0. All disks include a weight of 100.
#
# swift:
#   storage_network: 'br-storage'
#   replication_network: 'br-repl'
#   part_power: 8
#   repl_number: 3
#   min_part_hours: 1
#   region: 1
#   zone: 0
#   weight: 100
#   statsd_host: statsd.example.lan
#   statsd_port: 8125
#
# Note: Most typical deployments override the 'zone' option in the
# 'swift_vars' level to use a unique zone for each storage host.
#
# Option: mount_point (required, string)
# Top-level directory for mount points of disks. Defaults to /mnt.
# Applies to all hosts unless overridden deeper in the structure.
#

# Level: drives (required)
# Contains the mount points of disks.
# Applies to all hosts unless overridden deeper in the structure.
#
# Option: name (required, string)
# Mount point of a disk. Use one entry for each disk.
# Applies to all hosts unless overridden deeper in the structure.
#
# Example:
#
# Mount disks 'sdc', 'sdd', 'sde', and 'sdf' to the '/mnt' directory on all
# storage hosts:
#
# mount_point: /mnt
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
#
# Level: account (optional)
# Contains 'min_part_hours' and 'repl_number' options specific to the
# account ring.
#
# Level: container (optional)
# Contains 'min_part_hours' and 'repl_number' options specific to the
# container ring.
#
# Level: storage_policies (required)
# Contains storage policies. Minimum one policy. One policy must include
# the 'index: 0' and 'default: True' options.
#
# Level: policy (required)
# Contains a storage policy. Define for each policy.
#
# Option: name (required, string)
# Policy name.
#
# Option: index (required, integer)
# Policy index. One policy must include this option with a '0'
# value.
#
# Option: policy_type (optional, string)
# Defines policy as replication or erasure coding. Accepts 'replication'
# or 'erasure_coding' values. Defaults to the 'replication' value if omitted.
#
# Option: ec_type (conditionally required, string)
# Defines the erasure coding algorithm. Required for erasure coding
# policies.
#
# Option: ec_num_data_fragments (conditionally required, integer)
# Defines the number of object data fragments. Required for erasure
# coding policies.
#
# Option: ec_num_parity_fragments (conditionally required, integer)
# Defines the number of object parity fragments. Required for erasure
# coding policies.
#

# Option: ec_object_segment_size (conditionally required, integer)
# Defines the size of object segments in bytes. Swift sends incoming
# objects to an erasure coding policy in segments of this size.
# Required for erasure coding policies.
#
# Option: default (conditionally required, boolean)
# Defines the default policy. One policy must include this option
# with a 'True' value.
#
# Option: deprecated (optional, boolean)
# Defines a deprecated policy.
#
# Note: The following levels and options override any values higher
# in the structure and generally apply to advanced deployments.
#
# Option: repl_number (optional, integer)
# Number of replicas of each partition in this policy.
#
# Option: min_part_hours (optional, integer)
# Minimum time in hours between multiple moves of the same partition
# in this policy.
#
# Example:
#
# Define three storage policies: A default 'gold' policy, a deprecated
# 'silver' policy, and an erasure coding 'ec10-4' policy.
#
# storage_policies:
#   - policy:
#       name: gold
#       index: 0
#       default: True
#   - policy:
#       name: silver
#       index: 1
#       repl_number: 3
#       deprecated: True
#   - policy:
#       name: ec10-4
#       index: 2
#       policy_type: erasure_coding
#       ec_type: jerasure_rs_vand
#       ec_num_data_fragments: 10
#       ec_num_parity_fragments: 4
#       ec_object_segment_size: 1048576
#
# --------
#
# Level: swift_proxy-hosts (required)
# List of target hosts on which to deploy the swift proxy service. Recommend
# three minimum target hosts for these services. Typically contains the same
# target hosts as the 'shared-infra_hosts' level in complete OpenStack
# deployments.
#
# Level: <value> (optional, string)
# Name of a proxy host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.

#
# Level: container_vars (optional)
# Contains options for this target host.
#
# Level: swift_proxy_vars (optional)
# Contains swift proxy options for this target host. Typical deployments
# use this level to define read/write affinity settings for proxy hosts.
#
# Option: read_affinity (optional, string)
# Specify which region/zones the proxy server should prefer for reads
# from the account, container and object services.
# E.g. read_affinity: "r1=100" this would prefer region 1
# read_affinity: "r1z1=100, r1=200" this would prefer region 1 zone 1
# if that is unavailable region 1, otherwise any available region/zone.
# Lower number is higher priority. When this option is specified the
# sorting_method is set to 'affinity' automatically.
#
# Option: write_affinity (optional, string)
# Specify which region to prefer when object PUT requests are made.
# E.g. write_affinity: "r1" - favours region 1 for object PUTs
#
# Option: write_affinity_node_count (optional, string)
# Specify how many copies to prioritise in specified region on
# handoff nodes for Object PUT requests.
# Requires "write_affinity" to be set in order to be useful.
# This is a short term way to ensure replication happens locally,
# Swift's eventual consistency will ensure proper distribution over
# time.
# e.g. write_affinity_node_count: "2 * replicas" - this would try to
# store Object PUT replicas on up to 6 disks in region 1 assuming
# replicas is 3, and write_affinity = r1
#
# Option: statsd_host (optional, string)
# Swift supports statsd metrics, this option sets the statsd host that will
# receive statsd metrics.
#
# Option: statsd_port (optional, integer, default 8125)
# Statsd port, requires statsd_host set.
#
# Option: statsd_metric_prefix (optional, string, default ansible_host)
# Specify a prefix that will be prepended to all metrics on this host.
#
# The following statsd related options are a little more complicated and are
# used to tune how many samples are sent to statsd. If you need to tweak these
# settings then first read: http://docs.openstack.org/developer/swift/admin_guide.html
#
# Option: statsd_default_sample_rate (optional, float, default 1.0)
# Option: statsd_sample_rate_factor (optional, float, default 1.0)
#
# Example:
#
# Define three swift proxy hosts:
#
# swift_proxy-hosts:
#
#   infra1:
#     ip: 172.29.236.101
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r1=100"
#         write_affinity: "r1"
#         write_affinity_node_count: "2 * replicas"
#   infra2:
#     ip: 172.29.236.102
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r2=100"
#         write_affinity: "r2"
#         write_affinity_node_count: "2 * replicas"
#   infra3:
#     ip: 172.29.236.103
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r3=100"
#         write_affinity: "r3"
#         write_affinity_node_count: "2 * replicas"
#
# --------
#
# Level: swift_hosts (required)
# List of target hosts on which to deploy the swift storage services.
# Recommend three minimum target hosts for these services.
#
# Level: <value> (required, string)
# Name of a storage host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Note: The following levels and options override any values higher
# in the structure and generally apply to advanced deployments.
#
# Level: container_vars (optional)
# Contains options for this target host.
#
# Level: swift_vars (optional)
# Contains swift options for this target host. Typical deployments
# use this level to define a unique zone for each storage host.
#
# Option: storage_ip (optional, string)
# IP address to use for accessing the account, container, and object
# services if different than the IP address of the storage network
# bridge on the target host. Also requires manual configuration of
# the host.
#
# Option: repl_ip (optional, string)
# IP address to use for replication services if different than the IP
# address of the replication network bridge on the target host. Also
# requires manual configuration of the host.
#
# Option: region (optional, integer)
# Region of all disks.
#

# Option: zone (optional, integer)
# Zone of all disks.
#
# Option: weight (optional, integer)
# Weight of all disks.
#
# Option: statsd_host (optional, string)
# Swift supports statsd metrics, this option sets the statsd host that will
# receive statsd metrics.
#
# Option: statsd_port (optional, integer, default 8125)
# Statsd port, requires statsd_host set.
#
# Option: statsd_metric_prefix (optional, string, default ansible_host)
# Specify a prefix that will be prepended to all metrics on this host.
#
# The following statsd related options are a little more complicated and are
# used to tune how many samples are sent to statsd. If you need to tweak these
# settings then first read: http://docs.openstack.org/developer/swift/admin_guide.html
#
# Option: statsd_default_sample_rate (optional, float, default 1.0)
# Option: statsd_sample_rate_factor (optional, float, default 1.0)
#
# Level: groups (optional)
# List of one or more Ansible groups that apply to this host.
#
# Example:
#
# Deploy the account ring, container ring, and 'silver' policy.
#
# groups:
# - account
# - container
# - silver
#
# Level: drives (optional)
# Contains the mount points of disks specific to this host.
#
# Level or option: name (optional, string)
# Mount point of a disk specific to this host. Use one entry for
# each disk. Functions as a level for disks that contain additional
# options.
#
# Option: storage_ip (optional, string)
# IP address to use for accessing the account, container, and object
# services of a disk if different than the IP address of the storage
# network bridge on the target host. Also requires manual
# configuration of the host.
#
# Option: repl_ip (optional, string)
# IP address to use for replication services of a disk if different
# than the IP address of the replication network bridge on the target
# host. Also requires manual configuration of the host.
#

# Option: region (optional, integer)
# Region of a disk.
#
# Option: zone (optional, integer)
# Zone of a disk.
#
# Option: weight (optional, integer)
# Weight of a disk.
#
# Level: groups (optional)
# List of one or more Ansible groups that apply to this disk.
#
# Example:
#
# Define four storage hosts. The first three hosts contain typical options
# and the last host contains advanced options.
#
# swift_hosts:
#   swift-node1:
#     ip: 172.29.236.151
#     container_vars:
#       swift_vars:
#         zone: 0
#   swift-node2:
#     ip: 172.29.236.152
#     container_vars:
#       swift_vars:
#         zone: 1
#   swift-node3:
#     ip: 172.29.236.153
#     container_vars:
#       swift_vars:
#         zone: 2
#   swift-node4:
#     ip: 172.29.236.154
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.11
#         repl_ip: 203.0.113.11
#         region: 2
#         zone: 0
#         weight: 200
#         statsd_host: statsd2.example.net
#         statsd_metric_prefix: swift-node4
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdc
#             storage_ip: 198.51.100.21
#             repl_ip: 203.0.113.21
#             weight: 75
#             groups:
#               - gold
#           - name: sdd
#           - name: sde
#           - name: sdf


A.5. extra_container.yml configuration file


---
# Copyright 2014, Rackspace US, Inc.
# #
# # Licensed under the Apache License, Version 2.0 (the "License");
# # you may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.

component_skel:
  example_api:
    belongs_to:
      # This is a meta group of a given component type.
      - example_all

container_skel:
  example_api_container:
    belongs_to:
      # This is a group of containers mapped to a physical host.
      - example-infra_containers
    contains:
      # This maps back to an item in the component_skel.
      - example_api
    properties:
      # These are arbitrary key value pairs.
      service_name: example_service
      # This is the image that the lxc container will be built from.
      container_release: trusty

physical_skel:
  # This maps back to items in the container_skel.
  example-infra_containers:
    belongs_to:
      - all_containers
  # This is a required pair for the container physical entry.
  example-infra_hosts:
    belongs_to:
      - hosts
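
The skeleton above only defines groups; it takes effect once matching hosts are declared in the
deployment configuration. A minimal, hypothetical sketch of a conf.d entry (for example,
/etc/openstack_deploy/conf.d/example.yml) that would place the example service on an
infrastructure host; the host name and IP address are illustrative only:

example-infra_hosts:
  infra1:
    ip: 172.29.236.101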

A.6. Environment configuration files


The openstack-ansible default environment is managed by the dynamic inventory, which
sources values from /etc/openstack_deploy/openstack_environment.yml and from the files in
the /etc/openstack_deploy/env.d directory and applies them to the deployed environment.

Users with specialized requirements can edit the host/container group mappings and other
settings for different services in /etc/openstack_deploy/env.d. For example, to deploy
Block Storage on bare metal instead of in a container, the is_metal flag in
/etc/openstack_deploy/env.d/cinder.yml is set to true.
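
For illustration only (see the note that follows), the relevant fragment of
/etc/openstack_deploy/env.d/cinder.yml for that example looks roughly like the following;
keys not related to the is_metal flag are omitted:

container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true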

Note
RPCO users should not change the env.d files unless instructed to do so by
Rackspace support.

11. Document history and additional information

11.1. Document change history
This version replaces and obsoletes all previous versions. The most recent versions are listed
in the following table:

Revision Date    Release information
2016-02-18       • Rackspace Private Cloud r11.1.4 release
2016-01-29       • Rackspace Private Cloud r11.1.3 release
2016-01-28       • Rackspace Private Cloud r11.1.2 release
2016-01-14       • Rackspace Private Cloud r11.1.1 release
2015-12-10       • Rackspace Private Cloud r11.1.0 release

11.2. Additional resources


These additional resources help you learn more about Rackspace Private Cloud Powered By
OpenStack and OpenStack.

• Rackspace Private Cloud v11 Administrator Guide

• Rackspace Private Cloud v11 FAQ

• Rackspace Private Cloud v11 Installation Guide

• Rackspace Private Cloud v11 Object Storage Deployment

• Rackspace Private Cloud v11 Operations Guide

• Rackspace Private Cloud v11 Release Notes

• Rackspace Private Cloud v11 Upgrade Guide

• Rackspace Private Cloud Knowledge Center


• OpenStack Documentation
• OpenStack Developer Documentation
• OpenStack API Quick Start
• OpenStack Block Storage (cinder) Developer Documentation
• OpenStack Compute (nova) Developer Documentation
• OpenStack Compute API v2 Developer Guide
• OpenStack Dashboard (horizon) Developer Documentation
• OpenStack Identity (keystone) Developer Documentation
• OpenStack Image service (glance) Developer Documentation
• OpenStack Object Storage (swift) Developer Documentation
