RPCO Installation Guide February 19, 2016 RPCO v11
This documentation is intended for Rackspace customers who are interested in installing an OpenStack-powered private cloud according to the recommendations of Rackspace.
Table of Contents
1. Preface
1.1. About Rackspace Private Cloud Powered By OpenStack
1.2. RPCO configuration
1.3. Rackspace Private Cloud support
2. Overview
2.1. Ansible
2.2. Linux Containers (LXC)
2.3. Reference architecture
2.4. Host layout
2.5. Host networking
2.6. OpenStack Networking
2.7. Installation requirements
2.8. Installation workflow
3. Deployment host
3.1. Installing the operating system
3.2. Configuring the operating system
3.3. Installing source and dependencies
3.4. Configuring Secure Shell (SSH) keys
4. Target hosts
4.1. Installing the operating system
4.2. Configuring Secure Shell (SSH) keys
4.3. Configuring the operating system
4.4. Configuring LVM
4.5. Configuring the network
4.5.1. Reference architecture
4.5.2. Configuring the network on a target host
5. Deployment configuration
5.1. Prerequisites
5.2. Configuring target host networking
5.3. Configuring target hosts
5.4. Configuring service credentials
5.5. Configuring proxy environment variables (optional)
5.6. Configuring the hypervisor (optional)
5.7. Configuring the Image service (optional)
5.8. Configuring the Block Storage service (optional)
5.8.1. Configuring the Block Storage service to use LVM
5.8.2. Configuring the Block Storage service to use NetApp
5.8.3. Configuring the Block Storage service with NFS protocols
5.8.4. Configuring Block Storage backups to Object Storage
5.8.5. Configuring Block Storage backups to external Cloud Files
5.8.6. Creating Block Storage availability zones
5.9. Configuring the Object Storage service (optional)
5.9.1. Enabling the trusty-backports repository
5.9.2. Configure and mount storage devices
5.9.3. Configure an Object Storage deployment
5.9.4. Deploying Object Storage on an existing Rackspace Private Cloud Powered By OpenStack v11 Software
5.9.5. Object Storage monitoring
List of Figures
2.1. Host Layout Overview
2.2. Network components
2.3. Container network architecture
2.4. Compute host network architecture
2.5. Block Storage host network architecture
2.6. Networking agents containers
2.7. Compute hosts
2.8. Installation workflow
3.1. Installation work flow
4.1. Installation workflow
4.2. Infrastructure services target hosts
4.3. Compute target hosts
4.4. Block Storage target hosts
5.1. Installation work flow
6.1. Ceph service layout
6.2. Ceph networking configuration
7.1. Installation work flow
8.1. Installation workflow
9.1. Installation work flow
List of Tables
5.1. Mounted devices
1. Preface
Rackspace Private Cloud Powered By OpenStack (RPCO) uses openstack-ansible to quickly install an OpenStack private cloud, configured as recommended by Rackspace OpenStack specialists. RPCO installs the following OpenStack services:
• Compute (nova)
• Networking (neutron)
• Dashboard (horizon)
• Identity (keystone)
• Orchestration (heat)
RPCO also provides the following infrastructure, monitoring, and logging services to support OpenStack:
• RabbitMQ
• Memcached
• Rsyslog
• Logstash
You can also visit the RPCO community forums, which are open to all Rackspace Private
Cloud users. They are moderated and maintained by Rackspace personnel and OpenStack
specialists. See https://community.rackspace.com/products/f/45
For more information about RPCO, visit the Rackspace Private Cloud pages:
• Support
• Resources
For the very latest information about RPCO, refer to the Rackspace Private Cloud v11 Release Notes.
Rackspace® and Fanatical Support® are service marks of Rackspace US, Inc. and are registered in the United States and other countries. Rackspace does not claim trademark rights in abbreviations of its service or product names, unless noted otherwise. All other trademarks, service marks, images, products and brands remain the sole property of their respective holders and do not imply endorsement or sponsorship.
2. Overview
Rackspace Private Cloud Powered By OpenStack (RPCO) uses a combination of Ansible and Linux Containers (LXC) to install and manage OpenStack Kilo. This chapter describes these components, the reference architecture and host layout, host networking, and the installation requirements and workflow.
2.1. Ansible
RPCO is based on openstack-ansible, which uses a combination of Ansible and Linux Containers (LXC) to install and manage OpenStack Kilo. Ansible provides an automation platform to simplify system and application deployment. Ansible manages systems using Secure Shell (SSH) instead of unique protocols that require remote daemons or agents.
Ansible uses playbooks written in the YAML language for orchestration. For more information, see Ansible - Intro to Playbooks.
In this guide, Rackspace refers to the host running Ansible playbooks as the deployment
host and the hosts on which Ansible installs RPCO as the target hosts.
A recommended minimal layout for installing RPCO involves five target hosts in total: three
infrastructure hosts, one compute host, and one logging host. RPCO also supports one or
more optional storage hosts. All hosts require at least four 10 Gbps network interfaces. In
Rackspace datacenters, hosts can use an additional 1 Gbps network interface for service
network access. More information on setting up target hosts can be found in Section 2.4,
“Host layout” [5].
For more information on physical, logical, and virtual network interfaces within hosts see
Section 2.5, “Host networking” [7].
2.2. Linux Containers (LXC)
The Linux Containers (LXC) project implements operating system level virtualization on Linux using kernel namespaces and includes the following features:
• Resource isolation including CPU, memory, block I/O, and network using cgroups.
• Selective connectivity to physical and virtual network devices on the underlying physical host.
• Built on a foundation of stable Linux technologies with an active development and support community.
Useful commands:
• List containers and summary information such as operational state and network configuration:
# lxc-ls --fancy
• Show container details including operational state, resource utilization, and veth pairs:
# lxc-info --name CONTAINER_NAME
• Start a container:
# lxc-start --name CONTAINER_NAME
• Attach to a container:
# lxc-attach --name CONTAINER_NAME
• Stop a container:
# lxc-stop --name CONTAINER_NAME
2.3. Reference architecture
The RPCO reference architecture is a recommended set of software and infrastructure components designed to provide the scalability, stability, and high availability you need to support enterprise production workloads. Additionally, every RPCO customer has access to our team of architecture advisors who provide workload-specific guidance for planning, designing, and architecting a private cloud environment to help meet your unique needs.
RPCO v11 is composed of OpenStack services, automation, and tooling. Services are grouped into logical layers, each providing key aspects of the overall solution. The following are the layers and their contents:
• Ansible
• Capacity planning
• Heat-API
• Heat-API-CFN
• Heat-Engine
• Heat templates
• Compute (nova)
• Identity (keystone)
• Networking (neutron)
• Ansible
• LXC
• OpenStack source
• Infrastructure database
• MariaDB
• Galera
• RabbitMQ
• RabbitMQ clustering
2.4. Host layout
The recommended minimal layout contains five target hosts: three infrastructure hosts, one compute host, and one logging host. To use the optional Block Storage (cinder) service, a sixth host is required. Block Storage hosts require an LVM volume group named cinder-volumes. See Section 2.7, “Installation requirements” [15] and Section 4.4, “Configuring LVM” [20] for more information.
The hosts are called target hosts because Ansible deploys the RPCO environment within these hosts. The RPCO environment also requires a deployment host from which Ansible orchestrates the deployment process. One of the target hosts can function as the deployment host.
At least one hardware load balancer must be included to manage the traffic among the target hosts.
• Infrastructure:
• Galera
• RabbitMQ
• Memcached
• Logging
• OpenStack:
• Identity (keystone)
• Networking (neutron)
• Orchestration (heat)
• Dashboard (horizon)
• Rsyslog
• Logstash
• Compute virtualization
• Logging
2.5. Host networking
Bridges provide layer 2 connectivity (similar to switches) among physical, logical, and virtual network interfaces within a host. After creating a bridge, the network interfaces are virtually "plugged in" to it.
RPCO uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers on the host.
Each container has a namespace that connects to the host namespace with one or more veth pairs. Unless specified, the system generates random names for veth pairs.
The relationship between physical interfaces, logical interfaces, bridges, and virtual interfaces within containers is shown in Figure 2.2, “Network components” [8].
RPCO uses the following bridges:
• LXC internal lxcbr0:
  • Mandatory (automatic).
  • Automatically created and managed by LXC. Does not directly attach to any physical or logical interfaces on the host because iptables handle connectivity. Attaches to eth0 in each container.
• Container management br-mgmt:
  • Mandatory.
• Storage br-storage:
  • Optional.
  • Provides segregated access to block storage devices between Compute and Block Storage hosts.
• OpenStack Networking tunnel/overlay br-vxlan:
  • Mandatory.
• OpenStack Networking provider br-vlan:
  • Mandatory.
  • Manually created and attaches to a physical or logical interface, typically bond1. Also attaches to eth11 in each associated container. Does not contain an IP address because it only handles layer 2 connectivity.
The RPCO architecture uses bare metal rather than a container for compute hosts. Figure 2.4, “Compute host network architecture” [11] provides a visual representation of the network architecture on compute hosts.
As of v11, the RPCO architecture uses bare metal rather than a container for Block Storage
hosts. The Block Storage service lacks interaction with the OpenStack Networking service
and therefore only requires one pair of network interfaces in a bond for the management
and storage networks. However, implementing the same network interfaces on all hosts
provides greater flexibility for future growth of the deployment. Figure 2.5, “Block Storage
host network architecture” [12] provides a visual representation of the network architecture on Block Storage hosts. For more information on how this change impacts upgrades from earlier releases, see the upgrade content in the Operations Guide.
The Compute service uses the KVM hypervisor. Figure 2.7, “Compute hosts” [14] shows
the interaction of instances, Linux Bridge agent, network components, and connection to a
physical network.
2.7. Installation requirements
Deployment host:
• Required items:
  • Ubuntu 14.04 LTS (Trusty Tahr) or compatible operating system that meets all other requirements.
Target hosts:
• Required items:
  • Ubuntu Server 14.04 LTS (Trusty Tahr) 64-bit operating system, with Linux kernel version 3.13.0-34-generic or later.
• Optional items:
• For hosts providing Block Storage (cinder) service volumes, a Logical Volume Manager
(LVM) volume group named cinder-volumes.
• LVM volume group named lxc to store container file systems. If the lxc volume group
does not exist, containers will be automatically installed in the root file system of the
host.
Note
By default, Ansible creates a 5 GB logical volume. Plan storage accordingly to support the quantity of containers on each target host.
3. Deployment host
Figure 3.1. Installation work flow
The RPCO installation process requires one deployment host. The deployment host contains
Ansible and orchestrates the RPCO installation on the target hosts. One of the target hosts,
preferably one of the infrastructure variants, can be used as the deployment host. To use a
deployment host as a target host, follow the steps in Chapter 4, “Target hosts” [19] on
the deployment host. This guide assumes separate deployment and target hosts.
1. Install additional software packages if not already installed during operating system installation:
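The package list for this step is not preserved in this copy. A typical set for an Ubuntu 14.04 deployment host, following the upstream openstack-ansible documentation, would be roughly:
# apt-get update
# apt-get install aptitude build-essential git ntp ntpdate \
  openssh-server python-dev sudo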
4. Target hosts
Figure 4.1. Installation workflow
The RPCO installation process requires at least five target hosts that will contain the OpenStack environment and supporting infrastructure. On each target host, perform the following tasks:
Note
On target hosts without local (console) access, Rackspace recommends adding
the Secure Shell (SSH) server packages to the installation.
4.2. Configuring Secure Shell (SSH) keys
1. Copy the contents of the public key file on the deployment host to the /root/.ssh/authorized_keys file on each target host.
2. Test public key authentication from the deployment host to each target host. SSH should provide a shell without asking for a password.
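For example, assuming password authentication is still available on the target host (hostname illustrative), the key can be copied and tested as follows:
# ssh-copy-id root@TARGET_HOST
# ssh root@TARGET_HOST hostname   # should print the hostname without prompting for a password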
2. Install additional software packages if not already installed during operating system installation (a sample package set is sketched after the following note):
Note
During the installation of RPCO, unattended upgrades are disabled. For
long-running systems, periodically check for and apply security updates.
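The original package list is not preserved in this copy. The upstream openstack-ansible documentation for Ubuntu 14.04 target hosts uses a set similar to the following:
# apt-get update
# apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
  lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan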
3. Add the appropriate kernel modules to the /etc/modules file to enable VLAN and
bond interfaces:
# echo 'bonding' >> /etc/modules
# echo '8021q' >> /etc/modules
2. Optionally, create an LVM volume group named lxc for container file systems. If the lxc volume group does not exist, containers will be automatically installed into the file system under /var/lib/lxc by default.
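For example, assuming an unused disk at /dev/sdb (device name illustrative), the lxc volume group could be created as follows:
# pvcreate /dev/sdb
# vgcreate lxc /dev/sdb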
The reference architecture described in this section uses example interface names, networks, and IP addresses. Modify these values as needed for the particular environment.
The reference architecture for target hosts contains the following components:
• A bond0 interface using two physical interfaces. For redundancy purposes, avoid using more than one port on network interface cards containing multiple ports. The example configuration uses eth0 and eth2. Actual interface names can vary depending on hardware and drivers. Configure the bond0 interface with a static IP address on the host management network.
• A bond1 interface using two physical interfaces. For redundancy purposes, avoid using more than one port on network interface cards containing multiple ports. The example configuration uses eth1 and eth3. Actual interface names can vary depending on hardware and drivers. Configure the bond1 interface without an IP address.
Note
Recommended but not required for Block Storage target hosts.
• The OpenStack Networking VXLAN subinterface on the bond1 interface and br-vxlan
bridge with a static IP address.
Note
Recommended but not required for Block Storage target hosts.
• The OpenStack Networking VLAN br-vlan bridge on the bond1 interface without an IP
address.
Note
Recommended but not required for Block Storage target hosts.
The reference architecture for target hosts can also contain the following optional components:
• Storage network subinterface on the bond0 interface and br-storage bridge with a static IP address.
Note
For simplicity, the reference architecture assumes that all target hosts contain
the same network interfaces.
1. Physical interfaces:
# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0
# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1
2. Bonding interfaces:
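The bonding stanzas themselves are not preserved in this copy. A minimal sketch of what they might look like in /etc/network/interfaces, assuming active-backup bonds and an illustrative host management address on bond0, is:
# Bond interface 0 (physical interfaces 1 and 3); the address is illustrative
auto bond0
iface bond0 inet static
    bond-slaves none
    bond-mode active-backup
    bond-miimon 100
    address 10.240.0.11
    netmask 255.255.252.0
    gateway 10.240.0.1
    dns-nameservers 8.8.8.8
# Bond interface 1 (physical interfaces 2 and 4); no IP address
auto bond1
iface bond1 inet manual
    bond-slaves none
    bond-mode active-backup
    bond-miimon 100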
4. Bridge devices:
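The bridge stanzas are also missing from this copy. A sketch of the br-mgmt and br-vlan devices, assuming a bond0.10 VLAN subinterface has already been defined for container management (VLAN 10 per the VLAN list later in this section) and using an illustrative management address, could be:
# Container management bridge (attaches to the bond0.10 subinterface; address illustrative)
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.10
    address 172.29.236.11
    netmask 255.255.252.0
# OpenStack Networking VLAN bridge (attaches to bond1; no IP address)
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond1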
Note
For simplicity, this example assumes that all target hosts contain the same network interfaces.
• VLANs:
• Container management: 10
• Tunnels: 30
• Storage: 20
Networks:
• Tunnel: 172.29.240.0/22
• Storage: 172.29.244.0/22
Addresses:
• Tunnel: 172.29.240.11
• Storage: 172.29.244.11
# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0
# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1
#- network:
# group_binds:
# - glance_api
# - nova_compute
# - neutron_linuxbridge_agent
# type: "raw"
# container_bridge: "br-snet"
# container_interface: "eth3"
# ip_from_q: "snet"
5. Deployment configuration
Figure 5.1. Installation work flow
Ansible references a handful of files containing mandatory and optional configuration directives. These files must be modified to define the target environment before running the Ansible playbooks. Perform the following tasks:
• Configure virtual and physical network relationships for OpenStack Networking (neutron)
• (Optional) Configure Block Storage (cinder) to use the NetApp back end
5.1. Prerequisites
1. Recursively copy the openstack-ansible-deployment files:
cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/
cd /etc/openstack_deploy
cp openstack_user_config.yml.example openstack_user_config.yml
5.2. Configuring target host networking
1. Configure the IP address ranges associated with each network in the cidr_networks section:
cidr_networks:
# Management (same range as br-mgmt on the target hosts)
management: CONTAINER_MGMT_CIDR
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: TUNNEL_CIDR
#Storage (same range as br-storage on the target hosts)
storage: STORAGE_CIDR
Replace *_CIDR with the appropriate IP address range in CIDR notation. For example,
203.0.113.0/24.
Note
Use the same IP address ranges as the underlying physical network interfaces or bridges configured in Section 4.5, “Configuring the network” [20]. For example, if the container network uses 203.0.113.0/24, the CONTAINER_MGMT_CIDR should also use 203.0.113.0/24.
The default configuration includes the optional storage and service networks. To remove one or both of them, comment out the appropriate network name.
2. Configure a list of IP addresses that are already in use in the environment in the used_ips section:
used_ips:
- EXISTING_IP_ADDRESSES
Note
Add individual IP addresses on separate lines. For example, to prevent use of 203.0.113.101 and 203.0.113.201:
used_ips:
- 203.0.113.101
- 203.0.113.201
A range of addresses can also be specified on a single line:
used_ips:
- 203.0.113.101, 203.0.113.201
3. Configure the load balancer and bridge devices in the global_overrides section:
global_overrides:
# Internal load balancer VIP address
internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
# External (DMZ) load balancer VIP address
external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
# Container network bridge device
management_bridge: "MGMT_BRIDGE"
# Tunnel network bridge device
tunnel_bridge: "TUNNEL_BRIDGE"
Replace MGMT_BRIDGE with the container bridge device name, typically br-mgmt. Replace TUNNEL_BRIDGE with the tunnel/overlay bridge device name, typically br-vxlan.
4. Configure the management network in the provider_networks subsection:
provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
container_type: "veth"
ip_from_q: "management"
is_container_address: true
is_ssh_address: true
5. Configure optional networks, such as the storage network, in the provider_networks subsection:
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
Note
The default configuration includes one or more optional networks. To remove any of them, comment out the entire associated stanza beginning with the - network: line.
6. Configure OpenStack Networking VXLAN tunnel/overlay networks in the provider_networks subsection:
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "TUNNEL_ID_RANGE"
net_name: "vxlan"
Replace TUNNEL_ID_RANGE with the VXLAN tunnel ID range. For example, 1:1000.
7. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks in the
provider_networks subsection:
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: VLAN_ID_RANGE
net_name: "vlan"
Replace VLAN_ID_RANGE with the VLAN ID range for each VLAN provider network. For example, 1:1000. More than one range of VLANs is supported on a particular network, for example, 1:1000,2001:3000. Create a similar stanza for each additional network.
Note
Optionally, you can add one or more static routes to interfaces within containers. Each route requires a destination network in CIDR notation and a gateway. For example:
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1
5.3. Configuring target hosts
Warning
Do not assign the same IP address to different target hostnames. Unexpected results may occur. Each IP address and hostname must be a matching pair. To use the same host in multiple roles, for example infrastructure and networking, specify the same hostname and IP in each section.
Use short hostnames rather than fully qualified domain names (FQDN) to prevent length limitation issues with LXC and SSH. For example, a suitable short hostname for a compute host might be: 123456-Compute001.
1. Configure a list containing at least three infrastructure target hosts in the infra_hosts section:
infra_hosts:
603975-infra01:
ip: INFRA01_IP_ADDRESS
603989-infra02:
ip: INFRA02_IP_ADDRESS
627116-infra03:
ip: INFRA03_IP_ADDRESS
628771-infra04: ...
For example:
infra_hosts:
603975-infra01:
ip: 10.240.0.80
603989-infra02:
ip: 10.240.0.81
627116-infra03:
ip: 10.240.0.184
2. Configure a list containing at least one network target host in the network_hosts
section:
network_hosts:
602117-network01:
ip: NETWORK01_IP_ADDRESS
602534-network02: ...
3. Configure a list containing at least one compute target host in the compute_hosts
section:
compute_hosts:
900089-compute001:
ip: COMPUTE001_IP_ADDRESS
900090-compute002: ...
4. Configure a list containing at least one logging target host in the log_hosts section:
log_hosts:
900088-logging01:
ip: LOGGER1_IP_ADDRESS
903877-logging02: ...
5. Configure a list containing at least one repository target host in the repo-infra_hosts section:
repo-infra_hosts:
903939-repo01:
ip: REPO1_IP_ADDRESS
907963-repo02: ...
openstack_repo_url: "https://rpc-repo.rackspace.com/"
6. Configure a list containing at least one optional storage host in the storage_hosts
section:
storage_hosts:
100338-storage01:
ip: STORAGE01_IP_ADDRESS
100392-storage02: ...
Note
The default configuration includes an optional storage host. To install
without storage hosts, comment out the stanza beginning with the
storage_hosts: line.
5.4. Configuring service credentials
Warning
Adjust permissions on these files to restrict access by non-privileged users.
Note that the following options configure passwords for the web interfaces:
Recommended: Use the pw-token-gen.py script to generate random values for the variables in each file that contains service credentials:
$ cd /opt/openstack-ansible/scripts
$ python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
5.5. Configuring proxy environment variables (optional)
To use a proxy, uncomment this section and define variables as appropriate for your environment.
• Set the proxy_env_url environment variable to define the proxy so that pip and other programs can reach hosts beyond the proxy. The preferred format for this variable is http://, since Python HTTPS proxy support is limited. To use https://, be sure to verify that Python programs connect as expected before continuing the installation.
• The no_proxy_env variable defines addresses that should skip the proxy. For example, localhost, container IPs, and so on do not attempt to use the proxy setting. If necessary, prepend items to this list.
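As an illustration only, a deployment behind a proxy might define these variables roughly as follows (PROXY_HOST and PROXY_PORT are placeholders; check the commented section of the configuration file for the exact format expected in your release):
## Proxy settings (illustrative values)
proxy_env_url: http://PROXY_HOST:PROXY_PORT/
# Prepend any additional addresses that must bypass the proxy to this list.
no_proxy_env: "localhost,127.0.0.1"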
5.7. Configuring the Image service (optional)
1. Change the default store to use Object Storage (swift), the underlying architecture of Cloud Files:
glance_default_store: swift
glance_swift_store_container: STORE_NAME
glance_swift_store_endpoint_type: publicURL
Replace STORE_NAME with the container name in Cloud Files to be used for storing images. If the container doesn't exist, it will be automatically created.
glance_swift_store_region: STORE_REGION
Replace STORE_REGION with one of the following region codes: DFW, HKG, IAD,
LON, ORD, SYD.
Note
UK Rackspace cloud accounts must use the LON region. US Rackspace cloud
accounts can use any region except LON.
glance_flavor: GLANCE_FLAVOR
By default, the Image service uses caching and authenticates with the Identity service. The default maximum size of the image cache is 10 GB. The default Image service container size is 12 GB. In some configurations, the Image service might attempt to cache an image which exceeds the available disk space. If necessary, you can disable caching. For example, to use Identity without caching, replace GLANCE_FLAVOR with keystone:
glance_flavor: keystone
glance_flavor:
Note
This option is set by default to use authentication and cache management
in the playbooks/roles/os_glance/defaults/main.yml file. To
override the default behavior, set glance_flavor to a different value in
/etc/openstack_deploy/user_variables.yml.
The possible values for glance_flavor are:
• (Nothing)
• caching
• cachemanagement
• keystone
• keystone+caching
• keystone+cachemanagement (default)
• trusted-auth
• trusted-auth+cachemanagement
5.8.1. Configuring the Block Storage service to use LVM
Important
When using LVM, the cinder-volumes service is closely tied to the underlying disks of the physical host, and does not benefit greatly from containerization. We recommend setting up cinder-volumes on the physical host and using the is_metal flag to avoid issues.
1. Add the lvm stanza under the cinder_backends stanza for each storage node:
cinder_backends:
lvm:
volume_backend_name: LVM_iSCSI
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
is_metal: true
storage_hosts:
xxxxxx-Infra01:
ip: 172.29.236.16
container_vars:
cinder_storage_availability_zone: cinderAZ_1
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
lvm:
volume_backend_name: LVM_iSCSI
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
5.8.2. Configuring the Block Storage service to use NetApp
1. Add the netapp stanza under the cinder_backends stanza for each storage node:
cinder_backends:
netapp:
Note
The back end name is arbitrary and becomes a volume type within the
Block Storage service.
For the NFS protocol, you must also specify the location of the configuration file that
lists the shares available to the Block Storage service:
nfs_shares_config: SHARE_CONFIG
Replace SHARE_CONFIG with the location of the share configuration file. For example,
/etc/cinder/nfs_shares.
netapp_server_port: PORT_NUMBER
netapp_login: USER_NAME
netapp_password: PASSWORD
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: BACKEND_NAME
Replace BACKEND_NAME with a suitable value that provides a hint for the Block Stor-
age scheduler. For example, NETAPP_iSCSI.
...
# leave is_metal off, alternatively you will have to migrate your volumes
# once deployed on metal.
is_metal: false
storage_hosts:
xxxxxx-Infra01:
ip: 172.29.236.16
container_vars:
cinder_backends:
limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: nfs
netapp_server_hostname: 111.222.333.444
netapp_server_port: 80
netapp_login: openstack_cinder
netapp_password: password
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_NFS
5.8.3. Configuring the Block Storage service with NFS protocols
2. Configure the location of the file that lists shares available to the Block Storage service. This configuration file must include nfs_shares_config:
nfs_shares_config: SHARE_CONFIG
Replace SHARE_CONFIG with the location of the share configuration file. For example,
/etc/cinder/nfs_shares.
Replace NFS_HOST with the IP address or hostname of the NFS server, and the
NFS_SHARE with the absolute path to an existing and accessible NFS share.
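The share configuration file itself simply lists one share per line in host:path form. For example, /etc/cinder/nfs_shares might contain a single line using the placeholders described above:
NFS_HOST:/NFS_SHARE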
5.8.4. Configuring Block Storage backups to Object Storage
1. Enable Block Storage backups by setting the following variable (typically in /etc/openstack_deploy/user_variables.yml):
cinder_service_backup_program_enabled: True
2. By default, Block Storage will use the access credentials of the user initiating the backup. Default values are set in the /opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml file. You can override those defaults by setting variables in /etc/openstack_deploy/user_variables.yml to change how Block Storage performs backups. As needed, add and edit any of the following variables to the /etc/openstack_deploy/user_variables.yml file:
...
# cinder_service_backup_swift_auth: Options include 'per_user' or 'single_user',
# we default to 'per_user' so that backups are saved to a user's swift account.
cinder_service_backup_swift_auth: per_user
# cinder_service_backup_swift_url: This is your swift storage url when using
# 'per_user', or keystone endpoint when using 'single_user'. When using
# 'per_user', you can leave this as empty or as None to allow cinder-backup to
# obtain storage url from environment.
cinder_service_backup_swift_url:
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
During installation of Block Storage, the backup service is configured. For more information
about swift, refer to the Standalone Object Storage Deployment guide.
5.8.5. Configuring Block Storage backups to external Cloud Files
2. By default, Block Storage will use the access credentials of the user initiating the backup. Default values are set in the /opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml file. You can override those defaults by setting variables in /etc/openstack_deploy/user_variables.yml to change how Block Storage performs backups. As needed, add and edit any of the following variables to the /etc/openstack_deploy/user_variables.yml file:
...
cinder_service_backup_swift_auth: single_user
cinder_service_backup_swift_url: https://identity.api.rackspacecloud.com/v2.0
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
5.8.6. Creating Block Storage availability zones
1. For each cinder storage host, configure the availability zone under the container_vars stanza in /etc/openstack_deploy/openstack_user_config.yml:
cinder_storage_availability_zone: CINDERAZ
2. When creating more than one availability zone, configure the default availability zone for all hosts by adding a default availability zone value to /etc/openstack_deploy/user_variables.yml:
cinder_default_availability_zone: CINDERAZ_DEFAULT
Important
If you do not define cinder_default_availability_zone, the default value (nova) is used. This may cause volume creation through horizon to fail.
5.9. Configuring the Object Storage service (optional)
The following procedure describes how to set up storage devices and modify the Object Storage configuration files to enable Object Storage usage.
$ cd /opt/openstack-ansible/rpc_deployment
Note
If this value is False, then by default, only users with the admin or swiftoperator role are allowed to create containers or manage tenants.
When the backend type for the Image Service (glance) is set to swift, the Image Service can access the Object Storage cluster regardless of whether this value is True or False.
$ cd /opt/openstack-ansible/playbooks
5.9.2. Configure and mount storage devices
1. Determine the storage devices on the node to be used for Object Storage.
2. Format each device on the node used for storage with XFS. While formatting the devices, add a unique label for each device.
Note
Without labels, a failed drive can cause mount points to shift and data to become inaccessible.
For example, create the file systems on the devices using the mkfs command:
$ apt-get install xfsprogs
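The mkfs commands themselves are not reproduced in this copy. A sketch for the five devices used in the examples below, labeling each device after its name, would be:
$ mkfs.xfs -f -L sdc /dev/sdc
$ mkfs.xfs -f -L sdd /dev/sdd
$ mkfs.xfs -f -L sde /dev/sde
$ mkfs.xfs -f -L sdf /dev/sdf
$ mkfs.xfs -f -L sdg /dev/sdg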
3. Add the mount locations to the /etc/fstab file so that the storage devices are remounted on boot. The following example mount options are recommended when using XFS.
Finish all modifications to the /etc/fstab file before mounting the new filesystems created within the storage devices.
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
4. Create the mount points for the devices using the mkdir command.
$ mkdir -p /srv/node/sdc
$ mkdir -p /srv/node/sdd
$ mkdir -p /srv/node/sde
$ mkdir -p /srv/node/sdf
$ mkdir -p /srv/node/sdg
5. Mount the storage devices:
$ mount /srv/node/sdc
$ mount /srv/node/sdd
$ mount /srv/node/sde
$ mount /srv/node/sdf
$ mount /srv/node/sdg
5.9.3. Configure an Object Storage deployment
To view an annotated example of the swift.yml file, see Appendix A, OSAD configuration files [85].
To view the configuration files, including information about which variables are required and which are optional, see Appendix A, OSAD configuration files [85].
# min_part_hours: 1
# repl_number: 3
# storage_network: 'br-storage'
# replication_network: 'br-repl'
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# mount_point: /mnt
# account:
# container:
# storage_policies:
# - policy:
# name: gold
# index: 0
# default: True
# - policy:
# name: silver
# index: 1
# repl_number: 3
# deprecated: True
part_power Set the partition power value based on the total amount
of storage the entire ring will use.
weight The default weight is 100. If the drives are different sizes,
set the weight value to avoid uneven distribution of da-
ta. For example, a 1 TB disk would have a weight of 100,
while a 2 TB drive would have a weight of 200.
storage_network By default, the swift services will listen on the default management IP. Optionally, specify the interface of the storage network.
Note
If the storage_network is not set, but the storage_ips per host are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services.
Note
As with the storage_network, if the repl_ip is not set on the replication_network interface, replication will not work properly.
drives Set the default drives per host. This is useful when all hosts have the same drives. These can be overridden on a per host basis.
default Set the default value to yes for at least one policy. This is the default storage policy for any non-legacy containers that are created.
deprecated Set the deprecated value to yes to turn off storage policies.
Note
For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them in this section overrides the defaults for the specific ring.
# swift-proxy_hosts:
# infra-node1:
# ip: 192.0.2.1
# infra-node2:
# ip: 192.0.2.2
# infra-node3:
# ip: 192.0.2.3
swift-proxy_hosts Set the IP address of the hosts to which Ansible will connect to deploy the swift-proxy containers. The swift-proxy_hosts value should match the infra nodes.
# swift_hosts:
# swift-node1:
# ip: 192.0.2.4
# container_vars:
# swift_vars:
# zone: 0
# swift-node2:
# ip: 192.0.2.5
# container_vars:
# swift_vars:
# zone: 1
# swift-node3:
# ip: 192.0.2.6
# container_vars:
# swift_vars:
# zone: 2
# swift-node4:
# ip: 192.0.2.7
# container_vars:
# swift_vars:
# zone: 3
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
Note
Overriding these values on a host or drive basis can cause problems if the IP address that the service listens on is based on a specified storage_network or replication_network and the ring is set to a different IP address.
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
• Differing levels of replication: A provider may want to offer 2x replication and 3x replication, but does not want to maintain two separate clusters. They can set up a 2x policy and a 3x policy and assign the nodes to their respective rings.
• Improving performance: Just as solid state drives (SSD) can be used as the exclusive members of an account or database ring, an SSD-only object ring can be created to implement a low-latency or high performance policy.
• Collecting nodes into groups: Different object rings can have different physical servers so that objects in specific storage policies are always placed in a specific data center or geography.
Most storage clusters do not require more than one storage policy. The following problems
can occur if using multiple storage policies per cluster:
• Creating a second storage policy without any specified drives (all drives are part of only
the account, container, and default storage policy groups) creates an empty ring for that
storage policy.
• A non-default storage policy is used only if specified when creating a container, using the X-Storage-Policy: <policy-name> header. After the container is created, it uses the created storage policy. Other containers continue using the default or another storage policy specified when created.
Note
If this value is False, then by default, only users with the admin or swiftoperator role are allowed to create containers or manage tenants.
When the backend type for the Image Service (glance) is set to swift, the
Image Service can access the Object Storage cluster regardless of whether
this value is True or False.
$ cd /opt/openstack-ansible/playbooks
$ openstack-ansible os-swift-install.yml
5.9.5. Object Storage monitoring
Specific monitoring alert guidelines can be set for the installation. These details should be arranged by a Rackspace account manager.
Object Storage makes a request to /healthcheck for each service to ensure it is responding appropriately. If this check fails, determine why the service is failing and fix it accordingly.
Object Storage checks the load balancer address for the swift-proxy-server service and monitors the proxy servers as a whole rather than individually; individual proxy servers are covered by the per-service health checks. If this check fails, it suggests that there is no access to the VIP or that all of the services are failing.
The following checks are performed against the output of the swift-recon middleware:
• md5sum checks on the ring files across all Object Storage nodes.
This check ensures that the ring files are the same on each node. If this check fails, determine why the md5sum for the ring is different and determine which of the ring files is correct. Copy the correct ring file onto the node that is causing the md5sum to fail.
• md5sum checks on the swift.conf file across all Object Storage nodes.
If this check fails, determine why the swift.conf is different and determine which copy of swift.conf is correct. Copy the correct swift.conf onto the node that is causing the md5sum to fail.
• Asyncs pending
This check monitors the average number of async pending requests and the percentage that are put in async pending. This happens when a PUT or DELETE fails (due to, for example, timeouts, heavy usage, or a failed disk). If this check fails, determine why requests are failing and being put in async pending status and fix accordingly.
• Quarantine
This check monitors the percentage of objects that are quarantined (objects that are found to have errors and moved to quarantine). An alert is set up against account, container, and object servers. If this fails, determine the cause of the corrupted objects and fix accordingly.
• Replication
This check monitors replication success percentage. An alert is set up against account,
container, and object servers. If this fails, determine why objects are not replicating and
fix accordingly.
Note
If there is an existing Image Service (glance) back end (for example, Cloud Files) but you want to add Object Storage (swift) as the Image Service back end, you must re-add any images from the Image Service after moving to Object Storage. If the Image Service variables are changed (as described below) and you begin using Object Storage, any images in the Image Service will no longer be available.
There are two options for deploying SSL certificates with Dashboard: self-signed and user-provided certificates. Auto-generated self-signed certificates are currently the default.
The playbook will not regenerate a self-signed SSL certificate if one already exists on the target. To force the certificate to be regenerated the next time the playbook runs, set horizon_ssl_self_signed_regen to true. The playbook then distributes the certificates and keys to each horizon container.
Note
When self-signed certificates are regenerated, they overwrite any existing certificates and keys, including ones that were previously user-provided.
If those three variables are provided, self-signed certificate generation and usage are disabled. However, the user is responsible for deploying those certificates and keys within each container.
6. Ceph
Ceph is a distributed object store and file system designed to provide performance, reliability, and scalability. With Ceph, object and block storage form a single distributed computer cluster.
/etc/openstack_deploy/env.d
Note
If the container_vars for each host in osds_hosts are identical, specify them only once in /etc/openstack_deploy/user_extras_variables.yml. However, as the environment grows, the devices and raw_journal_devices can change. In this case, specify them individually for each host.
Note
This step defines the storage_hosts in ceph.yml; there is no
need to define storage_hosts in /etc/openstack_deploy/
openstack_user_config.yml.
Note
If LVM backends are not used in conjunction with Ceph (for storage_hosts), edit the /etc/openstack_deploy/env.d/cinder.yml file and set "is_metal: false" under cinder_volumes_container. This setting causes cinder_volumes to run in a container rather than on metal. Refer to /opt/rpc-openstack/openstack-ansible/etc/openstack_deploy/openstack_user_config.yml.example for an example.
storage_hosts:
infra01:
ip: 172.24.240.11
container_vars:
cinder_backends:
limit_container_types: cinder_volume
ceph:
volume_driver: cinder.volume.drivers.rbd.RBDDriver
rbd_pool: volumes
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot: 'false'
rbd_max_clone_depth: 5
rbd_store_chunk_size: 4
rados_connect_timeout: -1
glance_api_version: 2
volume_backend_name: ceph
rbd_user: "{{ cinder_ceph_client }}"
rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
6. Add mons to the list of containers that have the storage network in
openstack_user_config.yml.
# The interface within the mon containers for the Ceph mon service to
# listen on. This is usually eth1.
monitor_interface: [device]
# The network CIDR for the network over which clients will access Ceph
# mons and osds. This is usually the br-storage network CIDR.
public_network: [storage_network range]
# The network CIDR for osd to osd replication. This is usually the br-repl
# network CIDR when using dedicated replication, however this can also be
# the br-storage network CIDR.
cluster_network: [repl_network]
monitor_interface: [device]
public_network: [storage_network range] # This is the network from external -> storage (and mons).
cluster_network: [repl_network] # Can be the same as public_network range. This is OSD to OSD replication.
ceph_stable: true
fsid: '{{ fsid_uuid }}'
glance_default_store: rbd
nova_libvirt_images_rbd_pool: vms
nova_force_config_drive: False
nova_nova_conf_overrides:
  libvirt:
    live_migration_uri: qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1
Important
Setting nova_force_config_drive to False ensures the deployer is
able to live migrate instances backed by Ceph.
Ceph is installed along with other OpenStack services by the deploy.sh script in
the rpc-openstack/scripts directory.
You can also run the following playbooks for logging and MaaS:
$ openstack-ansible setup-logging.yml
$ openstack-ansible setup-maas.yml
$ openstack-ansible test-maas.yml
Note
Drives should not be mounted/formatted or prepared (the plays will do
this).
$ cd /opt/rpc-openstack/openstack-ansible/playbooks
$ openstack-ansible setup-hosts.yml --limit NEWHOST
$ cd ../../rpcd/playbooks
$ openstack-ansible ceph-osd.yml
Important
Do not pass --limit to openstack-ansible ceph-osd.yml because
the ceph-osd role requires access to all MONs to properly build the
ceph.conf configuration.
However, it is generally recommended to replace failed drives promptly to maintain capacity across the cluster. Follow these steps to manually remove a disk for replacement. In this example, the failed drive to replace is osd.0.
2. On the storage node hosting osd.0, make a note of the journal associated with the OSD, and then stop and unmount the OSD:
Note
The journal in this instance is /dev/sdd1.
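The commands for this step are not preserved in this copy. On Ubuntu 14.04 (upstart), stopping and unmounting osd.0 would look roughly like the following:
# stop ceph-osd id=0
# umount /var/lib/ceph/osd/ceph-0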
3. Log back into the MON and remove the OSD from the CRUSH map, remove the OSD's
key, and finally remove it from the cluster:
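The commands themselves are missing from this copy; the standard ceph CLI calls for these three operations are:
# ceph osd crush remove osd.0
# ceph auth del osd.0
# ceph osd rm 0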
4. If you are removing the drives/servers permanently, you will need to remove them from /etc/openstack_deploy/conf.d/ceph.yml.
Note
If you are removing a full server, you will also need to run ./inventory-manage.py -r.
5. Once the disk has been physically replaced, log back into the disk's storage node and
prepare the disk for re-deployment:
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery and transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
6. The journal device for an OSD exists as a symlink within the OSD's directory in /var/lib/ceph/osd/. To find the journal partition for an OSD, run readlink on the journal device. In this example, the output is /dev/sdd1.
# readlink -e /var/lib/ceph/osd/ceph-0/journal
/dev/sdd1
7. While still logged into the disk's storage node, remove the journal partition from the disk. In this example, osd.0 is using /dev/sdd1. Replace the arguments "/dev/sdd" and "-d 1" with the correct drive and partition for the environment.
# sgdisk -d 1 /dev/sdd
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
Note
In the above, ensure you replace arguments /dev/sdd and -d 1 with the
correct drive and partition for your scenario.
$ cd /opt/rpc-openstack/rpcd/playbooks
$ openstack-ansible ceph-osd.yml
9. Validate all OSDs are in/up and that the cluster is in a HEALTH_OK status:
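The validation commands are not shown in this copy; typically this is done with the ceph CLI from a MON host:
# ceph osd stat
# ceph status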
To convert a Trusty QCOW2 image to RAW, follow this procedure in a utility or similar container:
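The download and conversion commands are not preserved in this copy. A sketch using qemu-img (the image URL and file names are assumptions based on the standard Ubuntu Trusty cloud image) would be:
# apt-get install qemu-utils
# wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
# qemu-img convert -f qcow2 -O raw \
  trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw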
# source /root/openrc
# glance image-create --name trusty --container-format bare --disk-format raw \
  --file trusty-server-cloudimg-amd64-disk1.raw
Now when you boot an instance from the trusty image, Ceph will snapshot and then clone the image for use by the VM. Ceph recommends setting a number of image properties on the Glance image to make optimal use of Ceph as a back end.
7. Foundation playbooks
Figure 7.1. Installation work flow
Note
RPCO by default configures containers with a rootfs directory of /var/lib/lxc/{container_name}/rootfs. To set a different rootfs directory, override the lxc_container_rootfs_directory variable in /etc/openstack_user_config.yml.
The main Ansible foundation playbook prepares the target hosts for infrastructure and
OpenStack services and performs the following operations:
$ openstack-ansible setup-hosts.yml
PLAY RECAP ********************************************************************
...
deployment_host : ok=18 changed=11 unreachable=0 failed=0
7.2. Troubleshooting
Q: How do I resolve the following error after running a playbook?
failed: [target_host] => (item=target_host_horizon_container-69099e06) =>
{"err": "lxc-attach: No such file or directory - failed to open
'/proc/12440/ns/mnt'\nlxc-attach: failed to enter the namespace\n",
"failed":
true, "item": "target_host_horizon_container-69099e06", "rc": 1}
msg: Failed executing lxc-attach.
A: The lxc-attach command sometimes fails to execute properly. This issue can be resolved
by running the playbook again.
8. Infrastructure playbooks
Figure 8.1. Installation workflow
The main Ansible infrastructure playbook installs infrastructure services and performs the
following operations:
• Install Memcached
• Install Galera
• Install RabbitMQ
• Install Rsyslog
• Install Elasticsearch
• Install Logstash
• Install Kibana
• Configure Rsyslog
$ openstack-ansible setup-infrastructure.yml
3. Run the MariaDB client, show cluster status, and exit the client:
$ mysql -u root -p
MariaDB> show status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | bbe3f0f6-3a88-11e4-bd8f-f7c9e138dd07 |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+
MariaDB> exit
The wsrep_cluster_size field should indicate the number of nodes in the cluster,
and the wsrep_cluster_status field should indicate Primary.
9. OpenStack playbooks
Figure 9.1. Installation workflow
The main Ansible OpenStack playbook installs OpenStack services and performs the follow-
ing operations:
• Create utility container that provides utilities to interact with services in other containers
• Reconfigure Rsyslog
For example, the tempest playbooks are installed on the utility container since tempest
testing does not need a container of its own. For another example of using the utility
container, see Section 9.3, “Verifying OpenStack operation” [73].
$ openstack-ansible setup-openstack.yml
Note
The openstack-common.yml sub-playbook builds all OpenStack services
from source and takes up to 30 minutes to complete. As the playbook
progresses, the quantity of containers in the "polling" state will approach
zero. If any operations take longer than 30 minutes to complete, the
playbook will terminate with an error.
changed: [target_host_glance_container-f2ebdc06]
changed: [target_host_heat_engine_container-36022446]
changed: [target_host_neutron_agents_container-08ec00cd]
changed: [target_host_heat_apis_container-4e170279]
changed: [target_host_keystone_container-c6501516]
changed: [target_host_neutron_server_container-94d370e5]
changed: [target_host_nova_api_metadata_container-600fe8b3]
changed: [target_host_nova_compute_container-7af962fe]
changed: [target_host_cinder_api_container-df5d5929]
changed: [target_host_cinder_volumes_container-ed58e14c]
changed: [target_host_horizon_container-e68b4f66]
<job 802849856578.7262> finished
on target_host_heat_engine_container-36022446
<job 802849856578.7739> finished
on target_host_keystone_container-c6501516
<job 802849856578.7262> finished
on target_host_heat_apis_container-4e170279
<job 802849856578.7359> finished
on target_host_cinder_api_container-df5d5929
<job 802849856578.7386> finished
on target_host_cinder_volumes_container-ed58e14c
<job 802849856578.7886> finished
on target_host_horizon_container-e68b4f66
<job 802849856578.7582> finished
on target_host_nova_compute_container-7af962fe
<job 802849856578.7604> finished
on target_host_neutron_agents_container-08ec00cd
<job 802849856578.7459> finished
on target_host_neutron_server_container-94d370e5
<job 802849856578.7327> finished
on target_host_nova_api_metadata_container-600fe8b3
<job 802849856578.7363> finished
on target_host_glance_container-f2ebdc06
<job 802849856578.7339> polling, 1675s remaining
<job 802849856578.7338> polling, 1675s remaining
Note
Setting up the compute hosts can take up to 30 minutes to complete,
particularly in environments with many compute hosts. As the playbook
progresses, the quantity of containers in the "polling" state will approach
zero. If any operations take longer than 30 minutes to complete, the
playbook will terminate with an error.
ok: [target_host_nova_conductor_container-2b495dc4]
ok: [target_host_nova_api_metadata_container-600fe8b3]
ok: [target_host_nova_api_ec2_container-6c928c30]
ok: [target_host_nova_scheduler_container-c3febca2]
ok: [target_host_nova_api_os_compute_container-9fa0472b]
<job 409029926086.9909> finished
on target_host_nova_api_os_compute_container-9fa0472b
<job 409029926086.9890> finished
on target_host_nova_api_ec2_container-6c928c30
<job 409029926086.9910> finished
on target_host_nova_conductor_container-2b495dc4
<job 409029926086.9882> finished
on target_host_nova_scheduler_container-c3febca2
<job 409029926086.9898> finished
on target_host_nova_api_metadata_container-600fe8b3
<job 409029926086.8330> polling, 1775s remaining
4. Run an OpenStack command that uses one or more APIs. For example:
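The original example command is not reproduced in this extract. Any client command that
authenticates through keystone and touches a service API will do; for instance, from the
utility container (assuming the /root/openrc credentials file used earlier in this guide):
$ source /root/openrc
$ nova image-list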
Important
This is not included in a MaaS install run by default because there is a brief
period between the agent restarting with the new checks and those checks
being reflected in MaaS.
10. Operations
The following operations apply to environments after initial installation.
10.1. Monitoring
Rackspace Cloud Monitoring Service allows Rackspace Private Cloud customers to monitor
system performance and safeguard critical data.
Specific monitoring alert guidelines can be set for the installation. These details should be
arranged by a Rackspace account manager.
For clouds hosted within a Rackspace data center, Rackspace will provision monitoring
support for the customer. Rackspace Support assists in handling functionality failures,
running system health checks, and managing system capacity. Rackspace Cloud Monitoring
Service will notify Support when a host is down or when hardware fails.
• Local: These agent.plugin checks are performed against containers. The checks poll
the API and gather lists of metrics.
These checks will generate a critical alert after three consecutive failures.
• Compute (nova)
• Identity (keystone)
• Networking (neutron)
• Orchestration (heat)
• Image service (glance): The check connects to the glance registry and tests status by
calling an arbitrary URL.
• Dashboard (horizon): The check verifies that the login page is available and uses the
credentials from openrc-maas to log in.
• Galera: The check connects to each member of a Galera cluster and verifies that the
members are fully synchronized and active.
• RabbitMQ: The check connects to each member of a RabbitMQ cluster and gathers
statistics from the API.
• Global: These remote.http checks poll the load-balanced public endpoints, such as a
public nova API. If a service is marked as administratively down, the check will skip it.
• Compute (nova)
• Identity (keystone)
• Networking (neutron)
• Orchestration (heat)
The playbook also configures Object Storage mount point checks. These checks monitor
disk space on the mount points and generate alerts if they are unmounted or unavailable.
For clouds using OpenStack Object Storage, it is important to re-run the setup playbook
whenever the number of Object Storage nodes changes.
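The playbook name is not given in this extract; assuming the MaaS setup playbook shipped
with RPCO in /opt/rpc-openstack/rpcd/playbooks, a re-run would look like:
$ cd /opt/rpc-openstack/rpcd/playbooks
$ openstack-ansible setup-maas.yml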
For example, if the interfaces were em0, em1, em2, and em3, set the following variable in
/etc/openstack_deploy/user_extras_variables.yml:
1. Configure the host as a target host. See Chapter 4, “Target hosts” [19] for more
information.
Note
If necessary, also modify the used_ips stanza.
3. Run the following commands to add the host. Replace NEW_HOST_NAME with the
name of the new host. Run setup-openstack.yml on all hosts to ensure the
setup-openstack.yml playbook creates the nova_pubkey attribute.
$ cd /opt/openstack-ansible/playbooks
$ openstack-ansible setup-hosts.yml --limit NEW_HOST_NAME
$ openstack-ansible os-nova-install.yml --limit NEW_HOST_NAME --skip-tags nova-key-distribute
$ openstack-ansible os-nova-install.yml --tags nova-key-create,nova-key-distribute
$ openstack-ansible os-neutron-install.yml --limit NEW_HOST_NAME
Compare this example output with the output from the multi-node failure scenario, where
the remaining operational node is non-primary and stops processing SQL requests.
Gracefully shutting down the MariaDB service on all but one node allows the remaining
operational node to continue processing SQL requests. When gracefully shutting down
multiple nodes, perform the actions sequentially to retain operation.
1. The new cluster should be started on the most advanced node. Run the following
command to check the seqno value in the grastate.dat file on all of the nodes:
$ cd /opt/openstack-ansible/playbooks
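The inspection command is not reproduced in this extract; a minimal sketch using an
Ansible ad-hoc command against the galera_container group (the group name is an
assumption based on the OpenStack-Ansible inventory) would be:
$ ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"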
In this example, all nodes in the cluster contain the same positive seqno values
because they were synchronized just prior to graceful shutdown. If all seqno values are
equal, any node can start the new cluster.
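The start command itself is not shown here. On the chosen node, a common way to bootstrap
a new cluster (a sketch, assuming the SysV init script inside the Galera container) is:
# /etc/init.d/mysql start --wsrep-new-cluster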
2. Restart MariaDB on the other nodes and verify that they rejoin the cluster.
$ cd /opt/openstack-ansible/playbooks
2. Restart MariaDB on the failed node and verify that it rejoins the cluster.
3. If MariaDB fails to start, run the mysqld command and perform further analysis on the
output. As a last resort, rebuild the container for the node.
$ cd /opt/openstack-ansible/playbooks
In this example, nodes 2 and 3 have failed. The remaining operational server indicates
non-Primary because it cannot achieve quorum.
2. Run the following command to rebootstrap the operational node into the cluster.
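The command is not reproduced in this extract; a sketch of the usual Galera re-bootstrap,
run on the remaining operational node, is:
# mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes';"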
The remaining operational node becomes the primary node and begins processing SQL
requests.
3. Restart MariaDB on the failed nodes and verify that they rejoin the cluster.
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
4. If MariaDB fails to start on any of the failed nodes, run the mysqld command and
perform further analysis on the output. As a last resort, rebuild the container for the
node.
$ cd /opt/openstack-ansible/playbooks
All the nodes have failed if mysqld is not running on any of the nodes and all of the nodes
contain a seqno value of -1.
Note
If any single node has a positive seqno value, then that node can be used to
restart the cluster. However, because there is no guarantee that each node has
an identical copy of the data, it is not recommended to restart the cluster using
the --wsrep-new-cluster command on one node.
Note
Do not rely on the load balancer health checks to disable the node. If the
node is not disabled, the load balancer will send SQL requests to it before it
rejoins the cluster, causing data inconsistencies.
2. Use the following commands to destroy the container and remove MariaDB data
stored outside of the container. In this example, node 3 failed.
$ lxc-stop -n node3_galera_container-3ea2cbd3
$ lxc-destroy -n node3_galera_container-3ea2cbd3
$ rm -rf /openstack/node3_galera_container-3ea2cbd3/*
3. Run the host setup playbook to rebuild the container specifically on node 3:
Note
The playbook will also restart all other containers on the node.
$ openstack-ansible infrastructure-setup.yml \
-l node3_galera_container-3ea2cbd3
Note
The new container runs a single-node Galera cluster, which is a dangerous
state because the environment contains more than one active database
with potentially different data.
$ cd /opt/openstack-ansible/playbooks
5. Restart MariaDB in the new container and verify that it rejoins the cluster.
Table of Contents
A.1. openstack_user_config.yml example configuration file ................................. 85
A.2. user_secrets.yml configuration file ................................................................. 96
A.3. user_variables.yml configuration file ............................................................. 98
A.4. swift.yml example configuration file ................................................................ 102
A.5. extra_container.yml configuration file ......................................................... 111
A.6. Environment configuration files ............................................................................ 111
repo-infra_hosts    Where Python wheels are stored during the build process, and where
pip package protection is provided.
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Overview
# ========
#
# This file contains the configuration for OpenStack Ansible Deployment
# (OSA) core services. Optional service configuration resides in the
# conf.d directory.
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/openstack_user_config.yml or create a new
# type: "flat"
# net_name: "flat"
#
# --------
#
# Level: shared-infra_hosts (required)
# List of target hosts on which to deploy shared infrastructure services
# including the Galera SQL database cluster, RabbitMQ, and Memcached. Recommend
# three minimum target hosts for these services.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three shared infrastructure hosts:
#
# shared-infra_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: repo-infra_hosts (optional)
# List of target hosts on which to deploy the package repository. Recommend
# minimum three target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three package repository hosts:
#
# repo-infra_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# If you choose not to implement repository target hosts, you must configure
# the 'openstack_repo_url' variable in the user_group_vars.yml file to
# contain the URL of a host with an existing repository.
#
# --------
#
# Level: os-infra_hosts (required)
# List of target hosts on which to deploy the glance API, nova API, heat API,
# and horizon. Recommend three minimum target hosts for these services.
# Typically contains the same target hosts as 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack infrastructure hosts:
#
# os-infra_hosts:
# infra1:
# ip: 172.29.236.100
# infra2:
# ip: 172.29.236.101
# infra3:
# ip: 172.29.236.102
#
# --------
#
# Level: identity_hosts (required)
# List of target hosts on which to deploy the keystone service. Recommend
# three minimum target hosts for this service. Typically contains the same
# target hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define three OpenStack identity hosts:
#
# identity_hosts:
# infra1:
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: network_hosts (required)
# List of target hosts on which to deploy neutron services. Recommend three
# minimum target hosts for this service. Typically contains the same target
# hosts as the 'shared-infra_hosts' level.
#
# Level: <value> (required, string)
# ip: 172.29.236.101
# infra2:
# ip: 172.29.236.102
# infra3:
# ip: 172.29.236.103
#
# --------
#
# Level: storage_hosts (required)
# List of target hosts on which to deploy the cinder volume service. Recommend
# one minimum target host for this service. Typically contains target hosts
# that do not reside in other levels.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Level: container_vars (required)
# Contains storage options for this target host.
#
# Option: cinder_storage_availability_zone (optional, string)
# Cinder availability zone.
#
# Option: cinder_default_availability_zone (optional, string)
# If the deployment contains more than one cinder availability zone,
# specify a default availability zone.
#
# Level: cinder_backends (required)
# Contains cinder backends.
#
# Option: limit_container_types (optional, string)
# Container name string in which to apply these options. Typically
# any container with 'cinder_volume' in the name.
#
# Level: <value> (required, string)
# Arbitrary name of the backend. Each backend contains one or more
# options for the particular backend driver. The template for the
# cinder.conf file can generate configuration for any backend
# providing that it includes the necessary driver options.
#
# Option: volume_backend_name (required, string)
# Name of backend, arbitrary.
#
# The following options apply to the LVM backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver, typically
# 'cinder.volume.drivers.lvm.LVMVolumeDriver'.
#
# Option: volume_group (required, string)
# Name of LVM volume group, typically 'cinder-volumes'.
#
# The following options apply to the NFS backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.nfs.NfsDriver'.
# NB. When using NFS driver you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: nfs_shares_config (optional, string)
# File containing list of NFS shares available to cinder, typically
# '/etc/cinder/nfs_shares'.
#
# Option: nfs_mount_point_base (optional, string)
# Location in which to mount NFS shares, typically
# '$state_path/mnt'.
#
# The following options apply to the NetApp backend driver:
#
# Option: volume_driver (required, string)
# Name of volume driver,
# 'cinder.volume.drivers.netapp.common.NetAppDriver'.
# NB. When using NetApp drivers you may want to adjust your
# env.d/cinder.yml file to run cinder-volumes in containers.
#
# Option: netapp_storage_family (required, string)
# Access method, typically 'ontap_7mode' or 'ontap_cluster'.
#
# Option: netapp_storage_protocol (required, string)
# Transport method, typically 'scsi' or 'nfs'. NFS transport also
# requires the 'nfs_shares_config' option.
#
# Option: nfs_shares_config (required, string)
# For NFS transport, name of the file containing shares. Typically
# '/etc/cinder/nfs_shares'.
#
# Option: netapp_server_hostname (required, string)
# NetApp server hostname.
#
# Option: netapp_server_port (required, integer)
# NetApp server port, typically 80 or 443.
#
# Option: netapp_login (required, string)
# NetApp server username.
#
# Option: netapp_password (required, string)
# NetApp server password.
#
# Level: cinder_nfs_client (optional)
# Automates management of the file that cinder references for a list of
# NFS mounts.
#
# Option: nfs_shares_config (required, string)
# File containing list of NFS shares available to cinder, typically
#   /etc/cinder/nfs_shares.
#
# Level: shares (required)
# List of shares to populate the 'nfs_shares_config' file. Each share
# uses the following format:
#
# - { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
#
# Example:
#
# Define an OpenStack storage host:
#
# storage_hosts:
# storage1:
# ip: 172.29.236.121
#
# Example:
#
# Use the LVM iSCSI backend in availability zone 'cinderAZ_1':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_1
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# lvm:
# volume_backend_name: LVM_iSCSI
# volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
# volume_group: cinder-volumes
# limit_container_types: cinder_volume
#
# Example:
#
# Use the NetApp iSCSI backend via Data ONTAP 7-mode in availability zone
# 'cinderAZ_2':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_2
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# netapp:
# volume_backend_name: NETAPP_iSCSI
# volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
# netapp_storage_family: ontap_7mode
# netapp_storage_protocol: iscsi
# netapp_server_hostname: hostname
# netapp_server_port: 443
# netapp_login: username
# netapp_password: password
#
#
# Example:
#
# Use the ceph RBD backend in availability zone 'cinderAZ_3':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_3
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# limit_container_types: cinder_volume
# volumes_hdd:
# volume_driver: cinder.volume.drivers.rbd.RBDDriver
# rbd_pool: volumes_hdd
# rbd_ceph_conf: /etc/ceph/ceph.conf
# rbd_flatten_volume_from_snapshot: 'false'
# rbd_max_clone_depth: 5
# rbd_store_chunk_size: 4
# rados_connect_timeout: -1
# glance_api_version: 2
# volume_backend_name: volumes_hdd
# rbd_user: "{{ cinder_ceph_client }}"
# rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
#
#
# --------
#
# Level: log_hosts (required)
# List of target hosts on which to deploy logging services. Recommend
# one minimum target host for this service.
#
# Level: <value> (required, string)
# Hostname of a target host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Example:
#
# Define a logging host:
#
# log_hosts:
# log1:
# ip: 172.29.236.131
## Rabbitmq Options
rabbitmq_cookie_token:
## Tokens
memcached_encryption_key:
## Galera Options
galera_root_password:
## Keystone Options
keystone_container_mysql_password:
keystone_auth_admin_token:
keystone_auth_admin_password:
keystone_service_password:
keystone_rabbitmq_password:
## Ceilometer Options:
ceilometer_container_db_password:
ceilometer_service_password:
ceilometer_telemetry_secret:
ceilometer_rabbitmq_password:
## Aodh Options:
aodh_container_db_password:
aodh_service_password:
aodh_rabbitmq_password:
## Cinder Options
cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
cinder_profiler_hmac_key:
cinder_rabbitmq_password:
## Glance Options
glance_container_mysql_password:
glance_service_password:
glance_profiler_hmac_key:
glance_rabbitmq_password:
## Heat Options
heat_stack_domain_admin_password:
heat_container_mysql_password:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_auth_encryption_key:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_service_password:
heat_cfn_service_password:
heat_profiler_hmac_key:
heat_rabbitmq_password:
## Horizon Options
horizon_container_mysql_password:
horizon_secret_key:
## Neutron Options
neutron_container_mysql_password:
neutron_service_password:
neutron_rabbitmq_password:
neutron_ha_vrrp_auth_password:
## Nova Options
nova_container_mysql_password:
nova_metadata_proxy_secret:
nova_ec2_service_password:
nova_service_password:
nova_v3_service_password:
nova_v21_service_password:
nova_s3_service_password:
nova_rabbitmq_password:
## Swift Options:
swift_service_password:
swift_container_mysql_password:
swift_dispersion_password:
### Once the swift cluster has been set up, DO NOT change these hash values!
swift_hash_path_suffix:
swift_hash_path_prefix:
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Ceilometer Options
ceilometer_db_type: mongodb
ceilometer_db_ip: localhost
ceilometer_db_port: 27017
swift_ceilometer_enabled: False
heat_ceilometer_enabled: False
cinder_ceilometer_enabled: False
glance_ceilometer_enabled: False
nova_ceilometer_enabled: False
neutron_ceilometer_enabled: False
keystone_ceilometer_enabled: False
## Aodh Options
aodh_db_type: mongodb
aodh_db_ip: localhost
aodh_db_port: 27017
## Glance Options
# Set glance_default_store to "swift" if using Cloud Files or swift backend
# or "rbd" if using ceph backend; the latter will trigger ceph to get
# installed on glance
glance_default_store: file
glance_notification_driver: noop
## Nova
# When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova
# hosts.
#nova_libvirt_images_rbd_pool: vms
# by default we assume you use rbd for both cinder and nova, and as libvirt
# needs to access both volumes (cinder) and boot disks (nova) we default to
# reuse the cinder_ceph_client
# only need to change this if you'd use ceph for boot disks and not for
# volumes
#nova_ceph_client:
#nova_ceph_client_uuid:
# This defaults to KVM, if you are deploying on a host that is not KVM capable
# change this to your hypervisor type: IE "qemu", "lxc".
# nova_virt_type: kvm
# nova_cpu_allocation_ratio: 2.0
# nova_ram_allocation_ratio: 1.0
# If you wish to change the dhcp_domain configured for both nova and neutron
# dhcp_domain:
## Cinder
# Ceph client user for cinder to connect to the ceph cluster
#cinder_ceph_client: cinder
## Ceph
# Enable these if you use ceph rbd for at least one component (glance, cinder,
# nova)
#ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
#ceph_stable_release: hammer
# Ceph Authentication - by default cephx is true
#cephx: true
# Ceph Monitors
# A list of the IP addresses for your Ceph monitors
#ceph_mons:
# - 10.16.5.40
# - 10.16.5.41
# - 10.16.5.42
# Custom Ceph Configuration File (ceph.conf)
# By default, your deployment host will connect to one of the mons defined above
# to obtain a copy of your cluster's ceph.conf. If you prefer, uncomment
# ceph_conf_file and customise to avoid ceph.conf being copied from a mon.
#ceph_conf_file: |
# [global]
# fsid = 00000000-1111-2222-3333-444444444444
# mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
# mon_host = 10.16.5.40,10.16.5.41,10.16.5.42
# # optionally, you can use this construct to avoid defining this list twice:
# # mon_host = {{ ceph_mons|join(',') }}
# auth_cluster_required = cephx
# auth_service_required = cephx
## SSL Settings
# Adjust these settings to change how SSL connectivity is configured for
# various services. For more information, see the openstack-ansible
# documentation section titled "Securing services with SSL certificates".
#
## SSL: Keystone
# These do not need to be configured unless you're creating certificates for
# services running behind Apache (currently, Horizon and Keystone).
ssl_protocol: "ALL -SSLv2 -SSLv3"
# Cipher suite string from https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
# To override for Keystone only:
# - keystone_ssl_protocol
# - keystone_ssl_cipher_suite
# To override for Horizon only:
# - horizon_ssl_protocol
# - horizon_ssl_cipher_suite
#
## SSL: RabbitMQ
# Set these variables if you prefer to use existing SSL certificates, keys and
# CA certificates with the RabbitMQ SSL/TLS Listener
#
#rabbitmq_user_ssl_cert: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_key: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_ca_cert: <path to cert on ansible deployment host>
#
# By default, openstack-ansible configures all OpenStack services to talk to
# RabbitMQ over encrypted connections on port 5671. To opt-out of this default,
# set the rabbitmq_use_ssl variable to 'false'. The default setting of 'true'
# is highly recommended for securing the contents of RabbitMQ messages.
#rabbitmq_use_ssl: true
## Additional pinning generator that will allow for more packages to be pinned
## as you see fit. All pins allow for package and versions to be defined. Be
## careful using this as versions are always subject to change and updates
## regarding security will become your problem from this point on. Pinning can
## be done based on a package version, release, or origin. Use "*" in the
## package name to indicate that you want to pin all packages to a particular
## constraint.
# apt_pinned_packages:
# - { package: "lxc", version: "1.0.7-0ubuntu0.1" }
# - { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.9" }
# - { package: "rabbitmq-server", origin: "www.rabbitmq.com" }
# - { package: "*", release: "MariaDB" }
## HAProxy
# Uncomment this to disable keepalived installation (cf. documentation)
#haproxy_use_keepalived: False
#
# HAProxy Keepalived configuration (cf. documentation)
haproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32"
haproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32"
#haproxy_keepalived_external_interface:
#haproxy_keepalived_internal_interface:
# Defines the default VRRP id used for keepalived with haproxy.
# Overwrite it to your value to make sure you don't overlap
# with existing VRRP ids on your network. Default is 10 for the external and
# 11 for the internal VRRPs.
#haproxy_keepalived_external_virtual_router_id:
#haproxy_keepalived_internal_virtual_router_id:
# Defines the VRRP master/backup priority. Defaults respectively to 100 and 20
#haproxy_keepalived_priority_master:
#haproxy_keepalived_priority_backup:
# All the previous variables are used in a var file, fed to the keepalived
# role.
# To use another file to feed the role, override the following var:
#haproxy_keepalived_vars_file: 'vars/configs/keepalived_haproxy.yml'
# before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
# http://pyyaml.org/wiki/PyYAMLDocumentation
#
# Configuration reference
# =======================
#
# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, the ring structure. This level also provides a mechanism to
# override other options defined in the playbook structure.
#
# Level: swift (required)
# Contains options for swift.
#
# Option: storage_network (required, string)
# Name of the storage network bridge on target hosts. Typically
# 'br-storage'.
#
# Option: repl_network (optional, string)
# Name of the replication network bridge on target hosts. Typically
# 'br-repl'. Defaults to the value of the 'storage_network' option.
#
# Option: part_power (required, integer)
# Partition power. Applies to all rings unless overridden at the 'account'
#   or 'container' levels or within a policy in the 'storage_policies' level.
# Immutable without rebuilding the rings.
#
# Option: repl_number (optional, integer)
# Number of replicas for each partition. Applies to all rings unless
# overridden at the 'account' or 'container' levels or within a policy
# in the 'storage_policies' level. Defaults to 3.
#
# Option: min_part_hours (optional, integer)
# Minimum time in hours between multiple moves of the same partition.
# Applies to all rings unless overridden at the 'account' or 'container'
# levels or within a policy in the 'storage_policies' level. Defaults
# to 1.
#
# Option: region (optional, integer)
# Region of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 1.
#
# Option: zone (optional, integer)
# Zone of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 0.
#
# Option: weight (optional, integer)
# Weight of a disk. Applies to all disks in all storage hosts unless
# overridden deeper in the structure. Defaults to 100.
#
# Option: reclaim_age (optional, integer, default 604800)
#   The amount of time in seconds before items, such as tombstones, are
#   reclaimed. Default is 604800 (7 days).
#
#
# Level: container_vars (optional)
# Contains options for this target host.
#
# Level: swift_proxy_vars (optional)
# Contains swift proxy options for this target host. Typical deployments
# use this level to define read/write affinity settings for proxy hosts.
#
# Option: read_affinity (optional, string)
# Specify which region/zones the proxy server should prefer for reads
# from the account, container and object services.
# E.g. read_affinity: "r1=100" this would prefer region 1
# read_affinity: "r1z1=100, r1=200" this would prefer region 1 zone 1
#       if that is unavailable region 1, otherwise any available region/zone.
# Lower number is higher priority. When this option is specified the
# sorting_method is set to 'affinity' automatically.
#
# Option: write_affinity (optional, string)
# Specify which region to prefer when object PUT requests are made.
# E.g. write_affinity: "r1" - favours region 1 for object PUTs
#
# Option: write_affinity_node_count (optional, string)
# Specify how many copies to prioritise in specified region on
# handoff nodes for Object PUT requests.
# Requires "write_affinity" to be set in order to be useful.
# This is a short term way to ensure replication happens locally,
# Swift's eventual consistency will ensure proper distribution over
# time.
# e.g. write_affinity_node_count: "2 * replicas" - this would try to
# store Object PUT replicas on up to 6 disks in region 1 assuming
# replicas is 3, and write_affinity = r1
#
# Option: statsd_host (optional, string)
#     Swift supports statsd metrics; this option sets the statsd host that will
#     receive statsd metrics.
#
# Option: statsd_port (optional, integer, default 8125)
# Statsd port, requires statsd_host set.
#
#   Option: statsd_metric_prefix (optional, string, default ansible_host)
# Specify a prefix that will be prepended to all metrics on this host.
#
#   The following statsd related options are a little more complicated and are
#   used to tune how many samples are sent to statsd. If you need to tweak these
#   settings then first read: http://docs.openstack.org/developer/swift/admin_guide.html
#
# Option: statsd_default_sample_rate (optional, float, default 1.0)
# Option: statsd_sample_rate_factor (optional, float, default 1.0)
#
# Example:
#
# Define three swift proxy hosts:
#
# swift_proxy-hosts:
#
# infra1:
# ip: 172.29.236.101
# container_vars:
# swift_proxy_vars:
# read_affinity: "r1=100"
# write_affinity: "r1"
# write_affinity_node_count: "2 * replicas"
# infra2:
# ip: 172.29.236.102
# container_vars:
# swift_proxy_vars:
# read_affinity: "r2=100"
# write_affinity: "r2"
# write_affinity_node_count: "2 * replicas"
# infra3:
# ip: 172.29.236.103
# container_vars:
# swift_proxy_vars:
# read_affinity: "r3=100"
# write_affinity: "r3"
# write_affinity_node_count: "2 * replicas"
#
# --------
#
# Level: swift_hosts (required)
# List of target hosts on which to deploy the swift storage services.
# Recommend three minimum target hosts for these services.
#
# Level: <value> (required, string)
# Name of a storage host.
#
# Option: ip (required, string)
# IP address of this target host, typically the IP address assigned to
# the management bridge.
#
# Note: The following levels and options override any values higher
# in the structure and generally apply to advanced deployments.
#
# Level: container_vars (optional)
# Contains options for this target host.
#
# Level: swift_vars (optional)
# Contains swift options for this target host. Typical deployments
# use this level to define a unique zone for each storage host.
#
# Option: storage_ip (optional, string)
# IP address to use for accessing the account, container, and object
# services if different than the IP address of the storage network
# bridge on the target host. Also requires manual configuration of
# the host.
#
# Option: repl_ip (optional, string)
# IP address to use for replication services if different than the IP
# address of the replication network bridge on the target host. Also
# requires manual configuration of the host.
#
# Option: region (optional, integer)
# Region of all disks.
#
component_skel:
example_api:
belongs_to:
# This is a meta group of a given component type.
- example_all
container_skel:
example_api_container:
belongs_to:
# This is a group of containers mapped to a physical host.
- example-infra_containers
contains:
# This maps back to an item in the component_skel.
- example_api
properties:
# These are arbitrary key value pairs.
service_name: example_service
# This is the image that the lxc container will be built from.
container_release: trusty
physical_skel:
# This maps back to items in the container_skel.
example-infra_containers:
belongs_to:
- all_containers
# This is a required pair for the container physical entry.
example-infra_hosts:
belongs_to:
- hosts
Users with specialized requirements can edit the host/container group mappings and other
settings for different services in /etc/openstack_deploy/env.d. For example, to deploy
Block Storage on bare metal instead of in a container, set the is_metal flag in
/etc/openstack_deploy/env.d/cinder.yml to true, as shown in the sketch below.
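A sketch of the relevant fragment, assuming the container_skel/properties layout shown in
the extra_container.yml example above:
container_skel:
  cinder_volumes_container:
    properties:
      # Deploy the cinder volume service on the host rather than in an LXC container.
      is_metal: true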
Note
RPCO users should not change the env.d files unless instructed to do so by
Rackspace support.