Installing OpenStack With Rackspace Private Cloud Tools

This section discusses the process for installing an OpenStack environment with the Rackspace Private Cloud cookbooks.

Before you begin, it is strongly recommended that you review “Installation Prerequisites and Concepts” to ensure that you have completed all necessary preparations for the installation process.

For information about OpenStack Networking (Neutron, formerly Quantum), see Configuring OpenStack Networking.

NOTE: For information about Rackspace Private Cloud v. 2.0 and v. 3.0, see Rackspace Private Cloud Software: Archived Versions.

Table of Contents

  • Prepare the Nodes
  • Install Chef Server, Cookbooks, and chef-client
    • Install Chef Server
    • Install the Rackspace Private Cloud Cookbooks
    • Install chef-client
  • Installing OpenStack
    • Overview of the Configuration
    • Create an Environment
    • Define Network Attributes
    • Set the Node Environments
    • Add a Controller Node
    • Add a Compute Node
    • Controller Node High Availability
  • Troubleshooting the Installation

Prepare the Nodes

This environment has been tested on Ubuntu 12.04 and CentOS 6.3 and, for Rackspace Private Cloud v4.1.2 and later, on CentOS 6.4. It has not been tested on versions of Ubuntu later than 12.04 or on RHEL.

Before you begin, ensure that the OS is up to date on the devices. Log into each device and run the appropriate update for the OS and the package manager.
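For example, the update step on the tested platforms can be sketched as follows; the exact commands depend on your OS, so treat this as a starting point rather than a definitive procedure:

```shell
# Select the update command that matches the node's package manager:
# Ubuntu 12.04 uses apt-get, CentOS 6.x uses yum.
if command -v apt-get >/dev/null 2>&1; then
  UPDATE_CMD="apt-get update && apt-get -y dist-upgrade"
elif command -v yum >/dev/null 2>&1; then
  UPDATE_CMD="yum -y update"
fi

# Print the selected command; run it with root privileges on each node.
echo "$UPDATE_CMD"
```

Reboot the node afterward if the update installs a new kernel.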

You should also have an administrative user (such as admin) with the same user name configured across all nodes that will be part of your environment.

Install Chef Server, Cookbooks, and chef-client

Your environment must have a Chef server, the latest versions of the Rackspace Private Cloud cookbooks, and chef-client on each node within the environment. You must install Chef server first.

Installation is performed via a curl command that launches an installation script. The script downloads the packages from GitHub and uses them to install the components.

Install Chef Server

The Chef server should be a device that is accessible by the devices that will be configured as Controller and Compute nodes on ports 443 and 80.

Log in to the server as root and execute the following curl command on the device that will become the Chef server:

 

# curl -s -L https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh | bash

By default, the script installs Chef 11.0.8-1 with a set of randomly generated passwords, and also installs a Knife configuration that is set up for the root user.

The following variables will be added to your environment:

  • CHEF_SERVER_VERSION: defaults to 11.0.8-1
  • CHEF_URL: defaults to https://<hostURL>:443
  • CHEF_UNIX_USER: the user for which the Knife configuration is set; defaults to root.
  • A set of randomly generated passwords:
    • CHEF_WEBUI_PASSWORD
    • CHEF_AMQP_PASSWORD
    • CHEF_POSTGRESQL_PASSWORD
    • CHEF_POSTGRESQL_RO_PASSWORD

You must log off and log back on as root to use Knife. When you log on, verify that Knife is running by executing the command knife client list. If the command executes successfully, the installation is complete.

Install the Rackspace Private Cloud Cookbooks

The Rackspace Private Cloud cookbooks are set up as git submodules and are hosted at http://github.com/rcbops/chef-cookbooks, with individual cookbook repositories at http://github.com/rcbops-cookbooks.

When the Chef server installation is complete and Knife installation is verified, execute the following curl command on the server node:

 

# curl -s -L https://raw.github.com/rcbops/support-tools/master/chef-install/install-cookbooks.sh | bash

The cookbooks will be downloaded from the repositories, and the following variables will be added to your environment:

  • COOKBOOK_BRANCH: defaults to the most recent version.
  • COOKBOOK_PATH: defaults to $HOME/chef-cookbooks.
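If you need a specific release rather than the default, you can set these variables before piping the script to bash. This sketch assumes the script honors pre-set values, which is common for installers of this kind but is not confirmed by this document:

```shell
# Pin the cookbook release and install path before running
# install-cookbooks.sh (assumed to read these from the environment).
export COOKBOOK_BRANCH="v4.1.2"
export COOKBOOK_PATH="$HOME/chef-cookbooks"

echo "cookbooks branch $COOKBOOK_BRANCH will be placed in $COOKBOOK_PATH"
```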

Downloading Cookbooks From GitHub

You can also download the cookbooks from the rcbops GitHub repository without the script. The following procedure describes the download process for the full suite, but you can also download individual cookbook repositories, such as the Nova repository at https://github.com/rcbops-cookbooks/nova.

  1. Log in to your Chef server or to a workstation that has knife access to the Chef server.
  2. Verify that the knife.rb configuration file contains the correct cookbook_path setting.
  3. Use git clone to download the cookbooks.

    # git clone https://github.com/rcbops/chef-cookbooks.git

  4. Navigate to the chef-cookbooks directory.

    # cd chef-cookbooks

  5. Check out the desired version of the cookbooks. The currently available releases are v4.1.2 and v4.0.0.

    # git checkout <version>

  6. Initialize, sync, and update the git submodules.

    # git submodule init
    # git submodule sync
    # git submodule update

  7. Upload the cookbooks to the Chef server.

    # knife cookbook upload -a -o cookbooks

  8. Apply the updated roles.

    # knife role from file roles/*.rb

Your Chef cookbooks are now up to date.

Install chef-client

All of the nodes in your OpenStack cluster need to have chef-client installed and configured to communicate with the Chef server. To expedite the installation process, you should have passwordless ssh access to each device on which you will be installing chef-client. The script will connect over ssh to the same user that you are logged in as on the server (for example, if you are logged in as admin on the server, it will attempt to log you in as admin on the target device), and will run the remote commands using sudo.

Rackspace recommends that you configure an administrative user with sudo access across all nodes and use it for this process. Connecting via ssh as root is also possible.

  1. While logged in to the Chef server as root, generate an ssh key with the ssh-keygen command. Accept the defaults when prompted.
  2. Copy the ssh key to the root user of the device on which you will install chef-client.

    $ ssh-copy-id root@<deviceHostname>

  3. Download the install-chef-client.sh script to the Chef server.

    $ curl -skS https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-client.sh \
      > install-chef-client.sh

  4. Edit the permissions on the script to ensure that you can run it directly.

    $ chmod +x install-chef-client.sh

  5. Run the installation script against the target device.

    $ ./install-chef-client.sh <deviceHostname>

Repeat steps 2 and 5 for each additional device on which you want to install chef-client.
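The per-device work (copying the ssh key and running the installation script) can be looped over a list of hosts. The hostnames below are hypothetical, and the commands are echoed as a dry run; remove the echoes to execute them:

```shell
# Hypothetical device list; substitute your own hostnames.
DEVICES="controller1 compute1 compute2"

for device in $DEVICES; do
  # Dry run: remove the echoes to copy the key and install chef-client.
  echo "ssh-copy-id root@$device"
  echo "./install-chef-client.sh $device"
done
```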

Installing OpenStack

At this point, you have created a configuration management system for your OpenStack cluster, based on Chef, and given Chef the ability to manage the nodes in the environment. You are now ready to use the Rackspace Private Cloud cookbooks to deploy OpenStack.

This section demonstrates a typical OpenStack installation, and includes additional information about customizing or modifying your installation.

Overview of the Configuration

A typical OpenStack installation configured with Rackspace Private Cloud cookbooks consists of the following components:

  • One or two infrastructure controller nodes that host central services, such as rabbitmq, MySQL, and the Horizon dashboard. These nodes will be referred to as Controller nodes in this document.
  • One or more servers that host virtual machines. These nodes will be referred to as Compute nodes.
  • If you are using OpenStack Networking, you may have a standalone network node. Networking roles can also be applied to the Controller node. This is explained in detail in Configuring OpenStack Networking.

The cookbooks are based on the following assumptions:

  • All OpenStack services, such as Nova and Glance, use MySQL as their database.
  • High availability is provided by VRRP.
  • Load balancing is provided by haproxy.
  • KVM is the hypervisor.
  • The network is either a flat HA network managed by nova-network, or is managed by OpenStack Networking (Neutron).

Create an Environment

The first step is to create an environment on the Chef server. In this example, the knife environment create command is used to create an environment called grizzly. The -d flag is used to add a description of the environment.

 

# knife environment create grizzly -d "Grizzly OpenStack Environment"

This creates a JSON environment file that can be edited directly to add attributes specific to your configuration. To edit the environment, run the knife environment edit command:

 

# knife environment edit grizzly

This will open a text editor where the environment settings can be modified and override attributes added.

Define Network Attributes

You must now add a set of override attributes to define the nova, public, and management networks in your environment. For more information about the network details you need to gather before configuring networking, refer to Network Requirements.

NOTE: These instructions will configure nova-network, which is what a Rackspace Private Cloud environment uses by default. If you want to use OpenStack Networking, see Configuring OpenStack Networking.

To define override attributes, you will need to run the knife environment edit command and add a networking section, substituting your network information.

Version Differences in Override Attributes

The v4.1.2 cookbooks use hash syntax to define network attributes, and the v4.0.0 cookbooks use array syntax.

The v4.1.2 hash syntax is as follows:

 

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "<publicInterface>"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "<VMNetworkInterface>",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "<VMNetworkCIDR>",
                "bridge": "<networkBridge>",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "<novaNetworkCIDR>",
        "public": "<publicNetworkCIDR>",
        "management": "<managementNetworkCIDR>"
    }
}

The v4.0.0 array syntax is as follows:

 

"override_attributes": {
    "nova": {
        "networks": [
            {
                "label": "public",
                "bridge_dev": "<VMNetworkInterface>",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "<VMNetworkCIDR>",
                "bridge": "<networkBridge>",
                "dns1": "8.8.8.8"
            }
        ]
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "<novaNetworkCIDR>",
        "public": "<publicNetworkCIDR>",
        "management": "<managementNetworkCIDR>"
    }
}

Override Attributes Example

The following example shows a v4.1.2 environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

 

"override_attributes": {
    "nova": {
        "network": {
            "public_interface": "br100"
        },
        "networks": {
            "public": {
                "label": "public",
                "bridge_dev": "eth1",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "198.51.100.0/24",
                "bridge": "br100",
                "dns1": "8.8.8.8"
            }
        }
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "192.0.2.0/24",
        "public": "192.0.2.0/24",
        "management": "192.0.2.0/24"
    }
}

The following example shows a v4.0.0 environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

 

"override_attributes": {
    "nova": {
        "networks": [
            {
                "label": "public",
                "bridge_dev": "eth1",
                "dns2": "8.8.4.4",
                "ipv4_cidr": "198.51.100.0/24",
                "bridge": "br100",
                "dns1": "8.8.8.8"
            }
        ]
    },
    "mysql": {
        "allow_remote_root": true,
        "root_network_acl": "%"
    },
    "osops_networks": {
        "nova": "192.0.2.0/24",
        "public": "192.0.2.0/24",
        "management": "192.0.2.0/24"
    }
}

 

Set the Node Environments

To ensure that all changes are made correctly, you must now set the environments of the client nodes to match the environment created on the Chef server. While logged in to the Chef server, run the following command:

 

# knife exec -E 'nodes.transform("chef_environment:_default") { |n| n.chef_environment("grizzly") }'
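To confirm that the transform took effect, you can list each node's environment with another knife exec one-liner (a sketch; `nodes.all` is the node collection exposed by knife exec). The command is echoed here as a dry run because it queries a live Chef server:

```shell
# Dry run: remove the echo to query the Chef server for each
# node's assigned environment.
VERIFY_CMD='knife exec -E "nodes.all { |n| puts n.chef_environment }"'
echo "$VERIFY_CMD"
```

Every node should report grizzly rather than _default.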

Add a Controller Node

The Controller node (also known as an infrastructure node) must be installed before any Compute nodes are added. Until the Controller node chef-client run is complete, the endpoint information will not be pushed back to the Chef server, and the Compute nodes will be unable to locate or connect to infrastructure services.

A device with the single-controller role assigned will include all core OpenStack services.

NOTE: If you require an HA environment, you must configure Controller nodes as described in Controller Node High Availability. The single-controller role should not be used in HA environments.

This procedure assumes that you have already installed chef-client on the device, as described in Install chef-client, and that you are logged in to the Chef server.

  1. Add the single-controller role to the target node's run list.

    # knife node run_list add <deviceHostname> 'role[single-controller]'

  2. Log in to the target node via ssh.
  3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.

Add a Compute Node

The Compute nodes can be installed after the Controller node installation is complete.

  1. Add the single-compute role to the target node's run list.

    # knife node run_list add <deviceHostname> 'role[single-compute]'

  2. Log in to the target node via ssh.
  3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.

Repeat this process on each Compute node. You will also need to run chef-client on each existing Compute node when additional Compute nodes are added.
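Because every existing Compute node needs a fresh chef-client run whenever a node is added, a loop over the node list saves time. The hostnames and admin user are hypothetical, and the commands are echoed as a dry run:

```shell
# Hypothetical Compute node list; substitute your own hostnames.
COMPUTE_NODES="compute1 compute2 compute3"

for node in $COMPUTE_NODES; do
  # Dry run: remove the echo to rerun chef-client over ssh with sudo.
  echo "ssh admin@$node sudo chef-client"
done
```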

Controller Node High Availability

By creating two Controller nodes in the environment and applying the ha-controller1 and ha-controller2 roles to them, you can create a pair of Controller nodes that provide HA via VRRP, monitored by keepalived. Each service has a VIP of its own, and failover occurs on a service-by-service basis. Refer to High Availability Concepts for more information about HA configuration.

Before you configure HA in your environment, you must allocate IP addresses for the MySQL, rabbitmq, and haproxy VIPs on an interface available to both Controller nodes. You will then add the VIPs to the override attributes. The following example shows the attributes for a VIP configuration where the RabbitMQ VIP is 192.0.2.51, the HAProxy VIP is 192.0.2.52, and the MySQL VIP is 192.0.2.53:

 

"override_attributes": {
    "vips": {
        "rabbitmq-queue": "192.0.2.51",
        "horizon-dash": "192.0.2.52",
        "horizon-dash_ssl": "192.0.2.52",
        "keystone-service-api": "192.0.2.52",
        "keystone-admin-api": "192.0.2.52",
        "keystone-internal-api": "192.0.2.52",
        "nova-xvpvnc-proxy": "192.0.2.52",
        "nova-api": "192.0.2.52",
        "nova-ec2-public": "192.0.2.52",
        "nova-novnc-proxy": "192.0.2.52",
        "cinder-api": "192.0.2.52",
        "glance-api": "192.0.2.52",
        "glance-registry": "192.0.2.52",
        "swift-proxy": "192.0.2.52",
        "quantum-api": "192.0.2.52",
        "mysql-db": "192.0.2.53",
        "config": {
            "192.0.2.51": {
                "vrid": 1,
                "network": "management"
            },
            "192.0.2.52": {
                "vrid": 2,
                "network": "management"
            },
            "192.0.2.53": {
                "vrid": 3,
                "network": "management"
            }
        }
    }
}

NOTE: The single-controller role should not be used in HA environments.

  1. Open the environment file for editing.

    # knife environment edit <yourEnvironmentName>

  2. Locate the override_attributes section.
  3. Add the VIP information to the override_attributes. These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP. The quantum-api VIP only needs to be specified if you are deploying OpenStack Networking.

    "override_attributes": {
        "vips": {
            "rabbitmq-queue": "<rabbitmqVIP>",
            "horizon-dash": "<haproxyVIP>",
            "horizon-dash_ssl": "<haproxyVIP>",
            "keystone-service-api": "<haproxyVIP>",
            "keystone-admin-api": "<haproxyVIP>",
            "keystone-internal-api": "<haproxyVIP>",
            "nova-xvpvnc-proxy": "<haproxyVIP>",
            "nova-api": "<haproxyVIP>",
            "nova-ec2-public": "<haproxyVIP>",
            "nova-novnc-proxy": "<haproxyVIP>",
            "cinder-api": "<haproxyVIP>",
            "glance-api": "<haproxyVIP>",
            "glance-registry": "<haproxyVIP>",
            "swift-proxy": "<haproxyVIP>",
            "quantum-api": "<haproxyVIP>",
            "mysql-db": "<mysqlVIP>",
            "config": {
                "<rabbitmqVIP>": {
                    "vrid": <rabbitmqVirtualRouterID>,
                    "network": "<networkName>"
                },
                "<haproxyVIP>": {
                    "vrid": <haproxyVirtualRouterID>,
                    "network": "<networkName>"
                },
                "<mysqlVIP>": {
                    "vrid": <mysqlVirtualRouterID>,
                    "network": "<networkName>"
                }
            }
        }
    }

  4. On the first Controller node, add the ha-controller1 role.

    # knife node run_list add <deviceHostName> 'role[ha-controller1]'

  5. On the second Controller node, add the ha-controller2 role.

    # knife node run_list add <deviceHostName> 'role[ha-controller2]'

  6. Run chef-client on the first Controller node.
  7. Run chef-client on the second Controller node.
  8. Run chef-client on the first Controller node again.
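As a sanity check before applying the roles, it can help to confirm that the allocated VIPs are not already in use on the management network, since keepalived expects to claim them itself. The addresses are taken from the example above, and the probes are echoed as a dry run (each ping should fail for an unused VIP):

```shell
# VIPs from the example configuration; substitute your own.
VIPS="192.0.2.51 192.0.2.52 192.0.2.53"

for vip in $VIPS; do
  # Dry run: remove the echo to probe the address.
  # An unused VIP should not answer the ping.
  echo "ping -c 1 -W 1 $vip"
done
```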

Troubleshooting the Installation

If the installation is unsuccessful, it may be due to one of the following issues.

  • The node does not have access to the Internet. The installation process requires Internet access to download installation files. Ensure that the nodes' network configuration provides that access, that any proxy information you entered is correct, and that the nodes can reach a DNS server.
  • Your network firewall is preventing Internet access. Ensure that the IP address assigned to the Controller node is permitted through the network firewall.
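A couple of quick checks narrow down connectivity problems. These are sketches echoed as dry runs, using the install script URL from earlier in this document:

```shell
# URL used by the installation scripts in this document.
GITHUB_URL="https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh"

# Dry run: remove the echoes to test DNS resolution and
# HTTPS reachability from the node.
echo "host raw.github.com"
echo "curl -skS -o /dev/null -w '%{http_code}' $GITHUB_URL"
```

A 200 response from curl and a successful lookup from host indicate that DNS and outbound HTTPS both work.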

For more troubleshooting information and user discussion, you can also inquire at the Rackspace Private Cloud Support Forum at the following URL:

https://privatecloudforums.rackspace.com
