#
msgid ""
msgstr ""
"Project-Id-Version: openstack-ansible 32.1.0.dev18\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-12-12 04:02+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: \n"
"Last-Translator: \n"
"Language-Team: Nepali\n"
"Language: ne\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#: ../../source/admin/ansible-logging.rst:2
msgid "Ansible Logging Guide"
msgstr ""
#: ../../source/admin/ansible-logging.rst:4
msgid ""
"OpenStack-Ansible provides flexible options for collecting and analyzing "
"Ansible execution logs. Operators can use the default logging configuration, "
"or integrate with `ARA Records Ansible `_ "
"for advanced reporting."
msgstr ""
#: ../../source/admin/ansible-logging.rst:9
msgid "Default Log File"
msgstr ""
#: ../../source/admin/ansible-logging.rst:11
msgid "By default, OpenStack-Ansible stores all playbook logs in:"
msgstr ""
#: ../../source/admin/ansible-logging.rst:17
msgid ""
"This location is defined by the ``ANSIBLE_LOG_PATH`` environment variable."
msgstr ""
#: ../../source/admin/ansible-logging.rst:19
msgid "To change the path, override it in the deployment configuration file:"
msgstr ""
#: ../../source/admin/ansible-logging.rst:26
msgid "ARA Integration"
msgstr ""
#: ../../source/admin/ansible-logging.rst:28
msgid ""
"For richer reporting, OpenStack-Ansible can be integrated with **ARA "
"(Ansible Run Analysis)**."
msgstr ""
#: ../../source/admin/ansible-logging.rst:30
msgid "During the bootstrap process, set the following variable:"
msgstr ""
#: ../../source/admin/ansible-logging.rst:37
msgid "This installs the ARA client and configures it as an Ansible callback."
msgstr ""
#: ../../source/admin/ansible-logging.rst:39
msgid ""
"The client requires an ARA server to store data. The server is not included "
"in OpenStack-Ansible and must be deployed by the operator. The recommended "
"method is to use the ``recordsansible.ara`` collection."
msgstr ""
#: ../../source/admin/ansible-logging.rst:43
msgid "On the deployment host, configure the client with:"
msgstr ""
#: ../../source/admin/ansible-logging.rst:53
msgid ""
"If you prefer not to run an ARA server, you can still generate local reports:"
""
msgstr ""
#: ../../source/admin/ansible-logging.rst:59
msgid ""
"Each playbook run will then produce an HTML report stored on the deploy host."
""
msgstr ""
#: ../../source/admin/backup-restore.rst:5
msgid "Backup and restore your cloud"
msgstr ""
#: ../../source/admin/backup-restore.rst:7
msgid ""
"For disaster recovery purposes, it is a good practice to perform regular "
"backups of the database, configuration files, network information, and "
"OpenStack service details in your environment. For an OpenStack cloud "
"deployed using OpenStack-Ansible, back up the ``/etc/openstack_deploy/`` "
"directory."
msgstr ""
#: ../../source/admin/backup-restore.rst:14
msgid "Backup and restore the ``/etc/openstack_deploy/`` directory"
msgstr ""
#: ../../source/admin/backup-restore.rst:16
msgid ""
"The ``/etc/openstack_deploy/`` directory contains a live inventory, host "
"structure, network information, passwords, and options that are applied to "
"the configuration files for each service in your OpenStack deployment. "
"Backup the ``/etc/openstack_deploy/`` directory to a remote location."
msgstr ""
#: ../../source/admin/backup-restore.rst:22
msgid ""
"To restore the ``/etc/openstack_deploy/`` directory, copy the backup of the "
"directory to your cloud environment."
msgstr ""
#: ../../source/admin/backup-restore.rst:26
msgid "Database backups and recovery"
msgstr ""
#: ../../source/admin/backup-restore.rst:28
msgid ""
"MariaDB data is available on the infrastructure nodes. You can recover "
"databases, and rebuild the Galera cluster. For more information, see :ref:"
"`galera-cluster-recovery`."
msgstr ""
#: ../../source/admin/index.rst:5
msgid "Operations Guide"
msgstr ""
#: ../../source/admin/index.rst:7
msgid ""
"This guide provides information about operating your OpenStack-Ansible "
"deployment."
msgstr ""
#: ../../source/admin/index.rst:10
msgid ""
"For information on how to deploy your OpenStack-Ansible cloud, refer to the :"
"deploy_guide:`Deployment Guide ` for step-by-step instructions "
"on how to deploy the OpenStack packages and dependencies on your cloud using "
"OpenStack-Ansible."
msgstr ""
#: ../../source/admin/index.rst:15
msgid "For user guides, see the :ref:`user-guide`."
msgstr ""
#: ../../source/admin/index.rst:17
msgid ""
"For information on how to contribute, extend or develop OpenStack-Ansible, "
"see the :dev_docs:`Developer Documentation `."
msgstr ""
#: ../../source/admin/index.rst:20
msgid "For in-depth technical information, see the :ref:`reference-guide`."
msgstr ""
#: ../../source/admin/index.rst:22
msgid ""
"This guide ranges from first operations to verify your deployment, to the "
"major upgrades procedures."
msgstr ""
#: ../../source/admin/maintenance-tasks.rst:3
msgid "Maintenance tasks"
msgstr ""
#: ../../source/admin/maintenance-tasks.rst:5
msgid ""
"This chapter is intended for OpenStack-Ansible specific maintenance tasks."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:2
msgid "Running ad-hoc Ansible plays"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:4
msgid ""
"Being familiar with running ad-hoc Ansible commands is helpful when "
"operating your OpenStack-Ansible deployment. For a review, we can look at "
"the structure of the following Ansible command:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:12
msgid ""
"This command calls on Ansible to run the ``example_group`` using the ``-m`` "
"shell module with the ``-a`` argument which is the hostname command. You can "
"substitute example_group for any groups you may have defined. For example, "
"if you had ``compute_hosts`` in one group and ``infra_hosts`` in another, "
"supply either group name and run the command. You can also use the ``*`` "
"wild card if you only know the first part of the group name, for instance if "
"you know the group name starts with compute you would use ``compute_h*``. "
"The ``-m`` argument is for module."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:21
msgid ""
"Modules can be used to control system resources or handle the execution of "
"system commands. For more information about modules, see `Module Index "
"`_ and `About "
"Modules `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:26
msgid ""
"If you need to run a particular command against a subset of a group, you "
"could use the limit flag ``-l``. For example, if a ``compute_hosts`` group "
"contained ``compute1``, ``compute2``, ``compute3``, and ``compute4``, and "
"you only needed to execute a command on ``compute1`` and ``compute4`` you "
"could limit the command as follows:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:38
msgid "Each host is comma-separated with no spaces."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:42
msgid ""
"Run the ad-hoc Ansible commands from the ``openstack-ansible/playbooks`` "
"directory."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:45
msgid ""
"For more information, see `Inventory `_ and `Patterns `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:49
msgid "Running the shell module"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:51
msgid ""
"The two most common modules used are the ``shell`` and ``copy`` modules. The "
"``shell`` module takes the command name followed by a list of space "
"delimited arguments. It is almost like the command module, but runs the "
"command through a shell (``/bin/sh``) on the remote node."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:56
msgid ""
"For example, you could use the shell module to check the amount of disk "
"space on a set of Compute hosts:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:63
msgid "To check on the status of your Galera cluster:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:70
msgid ""
"When a module is being used as an ad-hoc command, there are a few parameters "
"that are not required. For example, for the ``chdir`` command, there is no "
"need to :command:`chdir=/home/user ls` when running Ansible from the CLI:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:78
msgid ""
"For more information, see `shell - Execute commands in nodes `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:82
msgid "Running the copy module"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:84
msgid ""
"The copy module copies a file on a local machine to remote locations. To "
"copy files from remote locations to the local machine you would use the "
"fetch module. If you need variable interpolation in copied files, use the "
"template module. For more information, see `copy - Copies files to remote "
"locations `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:90
msgid ""
"The following example shows how to move a file from your deployment host to "
"the ``/tmp`` directory on a set of remote machines:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:98
msgid ""
"The fetch module gathers files from remote machines and stores the files "
"locally in a file tree, organized by the hostname."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:103
msgid ""
"This module transfers log files that might not be present, so a missing "
"remote file will not be an error unless ``fail_on_missing`` is set to "
"``true``."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:107
msgid ""
"The following examples shows the :file:`nova-compute.log` file being pulled "
"from a single Compute host:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:126
msgid "Using tags"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:128
msgid ""
"Tags are similar to the limit flag for groups, except tags are used to only "
"run specific tasks within a playbook. For more information on tags, see "
"`Tags `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:133
msgid "Ansible forks"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:135
msgid ""
"The default ``MaxSessions`` setting for the OpenSSH Daemon is 10. Each "
"Ansible fork makes use of a session. By default, Ansible sets the number of "
"forks to 5. However, you can increase the number of forks used in order to "
"improve deployment performance in large environments."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:140
msgid ""
"Note that more than 10 forks will cause issues for any playbooks which use "
"``delegate_to`` or ``local_action`` in the tasks. It is recommended that the "
"number of forks are not raised when executing against the control plane, as "
"this is where delegation is most often used."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:145
msgid ""
"When increasing the number of Ansible forks in, particularly beyond 10, SSH "
"connection issues can arise due to the default sshd setting MaxStartups 10:"
"30:100. This setting limits the number of simultaneous unauthenticated SSH "
"connections to 10, after which new connection attempts start getting dropped "
"probabilistically — with a 30% chance initially, increasing linearly up to "
"100% as the number of connections approaches 100."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:152
msgid ""
"The number of forks used may be changed on a permanent basis by including "
"the appropriate change to the ``ANSIBLE_FORKS`` in your ``.bashrc`` file. "
"Alternatively it can be changed for a particular playbook execution by using "
"the ``--forks`` CLI parameter. For example, the following executes the nova "
"playbook against the control plane with 10 forks, then against the compute "
"nodes with 50 forks."
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:164
msgid "For more information about forks, please see the following references:"
msgstr ""
#: ../../source/admin/maintenance-tasks/ansible-modules.rst:166
msgid "Ansible `forks`_ entry for ansible.cfg"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:2
msgid "Container management"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:4
msgid ""
"With Ansible, the OpenStack installation process is entirely automated using "
"playbooks written in YAML. After installation, the settings configured by "
"the playbooks can be changed and modified. Services and containers can shift "
"to accommodate certain environment requirements. Scaling services are "
"achieved by adjusting services within containers, or adding new deployment "
"groups. It is also possible to destroy containers, if needed, after changes "
"and modifications are complete."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:13
msgid "Scale individual services"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:15
msgid ""
"Individual OpenStack services, and other open source project services, run "
"within containers. It is possible to scale out these services by modifying "
"the ``/etc/openstack_deploy/openstack_user_config.yml`` file."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:19
msgid ""
"Navigate into the ``/etc/openstack_deploy/openstack_user_config.yml`` file."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:22
msgid ""
"Access the deployment groups section of the configuration file. Underneath "
"the deployment group name, add an affinity value line to container scales "
"OpenStack services:"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:36
msgid ""
"In this example, ``galera_container`` has a container value of one. In "
"practice, any containers that do not need adjustment can remain at the "
"default value of one, and should not be adjusted above or below the value of "
"one."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:41
msgid ""
"The affinity value for each container is set at one by default. Adjust the "
"affinity value to zero for situations where the OpenStack services housed "
"within a specific container will not be needed when scaling out other "
"required services."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:46
msgid ""
"Update the container number listed under the ``affinity`` configuration to "
"the desired number. The above example has ``galera_container`` set at one "
"and ``rabbit_mq_container`` at two, which scales RabbitMQ services, but "
"leaves Galera services fixed."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:51
msgid ""
"Run the appropriate playbook commands after changing the configuration to "
"create the new containers, and install the appropriate services."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:55
msgid ""
"For example, run the **openstack-ansible lxc-containers-create.yml rabbitmq-"
"install.yml** commands from the ``openstack-ansible/playbooks`` repository "
"to complete the scaling process described in the example above:"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:66
msgid "Destroy and recreate containers"
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:68
msgid ""
"Resolving some issues may require destroying a container, and rebuilding "
"that container from the beginning. It is possible to destroy and re-create a "
"container with the ``lxc-containers-destroy.yml`` and ``lxc-containers-"
"create.yml`` commands. These Ansible scripts reside in the ``openstack-"
"ansible/playbooks`` repository."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:74
msgid "Navigate to the ``openstack-ansible`` directory."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:76
msgid ""
"Run the **openstack-ansible lxc-containers-destroy.yml** commands, "
"specifying the target containers and the container to be destroyed."
msgstr ""
#: ../../source/admin/maintenance-tasks/containers.rst:84
msgid "Replace ``CONTAINER_NAME`` with the target container."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:2
msgid "Firewalls"
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:4
msgid ""
"OpenStack-Ansible does not configure firewalls for its infrastructure. It is "
"up to the deployer to define the perimeter and its firewall configuration."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:7
msgid ""
"By default, OpenStack-Ansible relies on Ansible SSH connections, and needs "
"the TCP port 22 to be opened on all hosts internally."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:10
msgid ""
"For more information on generic OpenStack firewall configuration, see the "
"`Firewalls and default ports `_"
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:13
msgid ""
"In each of the role's respective documentatione you can find the default "
"variables for the ports used within the scope of the role. Reviewing the "
"documentation allow you to find the variable names if you want to use a "
"different port."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:18
msgid ""
"OpenStack-Ansible group vars conveniently expose the vars outside of the "
"`role scope `_ in case you are relying on the OpenStack-Ansible "
"groups to configure your firewall."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:24
msgid "Finding ports for your external load balancer"
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:26
msgid ""
"As explained in the previous section, you can find (in each roles "
"documentation) the default variables used for the public interface endpoint "
"ports."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:30
msgid ""
"For example, the `os_glance documentation `_ lists the variable "
"``glance_service_publicuri``. This contains the port used for the reaching "
"the service externally. In this example, it is equal to "
"``glance_service_port``, whose value is 9292."
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:37
msgid ""
"As a hint, you could find the list of all public URI defaults by executing "
"the following:"
msgstr ""
#: ../../source/admin/maintenance-tasks/firewalls.rst:47
msgid ""
"`HAProxy `_ can be configured with OpenStack-Ansible. The automatically "
"generated ``/etc/haproxy/haproxy.cfg`` file have enough information on the "
"ports to open for your environment."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:2
msgid "Galera cluster maintenance"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:4
msgid ""
"Routine maintenance includes gracefully adding or removing nodes from the "
"cluster without impacting operation and also starting a cluster after "
"gracefully shutting down all nodes."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:8
msgid ""
"MariaDB instances are restarted when creating a cluster, when adding a node, "
"when the service is not running, or when changes are made to the ``/etc/"
"mysql/my.cnf`` configuration file."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:13
msgid "Verify cluster status"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:15
msgid ""
"Compare the output of the following command with the following output. It "
"should give you information about the status of your cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:37
msgid "In this example, only one node responded."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:39
msgid ""
"Gracefully shutting down the MariaDB service on all but one node allows the "
"remaining operational node to continue processing SQL requests. When "
"gracefully shutting down multiple nodes, perform the actions sequentially to "
"retain operation."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:45
msgid "Start a cluster"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:47
msgid ""
"Gracefully shutting down all nodes destroys the cluster. Starting or "
"restarting a cluster from zero nodes requires creating a new cluster on one "
"of the nodes."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:51
msgid ""
"Start a new cluster on the most advanced node. Change to the ``playbooks`` "
"directory and check the ``seqno`` value in the ``grastate.dat`` file on all "
"of the nodes:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:76
msgid ""
"In this example, all nodes in the cluster contain the same positive "
"``seqno`` values as they were synchronized just prior to graceful shutdown. "
"If all ``seqno`` values are equal, any node can start the new cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:90
msgid ""
"Please also have a look at `Starting the Cluster `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:93
msgid "This can also be done with the help of Ansible using the shell module:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:100
msgid ""
"This command results in a cluster containing a single node. The "
"``wsrep_cluster_size`` value shows the number of nodes in the cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:121
msgid ""
"Restart MariaDB on the other nodes (replace [0] from previous Ansible "
"command with [1:]) and verify that they rejoin the cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:151
msgid "Galera cluster recovery"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:153
msgid ""
"Run the ``openstack.osa.galera_server`` playbook using the "
"``galera_force_bootstrap`` variable to automatically recover a node or an "
"entire environment."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:156
#: ../../source/admin/maintenance-tasks/galera.rst:226
msgid "Run the following Ansible command to show the failed nodes:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:162
msgid ""
"You can additionally define a different bootstrap node through "
"``galera_server_bootstrap_node`` variable, in case current bootstrap node is "
"in desynced/broken state. You can check what node is currently selected for "
"bootstrap using this ad-hoc:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:171
msgid ""
"The cluster comes back online after completion of this command. If this "
"fails, please review `restarting the cluster`_ and `recovering the primary "
"component`_ in the Galera documentation as they're invaluable for a full "
"cluster recovery."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:180
msgid "Recover a single-node failure"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:182
msgid ""
"If a single node fails, the other nodes maintain quorum and continue to "
"process SQL requests."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:185
msgid ""
"Change to the ``playbooks`` directory and run the following Ansible command "
"to determine the failed node:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:210
msgid "In this example, node 3 has failed."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:212
msgid ""
"Restart MariaDB on the failed node and verify that it rejoins the cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:215
msgid ""
"If MariaDB fails to start, run the ``mariadbd`` command and perform further "
"analysis on the output. As a last resort, rebuild the container for the node."
""
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:220
msgid "Recover a multi-node failure"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:222
msgid ""
"When all but one node fails, the remaining node cannot achieve quorum and "
"stops processing SQL requests. In this situation, failed nodes that recover "
"cannot join the cluster because it no longer exists."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:247
msgid ""
"In this example, nodes 2 and 3 have failed. The remaining operational server "
"indicates ``non-Primary`` because it cannot achieve quorum."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:250
msgid ""
"Run the following command to `rebootstrap `_ the operational node into the cluster:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:272
msgid ""
"The remaining operational node becomes the primary node and begins "
"processing SQL requests."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:275
msgid ""
"Restart MariaDB on the failed nodes and verify that they rejoin the cluster:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:303
msgid ""
"If MariaDB fails to start on any of the failed nodes, run the ``mariadbd`` "
"command and perform further analysis on the output. As a last resort, "
"rebuild the container for the node."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:308
msgid "Recover a complete environment failure"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:310
msgid ""
"Restore from backup if all of the nodes in a Galera cluster fail (do not "
"shutdown gracefully). Change to the ``playbook`` directory and run the "
"following command to determine if all nodes in the cluster have failed:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:339
msgid ""
"All the nodes have failed if ``mariadbd`` is not running on any of the nodes "
"and all of the nodes contain a ``seqno`` value of -1."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:342
msgid ""
"If any single node has a positive ``seqno`` value, then that node can be "
"used to restart the cluster. However, because there is no guarantee that "
"each node has an identical copy of the data, we do not recommend to restart "
"the cluster using the ``--wsrep-new-cluster`` command on one node."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:349
msgid "Rebuild a container"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:351
msgid ""
"Recovering from certain failures require rebuilding one or more containers."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:353
msgid "Disable the failed node on the load balancer."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:357
msgid ""
"Do not rely on the load balancer health checks to disable the node. If the "
"node is not disabled, the load balancer sends SQL requests to it before it "
"rejoins the cluster and cause data inconsistencies."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:361
msgid ""
"Destroy the container and remove MariaDB data stored outside of the "
"container:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:369
msgid "In this example, node 3 failed."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:371
msgid "Run the host setup playbook to rebuild the container on node 3:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:378
msgid "The playbook restarts all other containers on the node."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:380
msgid ""
"Run the infrastructure playbook to configure the container specifically on "
"node 3:"
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:390
msgid ""
"The new container runs a single-node Galera cluster, which is a dangerous "
"state because the environment contains more than one active database with "
"potentially different data."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:419
msgid ""
"Restart MariaDB in the new container and verify that it rejoins the cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:424
msgid ""
"In larger deployments, it may take some time for the MariaDB daemon to start "
"in the new container. It will be synchronizing data from the other MariaDB "
"servers during this time. You can monitor the status during this process by "
"tailing the ``journalctl -f -u mariadb`` log file."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:430
msgid ""
"Lines starting with ``WSREP_SST`` will appear during the sync process and "
"you should see a line with ``WSREP: SST complete, seqno: `` if the "
"sync was successful."
msgstr ""
#: ../../source/admin/maintenance-tasks/galera.rst:459
msgid "Enable the previously failed node on the load balancer."
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:2
msgid "Prune Inventory Backup Archive"
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:4
msgid ""
"The inventory backup archive will require maintenance over a long enough "
"period of time."
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:9
msgid "Bulk pruning"
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:11
msgid ""
"It is possible to do mass pruning of the inventory backup. The following "
"example will prune all but the last 15 inventories from the running archive."
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:23
msgid "Selective Pruning"
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:25
msgid ""
"To prune the inventory archive selectively, first identify the files you "
"wish to remove by listing them out."
msgstr ""
#: ../../source/admin/maintenance-tasks/inventory-backups.rst:37
msgid "Now delete the targeted inventory archive."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:2
msgid "Logging Services in OpenStack-Ansible"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:4
msgid ""
"Since the Train release, OpenStack-Ansible services have been configured to "
"save logs in ``systemd-journald`` instead of traditional log files. Journald "
"logs from containers are passed through to the physical host, so you can "
"read and manipulate all service logs directly from the metal hosts using "
"tools like ``journalctl``."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:9
msgid ""
"``systemd-journald`` integrates well with a wide range of log collectors and "
"forwarders, including ``rsyslog``. However, while ``rsyslog`` stores data as "
"plain text (making it harder to index and search efficiently), journald uses "
"a structured format that allows logs to be queried and processed much more "
"efficiently by modern log analysis tools."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:16
msgid "Log Locations"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:18
msgid "All container journals are accessible on the host under:"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:24
msgid ""
"This allows you to access and filter all service logs directly on the host "
"using tools such as journalctl. This also allows log collectors running on "
"the host to more seamlessly pick up and process journald log streams coming "
"from all service containers."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:31
msgid ""
"Due to the adoption of ``systemd-journald`` as the primary logging backend, "
"the traditional mapping of ``/openstack/log/`` to ``/var/log/$SERVICE`` "
"inside the container is no longer present. Logs should be accessed directly "
"through journald tools such as ``journalctl`` or by examining the ``/var/log/"
"journal/`` directories on the host."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:38
msgid "Configuring journald"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:40
msgid ""
"The ``openstack_hosts`` role allows control over the behavior of ``systemd-"
"journald`` on the host. There are following variable to configure journald "
"settings:"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:44
msgid "**Persistent journal storage**"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:46
msgid ""
"By default, systemd journals are kept in memory and discarded after a reboot."
" OpenStack-Ansible sets the variable ``openstack_host_keep_journals: true`` "
"by default, which persists journals across reboots. You can explicitly "
"configure it in your ``user_variables.yml`` if needed:"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:55
msgid ""
"This ensures that logs remain available for troubleshooting even after host "
"restarts."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:58
msgid "**Custom journald configuration**"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:60
msgid ""
"You can supply arbitrary journald configuration options by defining a "
"mapping in ``openstack_hosts_journald_config`` in your ``user_variables."
"yml``. For example:"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:70
msgid ""
"This example limits journald's maximum disk usage to 20 GB and retains logs "
"for 7 days."
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:73
msgid ""
"After adjusting any journald-related variables, you can apply the changes by "
"re-running the ``openstack_hosts_setup`` role:"
msgstr ""
#: ../../source/admin/maintenance-tasks/logging.rst:80
msgid ""
"You can also check out our ELK role from `OPS repository `_ for "
"a ready-to-use ELK stack deployment and metrics collection."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:2
msgid "RabbitMQ cluster maintenance"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:4
msgid ""
"A RabbitMQ broker is a logical grouping of one or several Erlang nodes with "
"each node running the RabbitMQ application and sharing users, virtual hosts, "
"queues, exchanges, bindings, and runtime parameters. A collection of nodes "
"is often referred to as a `cluster`. For more information on RabbitMQ "
"clustering, see `RabbitMQ cluster `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:10
msgid ""
"Within OpenStack-Ansible, all data and states required for operation of the "
"RabbitMQ cluster is replicated across all nodes including the message queues "
"providing high availability. RabbitMQ nodes address each other using domain "
"names. The hostnames of all cluster members must be resolvable from all "
"cluster nodes, as well as any machines where CLI tools related to RabbitMQ "
"might be used. There are alternatives that may work in more restrictive "
"environments. For more details on that setup, see `Inet Configuration `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:20
msgid "Create a RabbitMQ cluster"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:22
msgid "RabbitMQ clusters can be formed in two ways:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:24
msgid "Manually with ``rabbitmqctl``"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:26
msgid ""
"Declaratively (list of cluster nodes in a config, with ``rabbitmq-"
"autocluster``, or ``rabbitmq-clusterer`` plugins)"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:31
msgid ""
"RabbitMQ brokers can tolerate the failure of individual nodes within the "
"cluster. These nodes can start and stop at will as long as they have the "
"ability to reach previously known members at the time of shutdown."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:35
msgid ""
"There are two types of nodes you can configure: disk and RAM nodes. Most "
"commonly, you will use your nodes as disk nodes (preferred). Whereas RAM "
"nodes are more of a special configuration used in performance clusters."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:39
msgid ""
"RabbitMQ nodes and the CLI tools use an ``erlang cookie`` to determine "
"whether or not they have permission to communicate. The cookie is a string "
"of alphanumeric characters and can be as short or as long as you would like."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:45
msgid ""
"The cookie value is a shared secret and should be protected and kept private."
""
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:47
msgid ""
"The default location of the cookie on ``*nix`` environments is ``/var/lib/"
"rabbitmq/.erlang.cookie`` or in ``$HOME/.erlang.cookie``."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:52
msgid ""
"While troubleshooting, if you notice one node is refusing to join the "
"cluster, it is definitely worth checking if the erlang cookie matches the "
"other nodes. When the cookie is misconfigured (for example, not identical), "
"RabbitMQ will log errors such as \"Connection attempt from disallowed node\" "
"and \"Could not auto-cluster\". See `clustering `_ for more information."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:59
msgid ""
"To form a RabbitMQ Cluster, you start by taking independent RabbitMQ brokers "
"and re-configuring these nodes into a cluster configuration."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:62
msgid ""
"Using a 3 node example, you would be telling nodes 2 and 3 to join the "
"cluster of the first node."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:65
msgid "Login to the 2nd and 3rd node and stop the RabbitMQ application."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:67
msgid "Join the cluster, then restart the application:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:79
msgid "Check the RabbitMQ cluster status"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:81
msgid "Run ``rabbitmqctl cluster_status`` from either node."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:83
msgid "You will see ``rabbit1`` and ``rabbit2`` are both running as before."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:85
msgid ""
"The difference is that the cluster status section of the output, both nodes "
"are now grouped together:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:96
msgid ""
"To add the third RabbitMQ node to the cluster, repeat the above process by "
"stopping the RabbitMQ application on the third node."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:99
msgid "Join the cluster, and restart the application on the third node."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:101
msgid "Execute ``rabbitmq cluster_status`` to see all 3 nodes:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:112
msgid "Stop and restart a RabbitMQ cluster"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:114
msgid ""
"To stop and start the cluster, keep in mind the order in which you shut the "
"nodes down. The last node you stop, needs to be the first node you start. "
"This node is the `master`."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:118
msgid ""
"If you start the nodes out of order, you could run into an issue where it "
"thinks the current `master` should not be the master and drops the messages "
"to ensure that no new messages are queued while the real master is down."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:123
msgid "RabbitMQ and Mnesia"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:125
msgid ""
"Mnesia is a distributed database that RabbitMQ uses to store information "
"about users, exchanges, queues, and bindings. Messages, however are not "
"stored in the database."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:129
msgid ""
"For more information about Mnesia, see the `Mnesia overview `_."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:132
msgid ""
"To view the locations of important RabbitMQ files, see `File Locations "
"`_."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:136
msgid "Repair a partitioned RabbitMQ cluster for a single-node"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:138
msgid ""
"Invariably due to something in your environment, you are likely to lose a "
"node in your cluster. In this scenario, multiple LXC containers on the same "
"host are running RabbitMQ and are in a single RabbitMQ cluster."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:142
msgid ""
"If the host still shows as part of the cluster, but it is not running, "
"execute:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:149
msgid ""
"However, you may notice some issues with your application as clients may be "
"trying to push messages to the un-responsive node. To remedy this, forget "
"the node from the cluster by executing the following:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:153
msgid "Ensure RabbitMQ is not running on the node:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:159
msgid "On the RabbitMQ second node, execute:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:165
msgid ""
"By doing this, the cluster can continue to run effectively and you can "
"repair the failing node."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:170
msgid ""
"Watch out when you restart the node, it will still think it is part of the "
"cluster and will require you to reset the node. After resetting, you should "
"be able to rejoin it to other nodes as needed."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:189
msgid "Repair a partitioned RabbitMQ cluster for a multi-node cluster"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:191
msgid ""
"The same concepts apply to a multi-node cluster that exist in a single-node "
"cluster. The only difference is that the various nodes will actually be "
"running on different hosts. The key things to keep in mind when dealing with "
"a multi-node cluster are:"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:196
msgid ""
"When the entire cluster is brought down, the last node to go down must be "
"the first node to be brought online. If this does not happen, the nodes will "
"wait 30 seconds for the last disc node to come back online, and fail "
"afterwards."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:200
msgid ""
"If the last node to go offline cannot be brought back up, it can be removed "
"from the cluster using the :command:`forget_cluster_node` command."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:203
msgid ""
"If all cluster nodes stop in a simultaneous and uncontrolled manner, (for "
"example, with a power cut) you can be left with a situation in which all "
"nodes think that some other node stopped after them. In this case you can "
"use the :command:`force_boot` command on one node to make it bootable again."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:209
msgid "Consult the rabbitmqctl manpage for more information."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:212
msgid "Migrate between HA and Quorum queues"
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:214
msgid ""
"In the 2024.1 (Caracal) release OpenStack-Ansible switches to use RabbitMQ "
"Quorum Queues by default, rather than the legacy High Availability classic "
"queues. Migration to Quorum Queues can be performed at upgrade time, but may "
"result in extended control plane downtime as this requires all OpenStack "
"services to be restarted with their new configuration."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:220
msgid ""
"In order to speed up the migration, the following playbooks can be run to "
"migrate either to or from Quorum Queues, whilst skipping package install and "
"other configuration tasks. These tasks are available from the 2024.1 release "
"onwards."
msgstr ""
#: ../../source/admin/maintenance-tasks/rabbitmq-maintain.rst:230
msgid ""
"In order to take advantage of these steps, we suggest setting "
"`oslomsg_rabbit_quorum_queues` to ``false`` before upgrading to 2024.1. "
"Then, once you have upgraded, set `oslomsg_rabbit_quorum_queues` back to the "
"default of ``true`` and run the playbooks above."
msgstr ""
#: ../../source/admin/openstack-firstrun.rst:3
msgid "Verify OpenStack-Ansible cloud"
msgstr ""
#: ../../source/admin/openstack-firstrun.rst:5
msgid ""
"This chapter is intended to document basic OpenStack operations to verify "
"your OpenStack-Ansible deployment."
msgstr ""
#: ../../source/admin/openstack-firstrun.rst:8
msgid ""
"It explains how CLIs can be used as an admin and a user, to ensure the well-"
"behavior of your cloud."
msgstr ""
#: ../../source/admin/openstack-operations.rst:3
msgid "Managing your cloud"
msgstr ""
#: ../../source/admin/openstack-operations.rst:5
msgid ""
"This chapter is intended to document OpenStack operations tasks that are "
"integral to the operations support in an OpenStack-Ansible deployment."
msgstr ""
#: ../../source/admin/openstack-operations.rst:8
msgid ""
"It explains operations such as managing images, instances, or networks."
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:2
msgid "Use the command line clients"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:4
msgid ""
"This section describes some of the more common commands to use your "
"OpenStack cloud."
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:7
msgid ""
"Log in to any utility container or install the OpenStack client on your "
"machine, and run the following commands:"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:10
msgid ""
"The **openstack flavor list** command lists the *flavors* that are available."
" These are different disk sizes that can be assigned to images:"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:27
msgid ""
"The **openstack floating ip list** command lists the currently available "
"floating IP addresses and the instances they are associated with:"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:41
msgid ""
"For more information about OpenStack client utilities, see these links:"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:43
msgid ""
"`OpenStack API Quick Start `__"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:46
msgid ""
"`OpenStackClient commands `__"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:49
msgid ""
"`Compute (nova) CLI commands `__"
msgstr ""
#: ../../source/admin/openstack-operations/cli-operations.rst:51
msgid ""
"`Compute (nova) CLI command cheat sheet `__"
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:2
msgid "Managing images"
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:10
msgid ""
"An image represents the operating system, software, and any settings that "
"instances may need depending on the project goals. Create images first "
"before creating any instances."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:14
msgid ""
"Adding images can be done through the Dashboard, or the command line. "
"Another option available is the ``python-openstackclient`` tool, which can "
"be installed on the controller node, or on a workstation."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:19
msgid "Adding an image using the Dashboard"
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:21
msgid ""
"In order to add an image using the Dashboard, prepare an image binary file, "
"which must be accessible over HTTP using a valid and direct URL. Images can "
"be compressed using ``.zip`` or ``.tar.gz``."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:27
msgid ""
"Uploading images using the Dashboard will be available to users with "
"administrator privileges. Operators can set user access privileges."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:31
msgid "Log in to the Dashboard."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:33
msgid ""
"Select the :guilabel:`Admin` tab in the navigation pane and click :guilabel:"
"`Images`."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:35
msgid ""
"Click the :guilabel:`Create Image` button. The **Create an Image** dialog "
"box will appear."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:38
msgid ""
"Enter the details of the image, including the **Image Location**, which is "
"where the URL location of the image is required."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:41
msgid ""
"Click the :guilabel:`Create Image` button. The newly created image may take "
"some time before it is completely uploaded since the image arrives in an "
"image queue."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:47
msgid "Adding an image using the command line"
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:49
msgid ""
"The utility container provides a CLI environment for additional "
"configuration and management."
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:52
#: ../../source/admin/openstack-operations/verify-deploy.rst:12
msgid "Access the utility container:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-images.rst:58
msgid ""
"Use the OpenStack client within the utility container to manage all glance "
"images. `See the OpenStack client official documentation on managing images "
"`_."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:2
msgid "Managing instances"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:4
msgid "This chapter describes how to create and access instances."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:7
msgid "Creating an instance using the Dashboard"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:9
msgid "Using an image, create a new instance via the Dashboard options."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:11
msgid ""
"Log into the Dashboard, and select the :guilabel:`admin` project from the "
"drop down list."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:14
msgid ""
"On the :guilabel:`Project` tab, open the :guilabel:`Instances` tab and click "
"the :guilabel:`Launch Instance` button."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:19
msgid "**Figure Dashboard — Instances tab**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:21
msgid ""
"Check the :guilabel:`Launch Instance` dialog, and find the :guilabel:"
"`Details` tab. Enter the appropriate values for the instance."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:26
msgid "**Instance Details**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:28
msgid ""
"Click the :guilabel:`Source`. In the Source step, select the boot source: "
"Image, Volume (Volume Snapshot), or Instance Snapshot. If you choose Image, "
"pick the desired OS or custom image from the list to boot your instance. "
"Volume option will only be available if Block Storage service (cinder) is "
"enabled."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:36
msgid "**Instance Source**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:38
msgid ""
"For more information on attaching Block Storage volumes to instances for "
"persistent storage, see the *Managing volumes for persistent storage* "
"section below."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:42
msgid ""
"In the :guilabel:`Launch Instance` dialog, click the :guilabel:`Flavor` tab "
"and select the prefered flavor for you instance."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:45
msgid ""
"Click the :guilabel:`Networks tab`. This tab will be unavailable if Network "
"service (neutron) has not been enabled. If networking is enabled, select the "
"networks on which the instance will reside."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:52
msgid "**Instance Networks**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:54
msgid ""
"Click the :guilabel:`Keypair` tab and select the keypair or create new one."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:57
msgid ""
"Click the :guilabel:`Security Groups` tab and set the security group as "
"\"default\"."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:60
msgid ""
"Add customisation scripts, if needed, by clicking the :guilabel:"
"`Configuration`. These run after the instance has been created. Some "
"instances support user data, such as root passwords, or admin users. Enter "
"the information specific to the instance here if required."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:66
msgid ""
"Click :guilabel:`Launch` to create the instance. The instance will start on "
"a compute node. The **Instances** page will open and start creating a new "
"instance. The **Instances** page that opens will list the instance name, "
"size, status, and task. Power state and public and private IP addresses are "
"also listed here."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:72
msgid ""
"The process will take less than a minute to complete. Instance creation is "
"complete when the status is listed as active. Refresh the page to see the "
"new active instance."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:78
msgid "**Instances Page**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:80
msgid "**Launching an instance options**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:84
msgid "Field Name"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:85
#: ../../source/admin/openstack-operations/managing-instances.rst:93
#: ../../source/admin/openstack-operations/managing-instances.rst:98
#: ../../source/admin/openstack-operations/managing-instances.rst:102
#: ../../source/admin/openstack-operations/managing-instances.rst:107
#: ../../source/admin/openstack-operations/managing-instances.rst:111
#: ../../source/admin/openstack-operations/managing-instances.rst:116
msgid "Required"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:86
msgid "Details"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:87
msgid "**Availability Zone**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:88
#: ../../source/admin/openstack-operations/managing-instances.rst:122
#: ../../source/admin/openstack-operations/managing-instances.rst:128
#: ../../source/admin/openstack-operations/managing-instances.rst:132
#: ../../source/admin/openstack-operations/managing-instances.rst:137
msgid "Optional"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:89
msgid ""
"The availability zone in which the image service creates the instance. If no "
"availability zones is defined, no instances will be found. The cloud "
"provider sets the availability zone to a specific value."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:92
msgid "**Instance Name**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:94
msgid ""
"The name of the new instance, which becomes the initial host name of the "
"server. If the server name is changed in the API or directly changed, the "
"Dashboard names remain unchanged"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:97
#: ../../source/admin/openstack-operations/managing-instances.rst:115
msgid "**Image**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:99
msgid ""
"The type of container format, one of ``raw``, ``qcow2``, ``iso``, "
"``vmdk``,``vdi`` etc."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:101
msgid "**Flavor**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:103
msgid ""
"The vCPU, Memory, and Disk configuration. Note that larger flavors can take "
"a long time to create. If creating an instance for the first time and want "
"something small with which to test, select ``m1.small``."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:106
msgid "**Instance Count**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:108
msgid ""
"If creating multiple instances with this configuration, enter an integer up "
"to the number permitted by the quota, which is ``10`` by default."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:110
msgid "**Instance Boot Source**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:112
msgid ""
"Specify whether the instance will be based on an image or a snapshot. If it "
"is the first time creating an instance, there will not yet be any snapshots "
"available."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:117
msgid ""
"The instance will boot from the selected image. This option will be pre-"
"populated with the instance selected from the table. However, choose ``Boot "
"from Snapshot`` in **Instance Boot Source**, and it will default to "
"``Snapshot`` instead."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:121
msgid "**Security Groups**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:123
msgid ""
"This option assigns security groups to an instance. The default security "
"group activates when no customised group is specified here. Security Groups, "
"similar to a cloud firewall, define which incoming network traffic is "
"forwarded to instances."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:127
msgid "**Keypair**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:129
msgid ""
"Specify a key pair with this option. If the image uses a static key set (not "
"recommended), a key pair is not needed."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:131
msgid "**Networks**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:133
msgid ""
"To add a network to an instance, click the **Downwards Arrow** symbol in the "
"**Networks field**."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:136
msgid "**Configuration**"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:138
msgid ""
"Specify a customisation script. This script runs after the instance launches "
"and becomes active."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:143
msgid "Creating an instance using the command line"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:145
msgid ""
"On the command line, instance creation is managed with the **openstack "
"server create** command. Before launching an instance, determine what images "
"and flavors are available to create a new instance using the **openstack "
"image list** and **openstack flavor list** commands."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:150
msgid "Log in to any utility container."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:152
msgid ""
"Issue the **openstack server create** command with a name for the instance, "
"along with the name of the image and flavor to use:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:188
msgid ""
"To check that the instance was created successfully, issue the **openstack "
"server list** command:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:202
msgid "Managing an instance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:204
msgid ""
"Log in to the Dashboard. Select one of the projects, and click :guilabel:"
"`Instances`."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:207
msgid "Select an instance from the list of available instances."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:209
msgid ""
"Check the **Actions** column, and click on the down arrow. Select the action."
""
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:212
msgid "The **Actions** column includes the following options:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:214
msgid "Resize or rebuild any instance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:216
msgid "Attach/Detach Volume"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:218
msgid "Attach/Detach Interface"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:220
msgid "View the instance console log"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:222
msgid "Edit the instance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:224
msgid "Edit security groups"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:226
msgid "Pause, resume, rescue or suspend the instance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:228
msgid "Soft or hard reset the instance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:232
msgid "Delete the instance under the **Actions** column."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:236
msgid "Managing volumes for persistent storage"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:238
msgid ""
"Volumes attach to instances, enabling persistent storage. Volume storage "
"provides a source of memory for instances. Administrators can attach volumes "
"to a running instance, or move a volume from one instance to another."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:244
msgid "Instances live migration"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:246
msgid ""
"Nova is capable of live migration instances from one host to a different "
"host to support various operational tasks including:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:249
msgid "Host Maintenance"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:250
msgid "Host capacity management"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:251
msgid "Resizing and moving instances to better hardware"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:255
msgid "Nova configuration drive implication"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:257
msgid ""
"Depending on the OpenStack-Ansible version in use, Nova can be configured to "
"force configuration drive attachments to instances. In this case, a ISO9660 "
"CD-ROM image will be made available to the instance via the ``/mnt`` mount "
"point. This can be used by tools, such as cloud-init, to gain access to "
"instance metadata. This is an alternative way of accessing the Nova EC2-"
"style Metadata."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:264
msgid ""
"To allow live migration of Nova instances, this forced provisioning of the "
"config (CD-ROM) drive needs to either be turned off, or the format of the "
"configuration drive needs to be changed to a disk format like vfat, a format "
"which both Linux and Windows instances can access."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:269
msgid "This work around is required for all Libvirt versions prior 1.2.17."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:271
msgid ""
"To turn off the forced provisioning of and change the format of the "
"configuration drive to a hard disk style format, add the following override "
"to the ``/etc/openstack_deploy/user_variables.yml`` file:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:284
msgid "Tunneling versus direct transport"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:286
msgid ""
"In the default configuration, Nova determines the correct transport URL for "
"how to transfer the data from one host to the other. Depending on the "
"``nova_virt_type`` override the following configurations are used:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:291
msgid "kvm defaults to ``qemu+tcp://%s/system``"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:292
msgid "qemu defaults to ``qemu+tcp://%s/system``"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:294
msgid "Libvirt TCP port to transfer the data to migrate."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:296
msgid ""
"OpenStack-Ansible changes the default setting and used a encrypted SSH "
"connection to transfer the instance data."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:303
msgid ""
"Other configurations can be configured inside the ``/etc/openstack_deploy/"
"user_variables.yml`` file:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:315
msgid "Local versus shared storage"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:317
msgid ""
"By default, live migration assumes that your instances are stored on shared "
"storage and KVM/Libvirt only need to synchronize the memory and base image "
"of the instance to the new host. Live migrations on local storage will fail "
"as a result of that assumption. Migrations with local storage can be "
"accomplished by allowing instance disk migrations with the ``--block-"
"migrate`` option."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:324
msgid ""
"Additional flavor features like ephemeral storage or swap have an impact on "
"live migration performance and success."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:327
msgid ""
"Cinder attached volumes also require a Libvirt version larger or equal to 1."
"2.17."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:331
msgid "Executing the migration"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:333
msgid "The live migration is accessible via the nova client."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:339
msgid "Examplarery live migration on a local storage:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:347
msgid "Monitoring the status"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:349
msgid ""
"Once the live migration request has been accepted, the status can be "
"monitored with the nova client:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:362
msgid ""
"To filter the list, the options ``--host`` or ``--status`` can be used:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:368
msgid ""
"In cases where the live migration fails, both the source and destination "
"compute nodes need to be checked for errors. Usually it is sufficient to "
"search for the instance UUID only to find errors related to the live "
"migration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:374
msgid "Other forms of instance migration"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:376
msgid ""
"Besides the live migration, Nova offers the option to migrate entire hosts "
"in a online (live) or offline (cold) migration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:379
msgid "The following nova client commands are provided:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:381
msgid "``host-evacuate-live``"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:383
msgid ""
"Live migrate all instances of the specified host to other hosts if resource "
"utilzation allows. It is best to use shared storage like Ceph or NFS for "
"host evacuation."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:388
msgid "``host-servers-migrate``"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:390
msgid ""
"This command is similar to host evacuation but migrates all instances off "
"the specified host while they are shutdown."
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:394
msgid "``resize``"
msgstr ""
#: ../../source/admin/openstack-operations/managing-instances.rst:396
msgid ""
"Changes the flavor of an instance (increase) while rebooting and also "
"migrates (cold) the instance to a new host to accommodate the new resource "
"requirements. This operation can take considerate amount of time, depending "
"disk image sizes."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:2
msgid "Managing networks"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:4
msgid ""
"Operational considerations, like compliance, can make it necessary to manage "
"networks. For example, adding new provider networks to the OpenStack-Ansible "
"managed cloud. The following sections are the most common administrative "
"tasks outlined to complete those tasks."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:9
msgid ""
"For more generic information on troubleshooting your network, see the "
"`Network Troubleshooting chapter `_ in the Operations Guide."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:14
msgid ""
"For more in-depth information on Networking, see the `Networking Guide "
"`_."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:18
msgid "Add provider bridges using new network interfaces"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:20
msgid ""
"Add each provider network to your cloud to be made known to OpenStack-"
"Ansible and the operating system before you can execute the necessary "
"playbooks to complete the configuration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:25
msgid "OpenStack-Ansible configuration"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:27
msgid ""
"All provider networks need to be added to the OpenStack-Ansible "
"configuration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:30
msgid ""
"Edit the file ``/etc/openstack_deploy/openstack_user_config.yml`` and add a "
"new block underneath the ``provider_networks`` section:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:45
msgid ""
"The ``container_bridge`` setting defines the physical network bridge used to "
"connect the veth pair from the physical host to the container. Inside the "
"container, the ``container_interface`` setting defines the name at which the "
"physical network will be made available. The ``container_interface`` setting "
"is not required when Neutron agents are deployed on bare metal. Make sure "
"that both settings are uniquely defined across their provider networks and "
"that the network interface is correctly configured inside your operating "
"system. ``group_binds`` define where this network need to attached to, to "
"either containers or physical hosts and is ultimately dependent on the "
"network stack in use. For example, Linuxbridge versus OVS. The configuration "
"``range`` defines Neutron physical segmentation IDs which are automatically "
"used by end users when creating networks via mainly horizon and the Neutron "
"API. Similar is true for the ``net_name`` configuration which defines the "
"addressable name inside the Neutron configuration. This configuration also "
"need to be unique across other provider networks."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:64
msgid ""
"For more information, see :deploy_guide:`Configure the deployment ` in the OpenStack-Ansible Deployment Guide."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:69
msgid "Updating the node with the new configuration"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:71
msgid ""
"Run the appropriate playbooks depending on the ``group_binds`` section."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:73
msgid ""
"For example, if you update the networks requiring a change in all nodes with "
"a linux bridge agent, assuming you have infra nodes named **infra01**, "
"**infra02**, and **infra03**, run:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:83
msgid "Then update the neutron configuration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:91
msgid "Then update your compute nodes if necessary."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:95
msgid "Remove provider bridges from OpenStack"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:97
msgid ""
"Similar to adding a provider network, the removal process uses the same "
"procedure but in a reversed order. The Neutron ports will need to be "
"removed, prior to the removal of the OpenStack-Ansible configuration."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:101
msgid "Unassign all Neutron floating IPs:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:105
msgid "Export the Neutron network that is about to be removed as single UUID."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:119
msgid "Remove all Neutron ports from the instances:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:129
msgid "Remove Neutron router ports and DHCP agents:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:145
msgid "Remove the Neutron network:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:152
msgid ""
"Remove the provider network from the ``provider_networks`` configuration of "
"the OpenStack-Ansible configuration ``/etc/openstack_deploy/"
"openstack_user_config.yml`` and re-run the following playbooks:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:166
msgid "Restart a Networking agent container"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:168
msgid ""
"Under some circumstances, configuration or temporary issues, one specific or "
"all neutron agents container need to be restarted."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:171
msgid "This can be accomplished with multiple commands:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:173
msgid "Example of rebooting still accessible containers."
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:175
msgid ""
"This example will issue a reboot to the container named with "
"``neutron_agents_container_hostname_name`` from inside:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:182
msgid "Example of rebooting one container at a time, 60 seconds apart:"
msgstr ""
#: ../../source/admin/openstack-operations/managing-networks.rst:188
msgid ""
"If the container does not respond, it can be restarted from the physical "
"network host:"
msgstr ""
#: ../../source/admin/openstack-operations/network-service.rst:2
msgid "Configure your first networks"
msgstr ""
#: ../../source/admin/openstack-operations/network-service.rst:4
msgid ""
"A newly deployed OpenStack-Ansible has no networks by default. If you need "
"to add networks, you can use the OpenStack CLI, or you can use the Ansible "
"modules for it."
msgstr ""
#: ../../source/admin/openstack-operations/network-service.rst:8
msgid ""
"An example on how to provision networks is in the `OpenStack-Ansible plugins "
"`_ repository, "
"where you can use the openstack_resources role:"
msgstr ""
#: ../../source/admin/openstack-operations/network-service.rst:11
msgid ""
"Define the variable openstack_resources_network according to the structure "
"in the role `defaults `"
msgstr ""
#: ../../source/admin/openstack-operations/network-service.rst:14
msgid ""
"Run the playbook openstack.osa.openstack_resources with the tag network-"
"resources:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:2
msgid "Check your OpenStack-Ansible cloud"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:4
msgid ""
"This chapter goes through the verification steps for a basic operation of "
"the OpenStack API and dashboard, as an administrator."
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:9
msgid ""
"The utility container provides a CLI environment for additional "
"configuration and testing."
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:18
msgid "Source the ``admin`` project credentials:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:24
msgid "Run an OpenStack command that uses one or more APIs. For example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:47
msgid ""
"With a web browser, access the Dashboard using the external load balancer "
"domain name or IP address. This is defined by the "
"``external_lb_vip_address`` option in the ``/etc/openstack_deploy/"
"openstack_user_config.yml`` file. The dashboard uses HTTPS on port 443."
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:53
msgid ""
"Authenticate using the username ``admin`` and password defined by the "
"``keystone_auth_admin_password`` option in the ``/etc/openstack_deploy/"
"user_secrets.yml`` file."
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:57
msgid ""
"Run an OpenStack command to reveal all endpoints from your deployment. For "
"example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:107
msgid ""
"Run an OpenStack command to ensure all the compute services are working (the "
"output depends on your configuration) For example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:123
msgid ""
"Run an OpenStack command to ensure the networking services are working (the "
"output also depends on your configuration) For example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:141
msgid ""
"Run an OpenStack command to ensure the block storage services are working "
"(depends on your configuration). For example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:156
msgid ""
"Run an OpenStack command to ensure the image storage service is working "
"(depends on your uploaded images). For example:"
msgstr ""
#: ../../source/admin/openstack-operations/verify-deploy.rst:169
msgid ""
"Check the backend API health on your load balancer nodes. For example, if "
"using HAProxy, ensure no backend is marked as \"DOWN\":"
msgstr ""
#: ../../source/admin/scale-environment.rst:3
msgid "Scaling your environment"
msgstr ""
#: ../../source/admin/scale-environment.rst:5
msgid ""
"This chapter is about scaling your environment using OpenStack-Ansible."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:4
msgid "Add a compute host"
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:6
msgid ""
"Use the following procedure to add a compute host to an operational cluster."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:9
msgid ""
"Configure the host as a target host. See the :deploy_guide:`target hosts "
"configuration section ` of the deploy guide for more "
"information."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:13
msgid ""
"Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file and add "
"the host to the ``compute_hosts`` stanza."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:16
msgid "If necessary, also modify the ``used_ips`` stanza."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:18
msgid ""
"If the cluster is utilizing Telemetry/Metering (ceilometer), edit the ``/etc/"
"openstack_deploy/conf.d/ceilometer.yml`` file and add the host to the "
"``metering-compute_hosts`` stanza."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:22
msgid ""
"Run the following commands to add the host. Replace ``NEW_HOST_NAME`` with "
"the name of the new host."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:32
msgid ""
"Alternatively you can try using new compute nodes deployment script ``/opt/"
"openstack-ansible/scripts/add-compute.sh``."
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:35
msgid ""
"You can provide this script with extra tasks that will be executed before or "
"right after OpenStack-Ansible roles. To do so you should set environment "
"variables ``PRE_OSA_TASKS`` or ``POST_OSA_TASKS`` with plays to run devided "
"with semicolon:"
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:46
msgid "Test new compute nodes"
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:48
msgid ""
"After creating a new node, test that the node runs correctly by launching an "
"instance on the new node:"
msgstr ""
#: ../../source/admin/scale-environment/add-compute-host.rst:57
msgid ""
"Ensure that the new instance can respond to a networking connection test "
"through the :command:`ping` command. Log in to your monitoring system, and "
"verify that the monitors return a green signal for the new node."
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:2
msgid "Add a new infrastructure host"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:4
msgid ""
"While three infrastructure hosts are recommended, if further hosts are "
"needed in an environment, it is possible to create additional nodes."
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:9
msgid ""
"Make sure you back up your current OpenStack environment before adding any "
"new nodes. See :ref:`backup-restore` for more information."
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:13
msgid ""
"Add the node to the ``infra_hosts`` stanza of the ``/etc/openstack_deploy/"
"openstack_user_config.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:23
msgid "Change to playbook folder on the deployment host:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:29
msgid ""
"To prepare new hosts and deploy containers on them run ``setup-hosts.yml``: "
"playbook with the ``limit`` argument."
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:36
msgid ""
"In case you're relying on ``/etc/hosts`` content, you should also update it "
"for all hosts:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:42
msgid ""
"Next we need to expand Galera/RabbitMQ clusters, which is done during "
"``setup-infrastructure.yml``. So we will run this playbook without limits:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:47
msgid ""
"Make sure that containers from new infra host *does not* appear in inventory "
"as first one for groups ``galera_all``, ``rabbitmq_all`` and ``repo_all``. "
"You can verify that with ad-hoc commands:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:61
msgid ""
"Once infrastructure playboks are done, it's turn of OpenStack services to be "
"deployed. Most of the services are fine to be ran with limits, but some, "
"like keystone, are not. So we run keystone playbook separately from all "
"others:"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:71
msgid "Test new infra nodes"
msgstr ""
#: ../../source/admin/scale-environment/add-new-infrastructure-host.rst:73
msgid ""
"After creating a new infra node, test that the node runs correctly by "
"launching a new instance. Ensure that the new node can respond to a "
"networking connection test through the :command:`ping` command. Log in to "
"your monitoring system, and verify that the monitors return a green signal "
"for the new node."
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:2
msgid "Destroying containers"
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:4
msgid "To destroy a container, execute the following:"
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:12
msgid "You will be asked two questions:"
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:14
msgid ""
"Are you sure you want to destroy the LXC containers? Are you sure you want "
"to destroy the LXC container data?"
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:17
msgid ""
"The first will just remove the container but leave the data in the bind "
"mounts and logs. The second will remove the data in the bind mounts and logs "
"too."
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:21
msgid ""
"If you remove the containers and data for the entire galera_server container "
"group you will lose all your databases! Also, if you destroy the first "
"container in many host groups you will lose other important items like "
"certificates, keys, etc. Be sure that you understand what you're doing when "
"using this tool."
msgstr ""
#: ../../source/admin/scale-environment/destroying-containers.rst:26
msgid "To create the containers again, execute the following:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:2
msgid "Recover a compute host failure"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:4
msgid ""
"The following procedure addresses Compute node failure if shared storage is "
"used."
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:9
msgid ""
"If shared storage is not used, data can be copied from the ``/var/lib/nova/"
"instances`` directory on the failed Compute node ``${FAILED_NODE}`` to "
"another node ``${RECEIVING_NODE}``\\ before performing the following "
"procedure. Please note this method is not supported."
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:15
msgid "Re-launch all instances on the failed node."
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:17
msgid "Invoke the MariaDB command line tool."
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:19
msgid "Generate a list of instance UUIDs hosted on the failed node:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:25
msgid "Set instances on the failed node to be hosted on a different node:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:32
msgid ""
"Reboot each instance on the failed node listed in the previous query to "
"regenerate the XML files:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:39
msgid ""
"Find the volumes to check the instance has successfully booted and is at the "
"login:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:50
msgid ""
"If rows are found, detach and re-attach the volumes using the values listed "
"in the previous query:"
msgstr ""
#: ../../source/admin/scale-environment/recover-compute-host-failure.rst:58
msgid ""
"Rebuild or replace the failed node as described in :ref:`add-compute-host`."
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:2
msgid "Remove a compute host"
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:4
msgid ""
"The `OpenStack-Ansible Operator Tooling `_ repository contains a playbook for removing a "
"compute host from an OpenStack-Ansible environment. To remove a compute "
"host, follow the below procedure."
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:11
msgid ""
"This guide describes how to remove a compute node from an OpenStack-Ansible "
"environment completely. Perform these steps with caution, as the compute "
"node will no longer be in service after the steps have been completed. This "
"guide assumes that all data and instances have been properly migrated."
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:16
msgid ""
"Disable all OpenStack services running on the compute node. This can "
"include, but is not limited to, the ``nova-compute`` service and the neutron "
"agent service:"
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:22
msgid "Ensure this step is performed first."
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:30
msgid ""
"Clone the ``openstack-ansible-ops`` repository to your deployment host:"
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:37
msgid ""
"Run the ``remove_compute_node.yml`` Ansible playbook with the "
"``host_to_be_removed`` user variable set:"
msgstr ""
#: ../../source/admin/scale-environment/remove-compute-host.rst:46
msgid ""
"After the playbook completes, remove the compute node from the OpenStack-"
"Ansible configuration file in ``/etc/openstack_deploy/openstack_user_config."
"yml``."
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:2
msgid "Replacing failed hardware"
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:4
msgid ""
"It is essential to plan and know how to replace failed hardware in your "
"cluster without compromising your cloud environment."
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:7
msgid "Consider the following to help establish a hardware replacement plan:"
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:9
msgid "What type of node am I replacing hardware on?"
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:10
msgid ""
"Can the hardware replacement be done without the host going down? For "
"example, a single disk in a RAID-10."
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:12
msgid ""
"If the host DOES have to be brought down for the hardware replacement, how "
"should the resources on that host be handled?"
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:15
msgid ""
"If you have a Compute (nova) host that has a disk failure on a RAID-10, you "
"can swap the failed disk without powering the host down. On the other hand, "
"if the RAM has failed, you would have to power the host down. Having a plan "
"in place for how you will manage these types of events is a vital part of "
"maintaining your OpenStack environment."
msgstr ""
#: ../../source/admin/scale-environment/replacing-failed-hardware.rst:21
msgid ""
"For a Compute host, shut down instances on the host before it goes down. For "
"a Block Storage (cinder) host using non-redundant storage, shut down any "
"instances with volumes attached that require that mount point. Unmount the "
"drive within your operating system and re-mount the drive once the Block "
"Storage host is back online."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:3
msgid "Scaling MariaDB and RabbitMQ"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:5
msgid "Contents"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:7
msgid ""
"OpenStack is a cloud computing platform that is designed to be highly "
"scalable. However, even though OpenStack is designed to be scalable, there "
"are a few potential bottlenecks that can occur in large deployments. These "
"bottlenecks typically involve the performance and throughput of RabbitMQ and "
"MariaDB clusters."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:13
msgid ""
"RabbitMQ is a message broker that is used to decouple different components "
"of OpenStack. MariaDB is a database that is used to store data for OpenStack."
" If these two components are not performing well, it can have a negative "
"impact on the performance of the entire OpenStack deployment."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:18
msgid ""
"There are a number of different methodologies that can be used to improve "
"the performance of RabbitMQ and MariaDB clusters. These methodologies "
"include scaling up the clusters, using a different message broker or "
"database, or optimizing the configuration of the clusters."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:23
msgid ""
"In this series of articles, will be discussed the potential bottlenecks that "
"can occur in large OpenStack deployments and ways to scale up deployments to "
"improve the performance of RabbitMQ and MariaDB clusters."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:29
msgid ""
"Examples provided in this documentation were made on OpenStack 2023.1 "
"(Antelope). It is possible to achieve the same flows in earlier releases, "
"but some extra steps or slightly different configurations might be required."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:36
msgid "Most Common Deployment"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:38
msgid ""
"Before talking about ways on how to improve things, let’s quickly describe "
"“starting point”, to understand what we’re dealing with at the starting "
"point."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:41
msgid ""
"The most common OpenStack-Ansible deployment design is three control nodes, "
"each one is running all OpenStack API services along with supporting "
"infrastructure, like MariaDB and RabbitMQ clusters. This is a good starting "
"point for small to medium-sized deployments. However, as the deployment "
"grows, you may start to experience performance problems. Typically "
"communication between services and MariaDB/RabbitMQ looks like this:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:50
msgid "**MariaDB**"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:52
msgid ""
"As you might see on the diagram, all connections to MariaDB come through the "
"HAProxy which has Internal Virtual IP (VIP). OpenStack-Ansible does "
"configure the Galera cluster for MariaDB, which is a multi-master "
"replication system. Although you can issue any request to any member of the "
"cluster, all write requests will be passed to the current “primary” instance "
"creating more internal traffic and raising the amount of work each instance "
"should do. So it is recommended to pass write requests only to the “primary” "
"instance."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:60
msgid ""
"However HAProxy is not capable of balancing MariaDB queries at an "
"application level (L7 of OSI model), to separate read and write requests, so "
"we have to balance TCP streams (L3) and pass all traffic without any "
"separation to the current “primary” node in the Galera cluster, which "
"creates a potential bottleneck."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:66
msgid "**RabbitMQ**"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:68
msgid ""
"RabbitMQ is clustered differently. We supply IP addresses of all cluster "
"members to clients and it’s up to the client to decide which backend it will "
"use for interaction. Only RabbitMQ management UI is balanced through "
"HAProxy, so the connection of clients to queues does not depend on HAProxy "
"in any way."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:74
msgid ""
"Though usage of HA queues and even quorum queues makes all messages and "
"queues to be mirrored to all or several cluster members. While quorum queues "
"show way better performance, they still suffer from clustering traffic which "
"still becomes a problem at a certain scale."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:82
msgid "Option 1: Independent clusters per service"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:84
msgid ""
"With this approach, you might provide the most loaded services, like Nova or "
"Neutron, their standalone MariaDB and RabbitMQ clusters. These new clusters "
"might reside on a separate hardware."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:88
msgid ""
"In the example below we assume that only Neutron is being reconfigured to "
"use the new standalone cluster, while other services remain sharing the "
"already existing one. So Neutron connectivity will look like this:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:94
msgid ""
"As you might have noticed, we still consume the same HAProxy instance for "
"MariaDB balancing to the new infra cluster."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:97
msgid ""
"Next, we will describe how to configure such a stack and execute the service "
"transition to this new layout."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:101
msgid "Setup of new MariaDB and RabbitMQ clusters"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:103
msgid ""
"To configure such a layout and migrate Neutron using it with OpenStack-"
"Ansible you need to follow these steps:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:108
msgid ""
"You can reference the following documentation for a deeper understanding of "
"how env.d and conf.d files should be constructed: :ref:`inventory-in-depth`"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:111
msgid ""
"Define new groups for RabbitMQ and MariaDB. For that, you can create files "
"with the following content: ``/etc/openstack_deploy/env.d/galera-neutron."
"yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:152
msgid "``/etc/openstack_deploy/env.d/rabbit-neutron.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:183
msgid ""
"Map your new neutron-infra hosts to these new groups. To add to your "
"``openstack_user_config.yml`` the following content:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:197
msgid ""
"Define some specific configurations for newly created groups and balance "
"them:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:200
msgid "MariaDB"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:202
msgid "In file ``/etc/openstack_deploy/group_vars/neutron_galera.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:210
msgid "In file ``/etc/openstack_deploy/group_vars/galera.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:216
msgid ""
"Move `galera_root_password` from ``/etc/openstack_deploy/user_secrets.yml`` "
"to ``/etc/openstack_deploy/group_vars/galera.yml``"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:219
#: ../../source/admin/upgrades/distribution-upgrades.rst:120
msgid "RabbitMQ"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:221
msgid "In file ``/etc/openstack_deploy/group_vars/neutron_rabbitmq.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:228
msgid "In file ``/etc/openstack_deploy/group_vars/rabbitmq.yml``"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:234
msgid "HAProxy"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:236
msgid ""
"In ``/etc/openstack_deploy/user_variables.yml`` define extra service for "
"MariaDB:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:264
msgid ""
"Prepare new infra hosts and create containers on them. For that, run the "
"command:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:271
msgid "Deploy clusters:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:273
msgid "MariaDB:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:279
msgid "RabbitMQ:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:286
msgid "Migrating the service to use new clusters"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:288
msgid ""
"While it’s relatively easy to start using the new RabbitMQ cluster for the "
"service, migration of the database is slightly tricky and will include some "
"downtime."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:292
msgid ""
"First, we need to tell Neutron that from now on, the MariaDB database for "
"the service is listening on a different port. So you should add the "
"following override to your ``user_variables.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:300
msgid ""
"Now let’s prepare the destination database: create the database itself along "
"with required users and provide them permissions to interact with the "
"database. For that, we will run the neutron role with a common-db tag and "
"limit execution to the neutron_server group only. You can use the following "
"command for that:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:309
msgid ""
"Once we have a database prepared, we need to disable HAProxy backends that "
"proxy traffic to the API of the service in order to prevent any user or "
"service actions with it."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:313
msgid ""
"For that, we use a small custom playbook. Let’s name it ``haproxy_backends."
"yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:332
msgid "We run it as follows:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:338
msgid "No, we can stop the API service for Neutron:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:344
msgid ""
"And run a backup/restore of the MariaDB database for the service. For this "
"purpose, we will use another small playbook, that we name as "
"``mysql_backup_restore.yml`` with the following content:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:372
msgid "Now let’s run the playbook we’ve just created:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:380
msgid ""
"The playbook above is not idempotent as it will override database content on "
"the destination hosts."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:383
msgid ""
"Once the database content is in place, we can now re-configure the service "
"using the playbook."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:386
msgid ""
"It will not only tell Neutron to use the new database but also will switch "
"it to using the new RabbitMQ cluster as well and re-enable the service in "
"HAProxy."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:390
msgid "For that to happen we should run the following command:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:396
msgid ""
"After the playbook has finished, neutron services will be started and "
"configured to use new clusters."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:402
msgid "Option 2: Dedicated hardware for clusters"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:404
msgid ""
"This option will describe how to move current MariaDB and RabbitMQ clusters "
"to standalone nodes. This approach can be used to offload control-planes and "
"provide dedicated resources for clusters."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:410
msgid ""
"While it’s quite straightforward to establish the architecture above from "
"the very beginning of the deployment, flawless migration of the existing "
"deployment to such a setup is more tricky, as you need to migrate running "
"clusters to the new hardware. Since we will be performing moves one-by-one, "
"to preserve at least two active cluster members, the steps below should be "
"repeated for the other two members."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:418
msgid "Migrating MariaDB to the new hardware"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:420
msgid ""
"The first thing to do is to list current members of the MariaDB cluster. For "
"that, you can issue the following ad-hoc command:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:436
msgid ""
"Unless overridden, the first host in the group is considered as a "
"“bootstrap” one. This bootstrap host should be migrated last to avoid "
"unnecessary failovers, so it is recommended to start the migration of hosts "
"to the new hardware from the last one to the first one in the output."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:441
msgid ""
"Once we’ve figured out the execution order, it’s time for a step-by-step "
"guide."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:443
msgid "Remove the last container in the group using the following playbook:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:449
msgid "Clean up the removed container from the inventory:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:455
msgid "Re-configure ``openstack_user_config`` to create a new container."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:457
msgid ""
"Assuming, you currently have a config like the one below in your "
"``openstack_user_config.yml``:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:472
msgid "Convert it to something like this:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:496
msgid ""
"In the example above we de-couple each service that is part of the `shared-"
"infra_hosts` and define them separately, along with providing MariaDB its "
"new destination host."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:500
msgid "Create the container on the new infra node:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:508
msgid ""
"New infra hosts should be prepared before this step (i.e., by running "
"``setup-hosts.yml`` playbook against them)."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:511
msgid "Install MariaDB to this new container and add it to the cluster:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:517
msgid ""
"Once the playbook is finished, you can ensure that the cluster is in the "
"**Synced** state and has proper cluster_size with the following ad-hoc:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:524
msgid ""
"If the cluster is healthy, repeat steps 1-6 for the rest instances, "
"including the “bootstrap” one."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:528
msgid "Migrating RabbitMQ to the new hardware"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:530
msgid ""
"The process of RabbitMQ migration will be pretty much the same as MariaDB "
"with one exception – we need to preserve the same IP addresses for "
"containers when moving them to the new hardware. Otherwise, we would need to "
"re-configure all services (like cinder, nova, neutron, etc.) that rely on "
"RabbitMQ as well, as contrary to MariaDB which is balanced through HAProxy, "
"it’s a client who decides to which RabbitMQ backend it will connect."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:537
msgid "Thus, we also don’t care about the order of migration."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:539
msgid ""
"Since we need to preserve an IP address, let’s collect this data before "
"taking any actions against the current setup:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:549
msgid ""
"Before dropping the RabbitMQ container, it’s worth transitioning the "
"RabbitMQ instance to the Maintenance mode, so it could offload its "
"responsibilities to other cluster members and close connections to clients "
"properly. You can use the following ad-hoc for that:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:561
msgid "Now we can proceed with container removal:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:567
msgid "And remove it from the inventory:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:573
msgid ""
"Now you need to re-configure ``openstack_user_config`` similar to how it was "
"done for MariaDB. The resulting record at this stage for RabbitMQ should "
"look like this:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:589
msgid "Ensure that you don’t have more generic shared-infra_hosts defined."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:591
msgid ""
"Now we need to manually re-generate the inventory and ensure that a new "
"record was mapped to our infra01:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:606
msgid ""
"As you might see from the output above, a record for the new container has "
"been generated and assigned correctly to the infra01 host. Though this "
"container has a new IP address, we need to preserve it. So we manually "
"replaced the new IP with the old one in the inventory file and ensured it’s "
"the proper one now:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:620
msgid "Now you can proceed with container creation:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:626
msgid ""
"And install RabbitMQ to the new container and ensure it’s part of the "
"cluster:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:632
msgid ""
"Once the cluster is re-established, it’s worth to clean-up cluster status "
"with regards to the old container name still being considered as “Disk "
"Node”, since the container name has changed:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:642
msgid ""
"You can take the cluster node name to remove from the output at step two."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:644
msgid "Repeat the steps above for the rest of the instances."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:649
msgid "Option 3: Growing Clusters Horizontally"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:651
msgid ""
"This option is by far the least popular despite being very straightforward, "
"as it has a pretty narrowed use case when it makes sense to scale this way."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:655
msgid ""
"Though, to preserve quorum you should always have an odd number of cluster "
"members or be prepared to provide extra configuration if using an even "
"number of members."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:662
msgid "Adding new members to the MariaDB Galera cluster"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:664
msgid ""
"Horizontal scaling of the MariaDB cluster makes sense only when you’re using "
"an L7 balancer which can work properly with Galera clusters (like ProxySQL "
"or MaxScale) instead of default HAProxy and the weak point of the current "
"cluster is read performance rather than writes."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:669
msgid "Extending the cluster is quite trivial. For that, you need to:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:671
msgid ""
"Add another destination host in ``openstack_user_config`` for database_hosts:"
""
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:687
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:729
msgid "Create new containers on the destination host:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:693
msgid "Deploy MariaDB there and add it to the cluster:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:699
msgid "Ensure the cluster is healthy with the following ad-hoc:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:706
msgid "Adding new members to the RabbitMQ cluster"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:708
msgid ""
"Growing the RabbitMQ cluster vertically makes sense mostly when you don’t "
"have HA queues or Quorum queues enabled."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:711
msgid ""
"To add more members to the RabbitMQ cluster execute the following steps:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:713
msgid ""
"Add another destination host in ``openstack_user_config`` for mq_hosts:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:735
msgid "Deploy RabbitMQ on the new host and enroll it to the cluster:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:741
msgid ""
"Once a new RabbitMQ container is deployed, you need to make all services "
"aware of its existence by re-configuring them. For that, you can either run "
"individual service playbooks, like this:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:749
msgid ""
"Where is a service name, like neutron, nova, cinder, etc. Another "
"way around would be to fire up setup-openstack.yml but it will take quite "
"some time to execute."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:756
msgid "Conclusion"
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:758
msgid ""
"As you might see, OpenStack-Ansible is flexible enough to let you scale a "
"deployment in many different ways."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:761
msgid ""
"But which one is right for you? Well, it all depends on the situation you "
"find yourself in."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:764
msgid ""
"In case your deployment has grown to a point where RabbitMQ/MariaDB clusters "
"can’t simply deal with the load these clusters create regardless of the "
"hardware beneath them – you should use option one (:ref:`scaling-osa-one`) "
"and make independent clusters per service."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:769
msgid ""
"This option can be also recommended to improve deployment resilience – in "
"case of cluster failure this will affect just one service rather than each "
"and everyone in a common deployment use case. Another quite popular "
"variation of this option can be having just standalone MariaDB/RabbitMQ "
"instances per service, without any clusterization. The benefit of such a "
"setup is very fast recovery, especially when talking about RabbitMQ."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:776
msgid ""
"In case you are the owner of quite modest hardware specs for controllers, "
"you might pay more attention to option two (:ref:`scaling-osa-one`). This "
"way you can offload your controllers by moving heavy applications, like "
"MariaDB/RabbitMQ, to some other hardware that can also have relatively "
"modest specs."
msgstr ""
#: ../../source/admin/scale-environment/scaling-mariadb-rabbitmq.rst:781
msgid ""
"Option three (:ref:`scaling-osa-three`) can be used if your deployment meets "
"the requirements that were written above (ie. not using HA queues or using "
"ProxySQL for balancing) and usually should be considered when you’ve "
"outgrown option one as well."
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:2
msgid "Accessibility for multi-region Object Storage"
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:4
msgid ""
"In multi-region Object Storage utilizing separate database backends, objects "
"are retrievable from an alternate location if the ``default_project_id`` for "
"a user in the keystone database is the same across each database backend."
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:11
msgid ""
"It is recommended to perform the following steps before a failure occurs to "
"avoid having to dump and restore the database."
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:14
msgid ""
"If a failure does occur, follow these steps to restore the database from the "
"Primary (failed) Region:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:17
msgid ""
"Record the Primary Region output of the ``default_project_id`` for the "
"specified user from the user table in the keystone database:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:22
msgid "The user is ``admin`` in this example."
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:35
msgid ""
"Record the Secondary Region output of the ``default_project_id`` for the "
"specified user from the user table in the keystone database:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:50
msgid ""
"In the Secondary Region, update the references to the ``project_id`` to "
"match the ID from the Primary Region:"
msgstr ""
#: ../../source/admin/scale-environment/scaling-swift.rst:70
msgid ""
"The user in the Secondary Region now has access to objects PUT in the "
"Primary Region. The Secondary Region can PUT objects accessible by the user "
"in the Primary Region."
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:2
msgid "Shutting down the Block Storage host"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:4
msgid "If a LVM backed Block Storage host needs to be shut down:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:6
msgid "Disable the ``cinder-volume`` service:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:14
msgid "List all instances with Block Storage volumes attached:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:21
msgid "Shut down the instances:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:27
msgid "Verify the instances are shutdown:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:33
msgid "Shut down the Block Storage host:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:39
msgid ""
"Replace the failed hardware and validate the new hardware is functioning."
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:41
msgid "Enable the ``cinder-volume`` service:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:47
msgid "Verify the services on the host are reconnected to the environment:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-block-storage-host.rst:53
msgid "Start your instances and confirm all of the instances are started:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:2
msgid "Shutting down the Compute host"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:4
msgid "If a Compute host needs to be shut down:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:6
msgid "Disable the ``nova-compute`` binary:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:12
msgid "List all running instances on the Compute host:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:19
msgid "Use SSH to connect to the Compute host."
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:21
msgid "Confirm all instances are down:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:27
msgid "Shut down the Compute host:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:33
msgid ""
"Once the Compute host comes back online, confirm everything is in working "
"order and start the instances on the host. For example:"
msgstr ""
#: ../../source/admin/scale-environment/shutting-down-compute-host.rst:42
msgid "Enable the ``nova-compute`` service in the environment:"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:5
msgid "Using Ansible tags"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:7
msgid ""
"In Ansible, a tag is a label you can assign to tasks, allowing you to run "
"only the tasks you need instead of the whole playbook. This is especially "
"handy in large playbooks — for example, if you have 20–30 tasks but just "
"need to restart a service or make some changes in configuration, you can tag "
"those tasks and run them individually."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:13
msgid "The following tags are available in OpenStack Ansible:"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:15
msgid "``common-mq``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:16
msgid "``common-service``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:17
msgid "``common-db``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:18
msgid "``pki``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:19
msgid "``post-install``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:20
msgid "``haproxy-service-config``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:21
msgid "``ceph``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:22
msgid "``uwsgi``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:23
msgid "``systemd-service``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:24
msgid "``-install``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:25
msgid "``-config``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:26
msgid "``-key``"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:29
msgid "common-mq"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:31
msgid ""
"Handles tasks for setting up and configuring RabbitMQ. Use this tag when you "
"need to reconfigure virtual hosts, users, or their privileges without "
"affecting the rest of the deployment."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:35
#: ../../source/admin/scale-environment/use-ansible-tags.rst:47
#: ../../source/admin/scale-environment/use-ansible-tags.rst:60
#: ../../source/admin/scale-environment/use-ansible-tags.rst:72
#: ../../source/admin/scale-environment/use-ansible-tags.rst:86
#: ../../source/admin/scale-environment/use-ansible-tags.rst:98
#: ../../source/admin/scale-environment/use-ansible-tags.rst:111
#: ../../source/admin/scale-environment/use-ansible-tags.rst:123
#: ../../source/admin/scale-environment/use-ansible-tags.rst:136
#: ../../source/admin/scale-environment/use-ansible-tags.rst:152
#: ../../source/admin/scale-environment/use-ansible-tags.rst:168
#: ../../source/admin/scale-environment/use-ansible-tags.rst:182
msgid "Example:"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:42
msgid "common-service"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:44
msgid ""
"Manages service configuration inside Keystone, such as service catalog "
"entries, service user existence, and user privileges."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:54
msgid "common-db"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:56
msgid ""
"Creates and configures databases, including user creation, and permission "
"assignments. Run this tag if database credential or permissions need to be "
"refreshed or corrected."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:67
msgid "pki"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:69
msgid ""
"Manages certificates and public key infrastructure. Use it when renewing, "
"replacing, or troubleshooting SSL/TLS certificates."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:79
msgid "post-install"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:81
msgid ""
"Runs tasks after the main installation and configuration are complete. This "
"tag is used for final adjustments, applying changes in configuration files, "
"and validation checks. Run this tag when you’ve made changes that require "
"only applying updated configuration."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:93
msgid "haproxy-service-config"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:95
msgid ""
"Configures HAProxy for routing traffic between services. Use this tag if "
"HAProxy settings change or a new service backend is added."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:105
msgid "ceph"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:107
msgid ""
"Deploys and configures Ceph clients and related components. Use this tag for "
"tasks such as adding new monitors or upgrading Ceph clients to a different "
"version, as well as other Ceph-related configuration updates."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:118
msgid "uwsgi"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:120
msgid ""
"Sets up and configures uWSGI processes. Useful when adjusting process "
"counts, sockets, or performance tuning."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:130
msgid "systemd-service"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:132
msgid ""
"Manages systemd unit components, ensuring they are configured as expected "
"and allowing overrides to be applied. Use this tag when you need to adjust "
"unit files or restart services in a controlled way."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:143
msgid "-install"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:145
msgid ""
"Installs a specific OpenStack service (replace ```` with the "
"service name). A tag including the word ``install`` handles only software "
"installation tasks — it deploys the necessary packages and binaries on the "
"target host. Use this tag when you only need to install or reinstall service "
"software without changing its configuration or running it."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:159
msgid "-config"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:161
msgid ""
"Configures a specific OpenStack service (replace with the service "
"name). This tag applies configuration files, directories, and service-"
"specific settings. It usually covers a broad set of tasks beyond post-"
"install, and may include systemd-service, pki, common-mq or common-db "
"service tags. Run this tag when applying updated configurations to a service "
"that is already installed."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:175
msgid "-key"
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:177
msgid ""
"This tag is used to generate and distribute SSH certificates, issued through "
"``openstack.osa.ssh_keypairs`` role."
msgstr ""
#: ../../source/admin/scale-environment/use-ansible-tags.rst:180
msgid "This is currently in-use by Keystone, Nova and Swift roles."
msgstr ""
#: ../../source/admin/troubleshooting.rst:3
msgid "Troubleshooting"
msgstr ""
#: ../../source/admin/troubleshooting.rst:5
msgid ""
"This chapter is intended to help troubleshoot and resolve operational issues "
"in an OpenStack-Ansible deployment."
msgstr ""
#: ../../source/admin/troubleshooting.rst:9
msgid "Networking"
msgstr ""
#: ../../source/admin/troubleshooting.rst:11
msgid ""
"This section focuses on troubleshooting general host-to-host communication "
"required for the OpenStack control plane to function properly."
msgstr ""
#: ../../source/admin/troubleshooting.rst:14
msgid "This does not cover any networking related to instance connectivity."
msgstr ""
#: ../../source/admin/troubleshooting.rst:16
msgid ""
"These instructions assume an OpenStack-Ansible installation using LXC "
"containers, VXLAN overlay for ML2/OVS and Geneve overlay for the ML2/OVN "
"drivers."
msgstr ""
#: ../../source/admin/troubleshooting.rst:20
msgid "Network List"
msgstr ""
#: ../../source/admin/troubleshooting.rst:22
msgid "``HOST_NET`` (Physical Host Management and Access to Internet)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:23
msgid "``MANAGEMENT_NET`` (LXC container network used OpenStack Services)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:24
msgid ""
"``OVERLAY_NET`` (VXLAN overlay network for OVS, Geneve overlay network for "
"OVN)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:26
msgid "Useful network utilities and commands:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:40
msgid "Troubleshooting host-to-host traffic on HOST_NET"
msgstr ""
#: ../../source/admin/troubleshooting.rst:42
#: ../../source/admin/troubleshooting.rst:70
#: ../../source/admin/troubleshooting.rst:134
msgid "Perform the following checks:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:44
#: ../../source/admin/troubleshooting.rst:72
#: ../../source/admin/troubleshooting.rst:136
msgid "Check physical connectivity of hosts to physical network"
msgstr ""
#: ../../source/admin/troubleshooting.rst:45
#: ../../source/admin/troubleshooting.rst:73
#: ../../source/admin/troubleshooting.rst:137
msgid "Check interface bonding (if applicable)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:46
#: ../../source/admin/troubleshooting.rst:74
#: ../../source/admin/troubleshooting.rst:138
msgid ""
"Check VLAN configurations and any necessary trunking to edge ports on "
"physical switch"
msgstr ""
#: ../../source/admin/troubleshooting.rst:48
#: ../../source/admin/troubleshooting.rst:76
#: ../../source/admin/troubleshooting.rst:140
msgid ""
"Check VLAN configurations and any necessary trunking to uplink ports on "
"physical switches (if applicable)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:50
msgid ""
"Check that hosts are in the same IP subnet or have proper routing between "
"them"
msgstr ""
#: ../../source/admin/troubleshooting.rst:52
#: ../../source/admin/troubleshooting.rst:79
#: ../../source/admin/troubleshooting.rst:143
msgid ""
"Check there are no firewall (firewalld, ufw, etc.) rules applied to the "
"hosts that would deny traffic"
msgstr ""
#: ../../source/admin/troubleshooting.rst:55
msgid ""
"IP addresses should be applied to physical interface, bond interface, tagged "
"sub-interface, or in some cases the bridge interface:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:68
msgid "Troubleshooting host-to-host traffic on MANAGEMENT_NET"
msgstr ""
#: ../../source/admin/troubleshooting.rst:78
#: ../../source/admin/troubleshooting.rst:142
msgid ""
"Check that hosts are in the same subnet or have proper routing between them"
msgstr ""
#: ../../source/admin/troubleshooting.rst:81
msgid "Check to verify that physical interface is in the bridge"
msgstr ""
#: ../../source/admin/troubleshooting.rst:82
msgid "Check to verify that veth-pair end from container is in ``br-mgmt``"
msgstr ""
#: ../../source/admin/troubleshooting.rst:84
msgid "IP address should be applied to ``br-mgmt``:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:95
msgid "IP address should be applied to ``eth1`` inside the LXC container:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:106
msgid ""
"``br-mgmt`` should contain veth-pair ends from all containers and a physical "
"interface or tagged-subinterface:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:120
msgid "You can also use ip command to display bridges:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:132
msgid "Troubleshooting host-to-host traffic on OVERLAY_NET"
msgstr ""
#: ../../source/admin/troubleshooting.rst:145
msgid "Check to verify that physcial interface is in the bridge"
msgstr ""
#: ../../source/admin/troubleshooting.rst:146
msgid "Check to verify that veth-pair end from container is in ``br-vxlan``"
msgstr ""
#: ../../source/admin/troubleshooting.rst:148
msgid "IP address should be applied to ``br-vxlan``:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:160
msgid "Checking services"
msgstr ""
#: ../../source/admin/troubleshooting.rst:162
msgid ""
"You can check the status of an OpenStack service by accessing every "
"controller node and running the :command:`systemctl status `."
msgstr ""
#: ../../source/admin/troubleshooting.rst:165
msgid ""
"See the following links for additional information to verify OpenStack "
"services:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:168
msgid ""
"`Identity service (keystone) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:169
msgid ""
"`Image service (glance) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:170
msgid ""
"`Compute service (nova) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:171
msgid ""
"`Networking service (neutron) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:172
msgid ""
"`Block Storage service (cinder) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:173
msgid ""
"`Object Storage service (swift) `_"
msgstr ""
#: ../../source/admin/troubleshooting.rst:175
msgid "Some useful commands to manage LXC see :ref:`command-line-reference`."
msgstr ""
#: ../../source/admin/troubleshooting.rst:178
msgid "Restarting services"
msgstr ""
#: ../../source/admin/troubleshooting.rst:180
msgid ""
"Restart your OpenStack services by accessing every controller node. Some "
"OpenStack services will require restart from other nodes in your environment."
""
msgstr ""
#: ../../source/admin/troubleshooting.rst:183
msgid ""
"The following table lists the commands to restart an OpenStack service."
msgstr ""
#: ../../source/admin/troubleshooting.rst:185
msgid "Restarting OpenStack services"
msgstr ""
#: ../../source/admin/troubleshooting.rst:189
msgid "OpenStack service"
msgstr ""
#: ../../source/admin/troubleshooting.rst:190
msgid "Commands"
msgstr ""
#: ../../source/admin/troubleshooting.rst:192
msgid "Image service"
msgstr ""
#: ../../source/admin/troubleshooting.rst:197
msgid "Compute service (controller node)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:207
msgid "Compute service (compute node)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:212
msgid "Networking service (controller node, for OVS)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:221
msgid "Networking service (compute node, for OVS)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:226
msgid "Networking service (controller node, for OVN)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:233
msgid "Networking service (compute node, for OVN)"
msgstr ""
#: ../../source/admin/troubleshooting.rst:238
msgid "Block Storage service"
msgstr ""
#: ../../source/admin/troubleshooting.rst:246
msgid "Shared Filesystems service"
msgstr ""
#: ../../source/admin/troubleshooting.rst:254
msgid "Object Storage service"
msgstr ""
#: ../../source/admin/troubleshooting.rst:276
msgid "Troubleshooting instance connectivity issues"
msgstr ""
#: ../../source/admin/troubleshooting.rst:278
msgid ""
"This section will focus on troubleshooting general instances connectivity "
"communication. This does not cover any networking related to instance "
"connectivity. This is assuming a OpenStack-Ansible install using LXC "
"containers, VXLAN overlay for ML2/OVS and Geneve overlay for the ML2/OVN "
"driver."
msgstr ""
#: ../../source/admin/troubleshooting.rst:283
msgid "**Data flow example (for OVS)**"
msgstr ""
#: ../../source/admin/troubleshooting.rst:311
msgid "**Data flow example (for OVN)**"
msgstr ""
#: ../../source/admin/troubleshooting.rst:327
msgid "Preliminary troubleshooting questions to answer:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:329
msgid "Which compute node is hosting the instance in question?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:330
msgid "Which interface is used for provider network traffic?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:331
msgid "Which interface is used for VXLAN (Geneve) overlay?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:332
msgid "Is there connectivity issue ingress to the instance?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:333
msgid "Is there connectivity issue egress from the instance?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:334
msgid "What is the source address of the traffic?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:335
msgid "What is the destination address of the traffic?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:336
msgid "Is there a Neutron Router in play?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:337
msgid "Which network node (container) is the router hosted?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:338
msgid "What is the project network type?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:340
msgid "If VLAN:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:342
#: ../../source/admin/troubleshooting.rst:417
msgid ""
"Does physical interface show link and all VLANs properly trunked across "
"physical network?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:345
#: ../../source/admin/troubleshooting.rst:361
#: ../../source/admin/troubleshooting.rst:385
#: ../../source/admin/troubleshooting.rst:420
#: ../../source/admin/troubleshooting.rst:435
#: ../../source/admin/troubleshooting.rst:455
#: ../../source/admin/troubleshooting.rst:481
msgid "No:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:346
#: ../../source/admin/troubleshooting.rst:421
msgid ""
"Check cable, seating, physical switchport configuration, interface/bonding "
"configuration, and general network configuration. See general network "
"troubleshooting documentation."
msgstr ""
#: ../../source/admin/troubleshooting.rst:350
#: ../../source/admin/troubleshooting.rst:372
#: ../../source/admin/troubleshooting.rst:400
#: ../../source/admin/troubleshooting.rst:425
#: ../../source/admin/troubleshooting.rst:441
#: ../../source/admin/troubleshooting.rst:469
#: ../../source/admin/troubleshooting.rst:499
msgid "Yes:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:351
#: ../../source/admin/troubleshooting.rst:426
msgid "Good!"
msgstr ""
#: ../../source/admin/troubleshooting.rst:352
#: ../../source/admin/troubleshooting.rst:375
#: ../../source/admin/troubleshooting.rst:427
msgid "Continue!"
msgstr ""
#: ../../source/admin/troubleshooting.rst:356
#: ../../source/admin/troubleshooting.rst:431
msgid "Do not continue until physical network is properly configured."
msgstr ""
#: ../../source/admin/troubleshooting.rst:358
#: ../../source/admin/troubleshooting.rst:452
msgid ""
"Does the instance's IP address ping from network's DHCP namespace or other "
"instances in the same network?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:362
msgid ""
"Check nova console logs to see if the instance ever received its IP address "
"initially."
msgstr ""
#: ../../source/admin/troubleshooting.rst:364
#: ../../source/admin/troubleshooting.rst:458
#: ../../source/admin/troubleshooting.rst:488
msgid ""
"Check ``security-group-rules``, consider adding allow ICMP rule for testing."
msgstr ""
#: ../../source/admin/troubleshooting.rst:366
#: ../../source/admin/troubleshooting.rst:390
#: ../../source/admin/troubleshooting.rst:438
#: ../../source/admin/troubleshooting.rst:460
#: ../../source/admin/troubleshooting.rst:486
msgid ""
"Check that OVS bridges contain the proper interfaces on compute and network "
"nodes."
msgstr ""
#: ../../source/admin/troubleshooting.rst:368
#: ../../source/admin/troubleshooting.rst:462
msgid "Check Neutron DHCP agent logs."
msgstr ""
#: ../../source/admin/troubleshooting.rst:369
#: ../../source/admin/troubleshooting.rst:463
msgid "Check syslogs."
msgstr ""
#: ../../source/admin/troubleshooting.rst:370
#: ../../source/admin/troubleshooting.rst:464
#: ../../source/admin/troubleshooting.rst:483
msgid "Check Neutron Open vSwitch agent logs."
msgstr ""
#: ../../source/admin/troubleshooting.rst:373
#: ../../source/admin/troubleshooting.rst:470
msgid ""
"Good! This suggests that the instance received its IP address and can reach "
"local network resources."
msgstr ""
#: ../../source/admin/troubleshooting.rst:379
msgid ""
"Do not continue until instance has an IP address and can reach local network "
"resources like DHCP."
msgstr ""
#: ../../source/admin/troubleshooting.rst:382
#: ../../source/admin/troubleshooting.rst:478
msgid ""
"Does the instance's IP address ping from the gateway device (Neutron Router "
"namespace or another gateway device)?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:386
#: ../../source/admin/troubleshooting.rst:482
msgid "Check Neutron L3 agent logs (if applicable)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:387
msgid "Check Neutron Open vSwitch logs."
msgstr ""
#: ../../source/admin/troubleshooting.rst:388
#: ../../source/admin/troubleshooting.rst:484
msgid "Check physical interface mappings."
msgstr ""
#: ../../source/admin/troubleshooting.rst:389
#: ../../source/admin/troubleshooting.rst:485
msgid "Check Neutron router ports (if applicable)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:392
msgid ""
"Check ``security-group-rules``, consider adding allow ICMP rule for testing. "
"In case of using OVN check additionally:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:395
#: ../../source/admin/troubleshooting.rst:494
msgid "Check ovn-controller on all nodes."
msgstr ""
#: ../../source/admin/troubleshooting.rst:396
#: ../../source/admin/troubleshooting.rst:495
msgid "Verify ovn-northd is running and DBs are healthy."
msgstr ""
#: ../../source/admin/troubleshooting.rst:397
#: ../../source/admin/troubleshooting.rst:496
msgid "Ensure ovn-metadata-agent is active."
msgstr ""
#: ../../source/admin/troubleshooting.rst:398
#: ../../source/admin/troubleshooting.rst:497
msgid "Review logs for ovn-controller, ovn-northd."
msgstr ""
#: ../../source/admin/troubleshooting.rst:401
msgid ""
"Good! The instance can ping its intended gateway. The issue may be north of "
"the gateway or related to the provider network."
msgstr ""
#: ../../source/admin/troubleshooting.rst:404
msgid "Check \"gateway\" or host routes on the Neutron subnet."
msgstr ""
#: ../../source/admin/troubleshooting.rst:405
#: ../../source/admin/troubleshooting.rst:502
msgid "Check ``security-group-rules``, consider adding ICMP rule for testing."
msgstr ""
#: ../../source/admin/troubleshooting.rst:407
msgid "Check Floating IP associations (if applicable)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:408
#: ../../source/admin/troubleshooting.rst:505
msgid "Check Neutron Router external gateway information (if applicable)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:409
msgid "Check upstream routes, NATs or access-control-lists."
msgstr ""
#: ../../source/admin/troubleshooting.rst:413
msgid "Do not continue until the instance can reach its gateway."
msgstr ""
#: ../../source/admin/troubleshooting.rst:415
msgid "If VXLAN (Geneve):"
msgstr ""
#: ../../source/admin/troubleshooting.rst:433
msgid "Are VXLAN (Geneve) VTEP addresses able to ping each other?"
msgstr ""
#: ../../source/admin/troubleshooting.rst:436
msgid "Check ``br-vxlan`` interface on Compute and Network nodes."
msgstr ""
#: ../../source/admin/troubleshooting.rst:437
msgid "Check veth pairs between containers and Linux bridges on the host."
msgstr ""
#: ../../source/admin/troubleshooting.rst:442
msgid ""
"Check ml2 config file for local VXLAN (Geneve) IP and other VXLAN (Geneve) "
"configuration settings."
msgstr ""
#: ../../source/admin/troubleshooting.rst:444
msgid "Check VTEP learning method (multicast or l2population):"
msgstr ""
#: ../../source/admin/troubleshooting.rst:445
msgid ""
"If multicast, make sure the physical switches are properly allowing and "
"distributing multicast traffic."
msgstr ""
#: ../../source/admin/troubleshooting.rst:450
msgid ""
"Do not continue until VXLAN (Geneve) endpoints have reachability to each "
"other."
msgstr ""
#: ../../source/admin/troubleshooting.rst:456
msgid ""
"Check Nova console logs to see if the instance ever received its IP address "
"initially."
msgstr ""
#: ../../source/admin/troubleshooting.rst:465
msgid ""
"Check that Bridge Forwarding Database (fdb) contains the proper entries on "
"both the compute and Neutron agent container (``ovs-appctl fdb/show br-"
"int``)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:475
msgid ""
"Do not continue until instance has an IP address and can reach local network "
"resources."
msgstr ""
#: ../../source/admin/troubleshooting.rst:490
msgid ""
"Check that Bridge Forwarding Database (fdb) contains the proper entries on "
"both the compute and Neutron agent container (``ovs-appctl fdb/show br-"
"int``). In case of using OVN check additionally:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:500
msgid "Good! The instance can ping its intended gateway."
msgstr ""
#: ../../source/admin/troubleshooting.rst:501
msgid "Check gateway or host routes on the Neutron subnet."
msgstr ""
#: ../../source/admin/troubleshooting.rst:504
msgid "Check Neutron Floating IP associations (if applicable)."
msgstr ""
#: ../../source/admin/troubleshooting.rst:506
msgid "Check upstream routes, NATs or ``access-control-lists``."
msgstr ""
#: ../../source/admin/troubleshooting.rst:509
msgid "Diagnose Image service issues"
msgstr ""
#: ../../source/admin/troubleshooting.rst:511
msgid "The ``glance-api`` handles the API interactions and image store."
msgstr ""
#: ../../source/admin/troubleshooting.rst:513
msgid ""
"To troubleshoot problems or errors with the Image service, refer to :file:`/"
"var/log/glance-api.log` inside the glance api container."
msgstr ""
#: ../../source/admin/troubleshooting.rst:516
msgid ""
"You can also conduct the following activities which may generate logs to "
"help identity problems:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:519
msgid "Download an image to ensure that an image can be read from the store."
msgstr ""
#: ../../source/admin/troubleshooting.rst:520
msgid ""
"Upload an image to test whether the image is registering and writing to the "
"image store."
msgstr ""
#: ../../source/admin/troubleshooting.rst:522
msgid ""
"Run the ``openstack image list`` command to ensure that the API and registry "
"is working."
msgstr ""
#: ../../source/admin/troubleshooting.rst:525
msgid ""
"For an example and more information, see `Verify operation `_ and `Manage Images `_."
msgstr ""
#: ../../source/admin/troubleshooting.rst:531
msgid "Cached Ansible facts issues"
msgstr ""
#: ../../source/admin/troubleshooting.rst:533
msgid ""
"At the beginning of a playbook run, information about each host is gathered, "
"such as:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:536
msgid "Linux distribution"
msgstr ""
#: ../../source/admin/troubleshooting.rst:537
msgid "Kernel version"
msgstr ""
#: ../../source/admin/troubleshooting.rst:538
msgid "Network interfaces"
msgstr ""
#: ../../source/admin/troubleshooting.rst:540
msgid ""
"To improve performance, particularly in large deployments, you can cache "
"host facts and information."
msgstr ""
#: ../../source/admin/troubleshooting.rst:543
msgid ""
"OpenStack-Ansible enables fact caching by default. The facts are cached in "
"JSON files within ``/etc/openstack_deploy/ansible_facts``."
msgstr ""
#: ../../source/admin/troubleshooting.rst:546
msgid ""
"Fact caching can be disabled by running ``export ANSIBLE_CACHE_PLUGIN="
"memory``. To set this permanently, set this variable in ``/usr/local/bin/"
"openstack-ansible.rc``. Refer to the Ansible documentation on `fact "
"caching`_ for more details."
msgstr ""
#: ../../source/admin/troubleshooting.rst:555
msgid "Forcing regeneration of cached facts"
msgstr ""
#: ../../source/admin/troubleshooting.rst:557
msgid ""
"Cached facts may be incorrect if the host receives a kernel upgrade or new "
"network interfaces. Newly created bridges also disrupt cache facts."
msgstr ""
#: ../../source/admin/troubleshooting.rst:560
msgid ""
"This can lead to unexpected errors while running playbooks, and require "
"cached facts to be regenerated."
msgstr ""
#: ../../source/admin/troubleshooting.rst:563
msgid ""
"Run the following command to remove all currently cached facts for all hosts:"
""
msgstr ""
#: ../../source/admin/troubleshooting.rst:569
msgid "New facts will be gathered and cached during the next playbook run."
msgstr ""
#: ../../source/admin/troubleshooting.rst:571
msgid ""
"To clear facts for a single host, find its file within ``/etc/"
"openstack_deploy/ansible_facts/`` and remove it. Each host has a JSON file "
"that is named after its hostname. The facts for that host will be "
"regenerated on the next playbook run."
msgstr ""
#: ../../source/admin/troubleshooting.rst:577
msgid "Rebuilding Python Virtual Environments"
msgstr ""
#: ../../source/admin/troubleshooting.rst:579
msgid ""
"In certain situations, you may need to forcefully rebuild a service's Python "
"virtual environment. This can be required if the ``python_venv_build`` role "
"fails (for example, due to temporary package conflicts), or if you want to "
"reset the virtual environment after manual modifications."
msgstr ""
#: ../../source/admin/troubleshooting.rst:585
msgid "Two variables control the rebuild process:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:587
msgid ""
"``venv_rebuild`` — resets the virtual environment to its intended state "
"without rebuilding wheels. This is usually sufficient when the service "
"version has not changed and only the venv state needs to be restored."
msgstr ""
#: ../../source/admin/troubleshooting.rst:592
msgid ""
"``venv_wheels_rebuild`` — additionally forces a rebuild of the Python wheels."
" This may be required if the service version has changed or if its venv "
"requirements were modified."
msgstr ""
#: ../../source/admin/troubleshooting.rst:596
msgid ""
"To trigger a rebuild for a specific service, re-run its playbook with the "
"following environment variables:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:608
msgid "Container networking issues"
msgstr ""
#: ../../source/admin/troubleshooting.rst:610
msgid ""
"All LXC containers on the host have at least two virtual Ethernet interfaces:"
""
msgstr ""
#: ../../source/admin/troubleshooting.rst:612
msgid "`eth0` in the container connects to `lxcbr0` on the host"
msgstr ""
#: ../../source/admin/troubleshooting.rst:613
msgid "`eth1` in the container connects to `br-mgmt` on the host"
msgstr ""
#: ../../source/admin/troubleshooting.rst:617
msgid ""
"Some containers, such as ``cinder``, ``glance``, ``neutron_agents``, and "
"``swift_proxy`` have more than two interfaces to support their functions."
msgstr ""
#: ../../source/admin/troubleshooting.rst:622
msgid "Predictable interface naming"
msgstr ""
#: ../../source/admin/troubleshooting.rst:624
msgid ""
"On the host, all virtual Ethernet devices are named based on their container "
"as well as the name of the interface inside the container:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:631
msgid ""
"As an example, an all-in-one (AIO) build might provide a utility container "
"called `aio1_utility_container-d13b7132`. That container will have two "
"network interfaces: `d13b7132_eth0` and `d13b7132_eth1`."
msgstr ""
#: ../../source/admin/troubleshooting.rst:635
msgid ""
"Another option would be to use the LXC tools to retrieve information about "
"the utility container. For example:"
msgstr ""
#: ../../source/admin/troubleshooting.rst:660
msgid ""
"The ``Link:`` lines will show the network interfaces that are attached to "
"the utility container."
msgstr ""
#: ../../source/admin/troubleshooting.rst:664
msgid "Review container networking traffic"
msgstr ""
#: ../../source/admin/troubleshooting.rst:666
msgid ""
"To dump traffic on the ``br-mgmt`` bridge, use ``tcpdump`` to see all "
"communications between the various containers. To narrow the focus, run "
"``tcpdump`` only on the desired network interface of the containers."
msgstr ""
#: ../../source/admin/troubleshooting.rst:672
msgid "Restoring inventory from backup"
msgstr ""
#: ../../source/admin/troubleshooting.rst:674
msgid ""
"OpenStack-Ansible maintains a running archive of inventory. If a change has "
"been introduced into the system that has broken inventory or otherwise has "
"caused an unforseen issue, the inventory can be reverted to an early version."
" The backup file ``/etc/openstack_deploy/backup_openstack_inventory.tar`` "
"contains a set of timestamped inventories that can be restored as needed."
msgstr ""
#: ../../source/admin/troubleshooting.rst:680
msgid "Example inventory restore process."
msgstr ""
#: ../../source/admin/troubleshooting.rst:693
msgid ""
"At the completion of this operation the inventory will be restored to the "
"earlier version."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix-legacy.rst:4
msgid "Compatibility Matrix of Legacy releases"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix-legacy.rst:6
msgid ""
"This page contains compatability matrix of releases that are either in "
"Extended Maintanence or already reached End of Life. We keep such matrix for "
"historical reasons mainly and for deployments that forgot to get updated in "
"time."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix-legacy.rst:11
#: ../../source/admin/upgrades/compatibility-matrix.rst:32
msgid ""
"Operating systems with experimental support are marked with ``E`` in the "
"table."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix-legacy.rst:14
#: ../../source/admin/upgrades/compatibility-matrix.rst:35
msgid "Operating System Compatibility Matrix"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:4
msgid "Compatibility Matrix"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:7
msgid ""
"All of the OpenStack-Ansible releases are compatible with specific sets of "
"operating systems and their versions. Operating Systems have their own "
"lifecycles, however we may drop their support before end of their EOL "
"because of various reasons:"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:12
msgid "OpenStack requires a higher version of a library (ie. libvirt)"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:13
msgid "Python version"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:14
msgid "specific dependencies"
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:15
msgid "etc."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:17
msgid ""
"However, we do try to provide ``upgrade`` releases where we support both new "
"and old Operating System versions, providing deployers the ability to "
"properly upgrade their deployments to the new Operating System release."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:21
msgid ""
"In CI we test upgrades between releases only for ``source`` deployments. "
"This also includes CI testing of upgrade path between SLURP releases."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:24
msgid ""
"Below you will find the support matrix of Operating Systems for OpenStack-"
"Ansible releases."
msgstr ""
#: ../../source/admin/upgrades/compatibility-matrix.rst:29
msgid ""
"Compatability matrix for legacy releases of OpenStack-Ansible can be found "
"on this page: :ref:`compatibility-matrix-legacy`."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:3
msgid "Distribution upgrades"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:5
msgid ""
"This guide provides information about upgrading from one distribution "
"release to the next."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:10
msgid ""
"This guide was last updated when upgrading from Ubuntu 20.04 (Focal Fossa) "
"to Ubuntu 22.04 (Jammy Jellyfish) during the Antelope (2023.1) release. For "
"earlier releases please see other versions of the guide."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:15
#: ../../source/admin/upgrades/major-upgrades.rst:17
msgid "Introduction"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:17
msgid ""
"OpenStack-Ansible supports operating system distribution upgrades during "
"specific release cycles. These can be observed by consulting the operating "
"system compatibility matrix, and identifying where two versions of the same "
"operating system are supported."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:22
msgid ""
"Upgrades should be performed in the order specified in this guide to "
"minimise the risk of service interruptions. Upgrades must also be carried "
"out by performing a fresh installation of the target system's operating "
"system, before running OpenStack-Ansible to install services on this host."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:28
msgid "Ordering"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:30
msgid ""
"This guide includes a suggested order for carrying out upgrades. This may "
"need to be adapted dependent on the extent to which you have customised your "
"OpenStack-Ansible deployment."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:34
msgid ""
"Critically, it is important to consider when you upgrade 'repo' hosts/"
"containers. At least one 'repo' host should be upgraded before you upgrade "
"any API hosts/containers. The last 'repo' host to be upgraded should be the "
"'primary', and should not be carried out until after the final service which "
"does not support '--limit' is upgraded."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:40
msgid ""
"If you have a multi-architecture deployment, then at least one 'repo' host "
"of each architecture will need to be upgraded before upgrading any other "
"hosts which use that architecture."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:44
msgid ""
"If this order is adapted, it will be necessary to restore some files to the "
"'repo' host from a backup part-way through the process. This will be "
"necessary if no 'repo' hosts remain which run the older operating system "
"version, which prevents older packages from being built."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:49
msgid ""
"Beyond these requirements, a suggested order for upgrades is a follows:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:51
msgid "Infrastructure services (Galera, RabbitMQ, APIs, HAProxy)"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:53
msgid "In all cases, secondary or backup instances should be upgraded first"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:55
msgid "Compute nodes"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:57
msgid "Network nodes"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:60
msgid "Pre-Requisites"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:62
msgid ""
"Ensure that all hosts in your target deployment have been installed and "
"configured using a matching version of OpenStack-Ansible. Ideally perform a "
"minor upgrade to the latest version of the OpenStack release cycle which you "
"are currently running first in order to reduce the risk of encountering bugs."
""
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:68
msgid ""
"Check any OpenStack-Ansible variables which you customise to ensure that "
"they take into account the new and old operating system version (for example "
"custom package repositories and version pinning)."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:72
msgid ""
"Perform backups of critical data, in particular the Galera database in case "
"of any failures. It is also recommended to back up the '/var/www/repo' "
"directory on the primary 'repo' host in case it needs to be restored mid-"
"upgrade."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:77
msgid ""
"Identify your 'primary' HAProxy/Galera/RabbitMQ/repo infrastructure host"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:79
msgid ""
"In a simple 3 infrastructure hosts setup, these services/containers usually "
"end up being all on the the same host."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:82
msgid "The 'primary' will be the LAST box you'll want to reinstall."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:84
msgid "HAProxy/Keepalived"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:86
msgid "Finding your HAProxy/Keepalived primary is as easy as"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:92
msgid "Or preferably if you've installed HAProxy with stats, like so:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:99
msgid ""
"and can visit https://admin:password@external_lb_vip_address:1936/ and read "
"'Statistics Report for pid # on infrastructure_host'"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:102
msgid ""
"Ensure RabbitMQ is running with all feature flags enabled to avoid conflicts "
"when re-installing nodes. If any are listed as disabled then enable them via "
"the console on one of the nodes:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:112
msgid "Warnings"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:114
msgid ""
"During the upgrade process, some OpenStack services cannot be deployed by "
"using Ansible's '--limit'. As such, it will be necessary to deploy some "
"services to mixed operating system versions at the same time."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:118
msgid "The following services are known to lack support for '--limit':"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:121
msgid "Repo Server"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:122
msgid "Keystone"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:124
msgid ""
"In the same way as OpenStack-Ansible major (and some minor) upgrades, there "
"will be brief interruptions to the entire Galera and RabbitMQ clusters "
"during the upgrade which will result in brief service interruptions."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:128
msgid ""
"When taking down 'memcached' instances for upgrades you may encounter "
"performance issues with the APIs."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:132
msgid "Deploying Infrastructure Hosts"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:134
msgid "Define redeployed host as environment variable"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:136
msgid ""
"This will serve as a shortcut for future operations and will make following "
"the instruction more error-prone. For example:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:143
msgid "Disable HAProxy back ends (optional)"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:145
msgid ""
"If you wish to minimise error states in HAProxy, services on hosts which are "
"being reinstalled can be set in maintenance mode (MAINT)."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:148
msgid "Log into your primary HAProxy/Keepalived and run something similar to"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:154
msgid "for each API or service instance you wish to disable."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:156
msgid "You can also use a playbook for this:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:162
msgid ""
"Or if you've enabled haproxy_stats as described above, you can visit https://"
"admin:password@external_lb_vip_address:1936/ and select them and set state "
"to ``MAINT``."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:166
msgid "Reinstall an infrastructure host's operating system"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:168
msgid ""
"As noted above, this should be carried out for non-primaries first, ideally "
"starting with a 'repo' host."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:171
msgid "Clearing out stale information"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:173
msgid "Removing stale ansible-facts"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:179
#: ../../source/admin/upgrades/distribution-upgrades.rst:344
msgid "(* because we're deleting all container facts for the host as well)"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:181
msgid "If RabbitMQ was running on this host"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:183
msgid "We forget it by running these commands on another RabbitMQ host."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:190
msgid "If GlusterFS was running on this host (repo nodes)"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:192
msgid ""
"We forget it by running these commands on another repo host. Note that we "
"have to tell Gluster we are intentionally reducing the number of replicas. "
"'N' should be set to the number of repo servers minus 1. Existing gluster "
"peer names can be found using the 'gluster peer status' command."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:203
msgid "Do generic preparation of reinstalled host"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:209
msgid ""
"This step should be executed when you are re-configuring one of HAProxy "
"hosts"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:212
msgid ""
"Since configuration of HAProxy backends happens during individual service "
"provisioning, we need to ensure that all backends are configured before "
"enabling Keepalived to select this host."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:216
msgid "Commands below will configure all required backends on HAProxy nodes:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:226
msgid "Once this is done, you can deploy Keepalived again:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:232
msgid ""
"After that you might want to ensure that \"local\" backends remain disabled. "
"You can also use the playbook for this:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:239
msgid "If it is NOT a 'primary', install everything on the new host"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:246
#: ../../source/admin/upgrades/distribution-upgrades.rst:354
msgid "(* because we need to include containers in the limit)"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:248
msgid "If it IS a 'primary', do these steps"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:250
msgid "Temporarily set your primary Galera in 'MAINT' in HAProxy."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:252
msgid ""
"In order to prevent role from making your primary Galera as UP in HAProxy, "
"create an empty file ``/var/tmp/clustercheck.disabled`` . You can do this "
"with ad-hoc:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:261
msgid ""
"Once it's done you can run playbook to install MariaDB to the destination"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:267
msgid ""
"You'll now have MariaDB running, and it should be synced with non-primaries."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:270
msgid ""
"To check that verify MariaDB cluster status by executing from host running "
"primary MariaDB following command:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:278
msgid ""
"In case node is not getting synced you might need to restart the mariadb."
"service and verify everything is in order."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:288
msgid ""
"Once MariaDB cluster is healthy you can remove the file that disables "
"backend from being used by HAProxy."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:295
msgid "We can move on to RabbitMQ primary"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:301
msgid "Now the repo host primary"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:307
msgid ""
"Everything should now be in a working state and we can finish it off with"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:314
msgid "Adjust HAProxy status"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:316
msgid ""
"If HAProxy was set into 'MAINT' mode, this can now be removed for services "
"which have been restored."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:319
msgid ""
"For the 'repo' host, it is important that the freshly installed hosts are "
"set to 'READY' in HAProxy, and any which remain on the old operating system "
"are set to 'MAINT'."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:323
msgid "You can use the playbook to re-enable all backends from the host:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:331
msgid "Deploying Compute and Network Hosts"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:333
msgid ""
"Disable the hypervisor service on compute hosts and migrate any instances to "
"another available hypervisor."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:336
msgid "Reinstall a host's operating system"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:338
msgid "Clear out stale ansible-facts"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:346
msgid "Execute the following:"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:356
msgid "Re-instate compute node hypervisor UUIDs"
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:358
msgid ""
"Compute nodes should have their UUID stored in the file '/var/lib/nova/"
"compute_id' and the 'nova-compute' service restarted. UUIDs can be found "
"from the command line'openstack hypervisor list'."
msgstr ""
#: ../../source/admin/upgrades/distribution-upgrades.rst:362
msgid ""
"Alternatively, the following Ansible can be used to automate these actions:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:3
msgid "Major upgrades"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:5
msgid ""
"This guide provides information about the upgrade process from "
"|previous_release_formal_name| |previous_slurp_name| to "
"|current_release_formal_name| for OpenStack-Ansible."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:11
msgid ""
"You can upgrade between sequential releases or between releases marked as "
"`SLURP`_."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:19
msgid ""
"For upgrades between major versions, the OpenStack-Ansible repository "
"provides playbooks and scripts to upgrade an environment. The ``run-upgrade."
"sh`` script runs each upgrade playbook in the correct order, or playbooks "
"can be run individually if necessary. Alternatively, a deployer can upgrade "
"manually."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:24
msgid ""
"For more information about the major upgrade process, see :ref:`upgrading-by-"
"using-a-script` and :ref:`upgrading-manually`."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:29
msgid "|upgrade_warning| Test this on a development environment first."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:34
msgid "Upgrading by using a script"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:36
msgid ""
"The |current_release_formal_name| release series of OpenStack-Ansible "
"contains the code for migrating from |previous_release_formal_name| "
"|previous_slurp_name| to |current_release_formal_name|."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:41
msgid "Running the upgrade script"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:43
msgid ""
"To upgrade from |previous_release_formal_name| |previous_slurp_name| to "
"|current_release_formal_name| by using the upgrade script, perform the "
"following steps in the ``openstack-ansible`` directory:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:47
#: ../../source/admin/upgrades/minor-upgrades.rst:99
msgid "Change directory to the repository clone root directory:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:53
msgid "Run the following commands:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:60
msgid ""
"For more information about the steps performed by the script, see :ref:"
"`upgrading-manually`."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:66
msgid "Upgrading manually"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:68
msgid ""
"Manual upgrades are useful for scoping the changes in the upgrade process "
"(for example, in very large deployments with strict SLA requirements), or "
"performing other upgrade automation beyond that provided by OpenStack-"
"Ansible."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:72
msgid ""
"The steps detailed here match those performed by the ``run-upgrade.sh`` "
"script. You can safely run these steps multiple times."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:76
msgid "Preflight checks"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:78
msgid ""
"Before starting with the upgrade, perform preflight health checks to ensure "
"your environment is stable. If any of those checks fail, ensure that the "
"issue is resolved before continuing."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:83
msgid "Check out the |current_release_formal_name| release"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:85
msgid ""
"Ensure that your OpenStack-Ansible code is on the latest "
"|current_release_formal_name| tagged release."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:93
msgid "Backup the existing OpenStack-Ansible configuration"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:95
msgid "Make a backup of the configuration of the environment:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:103
msgid "Bootstrap the new Ansible and OpenStack-Ansible roles"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:105
msgid ""
"To ensure that there is no currently set ANSIBLE_INVENTORY to override the "
"default inventory location, we unset the environment variable."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:112
msgid ""
"Bootstrap Ansible again to ensure that all OpenStack-Ansible role "
"dependencies are in place before you run playbooks from the "
"|current_release_formal_name| release."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:122
msgid "Implement changes to OpenStack-Ansible configuration"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:124
msgid ""
"If there have been any OpenStack-Ansible variable name changes or "
"environment/inventory changes, there is a playbook to handle those changes "
"to ensure service continuity in the environment when the new playbooks run. "
"The playbook is tagged to ensure that any part of it can be executed on its "
"own or skipped. Please review the contents of the playbook for more "
"information."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:138
msgid ""
"With upgrade to 2024.2 (Dalmatian) release and beyond, usage of RabbitMQ "
"Quorum Queues is mandatory to ensure high availability of queues. If you had "
"previously set ``oslomsg_rabbit_quorum_queues: false``, please consider "
"migrating before continuing with this upgrade which uses RabbitMQ 4.x."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:144
msgid ""
"Please, check `RabbitMQ maintenance `_ for more information about switching between Quourum and HA Queues."
""
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:148
msgid "Upgrade hosts"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:150
msgid ""
"Before installing the infrastructure and OpenStack, update the host machines."
""
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:154
msgid ""
"Usage of non-trusted certificates for RabbitMQ is not possible due to "
"requirements of newer ``amqp`` versions."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:157
msgid "After that you can proceed with standard OpenStack upgrade steps:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:163
msgid ""
"This command is the same setting up hosts on a new installation. The "
"``galera_all`` and ``rabbitmq_all`` host groups are excluded to prevent "
"reconfiguration and restarting of any of those containers as they need to be "
"updated, but not restarted."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:168
msgid ""
"Once that is complete, upgrade the final host groups with the flag to "
"prevent container restarts."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:176
msgid "Upgrade infrastructure"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:178
msgid ""
"We can now go ahead with the upgrade of all the infrastructure components. "
"To ensure that RabbitMQ and MariaDB are upgraded, we pass the appropriate "
"flags."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:183
msgid ""
"Please make sure you are running RabbitMQ version 3.13 or later before "
"proceeding to this step. Upgrade of RabbitMQ to version 4.0 (default for "
"2024.2) from prior version will result in playbook failure."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:188
msgid ""
"At this point you can minorly upgrade RabbitMQ with the following command:"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:190
msgid ""
"``openstack-ansible openstack.osa.rabbitmq_server -e rabbitmq_upgrade=true -"
"e rabbitmq_package_version=3.13.7-1``"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:192
msgid ""
"Also ensure that you have migrated from mirrored queues (HA queues) to "
"Quorum queues before the upgrade, as mirrored queues are no longer supported "
"after upgrade."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:200
msgid ""
"With this complete, we can now restart the MariaDB containers one at a time, "
"ensuring that each is started, responding, and synchronized with the other "
"nodes in the cluster before moving on to the next steps. This step allows "
"the LXC container configuration that you applied earlier to take effect, "
"ensuring that the containers are restarted in a controlled fashion."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:211
msgid "Upgrade OpenStack"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:213
msgid "We can now go ahead with the upgrade of all the OpenStack components."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:220
msgid "Upgrade Ceph"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:222
msgid ""
"With each OpenStack-Ansible version we define default Ceph client version "
"that will be installed on Glance/Cinder/Nova hosts and used by these "
"services. If you want to preserve the previous version of the ceph client "
"during an OpenStack-Ansible upgrade, you will need to override a variable "
"``ceph_stable_release`` in your user_variables.yml"
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:228
msgid ""
"If Ceph has been deployed as part of an OpenStack-Ansible deployment using "
"the roles maintained by the `Ceph-Ansible`_ project you will also need to "
"upgrade the Ceph version. Each OpenStack-Ansible release is tested only with "
"specific Ceph-Ansible release and Ceph upgrades are not checked in any "
"Openstack-Ansible integration tests. So we do not test or guarantee an "
"upgrade path for such deployments. In this case tests should be done in a "
"lab environment before upgrading."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:238
msgid ""
"Ceph related playbooks are included as part of ``openstack.osa."
"setup_infrastructure`` and ``openstack.osa.setup_openstack`` playbooks, so "
"you should be cautious when running them during OpenStack upgrades. If you "
"have ``upgrade_ceph_packages: true`` in your user variables or provided ``-e "
"upgrade_ceph_packages=true`` as argument and run ``setup-infrastructure."
"yml`` this will result in Ceph package being upgraded as well."
msgstr ""
#: ../../source/admin/upgrades/major-upgrades.rst:246
msgid "In order to upgrade Ceph in the deployment you will need to run:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:3
msgid "Minor version upgrade"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:5
msgid ""
"Upgrades between minor versions of OpenStack-Ansible require updating the "
"repository clone to the latest minor release tag, updating the Ansible "
"roles, and then running playbooks against the target hosts. This section "
"provides instructions for those tasks."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:11
msgid "Prerequisites"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:13
msgid ""
"To avoid issues and simplify troubleshooting during the upgrade, disable the "
"security hardening role by setting the ``apply_security_hardening`` variable "
"to ``False`` in the :file:`user_variables.yml` file, and backup your "
"OpenStack-Ansible installation."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:19
msgid "Execute a minor version upgrade"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:21
msgid "A minor upgrade typically requires the following steps:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:23
msgid "Change directory to the cloned repository's root directory:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:29
msgid ""
"Ensure that your OpenStack-Ansible code is on the latest "
"|current_release_formal_name| tagged release:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:36
msgid "Update all the dependent roles to the latest version:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:42
msgid "Change to the playbooks directory:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:48
msgid "Update the hosts:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:54
msgid "Update the infrastructure:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:61
msgid "Update all OpenStack services:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:69
msgid ""
"You can limit upgrades to specific OpenStack components. See the following "
"section for details."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:73
msgid "Upgrade specific components"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:75
msgid ""
"You can limit upgrades to specific OpenStack components by running each of "
"the component playbooks against groups."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:78
msgid ""
"For example, you can update only the Compute hosts by running the following "
"command:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:85
msgid "To update only a single Compute host, run the following command:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:93
msgid ""
"Skipping the ``nova-key`` tag is necessary so that the keys on all Compute "
"hosts are not gathered."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:96
msgid ""
"To see which hosts belong to which groups, use the ``openstack-ansible-"
"inventory-manage`` script to show all groups and their hosts. For example:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:105
msgid "Show all groups and which hosts belong to them:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:111
msgid "Show all hosts and the groups to which they belong:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:117
msgid ""
"To see which hosts a playbook runs against, and to see which tasks are "
"performed, run the following commands (for example):"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:121
msgid ""
"See the hosts in the ``nova_compute`` group that a playbook runs against:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:128
msgid ""
"See the tasks that are executed on hosts in the ``nova_compute`` group:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:137
msgid ""
"Upgrading a specific component within the same OpenStack-Ansible version"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:139
msgid ""
"Sometimes you may need to apply the latest security patches or bug fixes for "
"a service while remaining on the same stable branch. This can be done by "
"overriding the Git installation branch for that service, which instructs "
"OpenStack-Ansible to pull the most recent code from the branch you are "
"already tracking. But using branches directly as "
"``_git_install_branch`` is highly discouraged. Every time the "
"playbook is re-run, the service may be upgraded to a newer commit, which can "
"lead to inconsistent versions between hosts (for example, when adding a new "
"compute node later)."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:150
msgid ""
"So the recommended practice is to take the HEAD commit SHA of the desired "
"stable branch and set it explicitly. To find the latest SHA of the ``stable/"
"2025.1`` branch, you can run (e.g. for Nova):"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:158
msgid ""
"And use that SHA in your configuration to ensure consistent versions across "
"all hosts in your ``user_variables.yml``:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:165
msgid "And run:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:171
msgid ""
"The playbook will fetch and install the code from the specified branch or "
"commit SHA, applying the latest patches and fixes as defined. Using a pinned "
"SHA ensures consistent versions across all hosts, while following the branch "
"directly will always pull its current HEAD."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:176
msgid ""
"We can verify the version of the service before and after the upgrade (don't "
"forget to load required environment variables):"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:191
msgid "After the upgrade to the latest patches in the same branch:"
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:207
msgid ""
"This approach is not limited to Nova. You can apply the same method to any "
"other OpenStack service managed by OpenStack-Ansible by overriding its "
"corresponding ``_git_install_branch`` variable."
msgstr ""
#: ../../source/admin/upgrades/minor-upgrades.rst:212
msgid ""
"Always ensure that the branch is up-to-date and compatible with the rest of "
"your deployment before proceeding."
msgstr ""