\"context_is_admin\": "
"\"role:admin\", which limits access to private images for projects."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:420(para)
msgid ""
"Verify proper operation of your environment. Then, notify your users that "
"their cloud is operating normally again."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:427(title)
msgid "Rolling back a failed upgrade"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:429(para)
msgid ""
"Upgrades involve complex operations and can fail. Before attempting any "
"upgrade, you should make a full database backup of your production data. As "
"of Kilo, database downgrades are not supported, and the only method "
"available to get back to a prior database version will be to restore from "
"backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:435(para)
msgid ""
"This section provides guidance for rolling back to a previous release of "
"OpenStack. All distributions follow a similar deinstall state, and "
"save the final output to a file. For example, the following command covers a "
"controller node with keystone, glance, nova, neutron, and cinder:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:562(para)
msgid ""
"Depending on the type of server, the contents and order of your package list "
"might vary from this example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:569(para)
msgid ""
"You can determine the package versions available for reversion by using the "
"1:2013.1.4-"
"0ubuntu1~cloud0 in this case. The process of manually picking through "
"this list of packages is rather tedious and prone to errors. You should "
"consider using the following script to help with this process:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:652(para)
msgid ""
"If you decide to continue this step manually, don't forget to change "
"neutron to quantum where applicable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:659(para)
msgid ""
"Use the <package-name>=<version>. The "
"script in the previous step conveniently created a list of package="
"version pairs for you:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upgrades.xml:668(para)
msgid ""
"This step completes the rollback procedure. You should remove the upgrade "
"release repository and run --description tenant-"
"description , which can be very useful. You can also "
"create a group in a disabled state by appending --disable to "
"the command. By default, projects are created in an enabled state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:134(title)
msgid "Quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:136(para)
msgid ""
"To prevent system capacities from being exhausted without notification, you "
"can set up image_member_quota, set to 128 by default. That setting is a "
"different quota from the storage quota.container_quotas or "
"account_quotas (or both) added to the swift command "
"provided by the python-swiftclient package. Any user included "
"in the project can view the quotas placed on their project. To update Object "
"Storage quotas on a project, you must have the role of ResellerAdmin in the "
"project that the quota is being applied to."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:548(para)
msgid "To view account quotas placed on a project:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:561(para)
msgid "To apply or update account quotas on a project:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:566(para)
msgid "For example, to place a 5 GB quota on an account:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:571(para)
msgid ""
"To verify the quota, run the policy.json file. The "
"actual location of this file might vary from distribution to distribution: "
"for nova, it is typically in /etc/nova/policy.json. You can "
"update entries while the system is running, and you do not have to restart "
"services. Currently, the only way to update such policies is to edit the "
"policy file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:929(para)
msgid ""
"The OpenStack service's policy engine matches a policy directly. A rule "
"indicates evaluation of the elements of such policies. For instance, in a "
"compute:create: [[\"rule:admin_or_owner\"]] statement, the "
"policy is compute:create, and the rule is admin_or_owner"
"code>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:935(para)
msgid ""
"Policies are triggered by an OpenStack policy engine whenever one of them "
"matches an OpenStack API operation or a specific attribute being used in a "
"given operation. For instance, the engine tests the create:compute"
"code> policy every time a user sends a POST /v2/{tenant_id}/servers"
"code> request to the OpenStack Compute API server. Policies can be also "
"related to specific API extension s. For instance, if "
"a user needs an extension like compute_extension:rescue, the "
"attributes defined by the provider extensions trigger the rule test for that "
"operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:945(para)
msgid ""
"An authorization policy can be composed by one or more rules. If more rules "
"are specified, evaluation policy is successful if any of the rules evaluates "
"successfully; if an API operation matches multiple policies, then all the "
"policies must evaluate successfully. Also, authorization rules are recursive."
" Once a rule is matched, the rule(s) can be resolved to another rule, until "
"a terminal rule is reached. These are the rules defined :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:955(term)
msgid "Role-based rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:958(para)
msgid ""
"Evaluate successfully if the user submitting the request has the specified "
"role. For instance, \"role:admin\" is successful if the user "
"submitting the request is an administrator."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:966(term)
msgid "Field-based rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:969(para)
msgid ""
"Evaluate successfully if a field of the resource specified in the current "
"request matches a specific value. For instance, \"field:networks:"
"shared=True\" is successful if the attribute shared of the network "
"resource is set to true ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:978(term)
msgid "Generic rules"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:981(para)
msgid ""
"Compare an attribute in the resource with an attribute extracted from the "
"user's security credentials and evaluates successfully if the comparison is "
"successful. For instance, \"tenant_id:%(tenant_id)s\" is "
"successful if the tenant identifier in the resource is equal to the tenant "
"identifier of the user submitting the request."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:991(para)
msgid ""
"Here are snippets of the default nova policy.json file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1021(para)
msgid ""
"Shows a rule that evaluates successfully if the current user is an "
"administrator or the owner of the resource specified in the request (tenant "
"identifier is equal)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1027(para)
msgid ""
"Shows the default policy, which is always evaluated if an API operation does "
"not match any of the policies in policy.json."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1033(para)
msgid ""
"Shows a policy restricting the ability to manipulate flavors to "
"administrators using the Admin API only.admin API "
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1041(para)
msgid ""
"In some cases, some operations should be restricted to administrators only. "
"Therefore, as a further example, let us consider how this sample policy file "
"could be modified in a scenario where we enable users to create their own "
"flavors:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1050(title)
msgid "Users Who Disrupt Other Users"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1052(para)
msgid ""
"Users on your cloud can disrupt other users, sometimes intentionally and "
"maliciously and other times by accident. Understanding the situation allows "
"you to make a better decision on how to handle the disruption.user management handling "
"disruptive users "
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1061(para)
msgid ""
"For example, a group of users have instances that are utilizing a large "
"amount of compute resources for very compute-intensive tasks. This is "
"driving the load up on compute nodes and affecting other users. In this "
"situation, review your user use cases. You may find that high compute "
"scenarios are common, and should then plan for proper segregation in your "
"cloud, such as host aggregation or regions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1068(para)
msgid ""
"Another example is a user consuming a very large amount of "
"bandwidthbandwidth"
"primary>recognizing DDOS attacks . Again, "
"the key is to understand what the user is doing. If she naturally needs a "
"high amount of bandwidth, you might have to limit her transmission rate as "
"to not affect other users or move her to an area with more bandwidth "
"available. On the other hand, maybe her instance has been hacked and is part "
"of a botnet launching DDOS attacks. Resolution of this issue is the same as "
"though any other server on your network has been hacked. Contact the user "
"and give her time to respond. If she doesn't respond, shut down the instance."
""
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1082(para)
msgid ""
"A final example is if a user is hammering cloud resources repeatedly. "
"Contact the user and learn what he is trying to do. Maybe he doesn't "
"understand that what he's doing is inappropriate, or maybe there is an issue "
"with the resource he is trying to access that is causing his requests to "
"queue or lag."
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1091(title) ./doc/openstack-ops/ch_ops_log_monitor.xml:1045(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:281(title) ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1289(title) ./doc/openstack-ops/ch_ops_lay_of_land.xml:788(title)
msgid "Summary"
msgstr ""
#: ./doc/openstack-ops/ch_ops_projects_users.xml:1093(para)
msgid ""
"One key element of systems administration that is often overlooked is that "
"end users are the reason systems administrators exist. Don't go the BOFH "
"route and terminate every user who causes an alert to go off. Work with "
"users to understand what they're trying to accomplish and see how your "
"environment can better assist them in achieving their goals. Meet your users "
"needs by organizing your users into projects, applying policies, managing "
"quotas, and working with them.systems "
"administration user management "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:12(title)
msgid "Advanced Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:14(para)
msgid ""
"OpenStack is intended to work well across a variety of installation flavors, "
"from very small private clouds to large public clouds. To achieve this, the "
"developers add configuration options to their code that allow the behavior "
"of the various components to be tweaked depending on your needs. "
"Unfortunately, it is not possible to cover all possible deployments with the "
"default configuration values.advanced "
"configuration configuration options "
"indexterm>configuration options"
"primary>wide availability of "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:29(para)
msgid ""
"At the time of writing, OpenStack has more than 3,000 configuration options. "
"You can see them documented at the OpenStack "
"configuration reference guide. This chapter cannot hope to document "
"all of these, but we do try to introduce the important concepts so that you "
"know where to go digging for more information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:37(title)
msgid "Differences Between Various Drivers"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:39(para)
msgid ""
"Many OpenStack projects implement a driver layer, and each of these drivers "
"will implement its own configuration options. For example, in OpenStack "
"Compute (nova), there are various hypervisor drivers implemented—libvirt, "
"xenserver, hyper-v, and vmware, for example. Not all of these hypervisor "
"drivers have the same features, and each has different tuning requirements."
"hypervisors"
"primary>differences between drivers differences between"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:55(para)
msgid ""
"The currently implemented hypervisors are listed on the OpenStack documentation website. You can see a "
"matrix of the various features in OpenStack Compute (nova) hypervisor "
"drivers on the OpenStack wiki at the Hypervisor support matrix page"
"link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:63(para)
msgid ""
"The point we are trying to make here is that just because an option exists "
"doesn't mean that option is relevant to your driver choices. Normally, the "
"documentation notes which drivers the configuration applies to."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:70(title)
msgid "Implementing Periodic Tasks"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:72(para)
msgid ""
"Another common concept across various OpenStack projects is that of periodic "
"tasks. Periodic tasks are much like cron jobs on traditional Unix systems, "
"but they are run inside an OpenStack process. For example, when OpenStack "
"Compute (nova) needs to work out what images it can remove from its local "
"cache, it runs a periodic task to do this.periodic tasks configuration options periodic "
"task implementation "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:85(para)
msgid ""
"Periodic tasks are important to understand because of limitations in the "
"threading model that OpenStack uses. OpenStack uses cooperative threading in "
"Python, which means that if something long and complicated is running, it "
"will block other tasks inside that process from running unless it "
"voluntarily yields execution to another cooperative thread.cooperative threading "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:94(para)
msgid ""
"A tangible example of this is the nova-compute process. "
"In order to manage the image cache with libvirt, nova-compute"
"literal> has a periodic process that scans the contents of the image cache. "
"Part of this scan is calculating a checksum for each of the images and "
"making sure that checksum matches what nova-compute "
"expects it to be. However, images can be very large, and these checksums can "
"take a long time to generate. At one point, before it was reported as a bug "
"and fixed, nova-compute would block on this task and stop "
"responding to RPC requests. This was visible to users as failure of "
"operations such as spawning or deleting instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:106(para)
msgid ""
"The take away from this is if you observe an OpenStack process that appears "
"to \"stop\" for a while and then continue to process normally, you should "
"check that periodic tasks aren't the problem. One way to do this is to "
"disable the periodic tasks by setting their interval to zero. Additionally, "
"you can configure how often these periodic tasks run—in some cases, it might "
"make sense to run them at a different frequency from the default."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:114(para)
msgid ""
"The frequency is defined separately for each periodic task. Therefore, to "
"disable every periodic task in OpenStack Compute (nova), you would need to "
"set a number of configuration options to zero. The current list of "
"configuration options you would need to set to zero are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:121(literal)
msgid "bandwidth_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:125(literal)
msgid "sync_power_state_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:129(literal)
msgid "heal_instance_info_cache_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:133(literal)
msgid "host_state_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:137(literal)
msgid "image_cache_manager_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:141(literal)
msgid "reclaim_instance_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:145(literal)
msgid "volume_usage_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:149(literal)
msgid "shelved_poll_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:153(literal)
msgid "shelved_offload_time"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:157(literal)
msgid "instance_delete_interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:161(para)
msgid ""
"To set a configuration option to zero, include a line such as "
"image_cache_manager_interval=0 in your nova."
"conf file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:165(para)
msgid ""
"This list will change between releases, so please refer to your "
"configuration guide for up-to-date information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:170(title)
msgid "Specific Configuration Topics"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:172(para)
msgid ""
"This section covers specific examples of configuration options you might "
"consider tuning. It is by no means an exhaustive list."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:176(title)
msgid "Security Configuration for Compute, Networking, and Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:179(para)
msgid ""
"The OpenStack "
"Security Guide provides a deep dive into securing an "
"OpenStack cloud, including SSL/TLS, key management, PKI and certificate "
"management, data transport and privacy concerns, and compliance.security issues"
"primary>configuration options configuration options"
"primary>security "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:195(title)
msgid "High Availability"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:197(para)
msgid ""
"The OpenStack High Availability Guide offers "
"suggestions for elimination of a single point of failure that could cause "
"system downtime. While it is not a completely prescriptive document, it "
"offers methods and techniques for avoiding downtime and data loss.high availability "
"indexterm>configuration options"
"primary>high availability "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:212(title)
msgid "Enabling IPv6 Support"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:214(para)
msgid ""
"You can follow the progress being made on IPV6 support by watching the neutron IPv6 Subteam at work.Liberty IPv6 support "
"indexterm>IPv6, enabling support for"
"primary> configuration "
"options IPv6 support "
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:228(para)
msgid ""
"By modifying your configuration setup, you can set up IPv6 when using "
"nova-network for networking, and a tested setup is "
"documented for FlatDHCP and a multi-host configuration. The key is to make "
"nova-network think a radvd command ran "
"successfully. The entire configuration is detailed in a Cybera blog post, "
"“An IPv6 enabled cloud”."
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:238(title)
msgid "Geographical Considerations for Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:240(para)
msgid ""
"Support for global clustering of object storage servers is available for all "
"supported releases. You would implement these global clusters to ensure "
"replication across geographic areas in case of a natural disaster and also "
"to ensure that users can write or access their objects more quickly based on "
"the closest data center. You configure a default region with one zone for "
"each cluster, but be sure your network (WAN) can handle the additional "
"request and response load between zones as you add more zones and build a "
"ring that handles more zones. Refer to Geographically Distributed Clusters in the documentation "
"for additional information.Object "
"Storage geographical considerations "
"indexterm>storage"
"primary>geographical considerations "
"indexterm>configuration options"
"primary>geographical storage considerations "
"indexterm>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_arch_storage.xml:593(None) ./doc/openstack-ops/ch_arch_storage.xml:610(None) ./doc/openstack-ops/ch_arch_storage.xml:623(None) ./doc/openstack-ops/ch_arch_storage.xml:630(None) ./doc/openstack-ops/ch_arch_storage.xml:643(None) ./doc/openstack-ops/ch_arch_storage.xml:650(None) ./doc/openstack-ops/ch_arch_storage.xml:657(None) ./doc/openstack-ops/ch_arch_storage.xml:670(None) ./doc/openstack-ops/ch_arch_storage.xml:677(None) ./doc/openstack-ops/ch_arch_storage.xml:690(None) ./doc/openstack-ops/ch_arch_storage.xml:701(None) ./doc/openstack-ops/ch_arch_storage.xml:707(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/Check_mark_23x20_02.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:12(title)
msgid "Storage Decisions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:14(para)
msgid ""
"Storage is found in many parts of the OpenStack stack, and the differing "
"types can cause confusion to even experienced cloud engineers. This section "
"focuses on persistent storage options you can configure with your cloud. "
"It's important to understand the distinction between ephemeral storage and persistent storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:22(title)
msgid "Ephemeral Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:24(para)
msgid ""
"If you deploy only the OpenStack Compute Service (nova), your users do not "
"have access to any form of persistent storage by default. The disks "
"associated with VMs are \"ephemeral,\" meaning that (from the user's point "
"of view) they effectively disappear when a virtual machine is terminated."
"storage"
"primary>ephemeral "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:36(title)
msgid "Persistent Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:38(para)
msgid ""
"Persistent storage means that the storage resource outlives any other "
"resource and is always available, regardless of the state of a running "
"instance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:42(para)
msgid ""
"Today, OpenStack clouds explicitly support three types of persistent storage:"
" object storage , block storage , "
"and file system storage . swift Object Storage API"
"secondary> persistent "
"storage objects"
"primary>persistent storage of Object Storage Object "
"Storage API storage object storage"
"secondary> shared file "
"system storage shared file systems service "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:75(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:211(title)
msgid "Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:77(para)
msgid ""
"With object storage, users access binary objects through a REST API. You may "
"be familiar with Amazon S3, which is a well-known example of an object "
"storage system. Object storage is implemented in OpenStack by the OpenStack "
"Object Storage (swift) project. If your intended users need to archive or "
"manage large datasets, you want to provide them with object storage. In "
"addition, OpenStack can store your virtual machine (VM) images inside of an object storage system, "
"as an alternative to storing the images on a file system.binary binary objects "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:91(para)
msgid ""
"OpenStack Object Storage provides a highly scalable, highly available "
"storage solution by relaxing some of the constraints of traditional file "
"systems. In designing and procuring for such a cluster, it is important to "
"understand some key concepts about its operation. Essentially, this type of "
"storage is built on the idea that all storage hardware fails, at every "
"level, at some point. Infrequently encountered failures that would hamstring "
"other storage systems, such as issues taking down RAID cards or entire "
"servers, are handled gracefully with OpenStack Object Storage.scaling Object Storage and"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:105(para)
msgid ""
"A good document describing the Object Storage architecture is found within "
"the developer "
"documentation—read this first. Once you understand the architecture, "
"you should know what a proxy server does and how zones work. However, some "
"important points are often missed at first glance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:112(para)
msgid ""
"When designing your cluster, you must consider durability and availability. "
"Understand that the predominant source of these is the spread and placement "
"of your data, rather than the reliability of the hardware. Consider the "
"default value of the number of replicas, which is three. This means that "
"before an object is marked as having been written, at least two copies "
"exist—in case a single server fails to write, the third copy may or may not "
"yet exist when the write operation initially returns. Altering this number "
"increases the robustness of your data, but reduces the amount of storage you "
"have available. Next, look at the placement of your servers. Consider "
"spreading them widely throughout your data center's network and power-"
"failure zones. Is a zone a rack, a server, or a disk?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:139(para)
msgid ""
"Among object , container , and "
"account server s"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:145(para)
msgid "Between those servers and the proxies"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:149(para)
msgid "Between the proxies and your users"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:125(para)
msgid ""
"Object Storage's network patterns might seem unfamiliar at first. Consider "
"these main traffic flows: objects"
"primary>storage decisions and containers storage decisions "
"and account "
"server "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:154(para)
msgid ""
"Object Storage is very \"chatty\" among servers hosting data—even a small "
"cluster does megabytes/second of traffic, which is predominantly, “Do you "
"have the object?”/“Yes I have the object!” Of course, if the answer to the "
"aforementioned question is negative or the request times out, replication of "
"the object begins."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:160(para)
msgid ""
"Consider the scenario where an entire server fails and 24 TB of data needs "
"to be transferred \"immediately\" to remain at three copies—this can put "
"significant load on the network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:166(para)
msgid ""
"Another fact that's often forgotten is that when a new file is being "
"uploaded, the proxy server must write out as many streams as there are "
"replicas—giving a multiple of network traffic. For a three-replica cluster, "
"10 Gbps in means 30 Gbps out. Combining this with the previous high "
"bandwidth bandwidth"
"primary>private vs. public network recommendations "
"indexterm> demands of replication is what results in the recommendation that "
"your private network be of significantly higher bandwidth than your public "
"need be. Oh, and OpenStack Object Storage communicates internally with "
"unencrypted, unauthenticated rsync for performance—you do want the private "
"network to be private."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:182(para)
msgid ""
"The remaining point on bandwidth is the public-facing portion. The "
"swift-proxy service is stateless, which means that you "
"can easily add more and use HTTP load-balancing methods to share bandwidth "
"and availability between them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:188(para)
msgid "More proxies means more bandwidth, if your storage can keep up."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:193(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:196(title) ./doc/openstack-ops/ch_ops_user_facing.xml:958(title)
msgid "Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:195(para)
msgid ""
"Block storage (sometimes referred to as volume storage) provides users with "
"access to block-storage devices. Users interact with block storage by "
"attaching volumes to their running VM instances.volume storage block storage storage block storage "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:208(para)
msgid ""
"These volumes are persistent: they can be detached from one instance and re-"
"attached to another, and the data remains intact. Block storage is "
"implemented in OpenStack by the OpenStack Block Storage (cinder) project, "
"which supports multiple back ends in the form of drivers. Your choice of a "
"storage back end must be supported by a Block Storage driver."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:215(para)
msgid ""
"Most block storage drivers allow the instance to have direct access to the "
"underlying storage hardware's block device. This helps increase the overall "
"read/write IO. However, support for utilizing files as volumes is also well "
"established, with full support for NFS, GlusterFS and others."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:221(para)
msgid ""
"These drivers work a little differently than a traditional \"block\" storage "
"driver. On an NFS or GlusterFS file system, a single file is created and "
"then mapped as a \"virtual\" volume into the instance. This mapping/"
"translation is similar to how OpenStack utilizes QEMU's file-based virtual "
"machines stored in /var/lib/nova/instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:230(title) ./doc/openstack-ops/ch_ops_user_facing.xml:1060(title)
msgid "Shared File Systems Service"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:242(para)
msgid ""
"Create a share specifying its size, shared file system protocol, visibility "
"level"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:247(para)
msgid ""
"Create a share on either a share server or standalone, depending on the "
"selected back-end mode, with or without using a share network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:254(para)
msgid "Specify access rules and security services for existing shares."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:258(para)
msgid ""
"Combine several shares in groups to keep data consistency inside the groups "
"for the following safe group operations."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:262(para)
msgid ""
"Create a snapshot of a selected share or a share group for storing the "
"existing shares consistently or creating new shares from that snapshot in a "
"consistent way"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:267(para)
msgid "Create a share from a snapshot."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:270(para)
msgid "Set rate limits and quotas for specific shares and snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:273(para)
msgid "View usage of share resources"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:276(para)
msgid "Remove shares."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:283(para)
msgid "Mounted to any number of client machines."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:286(para)
msgid ""
"Detached from one instance and attached to another without data loss. During "
"this process the data are safe unless the Shared File Systems service itself "
"is changed or removed."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:231(para)
msgid ""
"The Shared File Systems service provides a set of services for management of "
"Shared File Systems in a multi-tenant cloud environment. Users interact with "
"Shared File Systems service by mounting remote File Systems on their "
"instances with the following usage of those systems for file storing and "
"exchange. Shared File Systems service provides you with shares. A share is a "
"remote, mountable file system. You can mount a share to and access a share "
"from several hosts by several users at a time. With shares, user can also: "
" Like Block Storage, the Shared File Systems service is "
"persistent. It can be: Shares are provided by the Shared "
"File Systems service. In OpenStack, Shared File Systems service is "
"implemented by Shared File System (manila) project, which supports multiple "
"back-ends in the form of drivers. The Shared File Systems service can be "
"configured to provision shares from one or more back-ends. Share servers "
"are, mostly, virtual machines that export file shares via different "
"protocols such as NFS, CIFS, GlusterFS, or HDFS."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:303(title)
msgid "OpenStack Storage Concepts"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:305(para)
msgid ""
" explains the different storage "
"concepts provided by OpenStack.block "
"device storage"
"primary>overview of concepts "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:315(caption)
msgid "OpenStack storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:331(th)
msgid "Ephemeral storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:333(th)
msgid "Block storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:335(th) ./doc/openstack-ops/section_arch_example-nova.xml:163(para)
msgid "Object storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:337(th)
msgid "Shared File System storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:343(para)
msgid "Used to…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:345(para)
msgid "Run operating system and scratch space"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:347(para)
msgid "Add additional persistent storage to a virtual machine (VM)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:350(para)
msgid "Store data, including VM images"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:352(para)
msgid "Add additional persistent storage to a virtual machine"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:356(para)
msgid "Accessed through…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:358(para)
msgid "A file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:360(para)
msgid ""
"A block device that can be partitioned, formatted, "
"and mounted (such as, /dev/vdc)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:363(para)
msgid "The REST API"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:365(para)
msgid ""
"A Shared File Systems service share (either manila managed or an external "
"one registered in manila) that can be partitioned, formatted and mounted "
"(such as /dev/vdc)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:371(para)
msgid "Accessible from…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:373(para) ./doc/openstack-ops/ch_arch_storage.xml:375(para) ./doc/openstack-ops/ch_arch_storage.xml:379(para)
msgid "Within a VM"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:377(para)
msgid "Anywhere"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:383(para)
msgid "Managed by…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:385(para)
msgid "OpenStack Compute (nova)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:387(para)
msgid "OpenStack Block Storage (cinder)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:389(para) ./doc/openstack-ops/ch_arch_storage.xml:779(term) ./doc/openstack-ops/section_arch_example-nova.xml:165(para)
msgid "OpenStack Object Storage (swift)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:391(para)
msgid "OpenStack Shared File System Storage (manila)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:395(para)
msgid "Persists until…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:397(para)
msgid "VM is terminated"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:399(para) ./doc/openstack-ops/ch_arch_storage.xml:401(para) ./doc/openstack-ops/ch_arch_storage.xml:403(para)
msgid "Deleted by user"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:407(para)
msgid "Sizing determined by…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:409(para)
msgid ""
"Administrator configuration of size settings, known as flavors"
"emphasis>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:412(para) ./doc/openstack-ops/ch_arch_storage.xml:420(para)
msgid "User specification in initial request"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:414(para)
msgid "Amount of available physical storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:425(para)
msgid "Requests for extension"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:430(para)
msgid "Available user-level quotes"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:435(para)
msgid "Limitations applied by Administrator"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:445(para)
msgid "Encryption set by…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:447(para)
msgid "Parameter in nova.conf"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:449(para)
msgid ""
"Admin establishing encrypted volume type, then user "
"selecting encrypted volume"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:453(para)
msgid "Not yet available"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:455(para)
msgid ""
"Shared File Systems service does not apply any additional encryption above "
"what the share’s back-end storage provides"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:460(para)
msgid "Example of typical usage…"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:462(para)
msgid "10 GB first disk, 30 GB second disk"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:464(para)
msgid "1 TB disk"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:466(para)
msgid "10s of TBs of dataset storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:468(para)
msgid ""
"Depends completely on the size of back-end storage specified when a share "
"was being created. In case of thin provisioning it can be partial space "
"reservation (for more details see Capabilities and Extra-Specs "
"specification)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:476(title)
msgid "File-level Storage (for Live Migration)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:478(para)
msgid ""
"With file-level storage, users access stored data using the operating "
"system's file system interface. Most users, if they have used a network "
"storage solution before, have encountered this form of networked storage. In "
"the Unix world, the most common form of this is NFS. In the Windows world, "
"the most common form is called CIFS (previously, SMB).migration live migration storage file-level "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:493(para)
msgid ""
"OpenStack clouds do not present file-level storage to end users. However, it "
"is important to consider file-level storage for storing instances under "
"/var/lib/nova/instances when designing your cloud, since you "
"must have a shared file system if you want to support live migration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:502(title)
msgid "Choosing Storage Back Ends"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:504(para)
msgid ""
"Users will indicate different needs for their cloud use cases. Some may need "
"fast access to many objects that do not change often, or want to set a time-"
"to-live (TTL) value on a file. Others may access only storage that is "
"mounted with the file system itself, but want it to be replicated instantly "
"when starting a new instance. For other systems, ephemeral storage—storage "
"that is released when a VM attached to it is shut down— is the preferred way."
" When you select storage back end s, storage choosing back ends"
"secondary> storage back "
"end back end "
"interactions store ask the "
"following questions on behalf of your users:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:525(para)
msgid "Do my users need block storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:529(para)
msgid "Do my users need object storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:533(para)
msgid "Do I need to support live migration?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:537(para)
msgid ""
"Should my persistent storage drives be contained in my compute nodes, or "
"should I use external storage?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:542(para)
msgid ""
"What is the platter count I can achieve? Do more spindles result in better I/"
"O despite network access?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:547(para)
msgid ""
"Which one results in the best cost-performance scenario I'm aiming for?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:552(para)
msgid "How do I manage the storage operationally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:556(para)
msgid ""
"How redundant and distributed is the storage? What happens if a storage node "
"fails? To what extent can it mitigate my data-loss disaster scenarios?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:562(para)
msgid ""
"To deploy your storage by using only commodity hardware, you can use a "
"number of open-source packages, as shown in ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:567(caption)
msgid "Persistent file-based storage support"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:571(th) ./doc/openstack-ops/ch_arch_storage.xml:599(para) ./doc/openstack-ops/ch_arch_storage.xml:605(para) ./doc/openstack-ops/ch_arch_storage.xml:614(para) ./doc/openstack-ops/ch_arch_storage.xml:685(para) ./doc/openstack-ops/ch_arch_storage.xml:694(para)
msgid " "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:573(th)
msgid "Object"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:575(th)
msgid "Block"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:578(para)
msgid ""
"This list of open source file-level shared storage solutions is not "
"exhaustive; other open source solutions exist (MooseFS). Your organization "
"may already have deployed a file-level shared storage solution that you can "
"use."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:577(th)
msgid "File-level "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:588(para)
msgid "Swift"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:603(para)
msgid "LVM"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:618(para)
msgid "Ceph"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:634(para)
msgid "Experimental"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:638(para)
msgid "Gluster"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:663(para)
msgid "NFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:683(para)
msgid "ZFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:697(para)
msgid "Sheepdog"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:716(title)
msgid "Storage Driver Support"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:718(para)
msgid ""
"In addition to the open source technologies, there are a number of "
"proprietary solutions that are officially supported by OpenStack Block "
"Storage.storage"
"primary>storage driver support They are "
"offered by the following vendors:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:728(para)
msgid "IBM (Storwize family/SVC, XIV)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:732(para)
msgid "NetApp"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:736(para)
msgid "Nexenta"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:740(para)
msgid "SolidFire"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:744(para)
msgid ""
"You can find a matrix of the functionality provided by all of the supported "
"Block Storage drivers on the OpenStack wiki"
"link>."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:750(para)
msgid ""
"Also, you need to decide whether you want to support object storage in your "
"cloud. The two common use cases for providing object storage in a compute "
"cloud are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:756(para)
msgid "To provide users with a persistent storage mechanism"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:760(para)
msgid "As a scalable, reliable data store for virtual machine images"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:766(title)
msgid "Commodity Storage Back-end Technologies"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:768(para)
msgid ""
"This section provides a high-level overview of the differences among the "
"different commodity storage back end technologies. Depending on your cloud "
"user's needs, you can implement one or many of these technologies in "
"different combinations:storage"
"primary>commodity storage "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:782(para)
msgid ""
"The official OpenStack Object Store implementation. It is a mature "
"technology that has been used for several years in production by Rackspace "
"as the technology behind Rackspace Cloud Files. As it is highly scalable, it "
"is well-suited to managing petabytes of storage. OpenStack Object Storage's "
"advantages are better integration "
"with OpenStack (integrates with OpenStack Identity, works with the OpenStack "
"dashboard interface) and better support for multiple data center deployment "
"through support of asynchronous eventual consistency replication."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:793(para)
msgid ""
"Therefore, if you eventually plan on distributing your storage cluster "
"across multiple data centers, if you need unified accounts for your users "
"for both compute and object storage, or if you want to control your object "
"storage with the OpenStack dashboard, you should consider OpenStack Object "
"Storage. More detail can be found about OpenStack Object Storage in the "
"section below.accounts "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:806(term)
msgid "CephCeph "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:811(para)
msgid ""
"A scalable storage solution that replicates data across commodity storage "
"nodes. Ceph was originally developed by one of the founders of DreamHost and "
"is currently used in production there."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:816(para)
msgid ""
"Ceph was designed to expose different types of storage interfaces to the end "
"user: it supports object storage, block storage, and file-system interfaces, "
"although the file-system interface is not yet considered production-ready. "
"Ceph supports the same API as swift for object storage and can be used as a "
"back end for cinder block storage as well as back-end storage for glance "
"images. Ceph supports \"thin provisioning,\" implemented using copy-on-write."
""
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:825(para)
msgid ""
"This can be useful when booting from volume because a new volume can be "
"provisioned very quickly. Ceph also supports keystone-based authentication "
"(as of version 0.56), so it can be a seamless swap in for the default "
"OpenStack swift implementation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:831(para)
msgid ""
"Ceph's advantages are that it gives the administrator more fine-grained "
"control over data distribution and replication strategies, enables you to "
"consolidate your object and block storage, enables very fast provisioning of "
"boot-from-volume instances using thin provisioning, and supports a "
"distributed file-system interface, though this interface is not "
"yet recommended for use in production deployment by the Ceph project."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:841(para)
msgid ""
"If you want to manage your object and block storage within a single system, "
"or if you want to support fast boot-from-volume, you should consider Ceph."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:848(term)
msgid ""
"GlusterGlusterFS "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:853(para)
msgid ""
"A distributed, shared file system. As of Gluster version 3.3, you can use "
"Gluster to consolidate your object storage and file storage into one unified "
"file and object storage solution, which is called Gluster For OpenStack "
"(GFO). GFO uses a customized version of swift that enables Gluster to be "
"used as the back-end storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:860(para)
msgid ""
"The main reason to use GFO rather than regular swift is if you also want to "
"support a distributed file system, either to support shared storage live "
"migration or to provide it as a separate service to your end users. If you "
"want to manage your object and file storage within a single system, you "
"should consider GFO."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:870(term)
msgid ""
"LVMLVM (Logical Volume Manager)"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:875(para)
msgid ""
"The Logical Volume Manager is a Linux-based system that provides an "
"abstraction layer on top of physical disks to expose logical volumes to the "
"operating system. The LVM back-end implements block storage as LVM logical "
"partitions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:880(para)
msgid ""
"On each host that will house block storage, an administrator must initially "
"create a volume group dedicated to Block Storage volumes. Blocks are created "
"from LVM logical volumes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:885(para)
msgid ""
"LVM does not provide any replication. Typically, "
"administrators configure RAID on nodes that use LVM as block storage to "
"protect against failures of individual hard drives. However, RAID does not "
"protect against a failure of the entire host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:895(term)
msgid "ZFSZFS "
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:900(para)
msgid ""
"The Solaris iSCSI driver for OpenStack Block Storage implements blocks as "
"ZFS entities. ZFS is a file system that also has the functionality of a "
"volume manager. This is unlike on a Linux system, where there is a "
"separation of volume manager (LVM) and file system (such as, ext3, ext4, "
"xfs, and btrfs). ZFS has a number of advantages over ext4, including "
"improved data-integrity checking."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:908(para)
msgid ""
"The ZFS back end for OpenStack Block Storage supports only Solaris-based "
"systems, such as Illumos. While there is a Linux port of ZFS, it is not "
"included in any of the standard Linux distributions, and it has not been "
"tested with OpenStack Block Storage. As with LVM, ZFS does not provide "
"replication across hosts on its own; you need to add a replication solution "
"on top of ZFS if your cloud needs to be able to handle storage-node failures."
""
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:917(para)
msgid ""
"We don't recommend ZFS unless you have previous experience with deploying "
"it, since the ZFS back end for Block Storage requires a Solaris-based "
"operating system, and we assume that your experience is primarily with Linux-"
"based systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:925(term)
msgid ""
"SheepdogSheepdog "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:930(para)
msgid ""
"Sheepdog is a userspace distributed storage system. Sheepdog scales to "
"several hundred nodes, and has powerful virtual disk management features "
"like snapshot, cloning, rollback, thin provisioning."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:934(para)
msgid ""
"It is essentially an object storage system that manages disks and aggregates "
"the space and performance of disks linearly in hyper scale on commodity "
"hardware in a smart way. On top of its object store, Sheepdog provides "
"elastic volume service and http service. Sheepdog does not assume anything "
"about kernel version and can work nicely with xattr-supported file systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:946(title) ./doc/openstack-ops/ch_arch_network_design.xml:526(title) ./doc/openstack-ops/ch_arch_compute_nodes.xml:616(title) ./doc/openstack-ops/ch_arch_provision.xml:367(title) ./doc/openstack-ops/ch_ops_customize.xml:1158(title)
msgid "Conclusion"
msgstr ""
#: ./doc/openstack-ops/ch_arch_storage.xml:948(para)
msgid ""
"We hope that you now have some considerations in mind and questions to ask "
"your future cloud users about their storage use cases. As you can see, your "
"storage decisions will also influence your network design for performance "
"and security needs. Continue with us to make more informed decisions about "
"your OpenStack cloud design ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:12(title)
msgid "Network Design"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:14(para)
msgid ""
"OpenStack provides a rich networking environment, and this chapter details "
"the requirements and options to deliberate when designing your cloud."
"network design"
"primary>first steps design considerations network "
"design "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:27(para)
msgid ""
"If this is the first time you are deploying a cloud infrastructure in your "
"organization, after reading this section, your first conversations should be "
"with your networking team. Network usage in a running cloud is vastly "
"different from traditional network deployments and has the potential to be "
"disruptive at both a connectivity and a policy level.cloud computing vs. traditional "
"deployments "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:39(para)
msgid ""
"For example, you must plan the number of IP addresses that you need for both "
"your guest instances as well as management infrastructure. Additionally, you "
"must research and discuss cloud network connectivity through proxy servers "
"and firewalls."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:44(para)
msgid ""
"In this chapter, we'll give some examples of network implementations to "
"consider and provide information about some of the network layouts that "
"OpenStack uses. Finally, we have some brief notes on the networking services "
"that are essential for stable operation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:50(title)
msgid "Management Network"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:52(para)
msgid ""
"A management network (a separate network for use by "
"your cloud operators) typically consists of a separate switch and separate "
"NICs (network interface cards), and is a recommended option. This "
"segregation prevents system administration and the monitoring of system "
"access from being disrupted by traffic generated by guests.NICs (network interface cards) "
"indexterm>management network"
"primary> network design"
"primary>management network "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:67(para)
msgid ""
"Consider creating other private networks for communication between internal "
"components of OpenStack, such as the message queue and OpenStack Compute. "
"Using a virtual local area network (VLAN) works well for these scenarios "
"because it provides a method for creating multiple virtual networks on a "
"physical network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:75(title)
msgid "Public Addressing Options"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:77(para)
msgid ""
"There are two main types of IP addresses for guest virtual machines: fixed "
"IPs and floating IPs. Fixed IPs are assigned to instances on boot, whereas "
"floating IP addresses can change their association between instances by "
"action of the user. Both types of IP addresses can be either public or "
"private, depending on your use case.IP addresses public addressing "
"options network design public addressing "
"options "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:91(para)
msgid ""
"Fixed IP addresses are required, whereas it is possible to run OpenStack "
"without floating IPs. One of the most common use cases for floating IPs is "
"to provide public IP addresses to a private cloud, where there are a limited "
"number of IP addresses available. Another is for a public cloud user to have "
"a \"static\" IP address that can be reassigned when an instance is upgraded "
"or moved.IP addresses"
"primary>static static IP addresses "
msgstr ""
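As an illustration, reassociating a floating IP with the legacy nova CLI might look like the following (the pool, server names, and address are placeholders):

    $ nova floating-ip-create public
    $ nova add-floating-ip web-01 203.0.113.10
    $ nova remove-floating-ip web-01 203.0.113.10
    $ nova add-floating-ip web-02 203.0.113.10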
#: ./doc/openstack-ops/ch_arch_network_design.xml:104(para)
msgid ""
"Fixed IP addresses can be private for private clouds, or public for public "
"clouds. When an instance terminates, its fixed IP is lost. It is worth "
"noting that newer users of cloud computing may find their ephemeral nature "
"frustrating.IP addresses"
"primary>fixed fixed IP addresses "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:117(title)
msgid "IP Address Planning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:119(para)
msgid ""
"An OpenStack installation can potentially have many subnets (ranges of IP "
"addresses) and different types of services in each. An IP address plan can "
"assist with a shared understanding of network partition purposes and "
"scalability. Control services can have public and private IP addresses, and "
"as noted above, there are a couple of options for an instance's public "
"addresses.IP addresses"
"primary>address planning network design IP address "
"planning "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:134(para)
msgid ""
"An IP address plan might be broken down into the following sections:"
"IP addresses"
"primary>sections of "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:143(term)
msgid "Subnet router"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:146(para)
msgid ""
"Packets leaving the subnet go via this address, which could be a dedicated "
"router or a nova-network service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:153(term)
msgid "Control services public interfaces"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:156(para)
msgid ""
"Public access to swift-proxy, nova-api, "
"glance-api, and horizon come to these addresses, which could be "
"on one side of a load balancer or pointing at individual machines."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:164(term)
msgid "Object Storage cluster internal communications"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:167(para)
msgid ""
"Traffic among object/account/container servers and between these and the "
"proxy server's internal interface uses this private network.containers container servers"
"secondary> objects"
"primary>object servers account server "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:184(term)
msgid "Compute and storage communications"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:187(para)
msgid ""
"If ephemeral or block storage is external to the compute node, this network "
"is used."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:193(term)
msgid "Out-of-band remote management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:196(para)
msgid ""
"If a dedicated remote access controller chip is included in servers, often "
"these are on a separate network."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:202(term)
msgid "In-band remote management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:205(para)
msgid ""
"Often, an extra (such as 1 GB) interface on compute or storage nodes is used "
"for system administrators or monitoring tools to access the host instead of "
"going through the public interface."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:213(term)
msgid "Spare space for future growth"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:216(para)
msgid ""
"Adding more public-facing control services or guest instance IPs should "
"always be part of your plan."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:222(para)
msgid ""
"For example, take a deployment that has both OpenStack Compute and Object "
"Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 available. "
"One way to segregate the space might be as follows:"
msgstr ""
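Purely as an illustration (the boundaries are arbitrary and should be sized to your own services), one such split might be:

    172.22.42.0/24:
      172.22.42.1   - 172.22.42.3    subnet routers
      172.22.42.4   - 172.22.42.20   spare for future control services
      172.22.42.21  - 172.22.42.104  compute node management addresses
      172.22.42.105 - 172.22.42.254  Object Storage internal network
    172.22.87.0/26:
      172.22.87.1   - 172.22.87.3    subnet routers
      172.22.87.4   - 172.22.87.62   out-of-band remote management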
#: ./doc/openstack-ops/ch_arch_network_design.xml:245(para)
msgid ""
"A similar approach can be taken with public IP addresses, taking note that "
"large, flat ranges are preferred for use with guest instance IPs. Take into "
"account that for some OpenStack networking options, a public IP address in "
"the range of a guest instance public IP address is assigned to the "
"nova-compute host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:253(title)
msgid "Network Topology"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:255(para)
msgid ""
"OpenStack Compute with nova-network provides predefined "
"network deployment models, each with its own strengths and weaknesses. The "
"selection of a network manager changes your network topology, so the choice "
"should be made carefully. You also have a choice between the tried-and-true "
"legacy nova-network settings or the neutron project for OpenStack Networking. Both offer "
"networking for launched instances with different implementations and "
"requirements.networks"
"primary>deployment options networks network managers"
"secondary> network design"
"primary>network topology deployment options"
"tertiary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:278(para)
msgid ""
"For OpenStack Networking with the neutron project, typical configurations "
"are documented with the idea that any setup you can configure with real "
"hardware you can re-create with a software-defined equivalent. Each tenant "
"can contain typical network elements such as routers, and services such as "
"DHCP."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:284(para)
msgid ""
" describes the networking "
"deployment options for both legacy nova-network options "
"and an equivalent neutron configuration.provisioning/deployment network "
"deployment options "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:294(caption)
msgid "Networking deployment options"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:306(th)
msgid "Network deployment model"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:308(th)
msgid "Strengths"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:310(th)
msgid "Weaknesses"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:312(th)
msgid "Neutron equivalent"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:318(para)
msgid "Flat"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para)
msgid "Extremely simple topology."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para)
msgid "No DHCP overhead."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:323(para)
msgid ""
"Requires file injection into the instance to configure network interfaces."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:326(td)
msgid ""
"Configure a single bridge as the integration bridge (br-int) and connect it "
"to a physical network interface with the Modular Layer 2 (ML2) plug-in, "
"which uses Open vSwitch by default."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:332(para) ./doc/openstack-ops/section_arch_example-nova.xml:128(para)
msgid "FlatDHCP"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para)
msgid "Relatively simple to deploy."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para)
msgid "Standard networking."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:335(para)
msgid "Works with all guest operating systems."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:338(para) ./doc/openstack-ops/ch_arch_network_design.xml:350(para)
msgid "Requires its own DHCP broadcast domain."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:340(td)
msgid ""
"Configure DHCP agents and routing agents. Network Address Translation (NAT) "
"performed outside of compute nodes, typically on one or more network nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:346(para)
msgid "VlanManager"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:348(para)
msgid "Each tenant is isolated to its own VLANs."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:350(para) ./doc/openstack-ops/ch_arch_network_design.xml:372(para)
msgid "More complex to set up."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:351(para)
msgid "Requires many VLANs to be trunked onto a single port."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:352(para)
msgid "Standard VLAN number limitation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:353(para)
msgid "Switches must support 802.1q VLAN tagging."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:356(para)
msgid ""
"Isolated tenant networks implement some form of isolation of layer 2 traffic "
"between distinct networks. VLAN tagging is key concept, where traffic is "
"“tagged” with an ordinal identifier for the VLAN. Isolated network "
"implementations may or may not include additional services like DHCP, NAT, "
"and routing."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:364(para)
msgid "FlatDHCP Multi-host with high availability (HA)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:367(para)
msgid ""
"Networking failure is isolated to the VMs running on the affected hypervisor."
""
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:368(para)
msgid "DHCP traffic can be isolated within an individual host."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:369(para)
msgid "Network traffic is distributed to the compute nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:372(para)
msgid ""
"Compute nodes typically need IP addresses accessible by external networks."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:374(para)
msgid ""
"Options must be carefully configured for live migration to work with "
"networking services."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:377(para)
msgid ""
"Configure neutron with multiple DHCP and layer-3 agents. Network nodes are "
"not able to failover to each other, so the controller runs networking "
"services, such as DHCP. Compute nodes run the ML2 plug-in with support for "
"agents such as Open vSwitch or Linux Bridge."
msgstr ""
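As a sketch of the neutron side, scheduling each network onto more than one DHCP agent is a single option in neutron.conf (assuming you deploy multiple DHCP agents):

    # /etc/neutron/neutron.conf (fragment)
    [DEFAULT]
    dhcp_agents_per_network = 2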
#: ./doc/openstack-ops/ch_arch_network_design.xml:386(para)
msgid ""
"Both nova-network and neutron services provide similar "
"capabilities, such as VLAN between VMs. You also can provide multiple NICs "
"on VMs with either service. Further discussion follows."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:392(title)
msgid "VLAN Configuration Within OpenStack VMs"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:394(para)
msgid ""
"VLAN configuration can be as simple or as complicated as desired. The use of "
"VLANs has the benefit of allowing each project its own subnet and broadcast "
"segregation from other projects. To allow OpenStack to efficiently use "
"VLANs, you must allocate a VLAN range (one for each project) and turn each "
"compute node switch port into a trunk port.networks VLAN "
"indexterm>VLAN network "
"indexterm>network design"
"primary>network topology VLAN with OpenStack "
"VMs "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:413(para)
msgid ""
"For example, if you estimate that your cloud must support a maximum of 100 "
"projects, pick a free VLAN range that your network infrastructure is "
"currently not using (such as VLAN 200–299). You must configure OpenStack "
"with this range and also configure your switch ports to allow VLAN traffic "
"from that range."
msgstr ""
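A minimal nova.conf sketch for that scenario, assuming nova-network with VlanManager and eth1 as the trunked interface (all values illustrative):

    # /etc/nova/nova.conf (fragment)
    network_manager = nova.network.manager.VlanManager
    vlan_interface = eth1   # trunk port carrying VLANs 200-299
    vlan_start = 200        # first VLAN ID handed to a project network
    num_networks = 100      # one network, and thus one VLAN, per project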
#: ./doc/openstack-ops/ch_arch_network_design.xml:421(title)
msgid "Multi-NIC Provisioning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:423(para)
msgid ""
"OpenStack Networking with neutron and OpenStack Compute "
"with nova-network have the ability to assign multiple NICs to instances. For "
"nova-network this can be done on a per-request basis, with each additional "
"NIC using up an entire subnet or VLAN, reducing the total number of "
"supported projects.MultiNic"
"primary> network design"
"primary>network topology multi-NIC "
"provisioning "
msgstr ""
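For example, a per-request multi-NIC boot with the nova CLI might look like the following (image, flavor, and network IDs are placeholders):

    $ nova boot --image cirros --flavor m1.small \
        --nic net-id=NET1_UUID --nic net-id=NET2_UUID two-nic-instance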
#: ./doc/openstack-ops/ch_arch_network_design.xml:440(title)
msgid "Multi-Host and Single-Host Networking"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:442(para)
msgid ""
"The nova-network service has the ability to operate in a "
"multi-host or single-host mode. Multi-host is when each compute node runs a "
"copy of nova-network and the instances on that compute "
"node use the compute node as a gateway to the Internet. The compute nodes "
"also host the floating IPs and security groups for instances on that node. "
"Single-host is when a central server—for example, the cloud controller—runs "
"the nova-network service. All compute nodes forward traffic "
"from the instances to the cloud controller. The cloud controller then "
"forwards traffic to the Internet. The cloud controller hosts the floating "
"IPs and security groups for all instances on all compute nodes in the cloud."
"single-host networking "
"indexterm>networks"
"primary>multi-host multi-host networking network design network "
"topology multi- vs. single-host networking "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:469(para)
msgid ""
"There are benefits to both modes. Single-node has the downside of a single "
"point of failure. If the cloud controller is not available, instances cannot "
"communicate on the network. This is not true with multi-host, but multi-host "
"requires that each compute node has a public IP address to communicate on "
"the Internet. If you are not able to obtain a significant block of public IP "
"addresses, multi-host might not be an option."
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:480(title)
msgid "Services for Networking"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:482(para)
msgid ""
"OpenStack, like any network application, has a number of standard "
"considerations to apply, such as NTP and DNS.network design services for "
"networking "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:490(title)
msgid "NTP"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:492(para)
msgid ""
"Time synchronization is a critical element to ensure continued operation of "
"OpenStack components. Correct time is necessary to avoid errors in instance "
"scheduling, replication of objects in the object store, and even matching "
"log timestamps for debugging.networks Network Time Protocol "
"(NTP) "
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:502(para)
msgid ""
"All servers running OpenStack components should be able to access an "
"appropriate NTP server. You may decide to set up one locally or use the "
"public pools available from the Network Time Protocol project."
msgstr ""
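A minimal /etc/ntp.conf sketch pointing every OpenStack host at the project's public pool (substitute a local server if you run one):

    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst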
#: ./doc/openstack-ops/ch_arch_network_design.xml:510(title)
msgid "DNS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_network_design.xml:512(para)
msgid ""
"OpenStack does not currently provide DNS services, aside from the dnsmasq "
"daemon, which resides on nova-network hosts. You could consider "
"providing a dynamic DNS service to allow instances to update a DNS entry "
"with new IP addresses. You can also consider making a generic forward and "
"reverse DNS mapping for instances' IP addresses, such as vm-203-0-113-123."
"example.com.DNS (Domain Name Server, "
"Service or System) DNS service choices "
"indexterm>"
msgstr ""
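In BIND zone-file terms, a generic mapping for the documentation address 203.0.113.123 would look like the following (illustrative only):

    vm-203-0-113-123.example.com.  IN  A    203.0.113.123
    123.113.0.203.in-addr.arpa.    IN  PTR  vm-203-0-113-123.example.com.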
#: ./doc/openstack-ops/ch_arch_network_design.xml:528(para)
msgid ""
"Armed with your IP address layout and numbers and knowledge about the "
"topologies and services you can use, it's now time to prepare the network "
"for your installation. Be sure to also check out the OpenStack Security Guide for tips on "
"securing your network. We wish you a good relationship with your networking "
"team!"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:19(title)
msgid "Acknowledgments"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:20(para)
msgid ""
"The OpenStack Foundation supported the creation of this book with plane "
"tickets to Austin, lodging (including one adventurous evening without power "
"after a windstorm), and delicious food. For about USD $10,000, we could "
"collaborate intensively for a week in the same room at the Rackspace Austin "
"office. The authors are all members of the OpenStack Foundation, which you "
"can join. Go to the Foundation web site at http://openstack.org/join ."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:28(para)
msgid ""
"We want to acknowledge our excellent host Rackers at Rackspace in Austin:"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:32(para)
msgid ""
"Emma Richards of Rackspace Guest Relations took excellent care of our lunch "
"orders and even set aside a pile of sticky notes that had fallen off the "
"walls."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:37(para)
msgid ""
"Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room "
"reshuffle and helped us settle in for the week."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:41(para)
msgid ""
"The Real Estate team at Rackspace in Austin, also known as \"The Victors,\" "
"were super responsive."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:45(para)
msgid ""
"Adam Powell in Racker IT supplied us with bandwidth each day and second "
"monitors for those of us needing more screens."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:49(para)
msgid ""
"On Wednesday night we had a fun happy hour with the Austin OpenStack Meetup "
"group and Racker Katie Schmidt took great care of our group."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:54(para)
msgid "We also had some excellent input from outside of the room:"
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:57(para)
msgid ""
"Tim Bell from CERN gave us feedback on the outline before we started and "
"reviewed it mid-week."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:61(para)
msgid ""
"Sébastien Han has written excellent blogs and generously gave his permission "
"for re-use."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:65(para)
msgid ""
"Oisin Feeley read it, made some edits, and provided emailed feedback right "
"when we asked."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:69(para)
msgid ""
"Inside the book sprint room with us each day was our book sprint facilitator "
"Adam Hyde. Without his tireless support and encouragement, we would have "
"thought a book of this scope was impossible in five days. Adam has proven "
"the book sprint method effectively again and again. He creates both tools "
"and faith in collaborative authoring at www.booksprints.net."
msgstr ""
#: ./doc/openstack-ops/acknowledgements.xml:77(para)
msgid ""
"We couldn't have pulled it off without so much supportive help and "
"encouragement."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/preface_ops.xml:591(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_00in01.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:12(title)
msgid "Preface"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:14(para)
msgid ""
"OpenStack is an open source platform that lets you build an Infrastructure "
"as a Service (IaaS) cloud that runs on commodity hardware."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:19(title)
msgid "Introduction to OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:21(para)
msgid ""
"OpenStack believes in open source, open design, and open development, all in "
"an open community that encourages participation by anyone. The long-term "
"vision for OpenStack is to produce a ubiquitous open source cloud computing "
"platform that meets the needs of public and private cloud providers "
"regardless of size. OpenStack services control large pools of compute, "
"storage, and networking resources throughout a data center."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:30(para)
msgid ""
"The technology behind OpenStack consists of a series of interrelated "
"projects delivering various components for a cloud infrastructure solution. "
"Each service provides an open API so that all of these resources can be "
"managed through a dashboard that gives administrators control while "
"empowering users to provision resources through a web interface, a command-"
"line client, or software development kits that support the API. Many "
"OpenStack APIs are extensible, meaning you can keep compatibility with a "
"core set of calls while providing access to more resources and innovating "
"through API extensions. The OpenStack project is a global collaboration of "
"developers and cloud computing technologists. The project produces an open "
"standard cloud computing platform for both public and private clouds. By "
"focusing on ease of implementation, massive scalability, a variety of rich "
"features, and tremendous extensibility, the project aims to deliver a "
"practical and reliable cloud solution for all types of organizations."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:50(title)
msgid "Getting Started with OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:52(para)
msgid ""
"As an open source project, one of the unique aspects of OpenStack is that it "
"has many different levels at which you can begin to engage with it—you don't "
"have to do everything yourself."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:58(title)
msgid "Using OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:60(para)
msgid ""
"You could ask, \"Do I even need to build a cloud?\" If you want to start "
"using a compute or storage service by just swiping your credit card, you can "
"go to eNovance, HP, Rackspace, or other organizations to start using their "
"public OpenStack clouds. Using their OpenStack cloud resources is similar to "
"accessing the publicly available Amazon Web Services Elastic Compute Cloud "
"(EC2) or Simple Storage Solution (S3)."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:71(title)
msgid "Plug and Play OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:73(para)
msgid ""
"However, the enticing part of OpenStack might be to build your own private "
"cloud, and there are several ways to accomplish this goal. Perhaps the "
"simplest of all is an appliance-style solution. You purchase an appliance, "
"unpack it, plug in the power and the network, and watch it transform into an "
"OpenStack cloud with minimal additional configuration."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:79(para)
msgid ""
"However, hardware choice is important for many applications, so if that "
"applies to you, consider that there are several software distributions "
"available that you can run on servers, storage, and network products of your "
"choosing. Canonical (where OpenStack replaced Eucalyptus as the default "
"cloud option in 2011), Red Hat, and SUSE offer enterprise OpenStack "
"solutions and support. You may also want to take a look at some of the "
"specialized distributions, such as those from Rackspace, Piston, SwiftStack, "
"or Cloudscaling."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:88(para)
msgid ""
"Alternatively, if you want someone to help guide you through the decisions "
"about the underlying hardware or your applications, perhaps adding in a few "
"features or integrating components along the way, consider contacting one of "
"the system integrators with OpenStack experience, such as Mirantis or "
"Metacloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:94(para)
msgid ""
"If your preference is to build your own OpenStack expertise internally, a "
"good way to kick-start that might be to attend or arrange a training session."
" The OpenStack Foundation has a Training Marketplace where you can look for "
"nearby events. Also, the OpenStack community is working to produce open "
"source training materials."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:104(title)
msgid "Roll Your Own OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:106(para)
msgid ""
"However, this guide has a different audience—those seeking flexibility from "
"the OpenStack framework by deploying do-it-yourself solutions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:110(para)
msgid ""
"OpenStack is designed for horizontal scalability, so you can easily add new "
"compute, network, and storage resources to grow your cloud over time. In "
"addition to the pervasiveness of massive OpenStack public clouds, many "
"organizations, such as PayPal, Intel, and Comcast, build large-scale private "
"clouds. OpenStack offers much more than a typical software package because "
"it lets you integrate a number of different technologies to construct a "
"cloud. This approach provides great flexibility, but the number of options "
"might be daunting at first."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:123(title)
msgid "Who This Book Is For"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:124(para)
msgid ""
"This book is for those of you starting to run OpenStack clouds as well as "
"those of you who were handed an operational one and want to keep it running "
"well. Perhaps you're on a DevOps team, perhaps you are a system "
"administrator starting to dabble in the cloud, or maybe you want to get on "
"the OpenStack cloud team at your company. This book is for all of you."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:130(para)
msgid ""
"This guide assumes that you are familiar with a Linux distribution that "
"supports OpenStack, SQL databases, and virtualization. You must be "
"comfortable administering and configuring multiple Linux machines for "
"networking. You must install and maintain an SQL database and occasionally "
"run queries against it."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:136(para)
msgid ""
"One of the most complex aspects of an OpenStack cloud is the networking "
"configuration. You should be familiar with concepts such as DHCP, Linux "
"bridges, VLANs, and iptables. You must also have access to a network "
"hardware expert who can configure the switches and routers required in your "
"OpenStack cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:142(para)
msgid ""
"Cloud computing is quite an advanced topic, and this book requires a lot of "
"background knowledge. However, if you are fairly new to cloud computing, we "
"recommend that you make use of the at "
"the back of the book, as well as the online documentation for OpenStack and "
"additional resources mentioned in this book in ."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:151(title)
msgid "Further Reading"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:152(para)
msgid ""
"There are other books on the OpenStack documentation website that can help you get the job "
"done."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:157(title)
msgid "OpenStack Guides"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:159(term)
msgid "OpenStack Installation Guides"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:161(para)
msgid ""
"Describes a manual installation process, as in, by hand, without automation, "
"for multiple distributions based on a packaging system:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:168(link) ./doc/openstack-ops/ch_ops_resources.xml:16(link)
msgid ""
"Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:174(link)
msgid "Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:180(link) ./doc/openstack-ops/ch_ops_resources.xml:27(link)
msgid "Installation Guide for Ubuntu 14.04 (LTS) Server"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:188(link)
msgid "OpenStack Configuration Reference"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:192(para)
msgid ""
"Contains a reference listing of all configuration options for core and "
"integrated OpenStack services by release version"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:199(link) ./doc/openstack-ops/preface_ops.xml:253(link) ./doc/openstack-ops/ch_ops_resources.xml:31(link)
msgid "OpenStack Administrator Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:203(para)
msgid ""
"Contains how-to information for managing an OpenStack cloud as needed for "
"your use cases, such as storage, computing, or software-defined-networking"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:211(link)
msgid "OpenStack High Availability Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:215(para)
msgid ""
"Describes potential strategies for making your OpenStack services and "
"related controllers and data stores highly available"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:222(link)
msgid "OpenStack Security Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:226(para)
msgid ""
"Provides best practices and conceptual information about securing an "
"OpenStack cloud"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:232(link)
msgid "Virtual Machine Image Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:236(para)
msgid ""
"Shows you how to obtain, create, and modify virtual machine images that are "
"compatible with OpenStack"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:242(link)
msgid "OpenStack End User Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:246(para)
msgid ""
"Shows OpenStack end users how to create and manage resources in an OpenStack "
"cloud with the OpenStack dashboard and OpenStack client commands"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:257(para)
msgid ""
"Shows OpenStack administrators how to create and manage resources in an "
"OpenStack cloud with the OpenStack dashboard and OpenStack client commands "
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:265(link)
msgid "Networking Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:269(para)
msgid ""
"This guide targets OpenStack administrators seeking to deploy and manage "
"OpenStack Networking (neutron)."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:275(link)
msgid "OpenStack API Guide"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:279(para)
msgid ""
"A brief overview of how to send REST API requests to endpoints for OpenStack "
"services"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:288(title)
msgid "How This Book Is Organized"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:290(para)
msgid ""
"This book is organized into two parts: the architecture decisions for "
"designing OpenStack clouds and the repeated operations for running OpenStack "
"clouds."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:294(emphasis)
msgid "Part I:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:301(para)
msgid ""
"Because of all the decisions the other chapters discuss, this chapter "
"describes the decisions made for this particular book and much of the "
"justification for the example architecture."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:312(para)
msgid ""
"While this book doesn't describe installation, we do recommend automation "
"for deployment and configuration, discussed in this chapter."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:322(para)
msgid ""
"The cloud controller is an invention for the sake of consolidating and "
"describing which services run on which nodes. This chapter discusses "
"hardware and network considerations as well as how to design the cloud "
"controller for performance and separation of services."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:335(para)
msgid ""
"This chapter describes the compute nodes, which are dedicated to running "
"virtual machines. Some hardware choices come into play here, as well as "
"logging and networking descriptions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:346(para)
msgid ""
"This chapter discusses the growth of your cloud resources through scaling "
"and segregation considerations."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:356(para)
msgid ""
"As with other architecture decisions, storage concepts within OpenStack "
"offer many options. This chapter lays out the choices for you."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:366(para)
msgid ""
"Your OpenStack cloud networking needs to fit into your existing networks "
"while also enabling the best design for your users and administrators, and "
"this chapter gives you in-depth information about networking decisions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:374(emphasis)
msgid "Part II:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:381(para)
msgid ""
"This chapter is written to let you get your hands wrapped around your "
"OpenStack cloud through command-line tools and understanding what is already "
"set up in your cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:392(para)
msgid ""
"This chapter walks through user-enabling processes that all admins must face "
"to manage users, give them quotas to parcel out resources, and so on."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:402(para)
msgid ""
"This chapter shows you how to use OpenStack cloud resources and how to train "
"your users."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:411(para)
msgid ""
"This chapter goes into the common failures that the authors have seen while "
"running clouds in production, including troubleshooting."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:421(para)
msgid ""
"Because network troubleshooting is especially difficult with virtual "
"resources, this chapter is chock-full of helpful tips and tricks for tracing "
"network traffic, finding the root cause of networking failures, and "
"debugging related services, such as DHCP and DNS."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:434(para)
msgid ""
"This chapter shows you where OpenStack places logs and how to best read and "
"manage logs for monitoring purposes."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:444(para)
msgid ""
"This chapter describes what you need to back up within OpenStack as well as "
"best practices for recovering backups."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:454(para)
msgid ""
"For readers who need to get a specialized feature into OpenStack, this "
"chapter describes how to use DevStack to write custom middleware or a custom "
"scheduler to rebalance your resources."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:465(para)
msgid ""
"Because OpenStack is so, well, open, this chapter is dedicated to helping "
"you navigate the community and find out where you can help and where you can "
"get help."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:475(para)
msgid ""
"Much of OpenStack is driver-oriented, so you can plug in different solutions "
"to the base set of services. This chapter describes some advanced "
"configuration topics ."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:486(para)
msgid ""
"This chapter provides upgrade information based on the architectures used in "
"this book."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:494(emphasis)
msgid "Back matter:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:501(para)
msgid ""
"You can read a small selection of use cases from the OpenStack community "
"with some technical details and further resources."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:511(para)
msgid ""
"These are shared legendary tales of image disappearances, VM massacres, and "
"crazy troubleshooting techniques that result in hard-learned lessons and "
"wisdom ."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:522(para)
msgid ""
"Read about how to track the OpenStack roadmap through the open and "
"transparent development processes."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:531(para)
msgid ""
"So many OpenStack resources are available online because of the fast-moving "
"nature of the project, but there are also resources listed here that the "
"authors found helpful while learning themselves."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:542(para)
msgid ""
"A list of terms used in this book is included, which is a subset of the "
"larger OpenStack glossary available online."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:551(title)
msgid "Why and How We Wrote This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:553(para)
msgid ""
"We wrote this book because we have deployed and maintained OpenStack clouds "
"for at least a year and we wanted to share this knowledge with others. After "
"months of being the point people for an OpenStack cloud, we also wanted to "
"have a document to hand to our system administrators so that they'd know how "
"to operate the cloud on a daily basis—both reactively and pro-actively. We "
"wanted to provide more detailed technical information about the decisions "
"that deployers make along the way."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:565(para)
msgid ""
"Design and create an architecture for your first nontrivial OpenStack cloud. "
"After you read this guide, you'll know which questions to ask and how to "
"organize your compute, networking, and storage resources and the associated "
"software packages."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:573(para)
msgid "Perform the day-to-day tasks required to administer a cloud."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:563(para)
msgid "We wrote this book to help you: "
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:578(para)
msgid ""
"We wrote this book in a book sprint, which is a facilitated, rapid "
"development production method for books. For more information, see the BookSprints site. Your "
"authors cobbled this book together in five days during February 2013, fueled "
"by caffeine and the best takeout food that Austin, Texas, could offer ."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:586(para)
msgid ""
"On the first day, we filled white boards with colorful sticky notes to start "
"to shape this nebulous book about how to architect and operate clouds:"
" "
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:596(para)
msgid ""
"We wrote furiously from our own experiences and bounced ideas between each "
"other. At regular intervals we reviewed the shape and organization of the "
"book and further molded it, leading to what you see today."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:601(para)
msgid "The team includes:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:605(term)
msgid "Tom Fifield"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:608(para)
msgid ""
"After learning about scalability in computing from particle physics "
"experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom "
"worked on OpenStack clouds in production to support the Australian public "
"research sector. Tom currently serves as an OpenStack community manager and "
"works on OpenStack documentation in his spare time."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:619(term)
msgid "Diane Fleming"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:622(para)
msgid ""
"Diane works on the OpenStack API documentation tirelessly. She helped out "
"wherever she could on this project."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:629(term)
msgid "Anne Gentle"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:632(para)
msgid ""
"Anne is the documentation coordinator for OpenStack and also served as an "
"individual contributor to the Google Documentation Summit in 2011, working "
"with the Open Street Maps team. She has worked on book sprints in the past, "
"with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in Austin, Texas."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:642(term)
msgid "Lorin Hochstein"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:645(para)
msgid ""
"An academic turned software-developer-slash-operator, Lorin worked as the "
"lead architect for Cloud Services at Nimbis Services, where he deploys "
"OpenStack for technical computing applications. He has been working with "
"OpenStack since the Cactus release. Previously, he worked on high-"
"performance computing extensions for OpenStack at University of Southern "
"California's Information Sciences Institute (USC-ISI)."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:657(term)
msgid "Adam Hyde"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:660(para)
msgid ""
"Adam facilitated this book sprint. He also founded the book sprint "
"methodology and is the most experienced book-sprint facilitator around. See "
" for more information. Adam "
"founded FLOSS Manuals—a community of some 3,000 individuals developing Free "
"Manuals about Free Software. He is also the founder and project manager for "
"Booktype, an open source project for writing, editing, and publishing books "
"online and in print."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:673(term)
msgid "Jonathan Proulx"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:676(para)
msgid ""
"Jon has been piloting an OpenStack cloud as a senior technical architect at "
"the MIT Computer Science and Artificial Intelligence Lab for his researchers "
"to have as much computing power as they need. He started contributing to "
"OpenStack documentation and reviewing the documentation so that he could "
"accelerate his learning."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:686(term)
msgid "Everett Toews"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:689(para)
msgid ""
"Everett is a developer advocate at Rackspace making OpenStack and the "
"Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and "
"sometimes operator, he's built web applications, taught workshops, given "
"presentations around the world, and deployed OpenStack for production use by "
"academia and business."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:699(term)
msgid "Joe Topjian"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:702(para)
msgid ""
"Joe has designed and deployed several clouds at Cybera, a nonprofit where "
"they are building e-infrastructure to support entrepreneurs and local "
"researchers in Alberta, Canada. He also actively maintains and operates "
"these clouds as a systems architect, and his experiences have generated a "
"wealth of troubleshooting skills for cloud environments."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:713(term)
msgid "OpenStack community members"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:716(para)
msgid ""
"Many individual efforts keep a community book alive. Our community members "
"updated content for this book year-round. Also, a year after the first "
"sprint, Jon Proulx hosted a second two-day mini-sprint at MIT with the goal "
"of updating the book for the latest release. Since the book's inception, "
"more than 30 contributors have supported this book. We have a tool chain for "
"reviews, continuous builds, and translations. Writers and developers "
"continuously review patches, enter doc bugs, edit content, and fix doc bugs. "
"We want to recognize their efforts!"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:728(para)
msgid ""
"The following people have contributed to this book: Akihiro Motoki, "
"Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin "
"Stassart, Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny "
"Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay "
"Clark, Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana "
"Brindley, Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew "
"Kassawara, Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil "
"Hopkins, Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha "
"Peilicke, Sean M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, "
"Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun "
"\"Daisy\" Guo, Zhengguang Ou, and ZhiQiang Fan."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:749(title)
msgid "How to Contribute to This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:751(para)
msgid ""
"The genesis of this book was an in-person event, but now that the book is in "
"your hands, we want you to contribute to it. OpenStack documentation follows "
"the coding principles of iterative work, with bug logging, investigating, "
"and fixing. We also store the source content on GitHub and invite "
"collaborators through the OpenStack Gerrit installation, which offers "
"reviews. For the O'Reilly edition of this book, we are using the company's "
"Atlas system, which also stores source content on GitHub and enables "
"collaboration among contributors."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:761(para)
msgid ""
"Learn more about how to contribute to the OpenStack docs at OpenStack Documentation "
"Contributor Guide."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:765(para)
msgid ""
"If you find a bug and can't fix it or aren't sure it's really a doc bug, log "
"a bug at OpenStack Manuals. Tag the bug under Extra"
"guilabel> options with the ops-guide tag to indicate that "
"the bug is in this guide. You can assign the bug to yourself if you know how "
"to fix it. Also, a member of the OpenStack doc-core team can triage the doc "
"bug."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:778(title)
msgid "Conventions Used in This Book"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:780(para)
msgid "The following typographical conventions are used in this book:"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:785(emphasis)
msgid "Italic"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:788(para)
msgid ""
"Indicates new terms, URLs, email addresses, filenames, and file extensions."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:794(literal)
msgid "Constant width"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:797(para)
msgid ""
"Used for program listings, as well as within paragraphs to refer to program "
"elements such as variable or function names, databases, data types, "
"environment variables, statements, and keywords."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:808(para)
msgid ""
"Shows commands or other text that should be typed literally by the user."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:814(replaceable)
msgid "Constant width italic"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:817(para)
msgid ""
"Shows text that should be replaced with user-supplied values or by values "
"determined by context."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:823(term)
msgid "Command prompts"
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:826(para)
msgid ""
"Commands prefixed with the # prompt should be executed by "
"the root user. These examples can also be executed using "
"the sudo command, if available."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:831(para)
msgid ""
"Commands prefixed with the $ prompt can be executed by "
"any user, including root ."
msgstr ""
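For instance, the first of the following commands must be run as root (or via sudo), while the second is safe for any user:

    # apt-get install tcpdump
    $ nova list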
#: ./doc/openstack-ops/preface_ops.xml:839(para)
msgid "This element signifies a tip or suggestion."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:843(para)
msgid "This element signifies a general note."
msgstr ""
#: ./doc/openstack-ops/preface_ops.xml:847(para)
msgid "This element indicates a warning or caution."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:10(title)
msgid "Use Cases"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:12(para)
msgid ""
"This appendix contains a small selection of use cases from the community, "
"with more technical detail than usual. Further examples can be found on the "
"OpenStack website."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:19(title)
msgid "NeCTAR"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:21(para)
msgid ""
"Who uses it: researchers from the Australian publicly funded research sector."
" Use is across a wide variety of disciplines, with the purpose of instances "
"ranging from running simple web servers to using hundreds of cores for high-"
"throughput computing.NeCTAR Research "
"Cloud use cases"
"primary>NeCTAR OpenStack community use cases"
"secondary>NeCTAR "
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:40(title) ./doc/openstack-ops/app_usecases.xml:110(title) ./doc/openstack-ops/app_usecases.xml:202(title) ./doc/openstack-ops/app_usecases.xml:258(title)
msgid "Deployment"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:42(para)
msgid ""
"Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight sites "
"with approximately 4,000 cores per site."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:45(para)
msgid ""
"Each site runs a different configuration, as a resource cell"
"glossterm>s in an OpenStack Compute cells setup. Some sites span multiple "
"data centers, some use off compute node storage with a shared file system, "
"and some use on compute node storage with a non-shared file system. Each "
"site deploys the Image service with an Object Storage back end. A central "
"Identity, dashboard, and Compute API service are used. A login to the "
"dashboard triggers a SAML login with Shibboleth, which creates an "
"account in the Identity service with an SQL back end. "
"An Object Storage Global Cluster is used across several sites."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:56(para)
msgid ""
"Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and "
"approximately 40 GB of ephemeral storage per core."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:59(para)
msgid ""
"All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The "
"OpenStack version in use is typically the current stable version, with 5 to "
"10 percent back-ported code from trunk and modifications."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:66(title) ./doc/openstack-ops/app_usecases.xml:166(title) ./doc/openstack-ops/app_usecases.xml:227(title) ./doc/openstack-ops/app_usecases.xml:280(title) ./doc/openstack-ops/ch_ops_resources.xml:11(title)
msgid "Resources"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:70(link)
msgid "OpenStack.org case study"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:75(link)
msgid "NeCTAR-RC GitHub"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:80(link)
msgid "NeCTAR website"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:88(title)
msgid "MIT CSAIL"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:90(para)
msgid ""
"Who uses it: researchers from the MIT Computer Science and Artificial "
"Intelligence Lab.CSAIL (Computer "
"Science and Artificial Intelligence Lab) MIT CSAIL (Computer Science and Artificial "
"Intelligence Lab) use cases MIT CSAIL "
"indexterm>OpenStack community"
"primary>use cases MIT CSAIL "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:112(para)
msgid ""
"The CSAIL cloud is currently 64 physical nodes with a total of 768 physical "
"cores and 3,456 GB of RAM. Persistent data storage is largely outside the "
"cloud on NFS, with cloud resources focused on compute resources. There are "
"more than 130 users in more than 40 projects, typically running 2,000–2,500 "
"vCPUs in 300 to 400 instances."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:118(para)
msgid ""
"We initially deployed on Ubuntu 12.04 with the Essex release of OpenStack "
"using FlatDHCP multi-host networking."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:121(para)
msgid ""
"The software stack is still Ubuntu 12.04 LTS, but now with OpenStack Havana "
"from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using FAI and Puppet for "
"configuration management. The FAI and Puppet combination is used lab-wide, "
"not only for OpenStack. There is a single cloud controller node, which also "
"acts as network controller, with the remainder of the server hardware "
"dedicated to compute nodes."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:129(para)
msgid ""
"Host aggregates and instance-type extra specs are used to provide two "
"different resource allocation ratios. The default resource allocation ratios "
"we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use instance "
"types that require non-oversubscribed hosts where cpu_ratio"
"literal> and ram_ratio are both set to 1.0. Since we have "
"hyper-threading enabled on our compute nodes, this provides one vCPU per CPU "
"thread, or two vCPUs per physical core."
msgstr ""
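A sketch of that mechanism with the legacy nova CLI follows; the aggregate, flavor, and metadata key names are illustrative, and the scheduler must have the AggregateInstanceExtraSpecsFilter enabled for the keys to be matched:

    $ nova aggregate-create non-oversubscribed
    $ nova aggregate-set-metadata non-oversubscribed cpu_ratio=1.0 ram_ratio=1.0
    $ nova flavor-key compute-intensive.xlarge set cpu_ratio=1.0 ram_ratio=1.0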
#: ./doc/openstack-ops/app_usecases.xml:138(para)
msgid ""
"With our upgrade to Grizzly in August 2013, we moved to OpenStack "
"Networking, neutron (quantum at the time). Compute nodes have two-gigabit "
"network interfaces and a separate management card for IPMI management. One "
"network interface is used for node-to-node communications. The other is used "
"as a trunk port for OpenStack managed VLANs. The controller node uses two "
"bonded 10g network interfaces for its public IP communications. Big pipes "
"are used here because images are served over this port, and it is also used "
"to connect to iSCSI storage, back-ending the image storage and database. The "
"controller node also has a gigabit interface that is used in trunk mode for "
"OpenStack managed VLAN traffic. This port handles traffic to the dhcp-agent "
"and metadata-proxy."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:151(para)
msgid ""
"We approximate the older nova-network multi-host HA setup "
"by using \"provider VLAN networks\" that connect instances directly to "
"existing publicly addressable networks and use existing physical routers as "
"their default gateway. This means that if our network controller goes down, "
"running instances still have their network available, and no single Linux "
"host becomes a traffic bottleneck. We are able to do this because we have a "
"sufficient supply of IPv4 addresses to cover all of our instances and thus "
"don't need NAT and don't use floating IP addresses. We provide a single "
"generic public network to all projects and additional existing VLANs on a "
"project-by-project basis as needed. Individual projects are also allowed to "
"create their own private GRE based networks."
msgstr ""
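Creating one of these provider VLAN networks with the neutron CLI might look like the following (network name, VLAN ID, and addressing are placeholders):

    $ neutron net-create campus-vlan-100 --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 100
    $ neutron subnet-create campus-vlan-100 203.0.113.0/24 \
        --name campus-vlan-100-subnet --gateway 203.0.113.1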
#: ./doc/openstack-ops/app_usecases.xml:170(link)
msgid "CSAIL homepage"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:178(title)
msgid "DAIR"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:180(para)
msgid ""
"Who uses it: DAIR is an integrated virtual environment that leverages the "
"CANARIE network to develop and test new information communication technology "
"(ICT) and other digital technologies. It combines such digital "
"infrastructure as advanced networking and cloud computing and storage to "
"create an environment for developing and testing innovative ICT "
"applications, protocols, and services; performing at-scale experimentation "
"for deployment; and facilitating a faster time to market.DAIR use cases DAIR "
"indexterm>OpenStack community"
"primary>use cases DAIR "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:204(para)
msgid ""
"DAIR is hosted at two different data centers across Canada: one in Alberta "
"and the other in Quebec. It consists of a cloud controller at each location, "
"although, one is designated the \"master\" controller that is in charge of "
"central authentication and quotas. This is done through custom scripts and "
"light modifications to OpenStack. DAIR is currently running Havana."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:211(para)
msgid "For Object Storage, each region has a swift environment."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:213(para)
msgid ""
"A NetApp appliance is used in each region for both block storage and "
"instance storage. There are future plans to move the instances off the "
"NetApp appliance and onto a distributed file system such as Ceph"
"glossterm> or GlusterFS."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:218(para)
msgid ""
"VlanManager is used extensively for network management. All servers have two "
"bonded 10GbE NICs that are connected to two redundant switches. DAIR is set "
"up to use single-node networking where the cloud controller is the gateway "
"for all instances on all compute nodes. Internal OpenStack traffic (for "
"example, storage traffic) does not go through the cloud controller."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:231(link)
msgid "DAIR homepage"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:239(title)
msgid "CERN"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:241(para)
msgid ""
"Who uses it: researchers at CERN (European Organization for Nuclear "
"Research) conducting high-energy physics research.CERN (European Organization for Nuclear Research)"
"primary> use cases"
"primary>CERN OpenStack community use cases"
"secondary>CERN "
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:260(para)
msgid ""
"The environment is largely based on Scientific Linux 6, which is Red Hat "
"compatible. We use KVM as our primary hypervisor, although tests are ongoing "
"with Hyper-V on Windows Server 2008."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:264(para)
msgid ""
"We use the Puppet Labs OpenStack modules to configure Compute, Image "
"service, Identity, and dashboard. Puppet is used widely for instance "
"configuration, and Foreman is used as a GUI for reporting and instance "
"provisioning."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:269(para)
msgid ""
"Users and groups are managed through Active Directory and imported into the "
"Identity service using LDAP. CLIs are available for nova and Euca2ools to do "
"this."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:273(para)
msgid ""
"There are three clouds currently running at CERN, totaling about 4,700 "
"compute nodes, with approximately 120,000 cores. The CERN IT cloud aims to "
"expand to 300,000 cores by 2015."
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:284(link)
msgid "“OpenStack in Production: A tale of 3 OpenStack Clouds”"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:289(link)
msgid "“Review of CERN Data Centre Infrastructure”"
msgstr ""
#: ./doc/openstack-ops/app_usecases.xml:294(link)
msgid "“CERN Cloud Infrastructure User Guide”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:12(title)
msgid "Logging and Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:14(para)
msgid ""
"As an OpenStack cloud is composed of so many different services, there are a "
"large number of log files. This chapter aims to assist you in locating and "
"working with them and describes other ways to track the status of your "
"deployment.debugging"
"primary>logging/monitoring; maintenance/debugging "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:24(title)
msgid "Where Are the Logs?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:26(para)
msgid ""
"Most services use the convention of writing their log files to "
"subdirectories of the /var/log directory, as listed in .cloud controllers log information"
"secondary> logging/"
"monitoring log location "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:39(caption)
msgid "OpenStack log locations"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:43(th)
msgid "Node type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:45(th)
msgid "Service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:47(th)
msgid "Log location"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:53(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:61(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:69(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:77(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:85(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:93(para)
msgid "Cloud controller"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:55(code)
msgid "nova-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:57(code)
msgid "/var/log/nova"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:63(code)
msgid "glance-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:65(code)
msgid "/var/log/glance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:71(code)
msgid "cinder-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:73(code)
msgid "/var/log/cinder"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:79(code)
msgid "keystone-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:81(code)
msgid "/var/log/keystone"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:87(code)
msgid "neutron-*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:89(code)
msgid "/var/log/neutron"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:95(para) ./doc/openstack-ops/ch_ops_customize.xml:332(code) ./doc/openstack-ops/ch_ops_customize.xml:811(code)
msgid "horizon"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:97(code)
msgid "/var/log/apache2/"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:101(para)
msgid "All nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:103(para)
msgid "misc (swift, dnsmasq)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:105(code)
msgid "/var/log/syslog"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:111(para)
msgid "libvirt"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:113(code)
msgid "/var/log/libvirt/libvirtd.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:119(para)
msgid "Console (boot up messages) for VM instances:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:121(code)
msgid "/var/lib/nova/instances/instance-"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:122(code)
msgid "<instance id>/console.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:127(para)
msgid "Block Storage nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:129(para)
msgid "cinder-volume"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:131(code)
msgid "/var/log/cinder/cinder-volume.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:139(title)
msgid "Reading the Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:141(para)
msgid ""
"OpenStack services use the standard logging levels, at increasing severity: "
"DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That is, messages "
"only appear in the logs if they are more \"severe\" than the particular log "
"level, with DEBUG allowing all log statements through. For example, TRACE is "
"logged only if the software has a stack trace, while INFO is logged for "
"every message including those that are only for information.logging/monitoring logging levels"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:153(para)
msgid ""
"To disable DEBUG-level logging, edit /etc/nova/nova.conf"
"filename> as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:158(para)
msgid ""
"Keystone is handled a little differently. To modify the logging level, edit "
"the /etc/keystone/logging.conf file and look at the "
"logger_root and handler_file sections."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:163(para)
msgid ""
"Logging for horizon is configured in "
"/etc/openstack_dashboard/local_ "
"phrase>settings.py . Because horizon is a Django web "
"application, it follows the Django Logging "
"framework conventions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:169(para)
msgid ""
"The first step in finding the source of an error is typically to search for "
"a CRITICAL, TRACE, or ERROR message in the log starting at the bottom of the "
"log file.logging/monitoring"
"primary>reading log messages "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:177(para)
msgid ""
"Here is an example of a CRITICAL log message, with the corresponding TRACE "
"(Python traceback) immediately following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:211(para)
msgid ""
"In this example, cinder-volumes failed to start and has "
"provided a stack trace, since its volume back end has been unable to set up "
"the storage volume—probably because the LVM volume that is expected from the "
"configuration does not exist."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:216(para)
msgid "Here is an example error log:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:221(para)
msgid ""
"In this error, a nova service has failed to connect to the RabbitMQ server "
"because it got a connection refused error."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:226(title)
msgid "Tracing Instance Requests"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:228(para)
msgid ""
"When an instance fails to behave properly, you will often have to trace "
"activity associated with that instance across the log files of various "
"nova-* services and across both the cloud controller and "
"compute nodes.instances"
"primary>tracing instance requests "
"indexterm>logging/monitoring"
"primary>tracing instance requests "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:241(para)
msgid ""
"The typical way is to trace the UUID associated with an instance across the "
"service logs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:244(para)
msgid "Consider the following example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:253(para)
msgid ""
"Here, the ID associated with the instance is faf7ded8-4a46-413b-b113-"
"f19590746ffe. If you search for this string on the cloud controller "
"in the /var/log/nova-*.log files, it appears in "
"nova-api.log and nova-scheduler.log"
"filename>. If you search for this on the compute nodes in /var/log/"
"nova-*.log , it appears in nova-network.log "
"and nova-compute.log . If no ERROR or CRITICAL messages "
"appear, the most recent log entry that reports this may provide a hint about "
"what has gone wrong."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:267(title)
msgid "Adding Custom Logging Statements"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:269(para)
msgid ""
"If there is not enough information in the existing logs, you may need to add "
"your own custom logging statements to the nova-* services."
"customization"
"primary>custom log statements logging/monitoring adding "
"custom log statements "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:281(para)
msgid ""
"The source files are located in /usr/lib/python2.7/dist-packages/"
"nova ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:284(para)
msgid ""
"To add logging statements, the following line should be near the top of the "
"file. For most files, these should already be there:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:290(para)
msgid "To add a DEBUG logging statement, you would do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:294(para)
msgid ""
"You may notice that all the existing logging messages are preceded by an "
"underscore and surrounded by parentheses, for example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:299(para)
msgid ""
"This formatting is used to support translation of logging messages into "
"different languages using the gettext internationalization library. You "
"don't need to do this for your own custom log messages. However, if you want "
"to contribute the code back to the OpenStack project that includes logging "
"statements, you must surround your log messages with underscores and "
"parentheses."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:309(title)
msgid "RabbitMQ Web Management Interface or rabbitmqctl"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:311(para)
msgid ""
"Aside from connection failures, RabbitMQ log files are generally not useful "
"for debugging OpenStack related issues. Instead, we recommend you use the "
"RabbitMQ web management interface.RabbitMQ logging/monitoring RabbitMQ web "
"management interface Enable it on your cloud "
"controller:cloud controllers"
"primary>enabling RabbitMQ "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:330(para)
msgid ""
"The RabbitMQ web management interface is accessible on your cloud controller "
"at http://localhost:55672 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:334(para)
msgid ""
"Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672. "
"RabbitMQ versions 3.0 and above use port 15672 instead. You can check which "
"version of RabbitMQ you have running on your local Ubuntu machine by doing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:343(para)
msgid ""
"An alternative to enabling the RabbitMQ web management interface is to use "
"the rabbitmqctl "
"commands. For example, rabbitmqctl list_queues| grep cinder"
"literal> displays any messages left in the queue. If there are messages, "
"it's a possible sign that cinder services didn't connect properly to "
"rabbitmq and might have to be restarted."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:350(para)
msgid ""
"Items to monitor for RabbitMQ include the number of items in each of the "
"queues and the processing time statistics for the server."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:355(title)
msgid "Centrally Managing Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:357(para)
msgid ""
"Because your cloud is most likely composed of many servers, you must check "
"logs on each of those servers to properly piece an event together. A better "
"solution is to send the logs of all servers to a central location so that "
"they can all be accessed from the same area.logging/monitoring central log "
"management "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:367(para)
msgid ""
"Ubuntu uses rsyslog as the default logging service. Since it is natively "
"able to send logs to a remote location, you don't have to install anything "
"extra to enable this feature, just modify the configuration file. In doing "
"this, consider running your logging over a management network or using an "
"encrypted VPN to avoid interception."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:374(title)
msgid "rsyslog Client Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:376(para)
msgid ""
"To begin, configure all OpenStack components to log to syslog in addition to "
"their standard log file location. Also configure each component to log to a "
"different syslog facility. This makes it easier to split the logs into "
"individual components on the central server:rsyslog "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:384(para)
msgid "nova.conf :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:389(para)
msgid ""
"glance-api.conf and glance-registry.conf"
"filename>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:395(para)
msgid "cinder.conf :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:400(para)
msgid "keystone.conf :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:405(para)
msgid "By default, Object Storage logs to syslog."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:407(para)
msgid ""
"Next, create /etc/rsyslog.d/client.conf with the "
"following line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:412(para)
msgid ""
"This instructs rsyslog to send all logs to the IP listed. In this example, "
"the IP points to the cloud controller."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:417(title)
msgid "rsyslog Server Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:419(para)
msgid ""
"Designate a server as the central logging server. The best practice is to "
"choose a server that is solely dedicated to this purpose. Create a file "
"called /etc/rsyslog.d/server.conf with the following "
"contents:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:444(para)
msgid ""
"This example configuration handles the nova service only. It first "
"configures rsyslog to act as a server that runs on port 514. Next, it "
"creates a series of logging templates. Logging templates control where "
"received logs are stored. Using the last example, a nova log from c01."
"example.com goes to the following locations:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:452(filename)
msgid "/var/log/rsyslog/c01.example.com/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:456(filename) ./doc/openstack-ops/ch_ops_log_monitor.xml:468(filename)
msgid "/var/log/rsyslog/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:460(para)
msgid "This is useful, as logs from c02.example.com go to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:464(filename)
msgid "/var/log/rsyslog/c02.example.com/nova.log"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:472(para)
msgid ""
"You have an individual log file for each compute node as well as an "
"aggregated log that contains nova logs from all nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:478(title)
msgid "Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:480(para)
msgid ""
"There are two types of monitoring: watching for problems and watching usage "
"trends. The former ensures that all services are up and running, creating a "
"functional cloud. The latter involves monitoring resource usage over time in "
"order to make informed decisions about potential bottlenecks and upgrades."
"cloud controllers"
"primary>process monitoring and "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:493(title)
msgid "Nagios"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:495(para)
msgid ""
"Nagios is an open source monitoring service. It's capable of executing "
"arbitrary commands to check the status of server and network services, "
"remotely executing arbitrary commands directly on servers, and allowing "
"servers to push notifications back in the form of passive monitoring. Nagios "
"has been around since 1999. Although newer monitoring services are "
"available, Nagios is a tried-and-true systems administration staple."
"Nagios "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:507(title)
msgid "Process Monitoring"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:509(para)
msgid ""
"A basic type of alert monitoring is to simply check and see whether a "
"required process is running.monitoring process monitoring"
"secondary> process "
"monitoring logging/monitoring process "
"monitoring For example, ensure that the nova-"
"api service is running on the cloud controller:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:536(para)
msgid ""
"You can create automated alerts for critical processes by using Nagios and "
"NRPE. For example, to ensure that the nova-compute process is "
"running on compute nodes, create an alert on your Nagios server that looks "
"like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:550(para)
msgid ""
"Then on the actual compute node, create the following NRPE configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:556(para)
msgid ""
"Nagios checks that at least one nova-compute service is "
"running at all times."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:561(title)
msgid "Resource Alerting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:563(para)
msgid ""
"Resource alerting provides notifications when one or more resources are "
"critically low. While the monitoring thresholds should be tuned to your "
"specific OpenStack environment, monitoring resource usage is not specific to "
"OpenStack at all—any generic type of alert will work fine.monitoring resource alerting"
"secondary> alerts"
"primary>resource resources resource alerting"
"secondary> logging/"
"monitoring resource alerting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:585(para)
msgid "Some of the resources that you want to monitor include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:589(para)
msgid "Disk usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:593(para)
msgid "Server load"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:597(para)
msgid "Memory usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:601(para)
msgid "Network I/O"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:605(para)
msgid "Available vCPUs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:609(para)
msgid ""
"For example, to monitor disk capacity on a compute node with Nagios, add the "
"following to your Nagios configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:620(para)
msgid "On the compute node, add the following to your NRPE configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:626(para)
msgid ""
"Nagios alerts you with a WARNING when any disk on the compute node is 80 "
"percent full and CRITICAL when 90 percent is full."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:631(title)
msgid "StackTach"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:633(para)
msgid ""
"StackTach is a tool that collects and reports the notifications sent by "
"nova. Notifications are essentially the same as logs but can be "
"much more detailed. Nearly all OpenStack components are capable of "
"generating notifications when significant events occur. Notifications are "
"messages placed on the OpenStack queue (generally RabbitMQ) for consumption "
"by downstream systems. An overview of notifications can be found at System Usage Data.StackTach logging/monitoring StackTack tool"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:649(para)
msgid ""
"To enable nova to send notifications, add the following to "
"nova.conf :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:655(para)
msgid ""
"Once nova is sending notifications, install and configure "
"StackTach. StackTach workers for Queue consumption and pipeling processing "
"are configured to read these notifications from RabbitMQ servers and store "
"them in a database. Users can inquire on instances, requests and servers by "
"using the browser interface or command line tool, Stacky. Since StackTach is relatively "
"new and constantly changing, installation instructions quickly become "
"outdated. Please refer to the StackTach Git repo for instructions as "
"well as a demo video. Additional details on the latest developments can be "
"discovered at theofficial page"
"link>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:666(title)
msgid "Logstash"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:667(para)
msgid ""
"Logstash is a high performance indexing and search engine for logs. Logs "
"from Jenkins test runs are sent to logstash where they are indexed and "
"stored. Logstash facilitates reviewing logs from multiple sources in a "
"single test run, searching for errors or particular events within a test "
"run, and searching for log event trends across test runs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:677(para)
msgid "Log Pusher"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:680(para)
msgid "Log Indexer"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:683(para)
msgid "ElasticSearch"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:686(para)
msgid "Kibana"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:673(para)
msgid ""
"There are four major layers in Logstash setup which are "
"Each layer scales horizontally. As the number of logs grows you can add more "
"log pushers, more Logstash indexers, and more ElasticSearch nodes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:692(para)
msgid ""
"Logpusher is a pair of Python scripts which first listens to Jenkins build "
"events and converts them into Gearman jobs. Gearman provides a generic "
"application framework to farm out work to other machines or processes that "
"are better suited to do the work. It allows you to do work in parallel, to "
"load balance processing, and to call functions between languages.Later "
"Logpusher performs Gearman jobs to push log files into logstash. Logstash "
"indexer reads these log events, filters them to remove unwanted lines, "
"collapse multiple events together, and parses useful information before "
"shipping them to ElasticSearch for storage and indexing. Kibana is a "
"logstash oriented web client for ElasticSearch.Logstash logging/monitoring Logstash"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:712(title)
msgid "OpenStack Telemetry"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:714(para)
msgid ""
"An integrated OpenStack project (code-named ceilometer) collects metering "
"and event data relating to OpenStack services. Data collected by the "
"Telemetry service could be used for billing. Depending on deployment "
"configuration, collected data may be accessible to users based on the "
"deployment configuration. The Telemetry service provides a REST API "
"documented at . You can read more about the module in the OpenStack "
"Administrator Guide or in the developer documentation."
"monitoring"
"primary>metering and telemetry telemetry/metering "
"indexterm>metering/telemetry"
"primary> ceilometer"
"primary> logging/"
"monitoring ceilometer project "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:741(title)
msgid "OpenStack-Specific Resources"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:743(para)
msgid ""
"Resources such as memory, disk, and CPU are generic resources that all "
"servers (even non-OpenStack servers) have and are important to the overall "
"health of the server. When dealing with OpenStack specifically, these "
"resources are important for a second reason: ensuring that enough are "
"available to launch instances. There are a few ways you can see OpenStack "
"resource usage.monitoring"
"primary>OpenStack-specific resources "
"indexterm>resources"
"primary>generic vs. OpenStack-specific "
"indexterm>logging/monitoring"
"primary>OpenStack-specific resources The "
"first is through the nova command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:765(para)
msgid ""
"This command displays a list of how many instances a tenant has running and "
"some light usage statistics about the combined instances. This command is "
"useful for a quick overview of your cloud, but it doesn't really get into a "
"lot of details."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:770(para)
msgid ""
"Next, the nova database contains three tables that store usage "
"information."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:773(para)
msgid ""
"The nova.quotas and nova.quota_usages tables store "
"quota information. If a tenant's quota is different from the default quota "
"settings, its quota is stored in the nova.quotas table. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:794(para)
msgid ""
"The nova.quota_usages table keeps track of how many resources "
"the tenant currently has in use:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:810(para)
msgid ""
"By comparing a tenant's hard limit with their current resource usage, you "
"can see their usage percentage. For example, if this tenant is using 1 "
"floating IP out of 10, then they are using 10 percent of their floating IP "
"quota. Rather than doing the calculation manually, you can use SQL or the "
"scripting language of your choice and create a formatted report:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:839(para)
msgid ""
"The preceding information was generated by using a custom script that can be "
"found on GitHub."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:844(para)
msgid ""
"This script is specific to a certain OpenStack installation and must be "
"modified to fit your environment. However, the logic should easily be "
"transferable."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:851(title)
msgid "Intelligent Alerting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:853(para)
msgid ""
"Intelligent alerting can be thought of as a form of continuous integration "
"for operations. For example, you can easily check to see whether the Image "
"service is up and running by ensuring that the glance-api and "
"glance-registry processes are running or by seeing whether "
"glace-api is responding on port 9292.monitoring intelligent alerting"
"secondary> alerts"
"primary>intelligent logging/monitoring"
"seealso> intelligent "
"alerting logging/"
"monitoring intelligent alerting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:876(para)
msgid ""
"But how can you tell whether images are being successfully uploaded to the "
"Image service? Maybe the disk that Image service is storing the images on is "
"full or the S3 back end is down. You could naturally check this by doing a "
"quick image upload:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:895(para)
msgid ""
"By taking this script and rolling it into an alert for your monitoring "
"system (such as Nagios), you now have an automated way of ensuring that "
"image uploads to the Image Catalog are working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:900(para)
msgid ""
"You must remove the image after each test. Even better, test whether you can "
"successfully delete an image from the Image Service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:905(para)
msgid ""
"Intelligent alerting takes considerably more time to plan and implement than "
"the other alerts described in this chapter. A good outline to implement "
"intelligent alerting is:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:911(para)
msgid "Review common actions in your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:915(para)
msgid "Create ways to automatically test these actions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:919(para)
msgid "Roll these tests into an alerting system."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:923(para)
msgid "Some other examples for Intelligent Alerting include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:927(para)
msgid "Can instances launch and be destroyed?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:931(para)
msgid "Can users be created?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:935(para)
msgid "Can objects be stored and deleted?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:939(para)
msgid "Can volumes be created and destroyed?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:945(title)
msgid "Trending"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:947(para)
msgid ""
"Trending can give you great insight into how your cloud is performing day to "
"day. You can learn, for example, if a busy day was simply a rare occurrence "
"or if you should start adding new compute nodes.monitoring trending"
"secondary>logging/monitoring trending monitoring cloud "
"performance with logging/monitoring trending"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:966(para)
msgid ""
"Trending takes a slightly different approach than alerting. While alerting "
"is interested in a binary result (whether a check succeeds or fails), "
"trending records the current state of something at a certain point in time. "
"Once enough points in time have been recorded, you can see how the value has "
"changed over time.trending"
"primary>vs. alerts binary binary results in trending"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:980(para)
msgid ""
"All of the alert types mentioned earlier can also be used for trend "
"reporting. Some other trend examples include:trending report examples"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:990(para)
msgid "The number of instances on each compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:994(para)
msgid "The types of flavors in use"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:998(para)
msgid "The number of volumes in use"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1002(para)
msgid "The number of Object Storage requests each hour"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1006(para)
msgid "The number of nova-api requests each hour"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1011(para)
msgid "The I/O statistics of your storage services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1015(para)
msgid ""
"As an example, recording nova-api usage can allow you to track "
"the need to scale your cloud controller. By keeping an eye on nova-"
"api requests, you can determine whether you need to spawn more "
"nova-api processes or go as far as introducing an "
"entirely new server to run nova-api. To get an approximate "
"count of the requests, look for standard INFO messages in /var/log/"
"nova/nova-api.log:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1025(para)
msgid ""
"You can obtain further statistics by looking for the number of successful "
"requests:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1030(para)
msgid ""
"By running this command periodically and keeping a record of the result, you "
"can create a trending report over time that shows whether your nova-"
"api usage is increasing, decreasing, or keeping steady."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1035(para)
msgid ""
"A tool such as collectd can be used to store this information. While "
"collectd is out of the scope of this book, a good starting point would be to "
"use collectd to store the result as a COUNTER data type. More information "
"can be found in collectd's documentation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1047(para)
msgid ""
"For stable operations, you want to detect failure promptly and determine "
"causes efficiently. With a distributed system, it's even more important to "
"track the right items to meet a service-level target. Learning where these "
"logs are located in the file system or API gives you an advantage. This "
"chapter also showed how to read, interpret, and manipulate information from "
"OpenStack services so that you can monitor effectively."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:12(title)
msgid "Backup and Recovery"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:14(para)
msgid ""
"Standard backup best practices apply when creating your OpenStack backup "
"policy. For example, how often to back up your data is closely related to "
"how quickly you need to recover from data loss.backup/recovery considerations"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:24(para)
msgid ""
"If you cannot have any data loss at all, you should also focus on a highly "
"available deployment. The OpenStack High Availability Guide"
"emphasis> offers suggestions for elimination of a single point of failure "
"that could cause system downtime. While it is not a completely prescriptive "
"document, it offers methods and techniques for avoiding downtime and data "
"loss.data"
"primary>preventing loss of "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:37(para)
msgid "Other backup considerations include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:41(para)
msgid "How many backups to keep?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:45(para)
msgid "Should backups be kept off-site?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:49(para)
msgid "How often should backups be tested?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:53(para)
msgid ""
"Just as important as a backup policy is a recovery policy (or at least "
"recovery testing)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:57(title)
msgid "What to Back Up"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:59(para)
msgid ""
"While OpenStack is composed of many components and moving parts, backing up "
"the critical data is quite simple.backup/recovery items included"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:66(para)
msgid ""
"This chapter describes only how to back up configuration files and databases "
"that the various OpenStack components need to run. This chapter does not "
"describe how to back up objects inside Object Storage or data contained "
"inside Block Storage. Generally these areas are left for users to back up on "
"their own."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:74(title)
msgid "Database Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:76(para)
msgid ""
"The example OpenStack architecture designates the cloud controller as the "
"MySQL server. This MySQL server hosts the databases for nova, glance, "
"cinder, and keystone. With all of these databases in one place, it's very "
"easy to create a database backup:databases backup/recovery of"
"secondary> backup/"
"recovery databases "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:91(para)
msgid "If you only want to backup a single database, you can instead run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:96(para)
msgid "where nova is the database you want to back up."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:98(para)
msgid ""
"You can easily automate this process by creating a cron job that runs the "
"following script once per day:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:109(para)
msgid ""
"This script dumps the entire MySQL database and deletes any backups older "
"than seven days."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:114(title)
msgid "File System Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:116(para)
msgid ""
"This section discusses which files and directories should be backed up "
"regularly, organized by service.file "
"systems backup/recovery of "
"indexterm>backup/recovery"
"primary>file systems "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:128(title) ./doc/openstack-ops/section_arch_example-neutron.xml:309(td)
msgid "Compute"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:130(para)
msgid ""
"The /etc/nova directory on both the cloud controller "
"and compute nodes should be regularly backed up.cloud controllers file system "
"backups and compute nodes backup/recovery of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:142(para)
msgid ""
"/var/log/nova does not need to be backed up if you have all "
"logs going to a central area. It is highly recommended to use a central "
"logging server or back up the log directory."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:146(para)
msgid ""
"/var/lib/nova is another important directory to back up. The "
"exception to this is the /var/lib/nova/instances subdirectory "
"on compute nodes. This subdirectory contains the KVM images of running "
"instances. You would want to back up this directory only if you need to "
"maintain backup copies of all instances. Under most circumstances, you do "
"not need to do this, but this can vary from cloud to cloud and your service "
"levels. Also be aware that making a backup of a live KVM instance can cause "
"that instance to not boot properly if it is ever restored from a backup."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:158(title)
msgid "Image Catalog and Delivery"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:160(para)
msgid ""
"/etc/glance and /var/log/glance follow the same "
"rules as their nova counterparts.Image service backup/recovery of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:167(para)
msgid ""
"/var/lib/glance should also be backed up. Take special notice "
"of /var/lib/glance/images. If you are using a file-based back "
"end of glance, /var/lib/glance/images is where the images are "
"stored and care should be taken."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:172(para)
msgid ""
"There are two ways to ensure stability with this directory. The first is to "
"make sure this directory is run on a RAID array. If a disk fails, the "
"directory is available. The second way is to use a tool such as rsync to "
"replicate the images to another server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:182(title)
msgid "Identity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:184(para)
msgid ""
"/etc/keystone and /var/log/keystone follow the "
"same rules as other components.Identity backup/recovery"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:191(para)
msgid ""
"/var/lib/keystone, although it should not contain any data "
"being used, can also be backed up just in case."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:198(para)
msgid ""
"/etc/cinder and /var/log/cinder follow the same "
"rules as other components.Block "
"Storage storage"
"primary>block storage "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:207(para)
msgid "/var/lib/cinder should also be backed up."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:213(para)
msgid ""
"/etc/swift is very important to have backed up. This directory "
"contains the swift configuration files as well as the ring files and ring "
"builder file s, which if lost, render the data on your "
"cluster inaccessible. A best practice is to copy the builder files to all "
"storage nodes along with the ring files. Multiple backup copies are spread "
"throughout your storage cluster.builder files rings ring builders "
"indexterm>Object Storage"
"primary>backup/recovery of "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:234(title)
msgid "Recovering Backups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:236(para)
msgid ""
"Recovering backups is a fairly simple process. To begin, first ensure that "
"the service you are recovering is not running. For example, to do a full "
"recovery of nova on the cloud controller, first stop all "
"nova services:recovery"
"primary>backup/recovery backup/recovery recovering "
"backups "
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:258(para)
msgid "Now you can import a previously backed-up database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:262(para)
msgid "You can also restore backed-up nova directories:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:267(para)
msgid "Once the files are restored, start everything back up:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:276(para)
msgid ""
"Other services follow the same process, with their respective directories "
"and databases ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:283(para)
msgid ""
"Backup and subsequent recovery is one of the first tasks system "
"administrators learn. However, each system has different items that need "
"attention. By taking care of your database, image service, and appropriate "
"file system locations, you can be assured that you can handle any event "
"requiring recovery."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:10(title)
msgid "User-Facing Operations"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:12(para)
msgid ""
"This guide is for OpenStack operators and does not seek to be an exhaustive "
"reference for users, but as an operator, you should have a basic "
"understanding of how to use the cloud facilities. This chapter looks at "
"OpenStack from a basic user perspective, which helps you understand your "
"users' needs and determine, when you get a trouble ticket, whether it is a "
"user issue or a service issue. The main concepts covered are images, "
"flavors, security groups, block storage, shared file system storage, and "
"instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:21(title) ./doc/openstack-ops/ch_arch_cloud_controller.xml:585(title)
msgid "Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:25(para)
msgid ""
"OpenStack images can often be thought of as \"virtual machine templates.\" "
"Images can also be standard installation media such as ISO images. "
"Essentially, they contain bootable file systems that are used to launch "
"instances.user training"
"primary>images "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:36(title)
msgid "Adding Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:38(para)
msgid ""
"Several pre-made images exist and can easily be imported into the Image "
"service. A common image to add is the CirrOS image, which is very small and "
"used for testing purposes.images"
"primary>adding To add this image, simply "
"do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:50(para)
msgid ""
"The glance image-create command provides a large set of options "
"for working with your image. For example, the min-disk option "
"is useful for images that require root disks of a certain size (for example, "
"large Windows images). To view these options, do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:58(para)
msgid ""
"The location option is important to note. It does not copy the "
"entire image into the Image service, but references an original location "
"where the image can be found. Upon launching an instance of that image, the "
"Image service accesses the image from the location specified."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:64(para)
msgid ""
"The copy-from option copies the image from the location "
"specified into the /var/lib/glance/images directory. The same "
"thing is done when using the STDIN redirection with <, as shown in the "
"example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:69(para)
msgid "Run the following command to view the properties of existing images:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:77(title)
msgid "Adding Signed Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:79(para)
msgid ""
"To provide a chain of trust from an end user to the Image service, and the "
"Image service to Compute, an end user can import signed images into the "
"Image service that can be verified in Compute. Appropriate Image service "
"properties need to be set to enable signature verification. Currently, "
"signature verification is provided in Compute only, but an accompanying "
"feature in the Image service is targeted for Mitaka."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:87(para)
msgid ""
"Prior to the steps below, an asymmetric keypair and certificate must be "
"generated. In this example, these are called private_key.pem and new_cert."
"crt, respectively, and both reside in the current directory. Also note that "
"the image in this example is cirros-0.3.4-x86_64-disk.img, but any image can "
"be used."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:93(para)
msgid ""
"The following are steps needed to create the signature used for the signed "
"images:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:97(para)
msgid "Retrieve image for upload"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:102(para)
msgid "Use private key to create a signature of the image"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:107(para)
msgid "Signature hash method = SHA-256"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:108(para)
msgid "Signature key type = RSA-PSS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:104(para)
msgid ""
"The following implicit values are being used to create the signature in this "
"example: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:115(para)
msgid "Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:116(para)
msgid ""
"Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1, "
"ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:113(para)
msgid "The following options are currently supported: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:120(para)
msgid "Generate signature of image and convert it to a base64 representation:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:129(para)
msgid "Create context"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:143(para)
msgid "Encode certificate in DER format"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:156(para)
msgid "Upload Certificate in DER format to Castellan"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:167(para)
msgid "Upload Image to Image service, with Signature Metadata"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:173(para)
msgid "img_signature uses the signature called signature_64"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:176(para)
msgid ""
"img_signature_certificate_uuid uses the value from cert_uuid in section 5 "
"above"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:180(para)
msgid "img_signature_hash_method matches 'SHA-256' in section 2 above"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:183(para)
msgid "img_signature_key_type matches 'RSA-PSS' in section 2 above"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:170(para)
msgid "The following signature properties are used: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:204(para)
msgid "Signature verification will occur when Compute boots the signed image"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:206(para)
msgid ""
"As of the Mitaka release, Compute supports instance signature validation. "
"This is enabled by setting the verify_glance_signatures flag in nova.conf to "
"TRUE. When enabled, Compute will automatically validate signed instances "
"prior to its launch."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:213(title)
msgid "Sharing Images Between Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:215(para)
msgid ""
"In a multi-tenant cloud environment, users sometimes want to share their "
"personal images or snapshots with other projects.projects sharing images between"
"secondary> images"
"primary>sharing between projects This can "
"be done on the command line with the glance tool by the "
"owner of the image."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:228(para)
msgid "To share an image or snapshot with another project, do the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:233(para)
msgid "Obtain the UUID of the image:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:239(para)
msgid ""
"Obtain the UUID of the project with which you want to share your image. "
"Unfortunately, non-admin users are unable to use the keystone"
"literal> command to do this. The easiest solution is to obtain the UUID "
"either from an administrator of the cloud or from a user located in the "
"project."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:247(para)
msgid ""
"Once you have both pieces of information, run the glance "
"command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:257(para)
msgid ""
"Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access to image "
"733d1c44-a2ea-414b-aca7-69decf20d810."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:264(title)
msgid "Deleting Images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:266(para)
msgid ""
"To delete an image,images"
"primary>deleting just execute:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:275(para)
msgid ""
"Deleting an image does not affect instances or snapshots that were based on "
"the image."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:281(title)
msgid "Other CLI Options"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:283(para)
msgid ""
"A full set of options can be found using:images CLI options for"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:292(para)
msgid ""
"or the Command-Line Interface Reference ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:297(title)
msgid "The Image service and the Database"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:299(para)
msgid ""
"The only thing that the Image service does not store in a database is the "
"image itself. The Image service database has two main tables:databases Image service"
"secondary> Image service"
"primary>database tables "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:313(literal)
msgid "images"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:317(literal)
msgid "image_properties"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:321(para)
msgid ""
"Working directly with the database and SQL queries can provide you with "
"custom lists and reports of images. Technically, you can update properties "
"about images through the database, although this is not generally "
"recommended."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:328(title)
msgid "Example Image service Database Queries"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:330(para)
msgid ""
"One interesting example is modifying the table of images and the owner of "
"that image. This can be easily done if you simply display the unique ID of "
"the owner. Image service"
"primary>database queries This example goes "
"one step further and displays the readable name of the owner:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:344(para)
msgid "Another example is displaying all properties for a certain image:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:353(title)
msgid "Flavors"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:355(para)
msgid ""
"Virtual hardware templates are called \"flavors\" in OpenStack, defining "
"sizes for RAM, disk, number of cores, and so on. The default install "
"provides five flavors."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:359(para)
msgid ""
"These are configurable by admin users (the rights may also be delegated to "
"other users by redefining the access controls for compute_extension:"
"flavormanage in /etc/nova/policy.json on the nova-"
"api server). To get the list of available flavors on your system, run:"
"DAC (discretionary access control)"
"primary> flavor "
"indexterm>user training"
"primary>flavors "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:385(para)
msgid ""
"The nova flavor-create command allows authorized users to "
"create new flavors. Additional flavor manipulation commands can be shown "
"with the command: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:389(para)
msgid ""
"Flavors define a number of parameters, resulting in the user having a choice "
"of what type of virtual machine to run—just like they would have if they "
"were purchasing a physical server. "
"lists the elements that can be set. Note in particular extra_specs , which can be used to "
"define free-form characteristics, giving a lot of flexibility beyond just "
"the size of RAM, CPU, and Disk.base "
"image "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:401(caption)
msgid "Flavor parameters"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:409(emphasis)
msgid "Column"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:417(para)
msgid "ID"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:419(para)
msgid "Unique ID (integer or UUID) for the flavor."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:423(para) ./doc/openstack-ops/ch_ops_user_facing.xml:2387(th) ./doc/openstack-ops/ch_arch_scaling.xml:78(th)
msgid "Name"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:425(para)
msgid ""
"A descriptive name, such as xx.size_name, is conventional but not required, "
"though some third-party tools may rely on it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:431(para)
msgid "Memory_MB"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:433(para)
msgid "Virtual machine memory in megabytes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:437(para) ./doc/openstack-ops/ch_arch_scaling.xml:84(th)
msgid "Disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:439(para)
msgid ""
"Virtual root disk size in gigabytes. This is an ephemeral disk the base "
"image is copied into. You don't use it when you boot from a persistent "
"volume. The \"0\" size is a special case that uses the native base image "
"size as the size of the ephemeral root volume."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:447(para) ./doc/openstack-ops/ch_arch_scaling.xml:86(th)
msgid "Ephemeral"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:449(para)
msgid ""
"Specifies the size of a secondary ephemeral data disk. This is an empty, "
"unformatted disk and exists only for the life of the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:455(para)
msgid "Swap"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:457(para)
msgid "Optional swap space allocation for the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:464(para)
msgid "Number of virtual CPUs presented to the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:469(para)
msgid "RXTX_Factor"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:471(para)
msgid ""
"Optional property that allows created servers to have a different "
"bandwidthbandwidth"
"primary>capping cap from that defined in "
"the network they are attached to. This factor is multiplied by the rxtx_base "
"property of the network. Default value is 1.0 (that is, the same as the "
"attached network)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:483(para)
msgid "Is_Public"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:485(para)
msgid ""
"Boolean value that indicates whether the flavor is available to all users or "
"private. Private flavors do not get the current tenant assigned to them. "
"Defaults to True ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:492(para)
msgid "extra_specs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:494(para)
msgid ""
"Additional optional restrictions on which compute nodes the flavor can run "
"on. This is implemented as key-value pairs that must match against the "
"corresponding key-value pairs on compute nodes. Can be used to implement "
"things like special resources (such as flavors that can run only on compute "
"nodes with GPU hardware)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:505(title)
msgid "Private Flavors"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:507(para)
msgid ""
"A user might need a custom flavor that is uniquely tuned for a project she "
"is working on. For example, the user might require 128 GB of memory. If you "
"create a new flavor as described above, the user would have access to the "
"custom flavor, but so would all other tenants in your cloud. Sometimes this "
"sharing isn't desirable. In this scenario, allowing all users to have access "
"to a flavor with 128 GB of memory might cause your cloud to reach full "
"capacity very quickly. To prevent this, you can restrict access to the "
"custom flavor using the nova command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:519(para)
msgid "To view a flavor's access list, do the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:524(title)
msgid "Best Practices"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:526(para)
msgid ""
"Once access to a flavor has been restricted, no other projects besides the "
"ones granted explicit access will be able to see the flavor. This includes "
"the admin project. Make sure to add the admin project in addition to the "
"original project."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:531(para)
msgid ""
"It's also helpful to allocate a specific numeric range for custom and "
"private flavors. On UNIX-based systems, nonsystem accounts usually have a "
"UID starting at 500. A similar approach can be taken with custom flavors. "
"This helps you easily identify which flavors are custom, private, and public "
"for the entire cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:540(title)
msgid "How Do I Modify an Existing Flavor?"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:542(para)
msgid ""
"The OpenStack dashboard simulates the ability to modify a flavor by deleting "
"an existing flavor and creating a new one with the same name."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:551(title)
msgid "Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:553(para)
msgid ""
"A common new-user issue with OpenStack is failing to set an appropriate "
"security group when launching an instance. As a result, the user is unable "
"to contact the instance on the network.security groups user training security groups"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:564(para)
msgid ""
"Security groups are sets of IP filter rules that are applied to an "
"instance's networking. They are project specific, and project members can "
"edit the default rules for their group and add new rules sets. All projects "
"have a \"default\" security group, which is applied to instances that have "
"no other security group defined. Unless changed, this security group denies "
"all incoming traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:572(title)
msgid "General Security Groups Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:574(para)
msgid ""
"The nova.conf option allow_same_net_traffic (which "
"defaults to true ) globally controls whether the rules "
"apply to hosts that share a network. When set to true , "
"hosts on the same subnet are not filtered and are allowed to pass all types "
"of traffic between them. On a flat network, this allows all instances from "
"all projects unfiltered communication. With VLAN networking, this allows "
"access between instances within the same project. If "
"allow_same_net_traffic is set to false , "
"security groups are enforced for all connections. In this case, it is "
"possible for projects to simulate allow_same_net_traffic by "
"configuring their default security group to allow all traffic from their "
"subnet."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:589(para)
msgid ""
"As noted in the previous chapter, the number of rules per security group is "
"controlled by the quota_security_group_rules, and the number of "
"allowed security groups per project is controlled by the "
"quota_security_groups quota."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:598(title)
msgid "End-User Configuration of Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:600(para)
msgid ""
"Security groups for the current project can be found on the OpenStack "
"dashboard under Access & Security . To see details "
"of an existing group, select the edit action for that "
"security group. Obviously, modifying existing groups can be done from this "
"edit interface. There is a Create Security "
"Group button on the main Access & Security"
"guilabel> page for creating new groups. We discuss the terms used in these "
"fields when we explain the command-line equivalents."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:611(title)
msgid "Setting with nova command"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:613(para)
msgid ""
"From the command line, you can get a list of security groups for the project "
"you're acting in using the nova command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:627(para) ./doc/openstack-ops/ch_ops_user_facing.xml:741(para)
msgid "To view the details of the \"open\" security group:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:638(para)
msgid ""
"These rules are all \"allow\" type rules, as the default is deny. The first "
"column is the IP protocol (one of icmp, tcp, or udp), and the second and "
"third columns specify the affected port range. The fourth column specifies "
"the IP range in CIDR format. This example shows the full port range for all "
"protocols allowed from all IPs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:644(para) ./doc/openstack-ops/ch_ops_user_facing.xml:828(para)
msgid ""
"When adding a new security group, you should pick a descriptive but brief "
"name. This name shows up in brief descriptions of the instances that use it "
"where the longer description field often does not. Seeing that an instance "
"is using security group http is much easier to understand "
"than bobs_group or secgrp1 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:651(para)
msgid ""
"As an example, let's create a security group that allows web traffic "
"anywhere on the Internet. We'll call this group global_http"
"literal>, which is clear and reasonably concise, encapsulating what is "
"allowed and from where. From the command line, do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:665(para)
msgid ""
"This creates the empty security group. To make it do what we want, we need "
"to add some rules:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:676(para)
msgid ""
"Note that the arguments are positional, and the from-port "
"and to-port arguments specify the allowed local port "
"range connections. These arguments are not indicating source and destination "
"ports of the connection. More complex rule sets can be built up through "
"multiple invocations of nova secgroup-add-rule . For "
"example, if you want to pass both http and https traffic, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:691(para) ./doc/openstack-ops/ch_ops_user_facing.xml:910(para)
msgid ""
"Despite only outputting the newly added rule, this operation is additive:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:702(para)
msgid ""
"The inverse operation is called secgroup-delete-rule , "
"using the same format. Whole security groups can be removed with "
"secgroup-delete ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:707(para)
msgid ""
"To create security group rules for a cluster of instances, you want to use "
"SourceGroups ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:710(para)
msgid ""
"SourceGroups are a special dynamic way of defining the CIDR of allowed "
"sources. The user specifies a SourceGroup (security group name) and then all "
"the users' other instances using the specified SourceGroup are selected "
"dynamically. This dynamic selection alleviates the need for individual rules "
"to allow each new member of the cluster"
"phrase>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:717(para)
msgid ""
"The code is structured like this: nova secgroup-add-group-rule "
"<secgroup> <source-group> <ip-proto> <from-port> "
"<to-port>. An example usage is shown here:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:724(para) ./doc/openstack-ops/ch_ops_user_facing.xml:949(para)
msgid ""
"The \"cluster\" rule allows SSH access from any other instance that uses the "
"global-http group."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:728(title)
msgid "Setting with neutron command"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:730(para)
msgid ""
"If your environment is using Neutron, you can configure security groups "
"settings using the neutron command. Get a list of "
"security groups for the project you are acting in, by using following "
"command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:758(para)
msgid ""
"These rules are all \"allow\" type rules, as the default is deny. This "
"example shows the full port range for all protocols allowed from all IPs. "
"This section describes the most common security-group-rule parameters:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:764(term)
msgid "direction"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:767(para)
msgid ""
"The direction in which the security group rule is applied. Valid values are "
"ingress or egress ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:773(term)
msgid "remote_ip_prefix"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:776(para)
msgid ""
"This attribute value matches the specified IP prefix as the source IP "
"address of the IP packet."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:782(term)
msgid "protocol"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:785(para)
msgid ""
"The protocol that is matched by the security group rule. Valid values are "
"null , tcp , udp , "
"icmp , and icmpv6 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:793(term)
msgid "port_range_min"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:796(para)
msgid ""
"The minimum port number in the range that is matched by the security group "
"rule. If the protocol is TCP or UDP, this value must be less than or equal "
"to the port_range_max attribute value. If the protocol is "
"ICMP or ICMPv6, this value must be an ICMP or ICMPv6 type, respectively."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:806(term)
msgid "port_range_max"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:809(para)
msgid ""
"The maximum port number in the range that is matched by the security group "
"rule. The port_range_min attribute constrains the "
"port_range_max attribute. If the protocol is ICMP or "
"ICMPv6, this value must be an ICMP or ICMPv6 type, respectively."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:819(term)
msgid "ethertype"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:822(para)
msgid ""
"Must be IPv4 or IPv6 , and addresses "
"represented in CIDR must match the ingress or egress rules."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:835(para)
msgid ""
"This example creates a security group that allows web traffic anywhere on "
"the Internet. We'll call this group global_http , which is "
"clear and reasonably concise, encapsulating what is allowed and from where. "
"From the command line, do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:855(para)
msgid ""
"Immediately after create, the security group has only an allow egress rule. "
"To make it do what we want, we need to add some rules:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:889(para)
msgid ""
"More complex rule sets can be built up through multiple invocations of "
"neutron security-group-rule-create . For example, if you "
"want to pass both http and https traffic, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:927(para)
msgid ""
"The inverse operation is called security-group-rule-delete"
"literal>, specifying security-group-rule ID. Whole security groups can be "
"removed with security-group-delete ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:932(para)
msgid ""
"To create security group rules for a cluster of instances, use RemoteGroups ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:935(para)
msgid ""
"RemoteGroups are a dynamic way of defining the CIDR of allowed sources. The "
"user specifies a RemoteGroup (security group name) and then all the users' "
"other instances using the specified RemoteGroup are selected dynamically. "
"This dynamic selection alleviates the need for individual rules to allow "
"each new member of the cluster ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:942(para)
msgid ""
"The code is similar to the above example of security-group-rule-"
"create . To use RemoteGroup, specify --remote-group-id"
"literal> instead of --remote-ip-prefix . For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:960(para)
msgid ""
"OpenStack volumes are persistent block-storage devices that may be attached "
"and detached from instances, but they can be attached to only one instance "
"at a time. Similar to an external hard drive, they do not provide shared "
"storage in the way a network file system or object store does. It is left to "
"the operating system in the instance to put a file system on the block "
"device and mount it, or not. block "
"storage storage"
"primary>block storage user training block storage"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:979(para)
msgid ""
"As with other removable disk technology, it is important that the operating "
"system is not trying to make use of the disk before removing it. On Linux "
"instances, this typically involves unmounting any file systems mounted from "
"the volume. The OpenStack volume service cannot tell whether it is safe to "
"remove volumes from an instance, so it does what it is told. If a user tells "
"the volume service to detach a volume from an instance while it is being "
"written to, you can expect some level of file system corruption as well as "
"faults from whatever process within the instance was using the device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:989(para)
msgid ""
"There is nothing OpenStack-specific in being aware of the steps needed to "
"access block devices from within the instance operating system, potentially "
"formatting them for first use and being cautious when removing them. What is "
"specific is how to create new volumes and attach and detach them from "
"instances. These operations can all be done from the Volumes"
"guilabel> page of the dashboard or by using the cinder "
"command-line client."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:997(para)
msgid ""
"To add new volumes, you need only a name and a volume size in gigabytes. "
"Either put these into the Create Volume web form or use "
"the command line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1003(para)
msgid ""
"This creates a 10 GB volume named test-volume . To list "
"existing volumes and the instances they are connected to, if any:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1014(para)
msgid ""
"OpenStack Block Storage also allows creating snapshots of volumes. Remember "
"that this is a block-level snapshot that is crash consistent, so it is best "
"if the volume is not connected to an instance when the snapshot is taken and "
"second best if the volume is not in use on the instance it is attached to. "
"If the volume is under heavy use, the snapshot may have an inconsistent file "
"system. In fact, by default, the volume service does not take a snapshot of "
"a volume that is attached to an image, though it can be forced to. To take a "
"volume snapshot, either select Create Snapshot from the "
"actions column next to the volume name on the dashboard Volumes"
"guilabel> page, or run this from the command line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1040(para)
msgid ""
"For more information about updating Block Storage volumes (for example, "
"resizing or transferring), see the OpenStack End User Guide"
"link>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1044(title)
msgid "Block Storage Creation Failures"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1046(para)
msgid ""
"If a user tries to create a volume and the volume immediately goes into an "
"error state, the best way to troubleshoot is to grep the cinder log files "
"for the volume's UUID. First try the log files on the cloud controller, and "
"then try the storage node where the volume was attempted to be created:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1062(para)
msgid ""
"Similar to Block Storage, the Shared File System is a persistent storage, "
"called share, that can be used in multi-tenant environments. Users create "
"and mount a share as a remote file system on any machine that allows "
"mounting shares, and has network access to share exporter. This share can "
"then be used for storing, sharing, and exchanging files. The default "
"configuration of the Shared File Systems service depends on the back-end "
"driver the admin chooses when starting the Shared File Systems service. For "
"more information about existing back-end drivers, see section \"Share Backends\" of Shared File Systems service "
"Developer Guide. For example, in case of OpenStack Block Storage based back-"
"end is used, the Shared File Systems service cares about everything, "
"including VMs, networking, keypairs, and security groups. Other "
"configurations require more detailed knowledge of shares functionality to "
"set up and tune specific parameters and modes of shares functioning."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1087(para)
msgid "Create, update, delete and force-delete shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1090(para)
msgid "Change access rules for shares, reset share state"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1093(para)
msgid "Specify quotas for existing users or tenants"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1096(para)
msgid "Create share networks"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1099(para)
msgid "Define new share types"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1102(para)
msgid ""
"Perform operations with share snapshots: create, change name, create a share "
"from a snapshot, delete"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1106(para)
msgid "Operate with consistency groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1109(para)
msgid "Use security services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1080(para)
msgid ""
"Shares are a remote mountable file system, so users can mount a share to "
"multiple hosts, and have it accessed from multiple hosts by multiple users "
"at a time. With the Shared File Systems service, you can perform a large "
"number of operations with shares: For more information on "
"share management see section “Share management”"
"link> of chapter “Shared File Systems” in OpenStack Administrator Guide. As "
"to Security services, you should remember that different drivers support "
"different authentication methods, while generic driver does not support "
"Security Services at all (see section "
"“Security services” of chapter “Shared File Systems” in OpenStack "
"Administrator Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1124(para)
msgid ""
"You can create a share in a network, list shares, and show information for, "
"update, and delete a specified share. You can also create snapshots of "
"shares (see section “Share snapshots” of chapter "
"“Shared File Systems” in OpenStack Administrator Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1133(para)
msgid ""
"There are default and specific share types that allow you to filter or "
"choose back-ends before you create a share. Functions and behaviour of share "
"type is similar to Block Storage volume type (see section “Share types” of chapter “Shared File Systems” in OpenStack "
"Administrator Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1142(para)
msgid ""
"To help users keep and restore their data, Shared File Systems service "
"provides a mechanism to create and operate snapshots (see section “Share snapshots” of chapter "
"“Shared File Systems” in OpenStack Administrator Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1160(para)
msgid "LDAP"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1163(para)
msgid "Kerberos"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1166(para)
msgid "Microsoft Active Directory"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1150(para)
msgid ""
"A security service stores configuration information for clients for "
"authentication and authorization. Inside Manila a share network can be "
"associated with up to three security types (for detailed information see "
"section “Security services” of "
"chapter “Shared File Systems” in OpenStack Administrator Guide): "
" "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1177(para)
msgid ""
"Without interaction with share networks, in so called \"no share servers\" "
"mode."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1181(para)
msgid "Interacting with share networks."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1171(para)
msgid ""
"Shared File Systems service differs from the principles implemented in Block "
"Storage. Shared File Systems service can work in two modes: "
"Networking service is used by the Shared File Systems service to directly "
"operate with share servers. For switching interaction with Networking "
"service on, create a share specifying a share network. To use \"share "
"servers\" mode even being out of OpenStack, a network plugin called "
"StandaloneNetworkPlugin is used. In this case, provide network information "
"in the configuration: IP range, network type, and segmentation ID. Also you "
"can add security services to a share network (see section “Networking” of chapter “Shared File Systems” in OpenStack "
"Administrator Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1197(para)
msgid ""
"The main idea of consistency groups is to enable you to create snapshots at "
"the exact same point in time from multiple file system shares. Those "
"snapshots can be then used for restoring all shares that were associated "
"with the consistency group (see section “Consistency "
"groups” of chapter “Shared File Systems” in OpenStack Administrator "
"Guide)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1214(para)
msgid "Rate limits"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1217(para)
msgid "Absolute limits"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1229(para)
msgid "Max amount of space awailable for all shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1230(para)
msgid "Max number of shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1231(para)
msgid "Max number of shared networks"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1232(para)
msgid "Max number of share snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1233(para)
msgid "Max total amount of all snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1234(para)
msgid ""
"Type and number of API calls that can be made in a specific time interval"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1207(para)
msgid ""
"Shared File System storage allows administrators to set limits and quotas "
"for specific tenants and users. Limits are the resource limitations that are "
"allowed for each tenant or user. Limits consist of: Rate "
"limits control the frequency at which users can issue specific API requests. "
"Rate limits are configured by administrators in a config file. Also, "
"administrator can specify quotas also known as max values of absolute limits "
"per tenant. Whereas users can see only the amount of their consumed "
"resources. Administrator can specify rate limits or quotas for the following "
"resources: User can see his rate limits and absolute limits "
"by running commands manila rate-limits and manila "
"absolute-limits respectively. For more details on limits and quotas "
"see subsection \"Quotas and limits\" of \"Share "
"management\" section of OpenStack Administrator Guide document."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1253(para)
msgid "Create share"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1256(para)
msgid "Operating with a share"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1259(para)
msgid "Manage access to shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1262(para)
msgid "Create snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1265(para)
msgid "Create a share network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1268(para)
msgid "Manage a share network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1247(para)
msgid ""
"This section lists several of the most important Use Cases that demonstrate "
"the main functions and abilities of Shared File Systems service: "
" "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1274(para)
msgid ""
"Shared File Systems service cannot warn you beforehand if it is safe to "
"write a specific large amount of data onto a certain share or to remove a "
"consistency group if it has a number of shares assigned to it. In such a "
"potentially erroneous situations, if a mistake happens, you can expect some "
"error message or even failing of shares or consistency groups into an "
"incorrect status. You can also expect some level of system corruption if a "
"user tries to unmount an unmanaged share while a process is using it for "
"data transfer."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1286(title)
msgid "Create Share"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1293(para)
msgid ""
"Check if there is an appropriate share type defined in the Shared File "
"Systems service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1298(para)
msgid ""
"If such a share type does not exist, an Admin should create it using "
"manila type-create command before other users are able to use "
"it"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1303(para)
msgid ""
"Using a share network is optional. However if you need one, check if there "
"is an appropriate network defined in Shared File Systems service by using "
"manila share-network-list command. For the information on "
"creating a share network, see "
"below in this chapter."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1311(para)
msgid "Create a public share using manila create"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1314(para)
msgid ""
"Make sure that the share has been created successfully and is ready to use "
"(check the share status and see the share export location)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1288(para)
msgid ""
"In this section, we examine the process of creating a simple share. It "
"consists of several steps: Below is the same whole "
"procedure described step by step and in more detail."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1324(para)
msgid ""
"Before you start, make sure that Shared File Systems service is installed on "
"your OpenStack cluster and is ready to use."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1330(para)
msgid ""
"By default, there are no share types defined in Shared File Systems service, "
"so you can check if a required one has been already created: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1340(para)
msgid ""
"If the share types list is empty or does not contain a type you need, create "
"the required share type using this command: This command "
"will create a public share with the following parameters: name = "
"netapp1, spec_driver_handles_share_servers = False"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1347(para)
msgid ""
"You can now create a public share with my_share_net network, default share "
"type, NFS shared file systems protocol, and 1 GB size: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1380(para)
msgid ""
"To confirm that creation has been successful, see the share in the share "
"list: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1428(para)
msgid ""
"See subsection “Share Management” of "
"“Shared File Systems” section of Administration Guide document for the "
"details on share management operations."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1391(para)
msgid ""
"Check the share status and see the share export location. After creation, "
"the share status should become available: The "
"value is_public defines the level of visibility for the share: "
"whether other tenants can or cannot see the share. By default, the share is "
"private. Now you can mount the created share like a remote file system and "
"use it for your purposes. "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1440(title)
msgid "Manage Access To Shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1451(para)
msgid ""
"rw: read and write (RW) access. This is the default value."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1457(para)
msgid "ro: read-only (RO) access."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1467(para)
msgid ""
"ip: authenticates an instance through its IP address. A valid "
"format is XX.XX.XX.XX orXX.XX.XX.XX/XX. For example 0.0.0.0/0."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1474(para)
msgid ""
"cert: authenticates an instance through a TLS certificate. "
"Specify the TLS identity as the IDENTKEY. A valid value is any string up to "
"64 characters long in the common name (CN) of the certificate. The meaning "
"of a string depends on its interpretation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1483(para)
msgid ""
"user: authenticates by a specified user or group name. A valid "
"value is an alphanumeric string that can contain some special characters and "
"is from 4 to 32 characters long."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1492(para)
msgid ""
"Do not mount a share without an access rule! This can lead to an exception."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1442(para)
msgid ""
"Currently, you have a share and would like to control access to this share "
"for other users. For this, you have to perform a number of steps and "
"operations. Before getting to manage access to the share, pay attention to "
"the following important parameters. To grant or deny access to a share, "
"specify one of these supported share access levels: "
"Additionally, you should also specify one of these supported authentication "
"methods: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1498(para)
msgid ""
"Allow access to the share with IP access type and 10.254.0.4 IP address: "
" "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1513(para)
msgid ""
"Mount the Share: Then check if the share mounted "
"successfully and according to the specified access rules: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1528(para)
msgid ""
"Different share features are supported by different share drivers. In these "
"examples there was used generic (Cinder as a back-end) driver that does not "
"support user and cert authentication methods."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1537(para)
msgid ""
"For the details of features supported by different drivers see section “Manila share features "
"support mapping” of Manila Developer Guide document."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1550(title)
msgid "Manage Shares"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1552(para)
msgid ""
"There are several other useful operations you would perform when working "
"with shares."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1558(title)
msgid "Update Share"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1560(para)
msgid ""
"To change the name of a share, or update its description, or level of "
"visibility for other tenants, use this command: Check the "
"attributes of the updated Share1: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1598(title)
msgid "Reset Share State"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1600(para)
msgid ""
"Sometimes a share may appear and then hang in an erroneous or a transitional "
"state. Unprivileged users do not have the appropriate access rights to "
"correct this situation. However, having cloud administrator's permissions, "
"you can reset the share's state by using command to reset "
"share state, where state indicates which state to assign the share to. "
"Options include: available, error, creating, deleting, error_deleting"
"code> states."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1613(para)
msgid ""
"After running check the share's status: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1652(title)
msgid "Delete Share"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1654(para)
msgid ""
"If you do not need a share any more, you can delete it using command like: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1661(para)
msgid ""
"If you specified the consistency group while creating a share, you should "
"provide the --consistency-group parameter to delete the share:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1672(para)
msgid ""
"Sometimes it appears that a share hangs in one of transitional states (i.e. "
"creating, deleting, managing, unmanaging, extending, and shrinking"
"code>). In that case, to delete it, you need command and "
"administrative permissions to run it: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1682(para)
msgid ""
"For more details and additional information about other cases, features, API "
"commands etc, see subsection “Share Management”"
"link> of “Shared File Systems” section of Administration Guide document."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1698(title)
msgid "Create Snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1700(para)
msgid ""
"The Shared File Systems service provides a mechanism of snapshots to help "
"users to restore their own data. To create a snapshot, use "
"command like: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1740(para)
msgid ""
"For more details and additional information on snapshots, see subsection "
" “Share Snapshots” of “Shared "
"File Systems” section of “Administration Guide” document."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1720(para)
msgid ""
"Then, if needed, update the name and description of the created snapshot: "
" To make sure that the snapshot is available, run: "
" "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1755(title)
msgid "Create a Share Network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1757(para)
msgid ""
"To control a share network, Shared File Systems service requires interaction "
"with Networking service to manage share servers on its own. If the selected "
"driver runs in a mode that requires such kind of interaction, you need to "
"specify the share network when a share is created. For the information on "
"share creation, see earlier in this chapter."
" Initially, check the existing share networks type list by: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1772(para)
msgid ""
"If share network list is empty or does not contain a required network, just "
"create, for example, a share network with a private network and subnetwork. "
" The segmentation_id, cidr, "
"ip_version, and network_type share network "
"attributes are automatically set to the values determined by the network "
"provider."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1800(para)
msgid ""
"Then check if the network became created by requesting the networks list "
"once again: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1815(para)
msgid ""
"See subsection “Share Networks” of "
"“Shared File Systems” section of Administration Guide document for more "
"details."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1811(para)
msgid ""
"Finally, to create a share that uses this share network, get to Create Share "
"use case described earlier in this chapter. "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1828(title)
msgid "Manage a Share Network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1830(para)
msgid ""
"There is a pair of useful commands that help manipulate share networks. To "
"start, check the network list: If you configured the back-"
"end with driver_handles_share_servers = True (with the share "
"servers) and had already some operations in the Shared File Systems service, "
"you can see manila_service_network in the neutron list of "
"networks. This network was created by the share driver for internal usage. "
" "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1856(para)
msgid ""
"You also can see detailed information about the share network including "
"network_type, segmentation_id fields: You also "
"can add and remove the security services to the share network."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1881(para)
msgid ""
"For details, see subsection \"Security "
"Services\" of “Shared File Systems” section of Administration Guide "
"document."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1901(para)
msgid ""
"Instances are the running virtual machines within an OpenStack cloud. This "
"section deals with how to work with them and their underlying images, their "
"network properties, and how they are represented in the database.user training instances"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1911(title)
msgid "Starting Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1913(para)
msgid ""
"To launch an instance, you need to select an image, a flavor, and a name. "
"The name needn't be unique, but your life will be simpler if it is because "
"many tools will use the name in place of the UUID so long as the name is "
"unique. You can start an instance from the dashboard from the "
"Launch Instance button on the Instances"
"guilabel> page or by selecting the Launch Instance "
"action next to an image or snapshot on the Images page."
"instances"
"primary>starting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1927(para)
msgid "On the command line, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1931(para)
msgid ""
"There are a number of optional items that can be specified. You should read "
"the rest of this section before trying to start an instance, but this is the "
"base command that later details are layered upon."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1940(para)
msgid ""
"In releases prior to Mitaka, select the equivalent Terminate "
"instance action."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1935(para)
msgid ""
"To delete instances from the dashboard, select the Delete "
"instance action next to the instance on the Instances"
"guilabel> page. . From the command line, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1947(para)
msgid ""
"It is important to note that powering off an instance does not terminate it "
"in the OpenStack sense."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1952(title)
msgid "Instance Boot Failures"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1954(para)
msgid ""
"If an instance fails to start and immediately moves to an error state, there "
"are a few different ways to track down what has gone wrong. Some of these "
"can be done with normal user access, while others require access to your log "
"server or compute nodes.instances"
"primary>boot failures "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:1963(para)
msgid ""
"The simplest reasons for nodes to fail to launch are quota violations or the "
"scheduler being unable to find a suitable compute node on which to run the "
"instance. In these cases, the error is apparent when you run a nova "
"show on the faulted instance:config drive "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2001(para)
msgid ""
"In this case, looking at the fault message shows "
"NoValidHost , indicating that the scheduler was unable to "
"match the instance requirements."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2005(para)
msgid ""
"If nova show does not sufficiently explain the failure, "
"searching for the instance UUID in the nova-compute.log on the "
"compute node it was scheduled on or the nova-scheduler.log on "
"your scheduler hosts is a good place to start looking for lower-level "
"problems."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2011(para)
msgid ""
"Using nova show as an admin user will show the compute node the "
"instance was scheduled on as hostId. If the instance failed "
"during scheduling, this field is blank."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2017(title)
msgid "Using Instance-Specific Data"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2019(para)
msgid ""
"There are two main types of instance-specific data: metadata and user data."
"metadata instance "
"metadata instances instance-specific data"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2031(title)
msgid "Instance metadata"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2033(para)
msgid ""
"For Compute, instance metadata is a collection of key-value pairs associated "
"with an instance. Compute reads and writes to these key-value pairs any time "
"during the instance lifetime, from inside and outside the instance, when the "
"end user uses the Compute API to do so. However, you cannot query the "
"instance-associated key-value pairs with the metadata service that is "
"compatible with the Amazon EC2 metadata service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2041(para)
msgid ""
"For an example of instance metadata, users can generate and register SSH "
"keys using the nova command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2046(para)
msgid ""
"This creates a key named , which you can associate with "
"instances. The file mykey.pem is the private key, which "
"should be saved to a secure location because it allows root access to "
"instances the key is associated with."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2052(para)
msgid "Use this command to register an existing key with OpenStack:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2058(para)
msgid ""
"You must have the matching private key to access instances associated with "
"this key."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2062(para)
msgid ""
"To associate a key with an instance on boot, add --key_name mykey"
"code> to your command line. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2068(para)
msgid ""
"When booting a server, you can also add arbitrary metadata so that you can "
"more easily identify it among other running instances. Use the --meta"
"code> option with a key-value pair, where you can make up the string for "
"both the key and the value. For example, you could add a description and "
"also the creator of the server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2077(para)
msgid ""
"When viewing the server information, you can see the metadata included on "
"the metadata line:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2113(title)
msgid "Instance user data"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2115(para)
msgid ""
"The user-data key is a special key in the metadata service that "
"holds a file that cloud-aware applications within the guest instance can "
"access. For example, cloudinit is an "
"open source package from Ubuntu, but available in most distributions, that "
"handles early initialization of a cloud instance that makes use of this user "
"data.user data "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2126(para)
msgid ""
"This user data can be put in a file on your local system and then passed in "
"at instance creation with the flag --user-data <user-data-file>"
"code>. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2132(para)
msgid ""
"To understand the difference between user data and metadata, realize that "
"user data is created before an instance is started. User data is accessible "
"from within the instance when it is running. User data can be used to store "
"configuration, a script, or anything the tenant wants."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2140(title)
msgid "File injection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2142(para)
msgid ""
"Arbitrary local files can also be placed into the instance file system at "
"creation time by using the --file <dst-path=src-path> "
"option. You may store up to five files.file injection "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2149(para)
msgid ""
"For example, let's say you have a special authorized_keys"
"filename> file named special_authorized_keysfile that for some reason you "
"want to put on the instance instead of using the regular SSH key injection. "
"In this case, you can use the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2162(title)
msgid "Associating Security Groups"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2164(para)
msgid ""
"Security groups, as discussed earlier, are typically required to allow "
"network traffic to an instance, unless the default security group for a "
"project has been modified to be more permissive.security groups user training security groups"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2175(para)
msgid ""
"Adding security groups is typically done on instance boot. When launching "
"from the dashboard, you do this on the Access & Security"
"guilabel> tab of the Launch Instance dialog. When "
"launching from the command line, append --security-groups with "
"a comma-separated list of security groups."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2182(para)
msgid ""
"It is also possible to add and remove security groups when an instance is "
"running. Currently this is only available through the command-line tools. "
"Here is an example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2194(para)
msgid ""
"Where floating IPs are configured in a deployment, each project will have a "
"limited number of floating IPs controlled by a quota. However, these need to "
"be allocated to the project from the central pool prior to their use—usually "
"by the administrator of the project. To allocate a floating IP to a project, "
"use the Allocate IP To Project button on the "
"Floating IPs tab of the Access & "
"Security page of the dashboard. The command line can also be used:"
"address pool "
"indexterm>IP addresses"
"primary>floating user training floating IPs"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2215(para)
msgid ""
"Once allocated, a floating IP can be assigned to running instances from the "
"dashboard either by selecting Associate Floating IP "
"from the actions drop-down next to the IP on the Floating IPs"
"guilabel> tab of the Access & Security page or by "
"making this selection next to the instance you want to associate it with on "
"the Instances page. The inverse action, "
"Dissociate Floating IP , is available from the "
"Floating IPs tab of the Access & "
"Security page and from the Instances page."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2227(para)
msgid ""
"To associate or disassociate a floating IP with a server from the command "
"line, use the following commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2236(title)
msgid "Attaching Block Storage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2238(para)
msgid ""
"You can attach block storage to instances from the dashboard on the "
"Volumes page. Click the Manage Attachments"
"guibutton> action next to the volume you want to attach.storage block storage "
"indexterm>block storage "
"indexterm>user training"
"primary>block storage "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2253(para)
msgid "To perform this action from command line, run the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2258(para)
msgid ""
"You can also specify block deviceblock device mapping at instance "
"boot time through the nova command-line client with this option set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2266(code)
msgid "<dev-name>=<id>:<type>:<size(GB)>:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2265(phrase)
msgid "The block device mapping format is "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2267(code)
msgid "<delete-on-terminate>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2267(phrase)
msgid " , where:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2272(term)
msgid "dev-name"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2275(para)
msgid ""
"A device name where the volume is attached in the system at /dev/"
"dev_name "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2281(term)
msgid "id"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2284(para)
msgid ""
"The ID of the volume to boot from, as shown in the output of nova "
"volume-list "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2290(term)
msgid "type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2293(para)
msgid ""
"Either snap , which means that the volume was created from "
"a snapshot, or anything other than snap (a blank string "
"is valid). In the preceding example, the volume was not created from a "
"snapshot, so we leave this field blank in our following example."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2302(term)
msgid "size (GB)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2305(para)
msgid ""
"The size of the volume in gigabytes. It is safe to leave this blank and have "
"the Compute Service infer the size."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2311(term)
msgid "delete-on-terminate"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2314(para)
msgid ""
"A boolean to indicate whether the volume should be deleted when the instance "
"is terminated. True can be specified as True or "
"1 . False can be specified as False or "
"0 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2323(para)
msgid ""
"The following command will boot a new instance and attach a volume at the "
"same time. The volume of ID 13 will be attached as /dev/vdc. It "
"is not a snapshot, does not specify a size, and will not be deleted when the "
"instance is terminated:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2332(para)
msgid ""
"If you have previously prepared block storage with a bootable file system "
"image, it is even possible to boot from persistent block storage. The "
"following command boots an image from the specified volume. It is similar to "
"the previous command, but the image is omitted and the volume is now "
"attached as /dev/vda:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2341(para)
msgid ""
"Read more detailed instructions for launching an instance from a bootable "
"volume in the OpenStack End User Guide."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2346(para)
msgid ""
"To boot normally from an image and attach block storage, map to a device "
"other than vda. You can find instructions for launching an instance and "
"attaching a volume to the instance and for copying the image to the attached "
"volume in the OpenStack End User Guide."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2357(title)
msgid "Taking Snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2359(para)
msgid ""
"The OpenStack snapshot mechanism allows you to create new images from "
"running instances. This is very convenient for upgrading base images or for "
"taking a published image and customizing it for local use. To snapshot a "
"running instance to an image using the CLI, do this:base image snapshot user training snapshots"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2375(para)
msgid ""
"The dashboard interface for snapshots can be confusing because the snapshots "
"and images are displayed in the Images page. However, "
"an instance snapshot is an image. The only difference "
"between an image that you upload directly to the Image Service and an image "
"that you create by snapshot is that an image created by snapshot has "
"additional properties in the glance database. These properties are found in "
"the image_properties table and include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2389(th)
msgid "Value"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2395(literal)
msgid "image_type"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2397(para) ./doc/openstack-ops/ch_ops_user_facing.xml:2416(para)
msgid "snapshot"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2401(literal)
msgid "instance_uuid"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2403(para)
msgid "<uuid of instance that was snapshotted>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2407(literal)
msgid "base_image_ref"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2409(para)
msgid "<uuid of original image of instance that was snapshotted>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2414(literal)
msgid "image_location"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2422(title)
msgid "Live Snapshots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2424(para)
msgid ""
"Live snapshots is a feature that allows users to snapshot the running "
"virtual machines without pausing them. These snapshots are simply disk-only "
"snapshots. Snapshotting an instance can now be performed with no downtime "
"(assuming QEMU 1.3+ and libvirt 1.0+ are used).live snapshots "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2433(title)
msgid "Disable live snapshotting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2434(para)
msgid ""
"If you use libvirt version 1.2.2 , you may experience "
"intermittent problems with live snapshot creation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2438(para)
msgid ""
"To effectively disable the libvirt live snapshotting, until the problem is "
"resolved, add the below setting to nova.conf."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2445(title)
msgid "Ensuring Snapshots of Linux Guests Are Consistent"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2447(para)
msgid ""
"The following section is from Sébastien Han's “OpenStack: Perform Consistent "
"Snapshots” blog entry."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2452(para)
msgid ""
"A snapshot captures the state of the file system, but not the state of the "
"memory. Therefore, to ensure your snapshot contains the data that you want, "
"before your snapshot you need to ensure that:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2459(para)
msgid "Running programs have written their contents to disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2463(para)
msgid ""
"The file system does not have any \"dirty\" buffers: where programs have "
"issued the command to write to disk, but the operating system has not yet "
"done the write"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2469(para)
msgid ""
"To ensure that important services have written their contents to disk (such "
"as databases), we recommend that you read the documentation for those "
"applications to determine what commands to issue to have them sync their "
"contents to disk. If you are unsure how to do this, the safest approach is "
"to simply stop these running services normally."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2476(para)
msgid ""
"To deal with the \"dirty\" buffer issue, we recommend using the sync command "
"before snapshotting:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2481(para)
msgid ""
"Running sync writes dirty buffers (buffered blocks that have "
"been modified but not written yet to the disk block) to disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2485(para)
msgid ""
"Just running sync is not enough to ensure that the file system "
"is consistent. We recommend that you use the fsfreeze tool, "
"which halts new access to the file system, and create a stable image on disk "
"that is suitable for snapshotting. The fsfreeze tool supports "
"several file systems, including ext3, ext4, and XFS. If your virtual machine "
"instance is running on Ubuntu, install the util-linux package to get "
"fsfreeze :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2495(para)
msgid ""
"In the very common case where the underlying snapshot is done via LVM, the "
"filesystem freeze is automatically handled by LVM."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2502(para)
msgid ""
"If your operating system doesn't have a version of fsfreeze"
"literal> available, you can use xfs_freeze instead, which "
"is available on Ubuntu in the xfsprogs package. Despite the \"xfs\" in the "
"name, xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel "
"version 2.6.29 or greater, since it works at the virtual file system (VFS) "
"level starting at 2.6.29. The xfs_freeze version supports the same command-"
"line arguments as fsfreeze ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2511(para)
msgid ""
"Consider the example where you want to take a snapshot of a persistent block "
"storage volume, detected by the guest operating system as /dev/vdb"
"literal> and mounted on /mnt . The fsfreeze command "
"accepts two arguments:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2519(term)
msgid "-f"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2522(para)
msgid "Freeze the system"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2527(term)
msgid "-u"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2530(para)
msgid "Thaw (unfreeze) the system"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2535(para)
msgid ""
"To freeze the volume in preparation for snapshotting, you would do the "
"following, as root, inside the instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2540(para)
msgid ""
"You must mount the file system before you run the "
"fsfreeze command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2543(para)
msgid ""
"When the fsfreeze -f command is issued, all ongoing "
"transactions in the file system are allowed to complete, new write system "
"calls are halted, and other calls that modify the file system are halted. "
"Most importantly, all dirty data, metadata, and log information are written "
"to disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2549(para)
msgid ""
"Once the volume has been frozen, do not attempt to read from or write to the "
"volume, as these operations hang. The operating system stops every I/O "
"operation and any I/O attempts are delayed until the file system has been "
"unfrozen."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2554(para)
msgid ""
"Once you have issued the fsfreeze command, it is safe to "
"perform the snapshot. For example, if your instance was named mon-"
"instance and you wanted to snapshot it to an image named "
"mon-snapshot , you could now run the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2562(para)
msgid ""
"When the snapshot is done, you can thaw the file system with the following "
"command, as root, inside of the instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2567(para)
msgid ""
"If you want to back up the root file system, you can't simply run the "
"preceding command because it will freeze the prompt. Instead, run the "
"following one-liner, as root, inside the instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2573(para)
msgid ""
"After this command it is common practice to call from your "
"workstation, and once done press enter in your instance shell to unfreeze it."
" Obviously you could automate this, but at least it will let you properly "
"synchronize."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2579(title)
msgid "Ensuring Snapshots of Windows Guests Are Consistent"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2581(para)
msgid ""
"Obtaining consistent snapshots of Windows VMs is conceptually similar to "
"obtaining consistent snapshots of Linux VMs, although it requires additional "
"utilities to coordinate with a Windows-only subsystem designed to facilitate "
"consistent backups."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2586(para)
msgid ""
"Windows XP and later releases include a Volume Shadow Copy Service (VSS) "
"which provides a framework so that compliant applications can be "
"consistently backed up on a live filesystem. To use this framework, a VSS "
"requestor is run that signals to the VSS service that a consistent backup is "
"needed. The VSS service notifies compliant applications (called VSS writers) "
"to quiesce their data activity. The VSS service then tells the copy provider "
"to create a snapshot. Once the snapshot has been made, the VSS service "
"unfreezes VSS writers and normal I/O activity resumes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2596(para)
msgid ""
"QEMU provides a guest agent that can be run in guests running on KVM "
"hypervisors. This guest agent, on Windows VMs, coordinates with the Windows "
"VSS service to facilitate a workflow which ensures consistent snapshots. "
"This feature requires at least QEMU 1.7. The relevant guest agent commands "
"are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2604(term)
msgid "guest-file-flush"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2606(para)
msgid ""
"Write out \"dirty\" buffers to disk, similar to the Linux sync"
"literal> operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2612(term)
msgid "guest-fsfreeze"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2614(para)
msgid ""
"Suspend I/O to the disks, similar to the Linux fsfreeze -f"
"literal> operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2620(term)
msgid "guest-fsfreeze-thaw"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2622(para)
msgid ""
"Resume I/O to the disks, similar to the Linux fsfreeze -u "
"operation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2628(para)
msgid ""
"To obtain snapshots of a Windows VM these commands can be scripted in "
"sequence: flush the filesystems, freeze the filesystems, snapshot the "
"filesystems, then unfreeze the filesystems. As with scripting similar "
"workflows against Linux VMs, care must be used when writing such a script to "
"ensure error handling is thorough and filesystems will not be left in a "
"frozen state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2641(title)
msgid "Instances in the Database"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2643(para)
msgid ""
"While instance information is stored in a number of database tables, the "
"table you most likely need to look at in relation to user instances is the "
"instances table.instances"
"primary>database information databases instance "
"information in user training instances"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2659(para)
msgid ""
"The instances table carries most of the information related to both running "
"and deleted instances. It has a bewildering array of fields; for an "
"exhaustive list, look at the database. These are the most useful fields for "
"operators looking to form queries:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2666(para)
msgid ""
"The deleted field is set to 1 if the "
"instance has been deleted and NULL if it has not been "
"deleted. This field is important for excluding deleted instances from your "
"queries."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2673(para)
msgid ""
"The uuid field is the UUID of the instance and is used "
"throughout other tables in the database as a foreign key. This ID is also "
"reported in logs, the dashboard, and command-line tools to uniquely identify "
"an instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2680(para)
msgid ""
"A collection of foreign keys are available to find relations to the instance."
" The most useful of these—user_id and "
"project_id —are the UUIDs of the user who launched the "
"instance and the project it was launched in."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2687(para)
msgid ""
"The host field tells which compute node is hosting the "
"instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2692(para)
msgid ""
"The hostname field holds the name of the instance when it "
"is launched. The display-name is initially the same as hostname but can be "
"reset using the nova rename command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2698(para)
msgid ""
"A number of time-related fields are useful for tracking when state changes "
"happened on an instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2703(literal)
msgid "created_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2707(literal)
msgid "updated_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2711(literal)
msgid "deleted_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2715(literal)
msgid "scheduled_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2719(literal)
msgid "launched_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2723(literal)
msgid "terminated_at"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2729(title)
msgid "Good Luck!"
msgstr ""
#: ./doc/openstack-ops/ch_ops_user_facing.xml:2731(para)
msgid ""
"This section was intended as a brief introduction to some of the most useful "
"of many OpenStack commands. For an exhaustive list, please refer to the "
" OpenStack "
"Administrator Guide. We hope your users remain happy and recognize "
"your hard work! (For more hard work, turn the page to the next chapter, "
"where we discuss the system-facing operations: maintenance, failures and "
"debugging.)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:12(title)
msgid "Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:14(para)
msgid ""
"In this chapter, we discuss some of the choices you need to consider when "
"building out your compute nodes. Compute nodes form the resource core of the "
"OpenStack Compute cloud, providing the processing, memory, network and "
"storage resources to run instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:20(title)
msgid "Choosing a CPU"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:22(para)
msgid ""
"The type of CPU in your compute node is a very important choice. First, "
"ensure that the CPU supports virtualization by way of VT-x"
"emphasis> for Intel chips and AMD-v for AMD chips."
"CPUs (central processing units)"
"primary>choosing compute nodes CPU choice"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:36(para)
msgid ""
"Consult the vendor documentation to check for virtualization support. For "
"Intel, read “Does my processor support Intel® "
"Virtualization Technology?”. For AMD, read AMD Virtualization. Note that "
"your CPU may support virtualization but it may be disabled. Consult your "
"BIOS documentation for how to enable CPU features.virtualization technology "
"indexterm>AMD Virtualization"
"primary> Intel "
"Virtualization Technology "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:52(para)
msgid ""
"The number of cores that the CPU has also affects the decision. It's common "
"for current CPUs to have up to 12 cores. Additionally, if an Intel CPU "
"supports hyperthreading, those 12 cores are doubled to 24 cores. If you "
"purchase a server that supports multiple CPUs, the number of cores is "
"further multiplied.cores "
"indexterm>hyperthreading "
"indexterm>multithreading "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:67(title)
msgid "Multithread Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:69(para)
msgid ""
"Hyper-Threading is Intel's proprietary simultaneous multithreading "
"implementation used to improve parallelization on their CPUs. You might "
"consider enabling Hyper-Threading to improve the performance of "
"multithreaded applications."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:74(para)
msgid ""
"Whether you should enable Hyper-Threading on your CPUs depends upon your use "
"case. For example, disabling Hyper-Threading can be beneficial in intense "
"computing environments. We recommend that you do performance testing with "
"your local workload with both Hyper-Threading on and off to determine what "
"is more appropriate in your case.CPUs "
"(central processing units) enabling hyperthreading on"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:88(title)
msgid "Choosing a Hypervisor"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:124(link) ./doc/openstack-ops/section_arch_example-neutron.xml:76(para) ./doc/openstack-ops/section_arch_example-neutron.xml:166(term) ./doc/openstack-ops/section_arch_example-nova.xml:103(para)
msgid "KVM"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:129(link)
msgid "LXC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:134(link)
msgid "QEMU"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:139(link)
msgid "VMware ESX/ESXi"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:144(link)
msgid "Xen"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:149(link)
msgid "Hyper-V"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:154(link)
msgid "Docker"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:90(para)
msgid ""
"A hypervisor provides software to manage virtual machine access to the "
"underlying hardware. The hypervisor creates, manages, and monitors virtual "
"machines.Docker "
"indexterm>Hyper-V "
"indexterm>ESXi hypervisor "
"indexterm>ESX hypervisor "
"indexterm>VMware API "
"indexterm>Quick EMUlator (QEMU)"
"primary> Linux containers "
"(LXC) kernel-"
"based VM (KVM) hypervisor Xen API XenServer hypervisor"
"secondary> hypervisors"
"primary>choosing compute nodes hypervisor choice"
"secondary> OpenStack Compute supports many hypervisors to "
"various degrees, including: "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:158(para)
msgid ""
"Probably the most important factor in your choice of hypervisor is your "
"current usage or experience. Aside from that, there are practical concerns "
"to do with feature parity, documentation, and the level of community "
"experience."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:163(para)
msgid ""
"For example, KVM is the most widely adopted hypervisor in the OpenStack "
"community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V "
"than the others listed. However, each of these are lacking some feature "
"support or the documentation on how to use them with OpenStack is out of "
"date."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:169(para)
msgid ""
"The best information available to support your choice is found on the Hypervisor Support Matrix and in the "
"configuration reference."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:177(para)
msgid ""
"It is also possible to run multiple hypervisors in a single deployment using "
"host aggregates or cells. However, an individual compute node can run only a "
"single hypervisor at a time.hypervisors running multiple"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:189(title)
msgid "Instance Storage Solutions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:191(para)
msgid ""
"As part of the procurement for a compute cluster, you must specify some "
"storage for the disk on which the instantiated instance runs. There are "
"three main approaches to providing this temporary-style storage, and it is "
"important to understand the implications of the choice.storage instance storage "
"solutions instances storage solutions"
"secondary> compute nodes"
"primary>instance storage solutions "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:209(para)
msgid "They are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:213(para)
msgid "Off compute node storage—shared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:217(para)
msgid "On compute node storage—shared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:221(para)
msgid "On compute node storage—nonshared file system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:225(para)
msgid ""
"In general, the questions you should ask when selecting storage are as "
"follows:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:230(para)
msgid "What is the platter count you can achieve?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:234(para)
msgid "Do more spindles result in better I/O despite network access?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:239(para)
msgid ""
"Which one results in the best cost-performance scenario you're aiming for?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:244(para)
msgid "How do you manage the storage operationally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:248(para)
msgid ""
"Many operators use separate compute and storage hosts. Compute services and "
"storage services have different requirements, and compute hosts typically "
"require more CPU and RAM than storage hosts. Therefore, for a fixed budget, "
"it makes sense to have different configurations for your compute nodes and "
"your storage nodes. Compute nodes will be invested in CPU and RAM, and "
"storage nodes will be invested in block storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:256(para)
msgid ""
"However, if you are more restricted in the number of physical hosts you have "
"available for creating your cloud and you want to be able to dedicate as "
"many of your hosts as possible to running instances, it makes sense to run "
"compute and storage on the same machines."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:261(para)
msgid ""
"We'll discuss the three main approaches to instance storage in the next few "
"sections."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:267(title)
msgid "Off Compute Node Storage—Shared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:269(para)
msgid ""
"In this option, the disks storing the running instances are hosted in "
"servers outside of the compute nodes.shared storage file systems shared "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:278(para)
msgid ""
"If you use separate compute and storage hosts, you can treat your compute "
"hosts as \"stateless.\" As long as you don't have any instances currently "
"running on a compute host, you can take it offline or wipe it completely "
"without having any effect on the rest of your cloud. This simplifies "
"maintenance for the compute hosts."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:284(para)
msgid "There are several advantages to this approach:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:288(para)
msgid "If a compute node fails, instances are usually easily recoverable."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:293(para)
msgid "Running a dedicated storage system can be operationally simpler."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:298(para)
msgid "You can scale to any number of spindles."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:302(para)
msgid "It may be possible to share the external storage for other purposes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:307(para)
msgid "The main downsides to this approach are:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:311(para)
msgid ""
"Depending on design, heavy I/O usage from some instances can affect "
"unrelated instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:316(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:350(para)
msgid "Use of the network can decrease performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:322(title)
msgid "On Compute Node Storage—Shared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:324(para)
msgid ""
"In this option, each compute node is specified with a significant amount of "
"disk space, but a distributed file system ties the disks from each compute "
"node into a single mount."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:328(para)
msgid ""
"The main advantage of this option is that it scales to external storage when "
"you require additional storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:331(para)
msgid "However, this option has several downsides:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:335(para)
msgid ""
"Running a distributed file system can make you lose your data locality "
"compared with nonshared storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:340(para)
msgid "Recovery of instances is complicated by depending on multiple hosts."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:345(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:387(para)
msgid ""
"The chassis size of the compute node can limit the number of spindles able "
"to be used in a compute node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:356(title)
msgid "On Compute Node Storage—Nonshared File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:358(para)
msgid ""
"In this option, each compute node is specified with enough disks to store "
"the instances it hosts.file systems"
"primary>nonshared "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:365(para)
msgid "There are two main reasons why this is a good idea:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:369(para)
msgid ""
"Heavy I/O usage on one compute node does not affect instances on other "
"compute nodes."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:374(para)
msgid "Direct I/O access can increase performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:378(para)
msgid "This has several downsides:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:382(para)
msgid "If a compute node fails, the instances running on that node are lost."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:392(para)
msgid ""
"Migrations of instances from one node to another are more complicated and "
"rely on features that may not continue to be developed."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:398(para)
msgid "If additional storage is required, this option does not scale."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:403(para)
msgid ""
"Running a shared file system on a storage system apart from the computes "
"nodes is ideal for clouds where reliability and scalability are the most "
"important factors. Running a shared file system on the compute nodes "
"themselves may be best in a scenario where you have to deploy to preexisting "
"servers for which you have little to no control over their specifications. "
"Running a nonshared file system on the compute nodes themselves is a good "
"option for clouds with high I/O requirements and low concern for reliability."
"scaling file "
"system choice "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:418(title)
msgid "Issues with Live Migration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:420(para)
msgid ""
"We consider live migration an integral part of the operations of the cloud. "
"This feature provides the ability to seamlessly move instances from one "
"physical host to another, a necessity for performing upgrades that require "
"reboots of the compute hosts, but only works well with shared storage."
"storage live "
"migration migration live migration compute nodes live migration"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:438(para)
msgid ""
"Live migration can also be done with nonshared storage, using a feature "
"known as KVM live block migration . While an earlier "
"implementation of block-based migration in KVM and QEMU was considered "
"unreliable, there is a newer, more reliable implementation of block-based "
"live migration as of QEMU 1.4 and libvirt 1.0.2 that is also compatible with "
"OpenStack. However, none of the authors of this guide have first-hand "
"experience using live block migration.block migration "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:451(title)
msgid "Choice of File System"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:453(para)
msgid ""
"If you want to support shared-storage live migration, you need to configure "
"a distributed file system.compute "
"nodes file system choice "
"indexterm>file systems"
"primary>choice of storage file system choice"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:468(para)
msgid "Possible options include:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:472(para)
msgid "NFS (default for Linux)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:476(para) ./doc/openstack-ops/section_arch_example-neutron.xml:106(para) ./doc/openstack-ops/section_arch_example-neutron.xml:118(para) ./doc/openstack-ops/section_arch_example-neutron.xml:218(term)
msgid "GlusterFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:480(para)
msgid "MooseFS"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:484(para)
msgid "Lustre"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:488(para)
msgid ""
"We've seen deployments with all, and recommend that you choose the one you "
"are most familiar with operating. If you are not familiar with any of these, "
"choose NFS, as it is the easiest to set up and there is extensive community "
"knowledge about it."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:496(title)
msgid "Overcommitting"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:498(para)
msgid ""
"OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows "
"you to increase the number of instances you can have running on your cloud, "
"at the cost of reducing the performance of the instances.RAM overcommit CPUs (central processing units)"
"primary>overcommitting overcommitting compute nodes overcommitting"
"secondary> OpenStack Compute uses the following ratios by "
"default:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:518(para)
msgid "CPU allocation ratio: 16:1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:522(para)
msgid "RAM allocation ratio: 1.5:1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:526(para)
msgid ""
"The default CPU allocation ratio of 16:1 means that the scheduler allocates "
"up to 16 virtual cores per physical core. For example, if a physical node "
"has 12 cores, the scheduler sees 192 available virtual cores. With typical "
"flavor definitions of 4 virtual cores per instance, this ratio would provide "
"48 instances on a physical node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:532(para)
msgid ""
"The formula for the number of virtual instances on a compute node is "
"(OR*PC)/VC , where:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:537(emphasis)
msgid "OR"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:540(para)
msgid "CPU overcommit ratio (virtual cores per physical core)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:545(emphasis)
msgid "PC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:548(para)
msgid "Number of physical cores"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:553(emphasis)
msgid "VC"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:556(para)
msgid "Number of virtual cores per instance"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:561(para)
msgid ""
"Similarly, the default RAM allocation ratio of 1.5:1 means that the "
"scheduler allocates instances to a physical node as long as the total amount "
"of RAM associated with the instances is less than 1.5 times the amount of "
"RAM available on the physical node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:566(para)
msgid ""
"For example, if a physical node has 48 GB of RAM, the scheduler allocates "
"instances to that node until the sum of the RAM associated with the "
"instances reaches 72 GB (such as nine instances, in the case where each "
"instance has 8 GB of RAM)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:572(para)
msgid ""
"Regardless of the overcommit ratio, an instance can not be placed on any "
"physical node with fewer raw (pre-overcommit) resources than the instance "
"flavor requires."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:577(para)
msgid ""
"You must select the appropriate CPU and RAM allocation ratio for your "
"particular use case."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:582(title)
msgid "Logging"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:584(para)
msgid ""
"Logging is detailed more fully in . "
"However, it is an important design consideration to take into account before "
"commencing operations of your cloud.logging/monitoring compute nodes "
"and compute "
"nodes logging "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:597(para)
msgid ""
"OpenStack produces a great deal of useful logging information, however; but "
"for the information to be useful for operations purposes, you should "
"consider having a central logging server to send logs to, and a log parsing/"
"analysis system (such as logstash )."
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:605(title) ./doc/openstack-ops/ch_ops_resources.xml:59(title)
msgid "Networking"
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:607(para)
msgid ""
"Networking in OpenStack is a complex, multifaceted challenge. See .compute "
"nodes networking "
msgstr ""
#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:618(para)
msgid ""
"Compute nodes are the workhorse of your cloud and the place where your "
"users' applications will run. They are likely to be affected by your "
"decisions on what to deploy and how you deploy it. Their requirements should "
"be reflected in the choices you make."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:9(title)
msgid "Operations"
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:12(para)
msgid ""
"Congratulations! By now, you should have a solid design for your cloud. We "
"now recommend that you turn to the OpenStack Installation Guides (), which "
"contains a step-by-step guide on how to manually install the OpenStack "
"packages and dependencies on your cloud."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:18(para)
msgid ""
"While it is important for an operator to be familiar with the steps involved "
"in deploying OpenStack, we also strongly encourage you to evaluate "
"configuration-management tools, such as Puppet or "
"Chef , which can help automate this deployment process."
"Chef Puppet "
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:28(para)
msgid ""
"In the remainder of this guide, we assume that you have successfully "
"deployed an OpenStack cloud and are able to perform basic operations such as "
"adding images, booting instances, and attaching volumes."
msgstr ""
#: ./doc/openstack-ops/part_operations.xml:32(para)
msgid ""
"As your focus turns to stable operations, we recommend that you do skim the "
"remainder of this book to get a sense of the content. Some of this content "
"is useful to read in advance so that you can put best practices into effect "
"to simplify your life in the long run. Other content is more useful as a "
"reference that you might turn to when an unexpected event occurs (such as a "
"power failure), or to troubleshoot a particular problem."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:88(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_1201.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:207(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_1202.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:12(title)
msgid "Network Troubleshooting"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:14(para)
msgid ""
"Network troubleshooting can unfortunately be a very difficult and confusing "
"procedure. A network issue can cause a problem at several points in the "
"cloud. Using a logical troubleshooting procedure can help mitigate the "
"confusion and more quickly isolate where exactly the network issue is. This "
"chapter aims to give you the information you need to identify any issues for "
"either nova-network or OpenStack Networking (neutron) "
"with Linux Bridge or Open vSwitch.OpenStack Networking (neutron)"
"primary>troubleshooting Linux Bridge troubleshooting"
"secondary> network "
"troubleshooting troubleshooting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:35(title)
msgid "Using \"ip a\" to Check Interface States"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:37(para)
msgid ""
"On compute nodes and nodes running nova-network , use the "
"following command to see information about interfaces, including information "
"about IPs, VLANs, and whether your interfaces are up:ip a command interface states, checking "
"indexterm>troubleshooting"
"primary>checking interface states "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:52(para)
msgid ""
"If you're encountering any sort of networking difficulty, one good initial "
"sanity check is to make sure that your interfaces are up. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:66(para)
msgid ""
"You can safely ignore the state of virbr0 , which is a "
"default bridge created by libvirt and not used by OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:71(title)
msgid "Visualizing nova-network Traffic in the Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:73(para)
msgid ""
"If you are logged in to an instance and ping an external host—for example, "
"Google—the ping packet takes the route shown in .ping packets "
"indexterm>troubleshooting"
"primary>nova-network traffic "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:84(title)
msgid "Traffic route for ping packet"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:95(para)
msgid ""
"The instance generates a packet and places it on the virtual Network "
"Interface Card (NIC) inside the instance, such as eth0 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:101(para)
msgid ""
"The packet transfers to the virtual NIC of the compute host, such as, "
"vnet1 . You can find out what vnet NIC is being used by "
"looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml "
"file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:109(para)
msgid ""
"From the vnet NIC, the packet transfers to a bridge on the compute node, "
"such as br100."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:112(para)
msgid ""
"If you run FlatDHCPManager, one bridge is on the compute node. If you run "
"VlanManager, one bridge exists for each VLAN."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:115(para)
msgid ""
"To see which bridge the packet will use, run the command: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:118(para)
msgid ""
"Look for the vnet NIC. You can also reference nova.conf "
"and look for the flat_interface_bridge option."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:124(para)
msgid ""
"The packet transfers to the main NIC of the compute node. You can also see "
"this NIC in the brctl output, or you can find it by "
"referencing the flat_interface option in nova."
"conf ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:131(para)
msgid ""
"After the packet is on this NIC, it transfers to the compute node's default "
"gateway. The packet is now most likely out of your control at this point. "
"The diagram depicts an external gateway. However, in the default "
"configuration with multi-host, the compute host is the gateway."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:139(para)
msgid ""
"Reverse the direction to see the path of a ping reply. From this path, you "
"can see that a single packet travels across four different NICs. If a "
"problem occurs with any of these NICs, a network issue occurs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:145(title)
msgid "Visualizing OpenStack Networking Service Traffic in the Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:148(para)
msgid ""
"OpenStack Networking has many more degrees of freedom than nova-"
"network does because of its pluggable back end. It can be "
"configured with open source or vendor proprietary plug-ins that control "
"software defined networking (SDN) hardware or plug-ins that use Linux native "
"facilities on your hosts, such as Open vSwitch or Linux Bridge.troubleshooting"
"primary>OpenStack traffic "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:159(para)
msgid ""
"The networking chapter of the OpenStack "
"Administrator Guide shows a variety of networking scenarios and their "
"connection paths. The purpose of this section is to give you the tools to "
"troubleshoot the various components involved however they are plumbed "
"together in your environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:167(para)
msgid ""
"For this example, we will use the Open vSwitch (OVS) back end. Other back-"
"end plug-ins will have very different flow paths. OVS is the most popularly "
"deployed network driver, according to the October 2015 OpenStack User "
"Survey, with 41 percent more sites using it than the Linux Bridge driver. "
"We'll describe each step in turn, with for reference."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:176(para)
msgid ""
"The instance generates a packet and places it on the virtual NIC inside the "
"instance, such as eth0."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:181(para)
msgid ""
"The packet transfers to a Test Access Point (TAP) device on the compute "
"host, such as tap690466bc-92. You can find out what TAP is being used by "
"looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml "
"file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:187(para)
msgid ""
"The TAP device name is constructed using the first 11 characters of the port "
"ID (10 hex digits plus an included '-'), so another means of finding the "
"device name is to use the neutron command. This returns a "
"pipe-delimited list, the first item of which is the port ID. For example, to "
"get the port ID associated with IP address 10.0.0.10, do this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:197(para)
msgid ""
"Taking the first 11 characters, we can construct a device name of "
"tapff387e54-9e from this output."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:203(title)
msgid "Neutron network paths"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:214(para)
msgid ""
"The TAP device is connected to the integration bridge, br-int. "
"This bridge connects all the instance TAP devices and any other bridges on "
"the system. In this example, we have int-br-eth1 and "
"patch-tun. int-br-eth1 is one half of a veth pair "
"connecting to the bridge br-eth1, which handles VLAN networks "
"trunked over the physical Ethernet device eth1. patch-"
"tun is an Open vSwitch internal port that connects to the br-"
"tun bridge for GRE networks."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:224(para)
msgid ""
"The TAP devices and veth devices are normal Linux network devices and may be "
"inspected with the usual tools, such as ip and "
"tcpdump . Open vSwitch internal devices, such as "
"patch-tun, are only visible within the Open vSwitch environment."
" If you try to run tcpdump -i patch-tun , it will raise an "
"error, saying that the device does not exist."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:232(para)
msgid ""
"It is possible to watch packets on internal interfaces, but it does take a "
"little bit of networking gymnastics. First you need to create a dummy "
"network device that normal Linux tools can see. Then you need to add it to "
"the bridge containing the internal interface you want to snoop on. Finally, "
"you need to tell Open vSwitch to mirror all traffic to or from the internal "
"port onto this dummy port. After all this, you can then run "
"tcpdump on the dummy interface and see the traffic on the "
"internal port."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:242(title)
msgid ""
"To capture packets from the patch-tun internal interface on "
"integration bridge, br-int:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:246(para)
msgid "Create and bring up a dummy interface, snooper0:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:256(para)
msgid "Add device snooper0 to bridge br-int:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:264(para)
msgid ""
"Create mirror of patch-tun to snooper0 (returns "
"UUID of mirror port):"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:274(para)
msgid ""
"Profit. You can now see traffic on patch-tun by running "
"tcpdump -i snooper0 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:279(para)
msgid ""
"Clean up by clearing all mirrors on br-int and deleting the "
"dummy interface:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:291(para)
msgid ""
"On the integration bridge, networks are distinguished using internal VLANs "
"regardless of how the networking service defines them. This allows instances "
"on the same host to communicate directly without transiting the rest of the "
"virtual, or physical, network. These internal VLAN IDs are based on the "
"order they are created on the node and may vary between nodes. These IDs are "
"in no way related to the segmentation IDs used in the network definition and "
"on the physical wire."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:300(para)
msgid ""
"VLAN tags are translated between the external tag defined in the network "
"settings, and internal tags in several places. On the br-int, "
"incoming packets from the int-br-eth1 are translated from "
"external tags to internal tags. Other translations also happen on the other "
"bridges and will be discussed in those sections."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:310(title)
msgid ""
"To discover which internal VLAN tag is in use for a given external VLAN by "
"using the ovs-ofctl command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:315(para)
msgid ""
"Find the external VLAN tag of the network you're interested in. This is the "
"provider:segmentation_id as returned by the networking service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:330(para)
msgid ""
"Grep for the provider:segmentation_id, 2113 in this case, in "
"the output of ovs-ofctl dump-flows br-int :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:340(para)
msgid ""
"Here you can see packets received on port ID 1 with the VLAN tag 2113 are "
"modified to have the internal VLAN tag 7. Digging a little deeper, you can "
"confirm that port 1 is in fact int-br-eth1:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:378(para)
msgid ""
"The next step depends on whether the virtual network is configured to use "
"802.1q VLAN tags or GRE:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:383(para)
msgid ""
"VLAN-based networks exit the integration bridge via veth interface int-"
"br-eth1 and arrive on the bridge br-eth1 on the other "
"member of the veth pair phy-br-eth1. Packets on this interface "
"arrive with internal VLAN tags and are translated to external tags in the "
"reverse of the process described above:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:395(para)
msgid ""
"Packets, now tagged with the external VLAN tag, then exit onto the physical "
"network via eth1. The Layer2 switch this interface is connected "
"to must be configured to accept traffic with the VLAN ID used. The next hop "
"for this packet must also be on the same layer-2 network."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:403(para)
msgid ""
"GRE-based networks are passed with patch-tun to the tunnel "
"bridge br-tun on interface patch-int. This bridge "
"also contains one port for each GRE tunnel peer, so one for each compute "
"node and network node in your network. The ports are named sequentially from "
"gre-1 onward."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:410(para)
msgid ""
"Matching gre-<n> interfaces to tunnel endpoints is "
"possible by looking at the Open vSwitch state:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:421(para)
msgid ""
"In this case, gre-1 is a tunnel from IP 10.10.128.21, which "
"should match a local interface on this node, to IP 10.10.128.16 on the "
"remote side."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:425(para)
msgid ""
"These tunnels use the regular routing tables on the host to route the "
"resulting GRE packet, so there is no requirement that GRE endpoints are all "
"on the same layer-2 network, unlike VLAN encapsulation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:430(para)
msgid ""
"All interfaces on the br-tun are internal to Open vSwitch. To "
"monitor traffic on them, you need to set up a mirror port as described above "
"for patch-tun in the br-int bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:435(para)
msgid ""
"All translation of GRE tunnels to and from internal VLANs happens on this "
"bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:441(title)
msgid ""
"To discover which internal VLAN tag is in use for a GRE tunnel by using the "
"ovs-ofctl command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:445(para)
msgid ""
"Find the provider:segmentation_id of the network you're "
"interested in. This is the same field used for the VLAN ID in VLAN-based "
"networks:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:460(para)
msgid ""
"Grep for 0x<provider:segmentation_id>, 0x3 in this case, "
"in the output of ovs-ofctl dump-flows br-tun :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:487(para)
msgid ""
"Here, you see three flows related to this GRE tunnel. The first is the "
"translation from inbound packets with this tunnel ID to internal VLAN ID 1. "
"The second shows a unicast flow to output port 53 for packets destined for "
"MAC address fa:16:3e:a6:48:24. The third shows the translation from the "
"internal VLAN representation to the GRE tunnel ID flooded to all output "
"ports. For further details of the flow descriptions, see the man page for "
"ovs-ofctl . As in the previous VLAN example, numeric port "
"IDs can be matched with their named representations by examining the output "
"of ovs-ofctl show br-tun ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:503(para)
msgid ""
"The packet is then received on the network node. Note that any traffic to "
"the l3-agent or dhcp-agent will be visible only within their network "
"namespace. Watching any interfaces outside those namespaces, even those that "
"carry the network traffic, will only show broadcast packets like Address "
"Resolution Protocols (ARPs), but unicast traffic to the router or DHCP "
"address will not be seen. See Dealing with Network Namespaces for detail "
"on how to run commands within these namespaces."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:514(para)
msgid ""
"Alternatively, it is possible to configure VLAN-based networks to use "
"external routers rather than the l3-agent shown here, so long as the "
"external router is on the same VLAN:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:520(para)
msgid ""
"VLAN-based networks are received as tagged packets on a physical network "
"interface, eth1 in this example. Just as on the compute node, "
"this interface is a member of the br-eth1 bridge."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:527(para)
msgid ""
"GRE-based networks will be passed to the tunnel bridge br-tun, "
"which behaves just like the GRE interfaces on the compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:535(para)
msgid ""
"Next, the packets from either input go through the integration bridge, again "
"just as on the compute node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:540(para)
msgid ""
"The packet then makes it to the l3-agent. This is actually another TAP "
"device within the router's network namespace. Router namespaces are named in "
"the form qrouter-<router-uuid>. Running ip a"
"literal> within the namespace will show the TAP device name, qr-e6256f7d-31 "
"in this example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:557(para)
msgid ""
"The qg-<n> interface in the l3-agent router namespace "
"sends the packet on to its next hop through device eth2 on the "
"external bridge br-ex. This bridge is constructed similarly to "
"br-eth1 and may be inspected in the same way."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:565(para)
msgid ""
"This external bridge also includes a physical network interface, eth2"
"code> in this example, which finally lands the packet on the external "
"network destined for an external router or destination."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:572(para)
msgid ""
"DHCP agents running on OpenStack networks run in namespaces similar to the "
"l3-agents. DHCP namespaces are named qdhcp-<uuid> and "
"have a TAP device on the integration bridge. Debugging of DHCP issues "
"usually involves working inside this network namespace."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:583(title)
msgid "Finding a Failure in the Path"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:585(para)
msgid ""
"Use ping to quickly find where a failure exists in the network path. In an "
"instance, first see whether you can ping an external host, such as google."
"com. If you can, then there shouldn't be a network problem at all."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:590(para)
msgid ""
"If you can't, try pinging the IP address of the compute node where the "
"instance is hosted. If you can ping this IP, then the problem is somewhere "
"between the compute node and that compute node's gateway."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:594(para)
msgid ""
"If you can't ping the IP address of the compute node, the problem is between "
"the instance and the compute node. This includes the bridge connecting the "
"compute node's main NIC with the vnet NIC of the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:599(para)
msgid ""
"One last test is to launch a second instance and see whether the two "
"instances can ping each other. If they can, the issue might be related to "
"the firewall on the compute node.path "
"failures troubleshooting detecting path "
"failures "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:611(title) ./doc/openstack-ops/ch_ops_resources.xml:72(code)
msgid "tcpdump"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:613(para)
msgid ""
"One great, although very in-depth, way of troubleshooting network issues is "
"to use tcpdump . We recommended using tcpdump"
"literal> at several points along the network path to correlate where a "
"problem might be. If you prefer working with a GUI, either live or by using "
"a tcpdump capture, do also check out Wireshark."
"tcpdump "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:623(para)
msgid "For example, run the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:628(para)
msgid "Run this on the command line of the following areas:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:632(para)
msgid "An external server outside of the cloud"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:636(para)
msgid "A compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:640(para)
msgid "An instance running on that compute node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:644(para)
msgid "In this example, these locations have the following IP addresses:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:656(para)
msgid ""
"Next, open a new shell to the instance and then ping the external host where "
"tcpdump is running. If the network path to the external "
"server and back is fully functional, you see something like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:661(para)
msgid "On the external server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:671(para)
msgid "On the compute node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:692(para)
msgid "On the instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:698(para)
msgid ""
"Here, the external server received the ping request and sent a ping reply. "
"On the compute node, you can see that both the ping and ping reply "
"successfully passed through. You might also see duplicate packets on the "
"compute node, as seen above, because tcpdump captured the "
"packet on both the bridge and outgoing interface."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:706(title)
msgid "iptables"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:708(para)
msgid ""
"Through nova-network or neutron , "
"OpenStack Compute automatically manages iptables, including forwarding "
"packets to and from instances on a compute node, forwarding floating IP "
"traffic, and managing security group rules. In addition to managing the "
"rules, comments (if supported) will be inserted in the rules to help "
"indicate the purpose of the rule. iptables troubleshooting iptables"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:722(para)
msgid "The following comments are added to the rule set as appropriate:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:726(para)
msgid "Perform source NAT on outgoing traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:729(para)
msgid "Default drop rule for unmatched traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:732(para)
msgid "Direct traffic from the VM interface to the security group chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:736(para)
msgid "Jump to the VM specific chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:739(para)
msgid "Direct incoming traffic from VM to the security group chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:743(para)
msgid "Allow traffic from defined IP/MAC pairs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:746(para)
msgid "Drop traffic without an IP/MAC allow rule."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:749(para)
msgid "Allow DHCP client traffic."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:752(para)
msgid "Prevent DHCP Spoofing by VM."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:755(para)
msgid "Send unmatched traffic to the fallback chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:758(para)
msgid "Drop packets that are not associated with a state."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:761(para)
msgid "Direct packets associated with a known session to the RETURN chain."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:765(para)
msgid "Allow IPv6 ICMP traffic to allow RA packets."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:769(para)
msgid "Run the following command to view the current iptables configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:775(para)
msgid ""
"If you modify the configuration, it reverts the next time you restart "
"nova-network or neutron-server . You "
"must use OpenStack to manage iptables."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:783(title)
msgid "Network Configuration in the Database for nova-network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:785(para)
msgid ""
"With nova-network , the nova database table contains a few "
"tables with networking information:databases nova-network "
"troubleshooting troubleshooting nova-network "
"database "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:799(literal)
msgid "fixed_ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:802(para)
msgid ""
"Contains each possible IP address for the subnet(s) added to Compute. This "
"table is related to the instances table by way of the "
"fixed_ips.instance_uuid column."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:810(literal)
msgid "floating_ips"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:813(para)
msgid ""
"Contains each floating IP address that was added to Compute. This table is "
"related to the fixed_ips table by way of the "
"floating_ips.fixed_ip_id column."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:824(para)
msgid ""
"Not entirely network specific, but it contains information about the "
"instance that is utilizing the fixed_ip and optional "
"floating_ip ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:831(para)
msgid ""
"From these tables, you can see that a floating IP is technically never "
"directly related to an instance; it must always go through a fixed IP."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:836(title)
msgid "Manually Disassociating a Floating IP"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:838(para)
msgid ""
"Sometimes an instance is terminated but the floating IP was not correctly de-"
"associated from that instance. Because the database is in an inconsistent "
"state, the usual tools to disassociate the IP no longer work. To fix this, "
"you must manually update the database.IP addresses floating "
"indexterm>floating IP address"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:850(para)
msgid "First, find the UUID of the instance in question:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:854(para)
msgid "Next, find the fixed IP entry for that UUID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:858(para)
msgid "You can now get the related floating IP entry:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:862(para)
msgid "And finally, you can disassociate the floating IP:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:867(para)
msgid "You can optionally also deallocate the IP from the user's pool:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:876(title)
msgid "Debugging DHCP Issues with nova-network"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:878(para)
msgid ""
"One common networking problem is that an instance boots successfully but is "
"not reachable because it failed to obtain an IP address from dnsmasq, which "
"is the DHCP server that is launched by the nova-network "
"service.DHCP (Dynamic Host "
"Configuration Protocol) debugging "
"indexterm>troubleshooting"
"primary>nova-network DHCP "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:891(para)
msgid ""
"The simplest way to identify that this is the problem with your instance is "
"to look at the console output of your instance. If DHCP failed, you can "
"retrieve the console log by doing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:897(para)
msgid ""
"If your instance failed to obtain an IP through DHCP, some messages should "
"appear in the console. For example, for the Cirros image, you see output "
"that looks like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:911(para)
msgid ""
"After you establish that the instance booted properly, the task is to figure "
"out where the failure is."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:914(para)
msgid ""
"A DHCP problem might be caused by a misbehaving dnsmasq process. First, "
"debug by checking logs and then restart the dnsmasq processes only for that "
"project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. "
"Once you have restarted targeted dnsmasq processes, the simplest way to rule "
"out dnsmasq causes is to kill all of the dnsmasq processes on the machine "
"and restart nova-network . As a last resort, do this as "
"root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:926(para)
msgid ""
"Use openstack-nova-network on RHEL/CentOS/Fedora but "
"nova-network on Ubuntu/Debian."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:931(para)
msgid ""
"Several minutes after nova-network is restarted, you "
"should see new dnsmasq processes running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:955(para)
msgid ""
"If your instances are still not able to obtain IP addresses, the next thing "
"to check is whether dnsmasq is seeing the DHCP requests from the instance. "
"On the machine that is running the dnsmasq process, which is the compute "
"host if running in multi-host mode, look at /var/log/syslog"
"literal> to see the dnsmasq output. If dnsmasq is seeing the request "
"properly and handing out an IP, the output looks like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:971(para)
msgid ""
"If you do not see the DHCPDISCOVER , a problem exists with "
"the packet getting from the instance to the machine running dnsmasq. If you "
"see all of the preceding output and your instances are still not able to "
"obtain IP addresses, then the packet is able to get from the instance to the "
"host running dnsmasq, but it is not able to make the return trip."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:978(para)
msgid "You might also see a message such as this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:983(para)
msgid ""
"This may be a dnsmasq and/or nova-network related issue. "
"(For the preceding example, the problem happened to be that dnsmasq did not "
"have any more IP addresses to give away because there were no more fixed IPs "
"available in the OpenStack Compute database.)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:988(para)
msgid ""
"If there's a suspicious-looking dnsmasq log message, take a look at the "
"command-line arguments to the dnsmasq processes to see if they look correct:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:994(para)
msgid "The output looks something like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1023(para)
msgid ""
"The output shows three different dnsmasq processes. The dnsmasq process that "
"has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be "
"ignored. The other two dnsmasq processes belong to nova-network"
"literal>. The two processes are actually related—one is simply the parent "
"process of the other. The arguments of the dnsmasq processes should "
"correspond to the details you configured nova-network "
"with."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1031(para)
msgid ""
"If the problem does not seem to be related to dnsmasq itself, at this point "
"use tcpdump on the interfaces to determine where the packets "
"are getting lost."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1035(para)
msgid ""
"DHCP traffic uses UDP. The client sends from port 68 to port 67 on the "
"server. Try to boot a new instance and then systematically listen on the "
"NICs until you identify the one that isn't seeing the traffic. To use "
"tcpdump to listen to ports 67 and 68 on br100, you would do:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1043(para)
msgid ""
"You should be doing sanity checks on the interfaces using command such as "
"ip a and brctl show to ensure that the interfaces "
"are actually up and configured the way that you think that they are."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1050(title)
msgid "Debugging DNS Issues"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1052(para)
msgid ""
"If you are able to use SSH to log into an instance, but it takes a very long "
"time (on the order of a minute) to get a prompt, then you might have a DNS "
"issue. The reason a DNS issue can cause this problem is that the SSH server "
"does a reverse DNS lookup on the IP address that you are connecting from. If "
"DNS lookup isn't working on your instances, then you must wait for the DNS "
"reverse lookup timeout to occur for the SSH login process to complete."
"DNS (Domain Name Server, Service or "
"System) debugging troubleshooting DNS issues"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1068(para)
msgid ""
"When debugging DNS issues, start by making sure that the host where the "
"dnsmasq process for that instance runs is able to correctly resolve. If the "
"host cannot resolve, then the instances won't be able to either."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1073(para)
msgid ""
"A quick way to check whether DNS is working is to resolve a hostname inside "
"your instance by using the host command. If DNS is working, you "
"should see:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1082(para)
msgid ""
"If you're running the Cirros image, it doesn't have the \"host\" program "
"installed, in which case you can use ping to try to access a machine by "
"hostname to see whether it resolves. If DNS is working, the first line of "
"ping would be:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1090(para)
msgid ""
"If the instance fails to resolve the hostname, you have a DNS problem. For "
"example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1096(para)
msgid ""
"In an OpenStack cloud, the dnsmasq process acts as the DNS server for the "
"instances in addition to acting as the DHCP server. A misbehaving dnsmasq "
"process may be the source of DNS-related issues inside the instance. As "
"mentioned in the previous section, the simplest way to rule out a "
"misbehaving dnsmasq process is to kill all the dnsmasq processes on the "
"machine and restart nova-network . However, be aware that "
"this command affects everyone running instances on this node, including "
"tenants that have not seen the issue. As a last resort, as root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1109(para)
msgid "After the dnsmasq processes start again, check whether DNS is working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1112(para)
msgid ""
"If restarting the dnsmasq process doesn't fix the issue, you might need to "
"use tcpdump to look at the packets to trace where the failure "
"is. The DNS server listens on UDP port 53. You should see the DNS request on "
"the bridge (such as, br100) of your compute node. Let's say you start "
"listening with tcpdump on the compute node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1122(para)
msgid ""
"Then, if you use SSH to log into your instance and try ping openstack."
"org, you should see something like:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1135(title)
msgid "Troubleshooting Open vSwitch"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1137(para)
msgid ""
"Open vSwitch, as used in the previous OpenStack Networking examples is a "
"full-featured multilayer virtual switch licensed under the open source "
"Apache 2.0 license. Full documentation can be found at the project's website. In practice, given "
"the preceding configuration, the most common issues are being sure that the "
"required bridges (br-int, br-tun, and br-ex"
"code>) exist and have the proper ports connected to them.Open vSwitch troubleshooting"
"secondary> troubleshooting Open vSwitch"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1154(para)
msgid ""
"The Open vSwitch driver should and usually does manage this automatically, "
"but it is useful to know how to do this by hand with the ovs-vsctl"
"literal> command. This command has many more subcommands than we will use "
"here; see the man page or use ovs-vsctl --help for the "
"full listing."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1160(para)
msgid ""
"To list the bridges on a system, use ovs-vsctl list-br . "
"This example shows a compute node that has an internal bridge and a tunnel "
"bridge. VLAN networks are trunked through the eth1 network "
"interface:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1171(para)
msgid ""
"Working from the physical interface inwards, we can see the chain of ports "
"and bridges. First, the bridge eth1-br, which contains the "
"physical network interface eth1 and the virtual interface "
"phy-eth1-br:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1181(para)
msgid ""
"Next, the internal bridge, br-int, contains int-eth1-br"
"code>, which pairs with phy-eth1-br to connect to the physical "
"network shown in the previous bridge, patch-tun, which is used "
"to connect to the GRE tunnel bridge and the TAP devices that connect to the "
"instances currently running on the system:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1196(para)
msgid ""
"The tunnel bridge, br-tun, contains the patch-int "
"interface and gre-<N> interfaces for each peer it "
"connects to via GRE, one for each compute and network node in your cluster:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1210(para)
msgid ""
"If any of these links is missing or incorrect, it suggests a configuration "
"error. Bridges can be added with ovs-vsctl add-br , and "
"ports can be added to bridges with ovs-vsctl add-port . "
"While running these by hand can be useful debugging, it is imperative that "
"manual changes that you intend to keep be reflected back into your "
"configuration files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1219(title)
msgid "Dealing with Network Namespaces"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1221(para)
msgid ""
"Linux network namespaces are a kernel feature the networking service uses to "
"support multiple isolated layer-2 networks with overlapping IP address "
"ranges. The support may be disabled, but it is on by default. If it is "
"enabled in your environment, your network nodes will run their dhcp-agents "
"and l3-agents in isolated namespaces. Network interfaces and traffic on "
"those interfaces will not be visible in the default namespace.network namespaces, troubleshooting "
"indexterm>namespaces, "
"troubleshooting troubleshooting network "
"namespaces "
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1237(para)
msgid ""
"To see whether you are using namespaces, run ip netns :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1248(para)
msgid ""
"L3-agent router namespaces are named qrouter-"
"<router_uuid> , and dhcp-agent "
"name spaces are named qdhcp-"
"literal><net_uuid> . This "
"output shows a network node with four networks running dhcp-agents, one of "
"which is also running an l3-agent router. It's important to know which "
"network you need to be working in. A list of existing networks and their "
"UUIDs can be obtained by running neutron net-list with "
"administrative credentials."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1258(para)
msgid ""
"Once you've determined which namespace you need to work in, you can use any "
"of the debugging tools mention earlier by prefixing the command with "
"ip netns exec <namespace> . For example, to see what "
"network interfaces exist in the first qdhcp namespace returned above, do "
"this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1278(para)
msgid ""
"From this you see that the DHCP server on that network is using the "
"tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the "
"address 169.254.169.254, you can also see that the dhcp-agent is running a "
"metadata-proxy service. Any of the commands mentioned previously in this "
"chapter can be run in the same way. It is also possible to run a shell, such "
"as bash , and have an interactive session within the "
"namespace. In the latter case, exiting the shell returns you to the top-"
"level default namespace."
msgstr ""
#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1291(para)
msgid ""
"The authors have spent too much time looking at packet dumps in order to "
"distill this information for you. We trust that, following the methods "
"outlined in this chapter, you will have an easier time! Aside from working "
"with the tools and steps above, don't forget that sometimes an extra pair of "
"eyes goes a long way to assist."
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:13(title) ./doc/openstack-ops/bk_ops_guide.xml:34(productname)
msgid "OpenStack"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:21(link)
msgid ""
"Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 22"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:35(emphasis)
msgid "OpenStack Cloud Computing Cookbook"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:35(link)
msgid " (Packt Publishing)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:41(title)
msgid "Cloud (General)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:44(link)
msgid "“The NIST Definition of Cloud Computing”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:50(title)
msgid "Python"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:53(emphasis)
msgid "Dive Into Python"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:53(link) ./doc/openstack-ops/ch_ops_resources.xml:103(link)
msgid " (Apress)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:62(emphasis)
msgid "TCP/IP Illustrated, Volume 1: The Protocols, 2/E"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:62(link)
msgid " (Pearson)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:67(emphasis)
msgid "The TCP/IP Guide"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:67(link) ./doc/openstack-ops/ch_ops_resources.xml:90(link)
msgid " (No Starch Press)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:71(link)
msgid "“A Tutorial and Primer”"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:77(title)
msgid "Systems Administration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:80(emphasis)
msgid "UNIX and Linux Systems Administration Handbook"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:80(link)
msgid " (Prentice Hall)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:87(title)
msgid "Virtualization"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:90(emphasis)
msgid "The Book of Xen"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:96(title) ./doc/openstack-ops/ch_ops_maintenance.xml:870(title)
msgid "Configuration Management"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:99(link)
msgid "Puppet Labs Documentation"
msgstr ""
#: ./doc/openstack-ops/ch_ops_resources.xml:103(emphasis)
msgid "Pro Puppet"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/ch_arch_provision.xml:156(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0201.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:12(title)
msgid "Provisioning and Deployment"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:14(para)
msgid ""
"A critical part of a cloud's scalability is the amount of effort that it "
"takes to run your cloud. To minimize the operational cost of running your "
"cloud, set up and use an automated deployment and configuration "
"infrastructure with a configuration management system, such as Puppet or "
"Chef. Combined, these systems greatly reduce manual effort and the chance "
"for operator error.cloud computing"
"primary>minimizing costs of "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:25(para)
msgid ""
"This infrastructure includes systems to automatically install the operating "
"system's initial configuration and later coordinate the configuration of all "
"services automatically and centrally, which reduces both manual effort and "
"the chance for error. Examples include Ansible, CFEngine, Chef, Puppet, and "
"Salt. You can even use OpenStack to deploy OpenStack, named TripleO "
"(OpenStack On OpenStack).Puppet"
"primary> Chef "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:37(title)
msgid "Automated Deployment"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:39(para)
msgid ""
"An automated deployment system installs and configures operating systems on "
"new servers, without intervention, after the absolute minimum amount of "
"manual work, including physical racking, MAC-to-IP assignment, and power "
"configuration. Typically, solutions rely on wrappers around PXE boot and "
"TFTP servers for the basic operating system install and then hand off to an "
"automated configuration management system.deployment provisioning/deployment"
"see> provisioning/"
"deployment automated deployment "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:55(para)
msgid ""
"Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring "
"the operating system, including preseed and kickstart, that you can use "
"after a network boot. Typically, these are used to bootstrap an automated "
"configuration system. Alternatively, you can use an image-based approach for "
"deploying the operating system, such as systemimager. You can use both "
"approaches with a virtualized infrastructure, such as when you run VMs to "
"separate your control services and physical infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:64(para)
msgid ""
"When you create a deployment plan, focus on a few vital areas because they "
"are very hard to modify post deployment. The next two sections talk about "
"configurations for:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:70(para)
msgid "Disk partitioning and disk array setup for scalability"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:74(para)
msgid "Networking configuration just for PXE booting"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:79(title)
msgid "Disk Partitioning and RAID"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:81(para)
msgid ""
"At the very base of any operating system are the hard drives on which the "
"operating system (OS) is installed.RAID (redundant array of independent disks) "
"indexterm>partitions"
"primary>disk partitioning disk partitioning "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:92(para)
msgid ""
"You must complete the following configurations on the server's hard drives:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:97(para)
msgid ""
"Partitioning, which provides greater flexibility for layout of operating "
"system and swap space, as described below."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:102(para)
msgid ""
"Adding to a RAID array (RAID stands for redundant array of independent "
"disks), based on the number of disks you have available, so that you can add "
"capacity as your cloud grows. Some options are described in more detail "
"below."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:109(para)
msgid ""
"The simplest option to get started is to use one hard drive with two "
"partitions:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:114(para)
msgid ""
"File system to store files and directories, where all the data lives, "
"including the root partition that starts and runs the system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:120(para)
msgid ""
"Swap space to free up memory for processes, as an independent area of the "
"physical disk used only for swapping and nothing else"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:126(para)
msgid ""
"RAID is not used in this simplistic one-drive setup because generally for "
"production clouds, you want to ensure that if one disk fails, another can "
"take its place. Instead, for production, use more than one disk. The number "
"of disks determine what types of RAID arrays to build."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:132(para)
msgid ""
"We recommend that you choose one of the following multiple disk options:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:137(term)
msgid "Option 1"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:140(para)
msgid ""
"Partition all drives in the same way in a horizontal fashion, as shown in "
"."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:144(para)
msgid ""
"With this option, you can assign different partitions to different RAID "
"arrays. You can allocate partition 1 of disk one and two to the /boot"
"code> partition mirror. You can make partition 2 of all disks the root "
"partition mirror. You can use partition 3 of all disks for a cinder-"
"volumes LVM partition running on a RAID 10 array."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:152(title)
msgid "Partition setup of drives"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:161(para)
msgid ""
"While you might end up with unused partitions, such as partition 1 in disk "
"three and four of this example, this option allows for maximum utilization "
"of disk space. I/O performance might be an issue as a result of all disks "
"being used for all tasks."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:170(term)
msgid "Option 2"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:173(para)
msgid ""
"Add all raw disks to one large RAID array, either hardware or software based."
" You can partition this large array with the boot, root, swap, and LVM areas."
" This option is simple to implement and uses all partitions. However, disk I/"
"O might suffer."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:182(term)
msgid "Option 3"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:185(para)
msgid ""
"Dedicate entire disks to certain partitions. For example, you could allocate "
"disk one and two entirely to the boot, root, and swap partitions under a "
"RAID 1 mirror. Then, allocate disk three and four entirely to the LVM "
"partition, also under a RAID 1 mirror. Disk I/O should be better because I/O "
"is focused on dedicated tasks. However, the LVM partition is much smaller."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:197(para)
msgid ""
"You may find that you can automate the partitioning itself. For example, MIT "
"uses Fully Automatic "
"Installation (FAI) to do the initial PXE-based partition and then "
"install using a combination of min/max and percentage-based partitioning."
"Fully Automatic Installation (FAI)"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:206(para)
msgid ""
"As with most architecture choices, the right answer depends on your "
"environment. If you are using existing hardware, you know the disk density "
"of your servers and can determine some decisions based on the options above. "
"If you are going through a procurement process, your user's requirements "
"also help you determine hardware purchases. Here are some examples from a "
"private cloud providing web developers custom environments at AT&T. This "
"example is from a specific deployment, so your existing hardware or "
"procurement opportunity may vary from this. AT&T uses three types of "
"hardware in its deployment:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:218(para)
msgid ""
"Hardware for controller nodes, used for all stateless OpenStack API services."
" About 32–64 GB memory, small attached disk, one processor, varied number of "
"cores, such as 6–12."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:224(para)
msgid ""
"Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, "
"24 cores. 4–6 TB direct attached storage, typically in a RAID 5 "
"configuration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:230(para)
msgid ""
"Hardware for storage nodes. Typically for these, the disk space is optimized "
"for the lowest cost per GB of storage while maintaining rack-space "
"efficiency."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:236(para)
msgid ""
"Again, the right answer depends on your environment. You have to make your "
"decision based on the trade-offs between space utilization, simplicity, and "
"I/O performance."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:242(title)
msgid "Network Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:244(para)
msgid ""
"Network configuration is a very large topic that spans multiple areas of "
"this book. For now, make sure that your servers can PXE boot and "
"successfully communicate with the deployment server.networks configuration of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:253(para)
msgid ""
"For example, you usually cannot configure NICs for VLANs when PXE booting. "
"Additionally, you usually cannot PXE boot with bonded NICs. If you run into "
"this scenario, consider using a simple 1 GB switch in a private network on "
"which only your cloud communicates."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:261(title)
msgid "Automated Configuration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:263(para)
msgid ""
"The purpose of automatic configuration management is to establish and "
"maintain the consistency of a system without using human intervention. You "
"want to maintain consistency in your deployments so that you can have the "
"same cloud every time, repeatably. Proper use of automatic configuration-"
"management tools ensures that components of the cloud systems are in "
"particular states, in addition to simplifying deployment, and configuration "
"change propagation.automated "
"configuration provisioning/deployment automated "
"configuration "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:277(para)
msgid ""
"These tools also make it possible to test and roll back changes, as they are "
"fully repeatable. Conveniently, a large body of work has been done by the "
"OpenStack community in this space. Puppet, a configuration management tool, "
"even provides official modules for OpenStack projects in an OpenStack "
"infrastructure system known as Puppet OpenStack . Chef configuration management is "
"provided within . Additional "
"configuration management systems include Juju, Ansible, and Salt. Also, "
"PackStack is a command-line utility for Red Hat Enterprise Linux and "
"derivatives that uses Puppet modules to support rapid deployment of "
"OpenStack on existing servers over an SSH connection."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:292(para)
msgid ""
"An integral part of a configuration-management system is the item that it "
"controls. You should carefully consider all of the items that you want, or "
"do not want, to be automatically managed. For example, you may not want to "
"automatically format hard drives with user data."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:299(title)
msgid "Remote Management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:301(para)
msgid ""
"In our experience, most operators don't sit right next to the servers "
"running the cloud, and many don't necessarily enjoy visiting the data center."
" OpenStack should be entirely remotely configurable, but sometimes not "
"everything goes according to plan.provisioning/deployment remote "
"management "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:311(para)
msgid ""
"In this instance, having an out-of-band access into nodes running OpenStack "
"components is a boon. The IPMI protocol is the de facto standard here, and "
"acquiring hardware that supports it is highly recommended to achieve that "
"lights-out data center aim."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:316(para)
msgid ""
"In addition, consider remote power control as well. While IPMI usually "
"controls the server's power state, having remote access to the PDU that the "
"server is plugged into can really be useful for situations when everything "
"seems wedged."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:323(title)
msgid "Parting Thoughts for Provisioning and Deploying OpenStack"
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:325(para)
msgid ""
"You can save time by understanding the use cases for the cloud you want to "
"create. Use cases for OpenStack are varied. Some include object storage "
"only; others require preconfigured compute resources to speed development-"
"environment set up; and others need fast provisioning of compute resources "
"that are already secured per tenant with private networks. Your users may "
"have need for highly redundant servers to make sure their legacy "
"applications continue to run. Perhaps a goal would be to architect these "
"legacy applications so that they run on multiple instances in a cloudy, "
"fault-tolerant way, but not make it a goal to add to those clusters over "
"time. Your users may indicate that they need scaling considerations because "
"of heavy Windows server use.provisioning/deployment tips for"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:342(para)
msgid ""
"You can save resources by looking at the best fit for the hardware you have "
"in place already. You might have some high-density storage hardware "
"available. You could format and repurpose those servers for OpenStack Object "
"Storage. All of these considerations and input from users help you build "
"your use case and your deployment plan."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:349(para)
msgid ""
"For further research about OpenStack deployment, investigate the supported "
"and documented preconfigured, prepackaged installers for OpenStack from "
"companies such as Canonical, Cisco, Cloudscaling, IBM, Metacloud, Mirantis, Piston, Rackspace, Red Hat, SUSE, and SwiftStack."
msgstr ""
#: ./doc/openstack-ops/ch_arch_provision.xml:369(para)
msgid ""
"The decisions you make with respect to provisioning and deployment will "
"affect your day-to-day, week-to-week, and month-to-month maintenance of the "
"cloud. Your configuration management will be able to evolve over time. "
"However, more thought and design need to be done for upfront choices about "
"deployment, disk partitioning, and network configuration."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/part_architecture.xml:82(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0001.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:10(title)
msgid "Architecture"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:13(para)
msgid ""
"Designing an OpenStack cloud is a great achievement. It requires a robust "
"understanding of the requirements and needs of the cloud's users to "
"determine the best possible configuration to meet them. OpenStack provides a "
"great deal of flexibility to achieve your needs, and this part of the book "
"aims to shine light on many of the decisions you need to make during the "
"process."
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:20(para)
msgid ""
"To design, deploy, and configure OpenStack, administrators must understand "
"the logical architecture. A diagram can help you envision all the integrated "
"services within OpenStack and how they interact with each other.modules, types of "
"indexterm>OpenStack"
"primary>module types in "
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:31(para)
msgid "OpenStack modules are one of the following types:"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:35(term)
msgid "Daemon"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:38(para)
msgid ""
"Runs as a background process. On Linux platforms, a daemon is usually "
"installed as a service.daemons"
"primary>basics of "
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:48(term)
msgid "Script"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:51(para)
msgid ""
"Installs a virtual environment and runs tests.script modules "
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:59(term)
msgid "Command-line interface (CLI)"
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:62(para)
msgid ""
"Enables users to submit API calls to OpenStack services through commands."
"Command-line interface (CLI)"
"primary> "
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:70(para)
msgid ""
"As shown, end users can interact through the dashboard, CLIs, and APIs. All "
"services authenticate through a common Identity service, and individual "
"services interact with each other through public APIs, except where "
"privileged administrator commands are necessary. shows the most common, but not the only logical architecture for "
"an OpenStack cloud."
msgstr ""
#: ./doc/openstack-ops/part_architecture.xml:77(title)
msgid ""
"OpenStack Logical Architecture ()"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:12(title)
msgid ""
"Designing for Cloud Controllers and Cloud "
"Management "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:15(para)
msgid ""
"OpenStack is designed to be massively horizontally scalable, which allows "
"all services to be distributed widely. However, to simplify this guide, we "
"have decided to discuss services of a more central nature, using the concept "
"of a cloud controller . A cloud controller is just a "
"conceptual simplification. In the real world, you design an architecture for "
"your cloud controller that enables high availability so that if any node "
"fails, another can take over the required tasks. In reality, cloud "
"controller tasks are spread out across more than a single node.design considerations cloud "
"controller services cloud controllers concept of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:33(para)
msgid ""
"The cloud controller provides the central management system for OpenStack "
"deployments. Typically, the cloud controller manages authentication and "
"sends messaging to all the systems through a message queue."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:38(para)
msgid ""
"For many deployments, the cloud controller is a single node. However, to "
"have high availability, you have to take a few considerations into account, "
"which we'll cover in this chapter."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:42(para)
msgid ""
"The cloud controller manages the following services for the cloud:cloud controllers services "
"managed by "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:51(term) ./doc/openstack-ops/ch_ops_maintenance.xml:1014(title)
msgid "Databases"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:54(para)
msgid ""
"Tracks current information about users and instances, for example, in a "
"database, typically one database instance managed per service"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:61(term)
msgid "Message queue services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:64(para)
msgid ""
"All AMQP—Advanced Message Queue Protocol—messages for services are received "
"and sent according to the queue brokerAdvanced Message Queuing Protocol (AMQP) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:73(term)
msgid "Conductor services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:76(para)
msgid "Proxy requests to a database"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:81(term)
msgid "Authentication and authorization for identity management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:84(para)
msgid ""
"Indicates which users can do what actions on certain cloud resources; quota "
"management is spread out among services, howeverauthentication "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:93(term)
msgid "Image-management services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:96(para)
msgid ""
"Stores and serves images with metadata on each, for launching in the cloud"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:102(term)
msgid "Scheduling services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:105(para)
msgid ""
"Indicates which resources to use first; for example, spreading out where "
"instances are launched based on an algorithm"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:111(term)
msgid "User dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:114(para)
msgid ""
"Provides a web-based front end for users to consume OpenStack cloud services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:120(term)
msgid "API endpoints"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:123(para)
msgid ""
"Offers each service's REST API access, where the API endpoint catalog is "
"managed by the Identity service"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:129(para)
msgid ""
"For our example, the cloud controller has a collection of nova-*"
"code> components that represent the global state of the cloud; talks to "
"services such as authentication; maintains information about the cloud in a "
"database; communicates to all compute nodes and storage worker"
"glossterm>s through a queue; and provides API access. Each service running "
"on a designated cloud controller may be broken out into separate nodes for "
"scalability or availability.storage"
"primary>storage workers workers "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:143(para)
msgid ""
"As another example, you could use pairs of servers for a collective cloud "
"controller—one active, one standby—for redundant nodes providing a given set "
"of related services, such as:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:149(para)
msgid ""
"Front end web for API requests, the scheduler for choosing which compute "
"node to boot an instance on, Identity services, and the dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:155(para)
msgid "Database and message queue server (such as MySQL, RabbitMQ)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:159(para)
msgid "Image service for the image management"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:163(para)
msgid ""
"Now that you see the myriad designs for controlling your cloud, read more "
"about the further considerations to help with your design decisions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:168(title)
msgid "Hardware Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:170(para)
msgid ""
"A cloud controller's hardware can be the same as a compute node, though you "
"may want to further specify based on the size and type of cloud that you run."
"hardware design "
"considerations design considerations hardware "
"considerations "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:182(para)
msgid ""
"It's also possible to use virtual machines for all or some of the services "
"that the cloud controller manages, such as the message queuing. In this "
"guide, we assume that all services are running directly on the cloud "
"controller."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:187(para)
msgid ""
" contains common "
"considerations to review when sizing hardware for the cloud controller "
"design.cloud controllers"
"primary>hardware sizing considerations "
"indexterm>Active Directory "
"indexterm>dashboard "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:200(caption)
msgid "Cloud controller hardware sizing considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:208(th)
msgid "Consideration"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:210(th)
msgid "Ramification"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:216(para)
msgid "How many instances will run at once?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:218(para)
msgid ""
"Size your database server accordingly, and scale out beyond one cloud "
"controller if many instances will report status at the same time and "
"scheduling where a new instance starts up needs computing power."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:225(para)
msgid "How many compute nodes will run at once?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:227(para)
msgid ""
"Ensure that your messaging queue handles requests successfully and size "
"accordingly."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:232(para)
msgid "How many users will access the API?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:234(para)
msgid ""
"If many users will make multiple requests, make sure that the CPU load for "
"the cloud controller can handle it."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:239(para)
msgid ""
"How many users will access the dashboard versus the "
"REST API directly?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:243(para)
msgid ""
"The dashboard makes many requests, even more than the API access, so add "
"even more CPU if your dashboard is the main interface for your users."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:249(para)
msgid ""
"How many nova-api services do you run at once for your cloud?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:252(para)
msgid "You need to size the controller with a core per service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:257(para)
msgid "How long does a single instance run?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:259(para)
msgid ""
"Starting instances and deleting instances is demanding on the compute node "
"but also demanding on the controller node because of all the API queries and "
"scheduling needs."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:265(para)
msgid "Does your authentication system also verify externally?"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:268(para)
msgid ""
"External systems such as LDAP or Active Directory "
"require network connectivity between the cloud controller and an external "
"authentication system. Also ensure that the cloud controller has the CPU "
"power to keep up with requests."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:279(title)
msgid "Separation of Services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:281(para)
msgid ""
"While our example contains all central services in a single location, it is "
"possible and indeed often a good idea to separate services onto different "
"physical servers. is a list of "
"deployment scenarios we've seen and their justifications.provisioning/deployment deployment "
"scenarios services separation of"
"secondary> separation of "
"services design "
"considerations separation of services "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:302(caption)
msgid "Deployment scenarios"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:310(th)
msgid "Scenario"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:312(th)
msgid "Justification"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:318(para)
msgid ""
"Run glance-* servers on the swift-proxy server."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:321(para)
msgid ""
"This deployment felt that the spare I/O on the Object Storage proxy server "
"was sufficient and that the Image Delivery portion of glance benefited from "
"being on physical hardware and having good connectivity to the Object "
"Storage back end it was using."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:329(para)
msgid "Run a central dedicated database server."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:331(para)
msgid ""
"This deployment used a central dedicated server to provide the databases for "
"all services. This approach simplified operations by isolating database "
"server updates and allowed for the simple creation of slave database servers "
"for failover."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:338(para)
msgid "Run one VM per service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:340(para)
msgid ""
"This deployment ran central services on a set of servers running KVM. A "
"dedicated VM was created for each service (nova-scheduler"
"literal>, rabbitmq, database, etc). This assisted the deployment with "
"scaling because administrators could tune the resources given to each "
"virtual machine based on the load it received (something that was not well "
"understood during installation)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:350(para)
msgid "Use an external load balancer."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:352(para)
msgid ""
"This deployment had an expensive hardware load balancer in its organization. "
"It ran multiple nova-api and swift-proxy servers "
"on different physical servers and used the load balancer to switch between "
"them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:360(para)
msgid ""
"One choice that always comes up is whether to virtualize. Some services, "
"such as nova-compute, swift-proxy and swift-"
"object servers, should not be virtualized. However, control servers "
"can often be happily virtualized—the performance penalty can usually be "
"offset by simply running more of the service."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:368(title) ./doc/openstack-ops/section_arch_example-neutron.xml:80(para) ./doc/openstack-ops/section_arch_example-nova.xml:107(para)
msgid "Database"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:370(para)
msgid ""
"OpenStack Compute uses an SQL database to store and retrieve stateful "
"information. MySQL is the popular database choice in the OpenStack community."
"databases design "
"considerations design considerations database "
"choice "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:382(para)
msgid ""
"Loss of the database leads to errors. As a result, we recommend that you "
"cluster your database to make it failure tolerant. Configuring and "
"maintaining a database cluster is done outside OpenStack and is determined "
"by the database software you choose to use in your cloud environment. MySQL/"
"Galera is a popular option for MySQL-based databases."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:390(title)
msgid "Message Queue"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:392(para)
msgid ""
"Most OpenStack services communicate with each other using the "
"message queue .messages design considerations"
"secondary> design "
"considerations message queues "
"For example, Compute communicates to block storage services and networking "
"services through the message queue. Also, you can optionally enable "
"notifications for any service. RabbitMQ, Qpid, and 0mq are all popular "
"choices for a message-queue service. In general, if the message queue fails "
"or becomes inaccessible, the cluster grinds to a halt and ends up in a read-"
"only state, with information stuck at the point where the last message was "
"sent. Accordingly, we recommend that you cluster the message queue. Be aware "
"that clustered message queues can be a pain point for many OpenStack "
"deployments. While RabbitMQ has native clustering support, there have been "
"reports of issues when running it at a large scale. While other queuing "
"solutions are available, such as 0mq and Qpid, 0mq does not offer stateful "
"queues. Qpid is the messaging system "
"of choice for Red Hat and its derivatives. Qpid does not have native "
"clustering capabilities and requires a supplemental service, such as "
"Pacemaker or Corsync. For your message queue, you need to determine what "
"level of data loss you are comfortable with and whether to use an OpenStack "
"project's ability to retry multiple MQ hosts in the event of a failure, such "
"as using Compute's ability to do so.0mq Qpid RabbitMQ message queue "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:431(title)
msgid "Conductor Services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:433(para)
msgid ""
"In the previous version of OpenStack, all nova-compute "
"services required direct access to the database hosted on the cloud "
"controller. This was problematic for two reasons: security and performance. "
"With regard to security, if a compute node is compromised, the attacker "
"inherently has access to the database. With regard to performance, "
"nova-compute calls to the database are single-threaded "
"and blocking. This creates a performance bottleneck because database "
"requests are fulfilled serially rather than in parallel.conductors design considerations conductor "
"services "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:449(para)
msgid ""
"The conductor service resolves both of these issues by acting as a proxy for "
"the nova-compute service. Now, instead of nova-"
"compute directly accessing the database, it contacts the "
"nova-conductor service, and nova-conductor"
"literal> accesses the database on nova-compute 's behalf. "
"Since nova-compute no longer has direct access to the "
"database, the security issue is resolved. Additionally, nova-"
"conductor is a nonblocking service, so requests from all compute "
"nodes are fulfilled in parallel."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:461(para)
msgid ""
"If you are using nova-network and multi-host networking "
"in your cloud environment, nova-compute still requires "
"direct access to the database.multi-"
"host networking "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:468(para)
msgid ""
"The nova-conductor service is horizontally scalable. To "
"make nova-conductor highly available and fault tolerant, "
"just launch more instances of the nova-conductor process, "
"either on the same server or across multiple servers."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:476(title)
msgid "Application Programming Interface (API)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:478(para)
msgid ""
"All public access, whether direct, through a command-line client, or through "
"the web-based dashboard, uses the API service. Find the API reference at "
".API (application programming interface)"
"primary>design considerations design considerations API "
"support "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:491(para)
msgid ""
"You must choose whether you want to support the Amazon EC2 compatibility "
"APIs, or just the OpenStack APIs. One issue you might encounter when running "
"both APIs is an inconsistent experience when referring to images and "
"instances."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:496(para)
msgid ""
"For example, the EC2 API refers to instances using IDs that contain "
"hexadecimal, whereas the OpenStack API uses names and digits. Similarly, the "
"EC2 API tends to rely on DNS aliases for contacting virtual machines, as "
"opposed to OpenStack, which typically lists IP addresses.DNS (Domain Name Server, Service or System)"
"primary>DNS aliases troubleshooting DNS issues"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:510(para)
msgid ""
"If OpenStack is not set up in the right way, it is simple to have scenarios "
"in which users are unable to contact their instances due to having only an "
"incorrect DNS alias. Despite this, EC2 compatibility can assist users "
"migrating to your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:515(para)
msgid ""
"As with databases and message queues, having more than one API "
"server is a good thing. Traditional HTTP load-balancing "
"techniques can be used to achieve a highly available nova-api "
"service.API (application programming "
"interface) API server "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:526(title)
msgid "Extensions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:528(para)
msgid ""
"The API Specifications define the core "
"actions, capabilities, and mediatypes of the OpenStack API. A client can "
"always depend on the availability of this core API, and implementers are "
"always required to support it in its entirety . Requiring strict adherence to the core API "
"allows clients to rely upon a minimal level of functionality when "
"interacting with multiple implementations of the same API.extensions design considerations"
"secondary> design "
"considerations extensions "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:546(para)
msgid ""
"The OpenStack Compute API is extensible. An extension adds capabilities to "
"an API beyond those defined in the core. The introduction of new features, "
"MIME types, actions, states, headers, parameters, and resources can all be "
"accomplished by means of extensions to the core API. This allows the "
"introduction of new features in the API without requiring a version change "
"and allows the introduction of vendor-specific niche functionality."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:556(title)
msgid "Scheduling"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:558(para)
msgid ""
"The scheduling services are responsible for determining the compute or "
"storage node where a virtual machine or block storage volume should be "
"created. The scheduling services receive creation requests for these "
"resources from the message queue and then begin the process of determining "
"the appropriate node where the resource should reside. This process is done "
"by applying a series of user-configurable filters against the available "
"collection of nodes.schedulers"
"primary>design considerations design considerations"
"primary>scheduling "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:574(para)
msgid ""
"There are currently two schedulers: nova-scheduler for "
"virtual machines and cinder-scheduler for block storage "
"volumes. Both schedulers are able to scale horizontally, so for high-"
"availability purposes, or for very large or high-schedule-frequency "
"installations, you should consider running multiple instances of each "
"scheduler. The schedulers all listen to the shared message queue, so no "
"special load balancing is required."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:587(para)
msgid ""
"The OpenStack Image service consists of two parts: glance-api "
"and glance-registry. The former is responsible for the delivery "
"of images; the compute node uses it to download images from the back end. "
"The latter maintains the metadata information associated with virtual "
"machine images and requires a database.glance glance registry"
"secondary> glance"
"primary>glance API server metadata OpenStack Image service "
"and Image "
"service design considerations "
"indexterm>design considerations"
"primary>images "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:614(para)
msgid ""
"The glance-api part is an abstraction layer that allows a "
"choice of back end. Currently, it supports:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:619(term)
msgid "OpenStack Object Storage"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:622(para)
msgid "Allows you to store images as objects."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:627(term)
msgid "File system"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:630(para)
msgid "Uses any traditional file system to store the images as files."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:636(term)
msgid "S3"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:639(para)
msgid "Allows you to fetch images from Amazon S3."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:644(term)
msgid "HTTP"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:647(para)
msgid ""
"Allows you to fetch images from a web server. You cannot write images by "
"using this mode."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:653(para)
msgid ""
"If you have an OpenStack Object Storage service, we recommend using this as "
"a scalable place to store your images. You can also use a file system with "
"sufficient performance or Amazon S3—unless you do not need the ability to "
"upload new images through OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:660(title)
msgid "Dashboard"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:662(para)
msgid ""
"The OpenStack dashboard (horizon) provides a web-based user interface to the "
"various OpenStack components. The dashboard includes an end-user area for "
"users to manage their virtual infrastructure and an admin area for cloud "
"operators to manage the OpenStack environment as a whole.dashboard design considerations dashboard"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:674(para)
msgid ""
"The dashboard is implemented as a Python web application that normally runs "
"in Apache httpd. Therefore, you may treat "
"it the same as any other web application, provided it can reach the API "
"servers (including their admin endpoints) over the network .Apache"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:685(title)
msgid "Authentication and Authorization"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:687(para)
msgid ""
"The concepts supporting OpenStack's authentication and authorization are "
"derived from well-understood and widely used systems of a similar nature. "
"Users have credentials they can use to authenticate, and they can be a "
"member of one or more groups (known as projects or tenants, interchangeably)."
"credentials "
"indexterm>authorization "
"indexterm>authentication "
"indexterm>design considerations"
"primary>authentication/authorization "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:703(para)
msgid ""
"For example, a cloud administrator might be able to list all instances in "
"the cloud, whereas a user can see only those in his current group. Resources "
"quotas, such as the number of cores that can be used, disk space, and so on, "
"are associated with a project."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:708(para)
msgid ""
"OpenStack Identity provides authentication decisions and user attribute "
"information, which is then used by the other OpenStack services to perform "
"authorization. The policy is set in the policy.json "
"file. For information on how to "
"configure these, see .Identity authentication decisions"
"secondary> Identity"
"primary>plug-in support "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:723(para)
msgid ""
"OpenStack Identity supports different plug-ins for authentication decisions "
"and identity storage. Examples of these plug-ins include:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:728(para)
msgid "In-memory key-value Store (a simplified internal storage structure)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:733(para)
msgid "SQL database (such as MySQL or PostgreSQL)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:737(para)
msgid "Memcached (a distributed memory object caching system)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:741(para)
msgid "LDAP (such as OpenLDAP or Microsoft's Active Directory)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:745(para)
msgid ""
"Many deployments use the SQL database; however, LDAP is also a popular "
"choice for those with existing authentication infrastructure that needs to "
"be integrated."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:751(title)
msgid "Network Considerations"
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:753(para)
msgid ""
"Because the cloud controller handles so many different services, it must be "
"able to handle the amount of traffic that hits it. For example, if you "
"choose to host the OpenStack Image service on the cloud controller, the "
"cloud controller should be able to support the transferring of the images at "
"an acceptable speed.cloud "
"controllers network traffic and "
"indexterm>networks"
"primary>design considerations design considerations"
"primary>networks "
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:771(para)
msgid ""
"As another example, if you choose to use single-host networking where the "
"cloud controller is the network gateway for all instances, then the cloud "
"controller must support the total amount of traffic that travels between "
"your cloud and the public Internet."
msgstr ""
#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:776(para)
msgid ""
"We recommend that you use a fast NIC, such as 10 GB. You can also choose to "
"use two 10 GB NICs and bond them together. While you might not be able to "
"get a full bonded 20 GB speed, different transmission streams use different "
"NICs. For example, if the cloud controller transfers two images, each image "
"uses a different NIC and gets a full 10 GB of bandwidth.bandwidth design considerations "
"for "
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:490(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0101.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:514(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0102.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:536(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0103.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:546(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0104.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:556(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0105.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-neutron.xml:566(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_0106.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:16(title)
msgid "Example Architecture—OpenStack Networking"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:18(para)
msgid ""
"This chapter provides an example architecture using OpenStack Networking, "
"also known as the Neutron project, in a highly available environment."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:23(title) ./doc/openstack-ops/section_arch_example-nova.xml:27(title)
msgid "Overview"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:25(para)
msgid ""
"A highly-available environment can be put into place if you require an "
"environment that can scale horizontally, or want your cloud to continue to "
"be operational in case of node failure. This example architecture has been "
"written based on the current default feature set of OpenStack Havana, with "
"an emphasis on high availability.RDO "
"(Red Hat Distributed OpenStack) OpenStack Networking (neutron)"
"primary>component overview "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:38(title) ./doc/openstack-ops/section_arch_example-nova.xml:62(title)
msgid "Components"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:47(th) ./doc/openstack-ops/section_arch_example-nova.xml:71(th)
msgid "Component"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:49(th) ./doc/openstack-ops/section_arch_example-nova.xml:73(th)
msgid "Details"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:55(para) ./doc/openstack-ops/section_arch_example-nova.xml:79(para)
msgid "OpenStack release"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:61(para) ./doc/openstack-ops/section_arch_example-nova.xml:85(para)
msgid "Host operating system"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:63(para)
msgid "Red Hat Enterprise Linux 6.5"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:67(para) ./doc/openstack-ops/section_arch_example-nova.xml:93(para)
msgid "OpenStack package repository"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:69(link)
msgid "Red Hat Distributed OpenStack (RDO)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:74(para) ./doc/openstack-ops/section_arch_example-nova.xml:101(para)
msgid "Hypervisor"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:82(para) ./doc/openstack-ops/section_arch_example-neutron.xml:176(term)
msgid "MySQL"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:86(para) ./doc/openstack-ops/section_arch_example-nova.xml:113(para)
msgid "Message queue"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:88(para) ./doc/openstack-ops/section_arch_example-neutron.xml:188(term)
msgid "Qpid"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:92(para) ./doc/openstack-ops/section_arch_example-nova.xml:120(para)
msgid "Networking service"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:98(para)
msgid "Tenant Network Separation"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:100(para) ./doc/openstack-ops/section_arch_example-neutron.xml:208(term)
msgid "VLAN"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:104(para)
msgid "Image service back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:110(para)
msgid "Identity driver"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:112(para) ./doc/openstack-ops/section_arch_example-nova.xml:147(para)
msgid "SQL"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:116(para)
msgid "Block Storage back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:125(title) ./doc/openstack-ops/section_arch_example-nova.xml:250(title)
msgid "Rationale"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:127(para)
msgid ""
"This example architecture has been selected based on the current default "
"feature set of OpenStack Havana, with an emphasis on high availability. This "
"architecture is currently being deployed in an internal Red Hat OpenStack "
"cloud and used to run hosted and shared services, which by their nature must "
"be highly available.OpenStack "
"Networking (neutron) rationale for choice of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:138(para)
msgid ""
"This architecture's components have been selected for the following reasons:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:143(term)
msgid "Red Hat Enterprise Linux"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:146(para)
msgid ""
"You must choose an operating system that can run on all of the physical "
"nodes. This example architecture is based on Red Hat Enterprise Linux, which "
"offers reliability, long-term support, certified testing, and is hardened. "
"Enterprise customers, now moving into OpenStack usage, typically require "
"these advantages."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:156(term)
msgid "RDO"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:159(para)
msgid ""
"The Red Hat Distributed OpenStack package offers an easy way to download the "
"most current OpenStack release that is built for the Red Hat Enterprise "
"Linux platform."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:169(para)
msgid ""
"KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and "
"included in distribution). It is feature complete and free from licensing "
"charges and restrictions."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:179(para)
msgid ""
"MySQL is used as the database back end for all databases in the OpenStack "
"environment. MySQL is the supported database of choice for Red Hat "
"Enterprise Linux (and included in distribution); the database is open "
"source, scalable, and handles memory well."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:191(para)
msgid ""
"Apache Qpid offers 100 percent compatibility with the Advanced Message "
"Queuing Protocol Standard, and its broker is available for both C++ and Java."
""
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:201(para)
msgid ""
"OpenStack Networking offers sophisticated networking functionality, "
"including Layer 2 (L2) network segregation and provider networks."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:211(para)
msgid ""
"Using a virtual local area network offers broadcast control, security, and "
"physical layer transparency. If needed, use VXLAN to extend your address "
"space."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:221(para)
msgid ""
"GlusterFS offers scalable storage. As your environment grows, you can "
"continue to add more storage nodes (instead of being restricted, for "
"example, by an expensive storage array)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:232(title) ./doc/openstack-ops/section_arch_example-nova.xml:394(title)
msgid "Detailed Description"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:235(title) ./doc/openstack-ops/section_arch_example-neutron.xml:248(caption)
msgid "Node types"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:237(para)
msgid ""
"This section gives you a breakdown of the different nodes that make up the "
"OpenStack environment. A node is a physical machine that is provisioned with "
"an operating system, and running a defined software stack on top of it. "
" provides node descriptions and "
"specifications.OpenStack Networking "
"(neutron) detailed description of "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:258(th)
msgid "Type"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:262(th)
msgid "Example hardware"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:268(td)
msgid "Controller"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:270(para)
msgid ""
"Controller nodes are responsible for running the management software "
"services needed for the OpenStack environment to function. These nodes:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:276(para)
msgid ""
"Provide the front door that people access as well as the API services that "
"all other components in the environment talk to."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:282(para)
msgid ""
"Run a number of services in a highly available fashion, utilizing Pacemaker "
"and HAProxy to provide a virtual IP and load-balancing functions so all "
"controller nodes are being used."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:289(para)
msgid ""
"Supply highly available \"infrastructure\" services, such as MySQL and Qpid, "
"that underpin all the services."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:295(para)
msgid ""
"Provide what is known as \"persistent storage\" through services run on the "
"host as well. This persistent storage is backed onto the storage nodes for "
"reliability."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:302(para)
msgid "See ."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:327(para) ./doc/openstack-ops/section_arch_example-neutron.xml:369(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para)
msgid "Model: Dell R620"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:341(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para)
msgid "CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para) ./doc/openstack-ops/section_arch_example-neutron.xml:385(para)
msgid "Memory: 32 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para)
msgid "Disk: two 300 GB 10000 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:305(para) ./doc/openstack-ops/section_arch_example-neutron.xml:345(para) ./doc/openstack-ops/section_arch_example-neutron.xml:386(para)
msgid "Network: two 10G network ports"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:310(para)
msgid "Compute nodes run the virtual machine instances in OpenStack. They:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:315(para)
msgid "Run the bare minimum of services needed to facilitate these instances."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:319(para)
msgid ""
"Use local storage on the node for the virtual machines so that no VM "
"migration or instance recovery at node failure is possible."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:324(phrase)
msgid "See ."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:327(para)
msgid "CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:328(para)
msgid "Memory: 128 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para)
msgid "Disk: two 600 GB 10000 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para)
msgid "Network: four 10G network ports (For future proofing expansion)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:333(td)
msgid "Storage"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:334(para)
msgid ""
"Storage nodes store all the data required for the environment, including "
"disk images in the Image service library, and the persistent storage volumes "
"created by the Block Storage service. Storage nodes use GlusterFS technology "
"to keep the data highly available and scalable."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:339(para)
msgid "See ."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:341(para)
msgid "Model: Dell R720xd"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:342(para)
msgid "Memory: 64 GB"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:343(para)
msgid ""
"Disk: two 500 GB 7200 RPM SAS Disks and twenty-four 600 GB 10000 RPM SAS "
"Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:344(para)
msgid "Raid Controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:349(td)
msgid "Network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:350(para)
msgid ""
"Network nodes are responsible for doing all the virtual networking needed "
"for people to create public or private networks and uplink their virtual "
"machines into external networks. Network nodes:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:357(para)
msgid ""
"Form the only ingress and egress point for instances running on top of "
"OpenStack."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:361(para)
msgid ""
"Run all of the environment's networking services, with the exception of the "
"networking API service (which runs on the controller node)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:367(para)
msgid "See ."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:369(para)
msgid "CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:371(para)
msgid "Network: five 10G network ports"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:375(td)
msgid "Utility"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:376(para)
msgid ""
"Utility nodes are used by internal administration staff only to provide a "
"number of basic system administration functions needed to get the "
"environment up and running and to maintain the hardware, OS, and software on "
"which it runs."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:379(para)
msgid ""
"These nodes run services such as provisioning, configuration management, "
"monitoring, or GlusterFS management software. They are not required to "
"scale, although these machines are usually backed up."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:385(para)
msgid "Disk: two 500 GB 7200 RPM SAS Disks"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:394(title)
msgid "Networking layout"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:396(para)
msgid ""
"The network contains all the management devices for all hardware in the "
"environment (for example, by including Dell iDrac7 devices for the hardware "
"nodes, and management interfaces for network switches). The network is "
"accessed by internal staff only when diagnosing or recovering a hardware "
"issue."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:403(title)
msgid "OpenStack internal network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:405(para)
msgid ""
"This network is used for OpenStack management functions and traffic, "
"including services needed for the provisioning of physical nodes "
"(pxe , tftp , kickstart"
"literal>), traffic between various OpenStack node types using OpenStack APIs "
"and messages (for example, nova-compute talking to "
"keystone or cinder-volume talking to "
"nova-api ), and all traffic for storage data to the "
"storage layer underneath by the Gluster protocol. All physical nodes have at "
"least one network interface (typically eth0 ) in this "
"network. This network is only accessible from other VLANs on port 22 (for "
"ssh access to manage machines)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:423(title)
msgid "Public Network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:427(para)
msgid ""
"IP addresses for public-facing interfaces on the controller nodes (which end "
"users will access the OpenStack services)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:433(para)
msgid ""
"A range of publicly routable, IPv4 network addresses to be used by OpenStack "
"Networking for floating IPs. You may be restricted in your access to IPv4 "
"addresses; a large range of IPv4 addresses is not necessary."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:440(para)
msgid "Routers for private networks created within OpenStack."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:425(para)
msgid "This network is a combination of: "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:445(para)
msgid ""
"This network is connected to the controller nodes so users can access the "
"OpenStack interfaces, and connected to the network nodes to provide VMs with "
"publicly routable traffic functionality. The network is also connected to "
"the utility machines so that any utility services that need to be made "
"public (such as system monitoring) can be accessed."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:454(title)
msgid "VM traffic network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:456(para)
msgid ""
"This is a closed network that is not publicly routable and is simply used as "
"a private, internal network for traffic between virtual machines in "
"OpenStack, and between the virtual machines and the network nodes that "
"provide l3 routes out to the public network (and floating IPs for "
"connections back in to the VMs). Because this is a closed network, we are "
"using a different address space to the others to clearly define the "
"separation. Only Compute and OpenStack Networking nodes need to be connected "
"to this network."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:468(title)
msgid "Node connectivity"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:470(para)
msgid ""
"The following section details how the nodes are connected to the different "
"networks (see ) and what other "
"considerations need to take place (for example, bonding) when connecting "
"nodes to the networks."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:476(title)
msgid "Initial deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:478(para)
msgid ""
"Initially, the connection setup should revolve around keeping the "
"connectivity simple and straightforward in order to minimize deployment "
"complexity and time to deploy. The deployment shown in aims to have 1 10G connectivity available to all compute nodes, while "
"still leveraging bonding on appropriate nodes for maximum performance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:486(title)
msgid "Basic node deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:497(title)
msgid "Connectivity for maximum performance"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:499(para)
msgid ""
"If the networking performance of the basic layout is not enough, you can "
"move to , which provides 2 10G network links to "
"all instances in the environment as well as providing more network bandwidth "
"to the storage layer. bandwidth obtaining maximum performance"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:510(title)
msgid "Performance node deployment"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:522(title)
msgid "Node diagrams"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:524(para)
msgid ""
"The following diagrams ( through "
") include logical information about "
"the different types of nodes, indicating what services will be running on "
"top of them and how they interact with each other. The diagrams also "
"illustrate how the availability and scalability of services are achieved."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:542(title)
msgid "Compute node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:552(title)
msgid "Network node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:562(title)
msgid "Storage node"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:574(title)
msgid "Example Component Configuration"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:695(para)
msgid ""
"Because Pacemaker is cluster software, the software itself handles its own "
"availability, leveraging corosync and cman"
"literal> underneath."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:702(para)
msgid ""
"If you use the GlusterFS native client, no virtual IP is needed, since the "
"client knows all about nodes after initial connection and automatically "
"routes around failures on the client side."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:709(para)
msgid ""
"If you use the NFS or SMB adaptor, you will need a virtual IP on which to "
"mount the GlusterFS volumes."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:691(para)
msgid ""
"Pacemaker is the clustering software used to ensure the availability of "
"services running on the controller and network nodes: "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:825(para)
msgid ""
"Configured to use Qpid, qpid_heartbeat = 10 , configured to use Memcached for caching, configured to "
"use libvirt , "
"configured to use neutron ."
" "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:833(para)
msgid ""
"Configured nova-consoleauth to use Memcached for session "
"management (so that it can have multiple copies and run in a load balancer)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:837(para)
msgid ""
"The nova API, scheduler, objectstore, cert, consoleauth, conductor, and "
"vncproxy services are run on all controller nodes, ensuring at least one "
"instance will be available in case of node failure. Compute is also behind "
"HAProxy, which detects when the software fails and routes requests around "
"the failing instance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:842(para)
msgid ""
"Nova-compute and nova-conductor services, which run on the compute nodes, "
"are only needed to run services on that node, so availability of those "
"services is coupled tightly to the nodes that are available. As long as a "
"compute node is up, it will have the needed services running on top of it."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:895(para)
msgid ""
"The OpenStack Networking service is run on all controller nodes, ensuring at "
"least one instance will be available in case of node failure. It also sits "
"behind HAProxy, which detects if the software fails and routes requests "
"around the failing instance."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:899(para)
msgid ""
"OpenStack Networking's ovs-agent , l3-agent"
"literal>, dhcp-agent , and metadata-agent"
"literal> services run on the network nodes, as lsb "
"resources inside of Pacemaker. This means that in the case of network node "
"failure, services are kept running on another node. Finally, the "
"ovs-agent service is also run on all compute nodes, and "
"in case of compute node failure, the other nodes will continue to function "
"using the copy of the service running on them."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-neutron.xml:576(para)
msgid ""
" and include example configuration and considerations for both third-"
"party and OpenStackOpenStack "
"Networking (neutron) third-party component "
"configuration components: Third-party component configuration"
"caption> Component Tuning Availability"
"th> Scalability MySQL"
"td> binlog-format = row Master/master "
"replication. However, both nodes are not used at the same time. Replication "
"keeps all nodes as close to being up to date as possible (although the "
"asynchronous nature of the replication means a fully consistent state is not "
"possible). Connections to the database only happen through a Pacemaker "
"virtual IP, ensuring that most problems that occur with master-master "
"replication can be avoided. Not heavily considered. Once load on the "
"MySQL server increases enough that scalability needs to be considered, "
"multiple masters or a master/slave setup can be used. Qpid"
"td> max-connections=1000 worker-threads=20"
"literal>connection-backlog=10 , sasl security enabled with "
"SASL-BASIC authentication Qpid is added as a resource to the "
"Pacemaker software that runs on Controller nodes where Qpid is situated. "
"This ensures only one Qpid instance is running at one time, and the node "
"with the Pacemaker virtual IP will always be the node running Qpid."
"td> Not heavily considered. However, Qpid can be changed to run on all "
"controller nodes for scalability and availability purposes, and removed from "
"Pacemaker. HAProxy maxconn 3000 "
"td>HAProxy is a software layer-7 load balancer used to front door all "
"clustered OpenStack API components and do SSL termination. HAProxy can be "
"added as a resource to the Pacemaker software that runs on the Controller "
"nodes where HAProxy is situated. This ensures that only one HAProxy instance "
"is running at one time, and the node with the Pacemaker virtual IP will "
"always be the node running HAProxy. Not considered. HAProxy has "
"small enough performance overheads that a single instance should scale "
"enough for this level of workload. If extra scalability is needed, "
"keepalived or other Layer-4 load balancing can be "
"introduced to be placed in front of multiple copies of HAProxy. "
"tr>Memcached MAXCONN=\"8192\" CACHESIZE=\"30457\""
"literal> Memcached is a fast in-memory key-value cache software that "
"is used by OpenStack components for caching data and increasing performance. "
"Memcached runs on all controller nodes, ensuring that should one go down, "
"another instance of Memcached is available. Not considered. A single "
"instance of Memcached should be able to scale to the desired workloads. If "
"scalability is desired, HAProxy can be placed in front of Memcached (in raw "
"tcp mode) to utilize multiple Memcached instances for "
"scalability. However, this might cause cache consistency issues. "
"tr>Pacemaker Configured to use corosync andcman "
"as a cluster communication stack/quorum manager, and as a two-node cluster."
"td> If more nodes need to be made cluster aware, "
"Pacemaker can scale to 64 nodes. GlusterFS"
"td> glusterfs performance profile \"virt\" enabled on "
"all volumes. Volumes are setup in two-node replication.Glusterfs is "
"a clustered file system that is run on the storage nodes to provide "
"persistent scalable data storage in the environment. Because all connections "
"to gluster use the gluster native mount points, the "
"gluster instances themselves provide availability and "
"failover functionality. The scalability of GlusterFS storage can be "
"achieved by adding in more storage volumes.
OpenStack component "
"configuration Component"
"th> Node type Tuning Availability Scalability"
"th> Dashboard (horizon) Controller"
"td> Configured to use Memcached as a session store, neutron"
"literal> support is enabled, can_set_mount_point = False "
"td>The dashboard is run on all controller nodes, ensuring at least one "
"instance will be available in case of node failure. It also sits behind "
"HAProxy, which detects when the software fails and routes requests around "
"the failing instance. The dashboard is run on all controller nodes, "
"so scalability can be achieved with additional controller nodes. HAProxy "
"allows scalability for the dashboard as more nodes are added. "
"tr>Identity (keystone) Controller Configured to use "
"Memcached for caching and PKI for tokens. Identity is run on all "
"controller nodes, ensuring at least one instance will be available in case "
"of node failure. Identity also sits behind HAProxy, which detects when the "
"software fails and routes requests around the failing instance."
"td> Identity is run on all controller nodes, so scalability can be "
"achieved with additional controller nodes. HAProxy allows scalability for "
"Identity as more nodes are added. Image service (glance)"
"td> Controller /var/lib/glance/images is a "
"GlusterFS native mount to a Gluster volume off the storage layer."
"td>The Image service is run on all controller nodes, ensuring at least "
"one instance will be available in case of node failure. It also sits behind "
"HAProxy, which detects when the software fails and routes requests around "
"the failing instance. The Image service is run on all controller "
"nodes, so scalability can be achieved with additional controller nodes. "
"HAProxy allows scalability for the Image service as more nodes are added."
"td> Compute (nova) Controller, Compute The nova API, scheduler, objectstore, cert, "
"consoleauth, conductor, and vncproxy services are run on all controller "
"nodes, so scalability can be achieved with additional controller nodes. "
"HAProxy allows scalability for Compute as more nodes are added. The "
"scalability of services running on the compute nodes (compute, conductor) is "
"achieved linearly by adding in more compute nodes. Block "
"Storage (cinder) Controller Configured to use Qpid, qpid_heartbeat = 10 , configured to use a Gluster volume from the "
"storage layer as the back end for Block Storage, using the Gluster native "
"client. Block Storage API, scheduler, and volume services are run on "
"all controller nodes, ensuring at least one instance will be available in "
"case of node failure. Block Storage also sits behind HAProxy, which detects "
"if the software fails and routes requests around the failing instance."
"td> Block Storage API, scheduler and volume services are run on all "
"controller nodes, so scalability can be achieved with additional controller "
"nodes. HAProxy allows scalability for Block Storage as more nodes are added."
" OpenStack Networking (neutron) Controller, "
"Compute, Network Configured to use QPID, qpid_heartbeat = 10 , kernel namespace "
"support enabled, tenant_network_type = vlan , "
"allow_overlapping_ips = true , "
"tenant_network_type = vlan , bridge_uplinks = br-"
"ex:em2 , bridge_mappings = physnet1:br-ex "
"td> The OpenStack Networking server service is run on all "
"controller nodes, so scalability can be achieved with additional controller "
"nodes. HAProxy allows scalability for OpenStack Networking as more nodes are "
"added. Scalability of services running on the network nodes is not currently "
"supported by OpenStack Networking, so they are not be considered. One copy "
"of the services should be sufficient to handle the workload. Scalability of "
"the ovs-agent running on compute nodes is achieved by "
"adding in more compute nodes as necessary.
"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:12(title)
msgid "Upstream OpenStack"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:14(para)
msgid ""
"OpenStack is founded on a thriving community that is a source of help and "
"welcomes your contributions. This chapter details some of the ways you can "
"interact with the others involved."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:19(title)
msgid "Getting Help"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:21(para)
msgid ""
"There are several avenues available for seeking assistance. The quickest way "
"is to help the community help you. Search the Q&A sites, mailing list "
"archives, and bug lists for issues similar to yours. If you can't find "
"anything, follow the directions for reporting bugs or use one of the "
"channels for support, which are listed below.mailing lists OpenStack documentation"
"secondary> help, resources "
"for troubleshooting getting help"
"secondary> OpenStack "
"community getting help from "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:44(para)
msgid ""
"Your first port of call should be the official OpenStack documentation, "
"found on . You can get "
"questions answered on ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:49(para)
msgid ""
"Mailing "
"lists are also a great place to get help. The wiki page has more "
"information about the various lists. As an operator, the main lists you "
"should be aware of are:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:56(link)
msgid "General list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:60(para)
msgid ""
"openstack@lists.openstack.org . The scope of this list "
"is the current state of OpenStack. This is a very high-traffic mailing list, "
"with many, many emails per day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:67(link)
msgid "Operators list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:71(para)
msgid ""
"openstack-operators@lists.openstack.org. This list is "
"intended for discussion among existing OpenStack cloud operators, such as "
"yourself. Currently, this list is relatively low traffic, on the order of "
"one email a day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:79(link)
msgid "Development list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:83(para)
msgid ""
"openstack-dev@lists.openstack.org . The scope of this "
"list is the future state of OpenStack. This is a high-traffic mailing list, "
"with multiple emails per day."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:90(para)
msgid ""
"We recommend that you subscribe to the general list and the operator list, "
"although you must set up filters to manage the volume for the general list. "
"You'll also find links to the mailing list archives on the mailing list wiki "
"page, where you can search through the discussions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:96(para)
msgid ""
"Multiple IRC "
"channels are available for general questions and developer "
"discussions. The general discussion channel is #openstack on irc."
"freenode.net ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:103(title)
msgid "Reporting Bugs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:105(para)
msgid ""
"As an operator, you are in a very good position to report unexpected "
"behavior with your cloud. Since OpenStack is flexible, you may be the only "
"individual to report a particular issue. Every issue is important to fix, so "
"it is essential to learn how to easily submit a bug report.maintenance/debugging reporting "
"bugs bugs, "
"reporting OpenStack community reporting "
"bugs "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:121(para)
msgid ""
"All OpenStack projects use Launchpad for bug tracking. You'll need to create an account on "
"Launchpad before you can submit a bug report."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:126(para)
msgid ""
"Once you have a Launchpad account, reporting a bug is as simple as "
"identifying the project or projects that are causing the issue. Sometimes "
"this is more difficult than expected, but those working on the bug triage "
"are happy to help relocate issues if they are not in the right place "
"initially:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:134(para)
msgid ""
"Report a bug in nova."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:139(para)
msgid ""
"Report a bug in python-novaclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:144(para)
msgid ""
"Report a bug in swift."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:149(para)
msgid ""
"Report a bug in python-swiftclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:154(para)
msgid ""
"Report a bug in glance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:159(para)
msgid ""
"Report a bug in python-glanceclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:164(para)
msgid ""
"Report a bug in keystone."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:169(para)
msgid ""
"Report a bug in python-keystoneclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:174(para)
msgid ""
"Report a bug in neutron."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:179(para)
msgid ""
"Report a bug in python-neutronclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:184(para)
msgid ""
"Report a bug in cinder."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:189(para)
msgid ""
"Report a bug in python-cinderclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:194(para)
msgid ""
"Report a bug in manila."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:199(para)
msgid ""
"Report a bug in python-manilaclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:204(para)
msgid ""
"Report a bug in python-openstackclient."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:209(para)
msgid ""
"Report a bug in horizon."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:214(para)
msgid ""
"Report a bug with the documentation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:219(para)
msgid ""
"Report a bug with the API documentation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:224(para)
msgid ""
"To write a good bug report, the following process is essential. First, "
"search for the bug to make sure there is no bug already filed for the same "
"issue. If you find one, be sure to click on \"This bug affects X people. "
"Does this bug affect you?\" If you can't find the issue, then enter the "
"details of your report. It should at least include:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:232(para)
msgid ""
"The release, or milestone, or commit ID corresponding to the software that "
"you are running"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:237(para)
msgid "The operating system and version where you've identified the bug"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:242(para)
msgid "Steps to reproduce the bug, including what went wrong"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:246(para)
msgid "Description of the expected results instead of what you saw"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:251(para)
msgid "Portions of your log files so that you include only relevant excerpts"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:256(para)
msgid "When you do this, the bug is created with:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:260(para)
msgid "Status: New "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:264(para)
msgid ""
"In the bug comments, you can contribute instructions on how to fix a given "
"bug, and set it to Triaged . Or you can directly fix it: "
"assign the bug to yourself, set it to In progress , "
"branch the code, implement the fix, and propose your change for merging. But "
"let's not get ahead of ourselves; there are bug triaging tasks as well."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:272(title)
msgid "Confirming and Prioritizing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:274(para)
msgid ""
"This stage is about checking that a bug is real and assessing its impact. "
"Some of these steps require bug supervisor rights (usually limited to core "
"teams). If the bug lacks information to properly reproduce or assess the "
"importance of the bug, the bug is set to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:281(para)
msgid "Status: Incomplete "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:285(para)
msgid ""
"Once you have reproduced the issue (or are 100 percent confident that this "
"is indeed a valid bug) and have permissions to do so, set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:291(para)
msgid "Status: Confirmed "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:295(para)
msgid "Core developers also prioritize the bug, based on its impact:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:300(para)
msgid "Importance: <Bug impact>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:304(para)
msgid "The bug impacts are categorized as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:310(para)
msgid ""
"Critical if the bug prevents a key feature from working "
"properly (regression) for all users (or without a simple workaround) or "
"results in data loss"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:316(para)
msgid ""
"High if the bug prevents a key feature from working "
"properly for some users (or with a workaround)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:321(para)
msgid ""
"Medium if the bug prevents a secondary feature from "
"working properly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:326(para)
msgid "Low if the bug is mostly cosmetic"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:330(para)
msgid ""
"Wishlist if the bug is not really a bug but rather a "
"welcome change in behavior"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:335(para)
msgid ""
"If the bug contains the solution, or a patch, set the bug status to "
"Triaged ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:340(title)
msgid "Bug Fixing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:342(para)
msgid ""
"At this stage, a developer works on a fix. During that time, to avoid "
"duplicating the work, the developer should set:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:347(para)
msgid "Status: In Progress "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:351(para)
msgid "Assignee: <yourself>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:355(para)
msgid ""
"When the fix is ready, the developer proposes a change and gets the change "
"reviewed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:360(title)
msgid "After the Change Is Accepted"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:362(para)
msgid ""
"After the change is reviewed, accepted, and lands in master, it "
"automatically moves to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:367(para)
msgid "Status: Fix Committed "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:371(para)
msgid ""
"When the fix makes it into a milestone or release branch, it automatically "
"moves to:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:376(para)
msgid "Milestone: Milestone the bug was fixed in"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:380(para)
msgid "Status: Fix Released "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:387(title)
msgid "Join the OpenStack Community"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:389(para)
msgid ""
"Since you've made it this far in the book, you should consider becoming an "
"official individual member of the community and join the OpenStack Foundation. The "
"OpenStack Foundation is an independent body providing shared resources to "
"help achieve the OpenStack mission by protecting, empowering, and promoting "
"OpenStack software and the community around it, including users, developers, "
"and the entire ecosystem. We all share the responsibility to make this "
"community the best it can possibly be, and signing up to be a member is the "
"first step to participating. Like the software, individual membership within "
"the OpenStack Foundation is free and accessible to anyone.OpenStack community joining"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:407(title)
msgid "How to Contribute to the Documentation"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:409(para)
msgid ""
"OpenStack documentation efforts encompass operator and administrator docs, "
"API docs, and user docs.OpenStack "
"community contributing to "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:416(para)
msgid ""
"The genesis of this book was an in-person event, but now that the book is in "
"your hands, we want you to contribute to it. OpenStack documentation follows "
"the coding principles of iterative work, with bug logging, investigating, "
"and fixing."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:421(para)
msgid ""
"Just like the code, is "
"updated constantly using the Gerrit review system, with source stored in git."
"openstack.org in the openstack-manuals repository and the "
"api-"
"site repository."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:428(para)
msgid ""
"To review the documentation before it's published, go to the OpenStack "
"Gerrit server at and "
"search for project:openstack/openstack-"
"manuals or project:openstack/api-site."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:435(para)
msgid ""
"See the How To Contribute page on the wiki for more "
"information on the steps you need to take to submit your first documentation "
"review or change."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:441(title)
msgid "Security Information"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:443(para)
msgid ""
"As a community, we take security very seriously and follow a specific "
"process for reporting potential issues. We vigilantly pursue fixes and "
"regularly eliminate exposures. You can report security issues you discover "
"through this specific process. The OpenStack Vulnerability Management Team "
"is a very small group of experts in vulnerability management drawn from the "
"OpenStack community. The team's job is facilitating the reporting of "
"vulnerabilities, coordinating security fixes and handling progressive "
"disclosure of the vulnerability information. Specifically, the team is "
"responsible for the following functions:vulnerability tracking/management "
"indexterm>security issues"
"primary>reporting/fixing vulnerabilities "
"indexterm>OpenStack community"
"primary>security information "
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:466(term)
msgid "Vulnerability management"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:469(para)
msgid ""
"All vulnerabilities discovered by community members (or users) can be "
"reported to the team."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:475(term)
msgid "Vulnerability tracking"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:478(para)
msgid ""
"The team will curate a set of vulnerability related issues in the issue "
"tracker. Some of these issues are private to the team and the affected "
"product leads, but once remediation is in place, all vulnerabilities are "
"public."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:486(term)
msgid "Responsible disclosure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:489(para)
msgid ""
"As part of our commitment to work with the security community, the team "
"ensures that proper credit is given to security researchers who responsibly "
"report issues in OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:496(para)
msgid ""
"We provide two ways to report issues to the OpenStack Vulnerability "
"Management Team, depending on how sensitive the issue is:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:501(para)
msgid ""
"Open a bug in Launchpad and mark it as a \"security bug.\" This makes the "
"bug private and accessible to only the Vulnerability Management Team."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:507(para)
msgid ""
"If the issue is extremely sensitive, send an encrypted email to one of the "
"team's members. Find their GPG keys at OpenStack Security."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:514(para)
msgid ""
"You can find the full list of security-oriented teams you can join at Security Teams"
"link>. The vulnerability management process is fully documented at Vulnerability Management."
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:522(title)
msgid "Finding Additional Information"
msgstr ""
#: ./doc/openstack-ops/ch_ops_upstream.xml:524(para)
msgid ""
"In addition to this book, there are many other sources of information about "
"OpenStack. The OpenStack "
"website is a good starting point, with OpenStack Docs and OpenStack API Docs providing technical "
"documentation about OpenStack. The OpenStack wiki contains a lot of general "
"information that cuts across the OpenStack projects, including a list of "
"recommended tools. Finally, there are a number of "
"blogs aggregated at Planet "
"OpenStack.OpenStack community"
"primary>additional information "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:10(title)
msgid "Tales From the Cryp^H^H^H^H Cloud"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:12(para)
msgid ""
"Herein lies a selection of tales from OpenStack cloud operators. Read, and "
"learn from their wisdom."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:16(title)
msgid "Double VLAN"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:17(para)
msgid ""
"I was on-site in Kelowna, British Columbia, Canada setting up a new "
"OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS "
"on the bare metal, bootstrapped it, and Puppet took over from there. I had "
"run the deployment scenario so many times in practice and took for granted "
"that everything was working."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:23(para)
msgid ""
"On my last day in Kelowna, I was in a conference call from my hotel. In the "
"background, I was fooling around on the new cloud. I launched an instance "
"and logged in. Everything looked fine. Out of boredom, I ran and all of the sudden the instance locked up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:29(para)
msgid ""
"Thinking it was just a one-off issue, I terminated the instance and launched "
"a new one. By then, the conference call ended and I was off to the data "
"center."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:32(para)
msgid ""
"At the data center, I was finishing up some tasks and remembered the lock-up."
" I logged into the new instance and ran again. It worked. "
"Phew. I decided to run it one more time. It locked up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:36(para)
msgid ""
"After reproducing the problem several times, I came to the unfortunate "
"conclusion that this cloud did indeed have a problem. Even worse, my time "
"was up in Kelowna and I had to return back to Calgary."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:40(para)
msgid ""
"Where do you even begin troubleshooting something like this? An instance "
"that just randomly locks up when a command is issued. Is it the image? "
"Nopeit happens on all images. Is it the compute node? Nopeall nodes. Is the "
"instance locked up? No! New SSH connections work just fine!"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:45(para)
msgid ""
"We reached out for help. A networking engineer suggested it was an MTU issue."
" Great! MTU! Something to go on! What's MTU and why would it cause a "
"problem?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:48(para)
msgid ""
"MTU is maximum transmission unit. It specifies the maximum number of bytes "
"that the interface accepts for each packet. If two interfaces have two "
"different MTUs, bytes might get chopped off and weird things happensuch as "
"random session lockups."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:54(para)
msgid ""
"Not all packets have a size of 1500. Running the command "
"over SSH might only create a single packets less than 1500 bytes. However, "
"running a command with heavy output, such as requires "
"several packets of 1500 bytes."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:60(para)
msgid ""
"OK, so where is the MTU issue coming from? Why haven't we seen this in any "
"other deployment? What's new in this situation? Well, new data center, new "
"uplink, new switches, new model of switches, new servers, first time using "
"this model of servers… so, basically everything was new. Wonderful. We toyed "
"around with raising the MTU at various areas: the switches, the NICs on the "
"compute nodes, the virtual NICs in the instances, we even had the data "
"center raise the MTU for our uplink interface. Some changes worked, some "
"didn't. This line of troubleshooting didn't feel right, though. We shouldn't "
"have to be changing the MTU in these areas."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:72(para)
msgid ""
"As a last resort, our network admin (Alvaro) and myself sat down with four "
"terminal windows, a pencil, and a piece of paper. In one window, we ran ping."
" In the second window, we ran on the cloud controller. In "
"the third, on the compute node. And the forth had "
" on the instance. For background, this cloud was a multi-"
"node, non-multi-host setup."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:80(para)
msgid ""
"One cloud controller acted as a gateway to all compute nodes. VlanManager "
"was used for the network config. This means that the cloud controller and "
"all compute nodes had a different VLAN for each OpenStack project. We used "
"the -s option of to change the packet size. We watched as "
"sometimes packets would fully return, sometimes they'd only make it out and "
"never back in, and sometimes the packets would stop at a random point. We "
"changed to start displaying the hex dump of the packet. We "
"pinged between every combination of outside, controller, compute, and "
"instance."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:92(para)
msgid ""
"Finally, Alvaro noticed something. When a packet from the outside hits the "
"cloud controller, it should not be configured with a VLAN. We verified this "
"as true. When the packet went from the cloud controller to the compute node, "
"it should only have a VLAN if it was destined for an instance. This was "
"still true. When the ping reply was sent from the instance, it should be in "
"a VLAN. True. When it came back to the cloud controller and on its way out "
"to the Internet, it should no longer have a VLAN. False. Uh oh. It looked as "
"though the VLAN part of the packet was not being removed."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:103(para)
msgid "That made no sense."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:104(para)
msgid ""
"While bouncing this idea around in our heads, I was randomly typing commands "
"on the compute node: "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:111(para)
msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:113(para)
msgid "\"If you did, you'd add an extra 4 bytes to the packet\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:115(para)
msgid "Then it all made sense… "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:119(para)
msgid ""
"In nova.conf , vlan_interface specifies "
"what interface OpenStack should attach all VLANs to. The correct setting "
"should have been: "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:123(para)
msgid "As this would be the server's bonded NIC."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:124(para)
msgid ""
"vlan20 is the VLAN that the data center gave us for outgoing Internet access."
" It's a correct VLAN and is also attached to bond0."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:127(para)
msgid ""
"By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 "
"instead of bond0 thereby stacking one VLAN on top of another. This added an "
"extra 4 bytes to each packet and caused a packet of 1504 bytes to be sent "
"out which would cause problems when it arrived at an interface that only "
"accepted 1500."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:133(para)
msgid "As soon as this setting was fixed, everything worked."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:137(title)
msgid "\"The Issue\""
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:138(para)
msgid ""
"At the end of August 2012, a post-secondary school in Alberta, Canada "
"migrated its infrastructure to an OpenStack cloud. As luck would have it, "
"within the first day or two of it running, one of their servers just "
"disappeared from the network. Blip. Gone."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:143(para)
msgid ""
"After restarting the instance, everything was back up and running. We "
"reviewed the logs and saw that at some point, network communication stopped "
"and then everything went idle. We chalked this up to a random occurrence."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:148(para)
msgid "A few nights later, it happened again."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:149(para)
msgid ""
"We reviewed both sets of logs. The one thing that stood out the most was "
"DHCP. At the time, OpenStack, by default, set DHCP leases for one minute "
"(it's now two minutes). This means that every instance contacts the cloud "
"controller (DHCP server) to renew its fixed IP. For some reason, this "
"instance could not renew its IP. We correlated the instance's logs with the "
"logs on the cloud controller and put together a conversation:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:160(para)
msgid "Instance tries to renew IP."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:163(para)
msgid "Cloud controller receives the renewal request and sends a response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:167(para)
msgid "Instance \"ignores\" the response and re-sends the renewal request."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:171(para)
msgid "Cloud controller receives the second request and sends a new response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:175(para)
msgid ""
"Instance begins sending a renewal request to 255.255.255.255 "
"since it hasn't heard back from the cloud controller."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:180(para)
msgid ""
"The cloud controller receives the 255.255.255.255 request and "
"sends a third response."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:185(para)
msgid "The instance finally gives up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:188(para)
msgid ""
"With this information in hand, we were sure that the problem had to do with "
"DHCP. We thought that for some reason, the instance wasn't getting a new IP "
"address and with no IP, it shut itself off from the network."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:192(para)
msgid ""
"A quick Google search turned up this: DHCP lease errors in VLAN mode"
"link> (https://lists.launchpad.net/openstack/msg11696.html) which further "
"supported our DHCP theory."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:197(para)
msgid ""
"An initial idea was to just increase the lease time. If the instance only "
"renewed once every week, the chances of this problem happening would be "
"tremendously smaller than every minute. This didn't solve the problem, "
"though. It was just covering the problem up."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:202(para)
msgid ""
"We decided to have run on this instance and see if we could "
"catch it in action again. Sure enough, we did."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:205(para)
msgid ""
"The looked very, very weird. In short, it looked as though "
"network communication stopped before the instance tried to renew its IP. "
"Since there is so much DHCP chatter from a one minute lease, it's very hard "
"to confirm it, but even with only milliseconds difference between packets, "
"if one packet arrives first, it arrived first, and if that packet reported "
"network issues, then it had to have happened before DHCP."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:213(para)
msgid ""
"Additionally, this instance in question was responsible for a very, very "
"large backup job each night. While \"The Issue\" (as we were now calling it) "
"didn't happen exactly when the backup happened, it was close enough (a few "
"hours) that we couldn't ignore it."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:218(para)
msgid ""
"Further days go by and we catch The Issue in action more and more. We find "
"that dhclient is not running after The Issue happens. Now we're back to "
"thinking it's a DHCP issue. Running /etc/init.d/networking"
"filename> restart brings everything back up and running."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:223(para)
msgid ""
"Ever have one of those days where all of the sudden you get the Google "
"results you were looking for? Well, that's what happened here. I was looking "
"for information on dhclient and why it dies when it can't renew its lease "
"and all of the sudden I found a bunch of OpenStack and dnsmasq discussions "
"that were identical to the problem we were seeing!"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:230(para)
msgid ""
"Problem with Heavy Network IO and Dnsmasq (http://www."
"gossamer-threads.com/lists/openstack/operators/18197)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:236(para)
msgid ""
"instances losing IP address while running, due to No DHCPOFFER"
"link> (http://www.gossamer-threads.com/lists/openstack/dev/14696)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:242(para)
msgid "Seriously, Google."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:243(para)
msgid ""
"This bug report was the key to everything: KVM images lose "
"connectivity with bridged network (https://bugs.launchpad.net/ubuntu/"
"+source/qemu-kvm/+bug/997978)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:249(para)
msgid ""
"It was funny to read the report. It was full of people who had some strange "
"network problem but didn't quite explain it in the same way."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:252(para)
msgid "So it was a qemu/kvm bug."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:253(para)
msgid ""
"At the same time of finding the bug report, a co-worker was able to "
"successfully reproduce The Issue! How? He used to spew a "
"ton of bandwidth at an instance. Within 30 minutes, the instance just "
"disappeared from the network."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:258(para)
msgid ""
"Armed with a patched qemu and a way to reproduce, we set out to see if we've "
"finally solved The Issue. After 48 hours straight of hammering the instance "
"with bandwidth, we were confident. The rest is history. You can search the "
"bug report for \"joe\" to find my comments and actual tests."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:266(title)
msgid "Disappearing Images"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:267(para)
msgid ""
"At the end of 2012, Cybera (a nonprofit with a mandate to oversee the "
"development of cyberinfrastructure in Alberta, Canada) deployed an updated "
"OpenStack cloud for their DAIR project (http://www.canarie.ca/"
"en/dair-program/about). A few days into production, a compute node locks up. "
"Upon rebooting the node, I checked to see what instances were hosted on that "
"node so I could boot them on behalf of the customer. Luckily, only one "
"instance."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:278(para)
msgid ""
"The command wasn't working, so I used , but "
"it immediately came back with an error saying it was unable to find the "
"backing disk. In this case, the backing disk is the Glance image that is "
"copied to /var/lib/nova/instances/_base when the image "
"is used for the first time. Why couldn't it find it? I checked the directory "
"and sure enough it was gone."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:287(para)
msgid ""
"I reviewed the nova database and saw the instance's entry in "
"the nova.instances table. The image that the instance was using "
"matched what virsh was reporting, so no inconsistency there."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:291(para)
msgid ""
"I checked Glance and noticed that this image was a snapshot that the user "
"created. At least that was good newsthis user would have been the only user "
"affected."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:295(para)
msgid ""
"Finally, I checked StackTach and reviewed the user's events. They had "
"created and deleted several snapshotsmost likely experimenting. Although the "
"timestamps didn't match up, my conclusion was that they launched their "
"instance and then deleted the snapshot and it was somehow removed from /var/"
"lib/nova/instances/_base. None of that made sense, but it was the best I "
"could come up with."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:302(para)
msgid ""
"It turns out the reason that this compute node locked up was a hardware "
"issue. We removed it from the DAIR cloud and called Dell to have it serviced."
" Dell arrived and began working. Somehow or another (or a fat finger), a "
"different compute node was bumped and rebooted. Great."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:308(para)
msgid ""
"When this node fully booted, I ran through the same scenario of seeing what "
"instances were running so I could turn them back on. There were a total of "
"four. Three booted and one gave an error. It was the same error as before: "
"unable to find the backing disk. Seriously, what?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:314(para)
msgid ""
"Again, it turns out that the image was a snapshot. The three other instances "
"that successfully started were standard cloud images. Was it a problem with "
"snapshots? That didn't make sense."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:318(para)
msgid ""
"A note about DAIR's architecture: /var/lib/nova/instances"
"filename> is a shared NFS mount. This means that all compute nodes have "
"access to it, which includes the _base directory. Another "
"centralized area is /var/log/rsyslog on the cloud "
"controller. This directory collects all OpenStack logs from all compute "
"nodes. I wondered if there were any entries for the file that is reporting: "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:333(para)
msgid "Ah-hah! So OpenStack was deleting it. But why?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:334(para)
msgid ""
"A feature was introduced in Essex to periodically check and see if there "
"were any _base files not in use. If there were, OpenStack "
"Compute would delete them. This idea sounds innocent enough and has some "
"good qualities to it. But how did this feature end up turned on? It was "
"disabled by default in Essex. As it should be. It was decided to be turned on in "
"Folsom (https://bugs.launchpad.net/nova/+bug/1029674). I cannot "
"emphasize enough that:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:346(emphasis)
msgid "Actions which delete things should not be enabled by default."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:349(para)
msgid "Disk space is cheap these days. Data recovery is not."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:351(para)
msgid ""
"Secondly, DAIR's shared /var/lib/nova/instances "
"directory contributed to the problem. Since all compute nodes have access to "
"this directory, all compute nodes periodically review the _base directory. "
"If there is only one instance using an image, and the node that the instance "
"is on is down for a few minutes, it won't be able to mark the image as still "
"in use. Therefore, the image seems like it's not in use and is deleted. When "
"the compute node comes back online, the instance hosted on that node is "
"unable to start."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:364(title)
msgid "The Valentine's Day Compute Node Massacre"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:365(para)
msgid ""
"Although the title of this story is much more dramatic than the actual "
"event, I don't think, or hope, that I'll have the opportunity to use "
"\"Valentine's Day Massacre\" again in a title."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:369(para)
msgid ""
"This past Valentine's Day, I received an alert that a compute node was no "
"longer available in the cloudmeaning, showed this particular node in down "
"state."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:373(para)
msgid ""
"I logged into the cloud controller and was able to both and "
"SSH into the problematic compute node which seemed very odd. Usually if I "
"receive this type of alert, the compute node has totally locked up and would "
"be inaccessible."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:378(para)
msgid "After a few minutes of troubleshooting, I saw the following details:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:382(para)
msgid "A user recently tried launching a CentOS instance on that node"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:386(para)
msgid "This user was the only user on the node (new node)"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:390(para)
msgid "The load shot up to 8 right before I received the alert"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:394(para)
msgid "The bonded 10gb network device (bond0) was in a DOWN state"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:398(para)
msgid "The 1gb NIC was still alive and active"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:401(para)
msgid ""
"I looked at the status of both NICs in the bonded pair and saw that neither "
"was able to communicate with the switch port. Seeing as how each NIC in the "
"bond is connected to a separate switch, I thought that the chance of a "
"switch port dying on each switch at the same time was quite improbable. I "
"concluded that the 10gb dual port NIC had died and needed replaced. I "
"created a ticket for the hardware support department at the data center "
"where the node was hosted. I felt lucky that this was a new node and no one "
"else was hosted on it yet."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:411(para)
msgid ""
"An hour later I received the same alert, but for another compute node. Crap. "
"OK, now there's definitely a problem going on. Just like the original node, "
"I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was "
"active."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:416(para)
msgid ""
"And the best part: the same user had just tried creating a CentOS instance. "
"What?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:418(para)
msgid ""
"I was totally confused at this point, so I texted our network admin to see "
"if he was available to help. He logged in to both switches and immediately "
"saw the problem: the switches detected spanning tree packets coming from the "
"two compute nodes and immediately shut the ports down to prevent spanning "
"tree loops: "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:431(para)
msgid ""
"He re-enabled the switch ports and the two compute nodes immediately came "
"back to life."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:433(para)
msgid ""
"Unfortunately, this story has an open ending... we're still looking into why "
"the CentOS image was sending out spanning tree packets. Further, we're "
"researching a proper way on how to mitigate this from happening. It's a "
"bigger issue than one might think. While it's extremely important for "
"switches to prevent spanning tree loops, it's very problematic to have an "
"entire compute node be cut from the network when this happens. If a compute "
"node is hosting 100 instances and one of them sends a spanning tree packet, "
"that instance has effectively DDOS'd the other 99 instances."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:444(para)
msgid ""
"This is an ongoing and hot topic in networking circles especially with the "
"raise of virtualization and virtual switches."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:449(title)
msgid "Down the Rabbit Hole"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:450(para)
msgid ""
"Users being able to retrieve console logs from running instances is a boon "
"for supportmany times they can figure out what's going on inside their "
"instance and fix what's going on without bothering you. Unfortunately, "
"sometimes overzealous logging of failures can cause problems of its own."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:456(para)
msgid ""
"A report came in: VMs were launching slowly, or not at all. Cue the standard "
"checksnothing on the Nagios, but there was a spike in network towards the "
"current master of our RabbitMQ cluster. Investigation started, but soon the "
"other parts of the queue cluster were leaking memory like a sieve. Then the "
"alert came inthe master Rabbit server went down and connections failed over "
"to the slave."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:463(para)
msgid ""
"At that time, our control services were hosted by another team and we didn't "
"have much debugging information to determine what was going on with the "
"master, and we could not reboot it. That team noted that it failed without "
"alert, but managed to reboot it. After an hour, the cluster had returned to "
"its normal state and we went home for the day."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:470(para)
msgid ""
"Continuing the diagnosis the next morning was kick started by another "
"identical failure. We quickly got the message queue running again, and tried "
"to work out why Rabbit was suffering from so much network traffic. Enabling "
"debug logging on nova-api quickly "
"brought understanding. A was scrolling by faster than we'd "
"ever seen before. CTRL+C on that and we could plainly see the contents of a "
"system log spewing failures over and over again - a system log from one of "
"our users' instances."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:482(para)
msgid ""
"After finding the instance ID we headed over to /var/lib/nova/"
"instances to find the console.log : "
" "
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:490(para)
msgid ""
"Sure enough, the user had been periodically refreshing the console log page "
"on the dashboard and the 5G file was traversing the Rabbit cluster to get to "
"the dashboard."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:494(para)
msgid ""
"We called them and asked them to stop for a while, and they were happy to "
"abandon the horribly broken VM. After that, we started monitoring the size "
"of console logs."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:498(para)
msgid ""
"To this day, the issue (https://bugs.launchpad.net/nova/+bug/832507) "
"doesn't have a permanent resolution, but we look forward to the discussion "
"at the next summit."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:506(title)
msgid "Havana Haunted by the Dead"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:507(para)
msgid ""
"Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed "
"this story."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:509(para)
msgid ""
"I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO "
"repository and everything was running pretty wellexcept the EC2 API."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:512(para)
msgid ""
"I noticed that the API would suffer from a heavy load and respond slowly to "
"particular EC2 requests such as RunInstances ."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:515(para)
msgid "Output from /var/log/nova/nova-api.log on Havana:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:524(para)
msgid ""
"This request took over two minutes to process, but executed quickly on "
"another co-existing Grizzly deployment using the same hardware and system "
"configuration."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:527(para)
msgid ""
"Output from /var/log/nova/nova-api.log on Grizzly:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:536(para)
msgid ""
"While monitoring system resources, I noticed a significant increase in "
"memory consumption while the EC2 API processed this request. I thought it "
"wasn't handling memory properlypossibly not releasing memory. If the API "
"received several of these requests, memory consumption quickly grew until "
"the system ran out of RAM and began using swap. Each node has 48 GB of RAM "
"and the \"nova-api\" process would consume all of it within minutes. Once "
"this happened, the entire system would become unusably slow until I "
"restarted the nova-api service."
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:546(para)
msgid ""
"So, I found myself wondering what changed in the EC2 API on Havana that "
"might cause this to happen. Was it a bug or a normal behavior that I now "
"need to work around?"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:549(para)
msgid ""
"After digging into the nova (OpenStack Compute) code, I noticed two areas in "
"api/ec2/cloud.py potentially impacting my system:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:561(para)
msgid ""
"Since my database contained many recordsover 1 million metadata records and "
"over 300,000 instance records in \"deleted\" or \"errored\" stateseach "
"search took a long time. I decided to clean up the database by first "
"archiving a copy for backup and then performing some deletions using the "
"MySQL client. For example, I ran the following SQL command to remove rows of "
"instances deleted for over a year:"
msgstr ""
#: ./doc/openstack-ops/app_crypt.xml:569(para)
msgid ""
"Performance increased greatly after deleting the old records and my new "
"deployment continues to behave well."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-nova.xml:411(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_01in01.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/openstack-ops/section_arch_example-nova.xml:450(None)
msgid ""
"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/"
"openstack-ops/figures/osog_01in02.png'; md5=THIS FILE DOESN'T EXIST"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:12(title)
msgid "Example Architecture—Legacy Networking (nova)"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:14(para)
msgid ""
"This particular example architecture has been upgraded from Grizzly to "
"Havana and tested in production environments where many public IP addresses "
"are available for assignment to multiple instances. You can find a second "
"example architecture that uses OpenStack Networking (neutron) after this "
"section. Each example offers high availability, meaning that if a particular "
"node goes down, another node with the same configuration can take over the "
"tasks so that the services continue to be available.Havana Grizzly "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:29(para)
msgid ""
"The simplest architecture you can build upon for Compute has a single cloud "
"controller and multiple compute nodes. The simplest architecture for Object "
"Storage has five nodes: one for identifying users and proxying requests to "
"the API, then four for storage itself to provide enough replication for "
"eventual consistency. This example architecture does not dictate a "
"particular number of nodes, but shows the thinking and considerations that went into choosing this "
"architecture including the features offered"
"phrase>.CentOS "
"indexterm>RDO (Red Hat Distributed "
"OpenStack) Ubuntu legacy networking (nova) component "
"overview example architectures legacy networking; "
"OpenStack networking Object Storage simplest "
"architecture for Compute simplest architecture for"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:87(para)
msgid ""
"Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such "
"as CentOS and Scientific Linux"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:95(para)
msgid ""
"Ubuntu "
"Cloud Archive or RDO*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:109(para)
msgid "MySQL*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:115(para)
msgid "RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:122(literal)
msgid "nova-network"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:126(para)
msgid "Network manager"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:132(para)
msgid "Single nova-network or multi-host?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:135(para)
msgid "multi-host*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:139(para)
msgid "Image service (glance) back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:141(para)
msgid "file"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:145(para)
msgid "Identity (keystone) driver"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:151(para)
msgid "Block Storage (cinder) back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:153(para)
msgid "LVM/iSCSI"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:157(para)
msgid "Live Migration back end"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:159(para)
msgid "Shared storage using NFS*"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:170(para)
msgid ""
"An asterisk (*) indicates when the example architecture deviates from the "
"settings of a default installation. We'll offer explanations for those "
"deviations next.objects"
"primary>object storage storage object storage"
"secondary> migration"
"primary> live migration"
"primary> IP addresses"
"primary>floating floating IP address storage block storage"
"secondary> block storage"
"primary> dashboard"
"primary> legacy networking "
"(nova) features supported by "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:209(para)
msgid ""
"Dashboard : You probably want to offer a dashboard, "
"but your users may be more interested in API access only."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:215(para)
msgid ""
"Block storage : You don't have to offer users block "
"storage if their use case only needs ephemeral storage on compute nodes, for "
"example."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:221(para)
msgid ""
"Floating IP address : Floating IP addresses are public "
"IP addresses that you allocate from a predefined pool to assign to virtual "
"machines at launch. Floating IP address ensure that the public IP address is "
"available whenever an instance is booted. Not every organization can offer "
"thousands of public floating IP addresses for thousands of instances, so "
"this feature is considered optional."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:232(para)
msgid ""
"Live migration : If you need to move running virtual "
"machine instances from one host to another with little or no service "
"interruption, you would enable live migration, but it is considered optional."
""
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:239(para)
msgid ""
"Object storage : You may choose to store machine "
"images on a file system rather than in object storage if you do not have the "
"extra hardware for the required replication and redundancy that OpenStack "
"Object Storage offers."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:205(para)
msgid ""
"The following features of OpenStack are supported by the example "
"architecture documented in this guide, but are optional: "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:252(para)
msgid ""
"This example architecture has been selected based on the current default "
"feature set of OpenStack Havana , with an emphasis on "
"stability. We believe that many clouds that currently run OpenStack in "
"production have made similar choices.legacy networking (nova) rationale "
"for choice of "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:262(para)
msgid ""
"You must first choose the operating system that runs on all of the physical "
"nodes. While OpenStack is supported on several distributions of Linux, we "
"used Ubuntu 12.04 LTS (Long Term Support) , which is "
"used by the majority of the development community, has feature completeness "
"compared with other distributions and has clear future support plans."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:269(para)
msgid ""
"We recommend that you do not use the default Ubuntu OpenStack install "
"packages and instead use the Ubuntu Cloud Archive. The Cloud Archive is "
"a package repository supported by Canonical that allows you to upgrade to "
"future OpenStack releases while remaining on Ubuntu 12.04."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:276(para)
msgid ""
"KVM as a hypervisor complements "
"the choice of Ubuntu—being a matched pair in terms of support, and also "
"because of the significant degree of attention it garners from the OpenStack "
"development community (including the authors, who mostly use KVM). It is "
"also feature complete, free from licensing charges and restrictions."
"kernel-based VM (KVM) hypervisor"
"primary> hypervisors"
"primary>KVM "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:289(para)
msgid ""
"MySQL follows a similar trend. Despite its recent "
"change of ownership, this database is the most tested for use with OpenStack "
"and is heavily documented. We deviate from the default database, "
"SQLite , because SQLite is not an appropriate database "
"for production usage."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:295(para)
msgid ""
"The choice of RabbitMQ over other AMQP compatible "
"options that are gaining support in OpenStack, such as ZeroMQ and Qpid, is "
"due to its ease of use and significant testing in production. It also is the "
"only option that supports features such as Compute cells. We recommend "
"clustering with RabbitMQ, as it is an integral component of the system and "
"fairly simple to implement due to its inbuilt nature.Advanced Message Queuing Protocol (AMQP) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:305(para)
msgid ""
"As discussed in previous chapters, there are several options for networking "
"in OpenStack Compute. We recommend FlatDHCP and to use "
"Multi-Host networking mode for high availability, "
"running one nova-network daemon per OpenStack compute host. "
"This provides a robust mechanism for ensuring network interruptions are "
"isolated to individual compute hosts, and allows for the direct use of "
"hardware network gateways."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:314(para)
msgid ""
"Live Migration is supported by way of shared storage, "
"with NFS as the distributed file system."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:318(para)
msgid ""
"Acknowledging that many small-scale deployments see running Object Storage "
"just for the storage of virtual machine images as too costly, we opted for "
"the file back end in the OpenStack Image service (Glance). If your cloud "
"will include Object Storage, you can easily add it as a back end."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:324(para)
msgid ""
"We chose the SQL back end for Identity over others, "
"such as LDAP. This back end is simple to install and is robust. The authors "
"acknowledge that many installations want to bind with existing directory "
"services and caution careful understanding of the array of options available."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:332(para)
msgid ""
"Block Storage (cinder) is installed natively on external storage nodes and "
"uses the LVM/iSCSI plug-in . Most Block Storage plug-ins "
"are tied to particular vendor products and implementations limiting their "
"use to consumers of those hardware platforms, but LVM/iSCSI is robust and "
"stable on commodity hardware."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:341(para)
msgid ""
"While the cloud can be run without the OpenStack Dashboard"
"emphasis>, we consider it to be indispensable, not just for user interaction "
"with the cloud, but also as a tool for operators. Additionally, the "
"dashboard's use of Django makes it a flexible framework for extension ."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:349(title)
msgid "Why not use OpenStack Networking?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:351(para)
msgid ""
"This example architecture does not use OpenStack Networking, because it does "
"not yet support multi-host networking and our organizations (university, "
"government) have access to a large range of publicly-accessible IPv4 "
"addresses.legacy networking (nova)"
"primary>vs. OpenStack Networking (neutron) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:362(title)
msgid "Why use multi-host networking?"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:364(para)
msgid ""
"In a default OpenStack deployment, there is a single nova-network"
"code> service that runs within the cloud (usually on the cloud controller) "
"that provides services such as network address translation (NAT), DHCP, and "
"DNS to the guest instances. If the single node that runs the nova-"
"network service goes down, you cannot access your instances, and the "
"instances cannot access the Internet. The single node that runs the "
"nova-network service can become a bottleneck if excessive "
"network traffic comes in and goes out of the cloud.networks multi-host "
"indexterm>multi-host networking"
"primary> legacy networking "
"(nova) benefits of multi-host networking "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:385(para)
msgid ""
"Multi-host is a high-availability "
"option for the network configuration, where the nova-network"
"literal> service is run on every compute node instead of running on only a "
"single node."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:396(para)
msgid ""
"The reference architecture consists of multiple compute nodes, a cloud "
"controller, an external NFS storage server for instance storage, and an "
"OpenStack Block Storage server for volume storage."
"legacy networking (nova)"
"primary>detailed description A network "
"time service (Network Time Protocol, or NTP) synchronizes time on all the "
"nodes. FlatDHCPManager in multi-host mode is used for the networking. A "
"logical diagram for this example architecture shows which services are "
"running on each node:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:416(para)
msgid ""
"The cloud controller runs the dashboard, the API services, the database "
"(MySQL), a message queue server (RabbitMQ), the scheduler for choosing "
"compute resources (nova-scheduler ), Identity services "
"(keystone, nova-consoleauth), Image services (glance-api"
"code>, glance-registry), services for console access of guests, "
"and Block Storage services, including the scheduler for storage resources "
"(cinder-api and cinder-scheduler).cloud controllers duties of"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:429(para)
msgid ""
"Compute nodes are where the computing resources are held, and in our example "
"architecture, they run the hypervisor (KVM), libvirt (the driver for the "
"hypervisor, which enables live migration from node to node), nova-"
"compute, nova-api-metadata (generally only used when "
"running in multi-host mode, it retrieves instance-specific metadata), "
"nova-vncproxy, and nova-network."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:437(para)
msgid ""
"The network consists of two switches, one for the management or private "
"traffic, and one that covers public access, including floating IPs. To "
"support this, the cloud controller and the compute nodes have two network "
"cards. The OpenStack Block Storage and NFS storage servers only need to "
"access the private network and therefore only need one network card, but "
"multiple cards run in a bonded configuration are recommended if possible. "
"Floating IP access is direct to the Internet, whereas Flat IP access goes "
"through a NAT. To envision the network traffic, use this diagram:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:457(title)
msgid "Optional Extensions"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:459(para)
msgid ""
"You can extend this reference architecture aslegacy networking (nova) optional "
"extensions follows:"
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:468(para)
msgid "Add additional cloud controllers (see )."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:473(para)
msgid ""
"Add an OpenStack Storage service (see the Object Storage chapter in the "
"OpenStack Installation Guide for your distribution)."
msgstr ""
#: ./doc/openstack-ops/section_arch_example-nova.xml:479(para)
msgid ""
"Add additional OpenStack Block Storage hosts (see )."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:12(title)
msgid "Lay of the Land"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:14(para)
msgid ""
"This chapter helps you set up your working environment and use it to take a "
"look around your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:18(title)
msgid "Using the OpenStack Dashboard for Administration"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:20(para)
msgid ""
"As a cloud administrative user, you can use the OpenStack dashboard to "
"create and manage projects, users, images, and flavors. Users are allowed to "
"create and manage images within specified projects and to share images, "
"depending on the Image service configuration. Typically, the policy "
"configuration allows admin users only to set quotas and create and manage "
"services. The dashboard provides an Admin tab with a "
"System Panel and an Identity tab. "
"These interfaces give you access to system information and usage as well as "
"to settings for configuring what end users can do. Refer to the OpenStack "
"Administrator Guide for detailed how-to information about using the "
"dashboard as an admin user.working "
"environment dashboard dashboard "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:41(title)
msgid "Command-Line Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:43(para)
msgid ""
"We recommend using a combination of the OpenStack command-line interface "
"(CLI) tools and the OpenStack dashboard for administration. Some users with "
"a background in other cloud technologies may be using the EC2 Compatibility "
"API, which uses naming conventions somewhat different from the native API. "
"We highlight those differences.working environment command-line "
"tools "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:53(para)
msgid ""
"We strongly suggest that you install the command-line clients from the Python Package Index "
"(PyPI) instead of from the distribution packages. The clients are under "
"heavy development, and it is very likely at any given time that the version "
"of the packages distributed by your operating-system vendor are out of date."
"command-line tools"
"primary>Python Package Index (PyPI) "
"indexterm>pip utility "
"indexterm>Python Package Index "
"(PyPI) "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:68(para)
msgid ""
"The pip utility is used to manage package installation from the PyPI archive "
"and is available in the python-pip package in most Linux distributions. Each "
"OpenStack project has its own client, so depending on which services your "
"site runs, install some or all of the followingneutron python-neutronclient"
"secondary> swift"
"primary>python-swiftclient cinder keystone glance python-glanceclient"
"secondary> nova"
"primary>python-novaclient packages:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:96(para)
msgid "python-novaclient (nova CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:100(para)
msgid "python-glanceclient (glance CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:104(para)
msgid "python-keystoneclient (keystone CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:109(para)
msgid "python-cinderclient (cinder CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:113(para)
msgid "python-swiftclient (swift CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:117(para)
msgid "python-neutronclient (neutron CLI)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:122(title)
msgid "Installing the Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:124(para)
msgid ""
"To install (or upgrade) a package from the PyPI archive with pip, command-line tools"
"primary>installing as root:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:133(para)
msgid "To remove the package:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:137(para)
msgid ""
"If you need even newer versions of the clients, pip can install directly "
"from the upstream git repository using the -e flag. You must "
"specify a name for the Python egg that is installed. For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:145(para)
msgid ""
"If you support the EC2 API on your cloud, you should also install the "
"euca2ools package or some other EC2 API tool so that you can get the same "
"view your users have. Using EC2 API-based tools is mostly out of the scope "
"of this guide, though we discuss getting credentials for use with it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:153(title)
msgid "Administrative Command-Line Tools"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:155(para)
msgid ""
"There are also several *-manage command-line tools. These "
"are installed with the project's services on the cloud controller and do not "
"need to be installed*-manage command-"
"line tools command-line tools administrative"
"secondary> separately:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:167(literal)
msgid "glance-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:171(literal)
msgid "keystone-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:175(literal)
msgid "cinder-manage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:179(para)
msgid ""
"Unlike the CLI tools mentioned above, the *-manage tools must "
"be run from the cloud controller, as root, because they need read access to "
"the config files such as /etc/nova/nova.conf and to make "
"queries directly against the database rather than against the OpenStack "
"API endpoints .API (application programming interface)"
"primary>API endpoint endpoints API endpoint"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:195(para)
msgid ""
"The existence of the *-manage tools is a legacy issue. It is a "
"goal of the OpenStack project to eventually migrate all of the remaining "
"functionality in the *-manage tools into the API-based tools. "
"Until that day, you need to SSH into the cloud controller node"
"glossterm> to perform some maintenance operations that require one of the "
"*-manage "
"tools .cloud controller "
"nodes command-line tools and "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:211(title)
msgid "Getting Credentials"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:213(para)
msgid ""
"You must have the appropriate credentials if you want to use the command-"
"line tools to make queries against your OpenStack cloud. By far, the easiest "
"way to obtain authentication credentials to use with "
"command-line clients is to use the OpenStack dashboard. Select "
"Project , click the Project"
"guimenuitem> tab, and click Access & Security "
"on the Compute category. On the "
"Access & Security page, click the "
"API Access tab to display two buttons, "
"Download OpenStack RC File and Download EC2 "
"Credentials , which let you generate files that you can source in "
"your shell to populate the environment variables the command-line tools "
"require to know where your service endpoints and your authentication "
"information are. The user you logged in to the dashboard dictates the "
"filename for the openrc file, such as demo-openrc.sh . "
"When logged in as admin, the file is named admin-openrc.sh"
"filename>.credentials "
"indexterm>authentication "
"indexterm>command-line tools"
"primary>getting credentials "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:240(para)
msgid "The generated file looks something like this:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:268(para)
msgid ""
"This does not save your password in plain text, which is a good thing. But "
"when you source or run the script, it prompts you for your password and then "
"stores your response in the environment variable OS_PASSWORD. "
"It is important to note that this does require interactivity. It is possible "
"to store a value directly in the script if you require a noninteractive "
"operation, but you then need to be extremely cautious with the security and "
"permissions of this file.passwords"
"primary> security issues"
"primary>passwords "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:284(para)
msgid ""
"EC2 compatibility credentials can be downloaded by selecting "
"Project , then Compute "
"guimenuitem>, then Access & Security , then "
"API Access to display the Download EC2 "
"Credentials button. Click the button to generate a ZIP file with "
"server x509 certificates and a shell script fragment. Create a new directory "
"in a secure location because these are live credentials containing all the "
"authentication information required to access your cloud identity, unlike "
"the default user-openrc. Extract the ZIP file here. You should "
"have cacert.pem , cert.pem , "
"ec2rc.sh , and pk.pem . The "
"ec2rc.sh is similar to this:access key "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:321(para)
msgid ""
"To put the EC2 credentials into your environment, source the ec2rc.sh"
"code> file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:326(title)
msgid "Inspecting API Calls"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:328(para)
msgid ""
"The command-line tools can be made to show the OpenStack API calls they make "
"by passing the --debug flag to them.API (application programming interface)"
"primary>API calls, inspecting command-line tools"
"primary>inspecting API calls For example:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:342(para)
msgid ""
"This example shows the HTTP requests from the client and the responses from "
"the endpoints, which can be helpful in creating custom tools written to the "
"OpenStack API."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:347(title)
msgid "Using cURL for further inspection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:349(para)
msgid ""
"Underlying the use of the command-line tools is the OpenStack API, which is "
"a RESTful API that runs over HTTP. There may be cases where you want to "
"interact with the API directly or need to use it because of a suspected bug "
"in one of the CLI tools. The best way to do this is to use a combination "
"of cURL and another tool, "
"such as jq, to "
"parse the JSON from the responses.authentication tokens cURL "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:362(para)
msgid ""
"The first thing you must do is authenticate with the cloud using your "
"credentials to get an authentication token ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:366(para)
msgid ""
"Your credentials are a combination of username, password, and tenant "
"(project). You can extract these values from the openrc.sh "
"discussed above. The token allows you to interact with your other service "
"endpoints without needing to reauthenticate for every request. Tokens are "
"typically good for 24 hours, and when the token expires, you are alerted "
"with a 401 (Unauthorized) response and you can request another token .catalog "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:379(para)
msgid "Look at your OpenStack service catalog :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:391(para)
msgid ""
"Read through the JSON response to get a feel for how the catalog is laid out."
""
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:394(para)
msgid ""
"To make working with subsequent requests easier, store the token in an "
"environment variable:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:404(para)
msgid ""
"Now you can refer to your token on the command line as $TOKEN"
"literal>."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:409(para)
msgid ""
"Pick a service endpoint from your service catalog, such as compute. Try a "
"request, for example, listing instances (servers):"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:420(para)
msgid ""
"To discover how API requests should be structured, read the OpenStack API Reference"
"link>. To chew through the responses using jq, see the jq Manual."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:425(para)
msgid ""
"The -s flag used in the cURL commands above are used to prevent "
"the progress meter from being shown. If you are having trouble running cURL "
"commands, you'll want to remove it. Likewise, to help you troubleshoot cURL "
"commands, you can include the -v flag to show you the verbose "
"output. There are many more extremely useful features in cURL; refer to the "
"man page for all the options."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:436(title)
msgid "Servers and Services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:438(para)
msgid ""
"As an administrator, you have a few ways to discover what your OpenStack "
"cloud looks like simply by using the OpenStack tools available. This section "
"gives you an idea of how to get an overview of your cloud, its shape, size, "
"and current state.services"
"primary>obtaining overview of servers obtaining overview "
"of cloud "
"computing cloud overview "
"indexterm>command-line tools"
"primary>servers and services "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:460(para)
msgid ""
"First, you can discover what servers belong to your OpenStack cloud by "
"running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:465(para)
msgid "The output looks like the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:483(para)
msgid ""
"The output shows that there are five compute nodes and one cloud controller. "
"You can see all the services are in up state, which indicates that the "
"services are up and running. If a service is no longer available, then "
"service state changes to down state. This is an indication that you should "
"troubleshoot why the service is down."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:489(para)
msgid ""
"If you are using cinder, run the following command to see a similar listing:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:502(para)
msgid ""
"With these two tables, you now have a good overview of what servers and "
"services make up your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:505(para)
msgid ""
"You can also use the Identity service (keystone) to see what services are "
"available in your cloud as well as what endpoints have been configured for "
"the services.Identity"
"primary>displaying services and endpoints with "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:513(para)
msgid ""
"The following command requires you to have your shell environment configured "
"with the proper administrative variables:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:536(para)
msgid ""
"The preceding output has been truncated to show only two services. You will "
"see one service entry for each service that your cloud provides. Note how "
"the endpoint domain can be different depending on the endpoint type. "
"Different endpoint domains per type are not required, but this can be done "
"for different reasons, such as endpoint privacy or network traffic "
"segregation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:543(para)
msgid ""
"You can find the version of the Compute installation by using the "
"nova client command"
"phrase>: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:549(title)
msgid "Diagnose Your Compute Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:551(para)
msgid ""
"You can obtain extra information about virtual machines that are "
"running—their CPU usage, the memory, the disk I/O or network I/O—per "
"instance, by running the nova diagnostics command "
"withcompute nodes"
"primary>diagnosing command-line tools compute node "
"diagnostics a server ID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:566(para)
msgid ""
"The output of this command varies depending on the hypervisor because "
"hypervisors support different attributes.hypervisors compute node diagnosis "
"and The following demonstrates the difference "
"between the two most popular hypervisors. Here is example output when the "
"hypervisor is Xen: While the command should work with any "
"hypervisor that is controlled through libvirt (KVM, QEMU, or LXC), it has "
"been tested only with KVM. Here is the example output when the hypervisor is "
"KVM:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:616(title)
msgid "Network Inspection"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:618(para)
msgid ""
"To see which fixed IP networks are configured in your cloud, you can use the "
"nova command-line client to get the IP ranges:networks inspection of"
"secondary> working "
"environment network inspection "
"indexterm> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:636(para)
msgid ""
"The nova command-line client can provide some additional "
"details:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:644(para)
msgid ""
"This output shows that two networks are configured, each network containing "
"255 IPs (a /24 subnet). The first network has been assigned to a certain "
"project, while the second network is still open for assignment. You can "
"assign this network manually; otherwise, it is automatically assigned when a "
"project launches its first instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:650(para)
msgid "To find out whether any floating IPs are available in your cloud, run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:658(para)
msgid ""
"Here, two floating IPs are available. The first has been allocated to a "
"project, while the other is unallocated."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:663(title)
msgid "Users and Projects"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:665(para)
msgid ""
"To see a list of projects that have been added to the cloud,projects obtaining list of "
"current user "
"management listing users "
"indexterm>working environment"
"primary>users and projects run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:692(para)
msgid "To see a list of users, run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:708(para)
msgid ""
"Sometimes a user and a group have a one-to-one mapping. This happens for "
"standard system accounts, such as cinder, glance, nova, and swift, or when "
"only one user is part of a group."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:715(title)
msgid "Running Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:717(para)
msgid ""
"To see a list of running instances,instances list of running"
"secondary> working "
"environment running instances "
"run:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:741(para)
msgid ""
"Unfortunately, this command does not tell you various details about the "
"running instances , such as what "
"compute node the instance is running on, what flavor the instance is, and so "
"on. You can use the following command to view details about individual "
"instances:config drive "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:751(para)
msgid "For example: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:781(para)
msgid ""
"This output shows that an instance named was created from "
"an Ubuntu 12.04 image using a flavor of m1.small and is "
"hosted on the compute node c02.example.com ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:790(para)
msgid ""
"We hope you have enjoyed this quick tour of your working environment, "
"including how to interact with your cloud and extract useful information. "
"From here, you can use the OpenStack Administrator Guide "
"as your reference for all of the command-line functionality in your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:12(title)
msgid "Architecture Examples"
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:14(para)
msgid ""
"To understand the possibilities that OpenStack offers, it's best to start "
"with basic architecture that has been tested in production environments. We "
"offer two examples with basic pivots on the base operating system (Ubuntu "
"and Red Hat Enterprise Linux) and the networking architecture. There are "
"other differences between these two examples and this guide provides reasons "
"for each choice made."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:21(para)
msgid ""
"Because OpenStack is highly configurable, with many different back ends and "
"network configuration options, it is difficult to write documentation that "
"covers all possible OpenStack deployments. Therefore, this guide defines "
"examples of architecture to simplify the task of documenting, as well as to "
"provide the scope for this guide. Both of the offered architecture examples "
"are currently running in production and serving users."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:29(para)
msgid ""
"As always, refer to the if you are "
"unclear about any of the terminology mentioned in architecture examples."
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:39(title)
msgid "Parting Thoughts on Architecture Examples"
msgstr ""
#: ./doc/openstack-ops/ch_arch_examples.xml:41(para)
msgid ""
"With so many considerations and options available, our hope is to provide a "
"few clearly-marked and tested paths for your OpenStack exploration. If "
"you're looking for additional ideas, check out , the OpenStack "
"Installation Guides, or the OpenStack User Stories page."
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:16(title)
msgid "OpenStack Operations Guide"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:18(titleabbrev)
msgid "OpenStack Ops Guide"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:26(orgname) ./doc/openstack-ops/bk_ops_guide.xml:32(holder)
msgid "OpenStack Foundation"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:31(year)
msgid "2014"
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:38(remark)
msgid "Copyright details are filled in by the template."
msgstr ""
#: ./doc/openstack-ops/bk_ops_guide.xml:43(para)
msgid ""
"This book provides information about designing and operating OpenStack "
"clouds."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:12(title)
msgid "Maintenance, Failures, and Debugging"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:14(para)
msgid ""
"Downtime, whether planned or unscheduled, is a certainty when running a "
"cloud. This chapter aims to provide useful information for dealing "
"proactively, or reactively, with these occurrences.maintenance/debugging"
"primary>troubleshooting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:26(title)
msgid "Cloud Controller and Storage Proxy Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:28(para)
msgid ""
"The cloud controller and storage proxy are very similar to each other when "
"it comes to expected and unexpected downtime. One of each server type "
"typically runs in the cloud, which makes them very noticeable when they are "
"not running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:33(para)
msgid ""
"For the cloud controller, the good news is if your cloud is using the "
"FlatDHCP multi-host HA network mode, existing instances and volumes continue "
"to operate while the cloud controller is offline. For the storage proxy, "
"however, no storage traffic is possible until it is back up and running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:42(title) ./doc/openstack-ops/ch_ops_maintenance.xml:174(title)
msgid "Planned Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:44(para)
msgid ""
"One way to plan for cloud controller or storage proxy maintenance is to "
"simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects "
"fewer users. If your cloud controller or storage proxy is too important to "
"have unavailable at any point in time, you must look into high-availability "
"options.cloud controllers"
"primary>planned maintenance of maintenance/debugging cloud "
"controller planned maintenance "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:62(title)
msgid "Rebooting a Cloud Controller or Storage Proxy"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:64(para)
msgid ""
"All in all, just issue the \"reboot\" command. The operating system cleanly "
"shuts down services and then automatically reboots. If you want to be very "
"thorough, run your backup jobs just before you reboot.maintenance/debugging rebooting "
"following storage storage proxy maintenance"
"secondary> reboot"
"primary>cloud controller or storage proxy "
"indexterm>cloud controllers"
"primary>rebooting "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:89(title)
msgid "After a Cloud Controller or Storage Proxy Reboots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:91(para)
msgid ""
"After a cloud controller reboots, ensure that all required services were "
"successfully started. The following commands use ps and "
"grep to determine if nova, glance, and keystone are currently "
"running:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:101(para)
msgid ""
"Also check that all services are functioning. The following set of commands "
"sources the openrc file, then runs some basic glance, nova, and "
"openstack commands. If the commands work as expected, you can be confident "
"that those services are in working condition:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:112(para)
msgid ""
"For the storage proxy, ensure that the Object Storage service has resumed:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:117(para)
msgid "Also check that it is functioning:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:125(title)
msgid "Total Cloud Controller Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:127(para)
msgid ""
"The cloud controller could completely fail if, for example, its motherboard "
"goes bad. Users will immediately notice the loss of a cloud controller since "
"it provides core functionality to your cloud environment. If your "
"infrastructure monitoring does not alert you that your cloud controller has "
"failed, your users definitely will. Unfortunately, this is a rough situation."
" The cloud controller is an integral part of your cloud. If you have only "
"one controller, you will have many missing services if it goes down."
"cloud controllers"
"primary>total failure of maintenance/debugging cloud "
"controller total failure "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:144(para)
msgid ""
"To avoid this situation, create a highly available cloud controller cluster. "
"This is outside the scope of this document, but you can read more in the "
"OpenStack "
"High Availability Guide."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:150(para)
msgid ""
"The next best approach is to use a configuration-management tool, such as "
"Puppet, to automatically build a cloud controller. This should not take more "
"than 15 minutes if you have a spare server available. After the controller "
"rebuilds, restore any backups taken (see )."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:156(para)
msgid ""
"Also, in practice, the nova-compute services on the "
"compute nodes do not always reconnect cleanly to rabbitmq hosted on the "
"controller when it comes back up after a long reboot; a restart on the nova "
"services on the compute nodes is required."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:166(title)
msgid "Compute Node Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:168(para)
msgid ""
"Sometimes a compute node either crashes unexpectedly or requires a reboot "
"for maintenance reasons."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:176(para)
msgid ""
"If you need to reboot a compute node due to planned maintenance (such as a "
"software or hardware upgrade), first ensure that all hosted instances have "
"been moved off the node. If your cloud is utilizing shared storage, use the "
"nova live-migration command. First, get a list of instances "
"that need to be moved:compute nodes"
"primary>maintenance maintenance/debugging compute node "
"planned maintenance "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:193(para)
msgid "Next, migrate them one by one:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:197(para)
msgid ""
"If you are not using shared storage, you can use the --block-migrate"
"code> option:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:202(para)
msgid ""
"After you have migrated all instances, ensure that the nova-compute"
"code> service has stopped :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:208(para)
msgid ""
"If you use a configuration-management system, such as Puppet, that ensures "
"the nova-compute service is always running, you can temporarily "
"move the init files:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:216(para)
msgid ""
"Next, shut down your compute node, perform your maintenance, and turn the "
"node back on. You can reenable the nova-compute service by "
"undoing the previous commands:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:223(para)
msgid "Then start the nova-compute service:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:227(para)
msgid ""
"You can now optionally migrate the instances back to their original compute "
"node."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:234(title)
msgid "After a Compute Node Reboots"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:236(para)
msgid ""
"When you reboot a compute node, first verify that it booted successfully. "
"This includes ensuring that the nova-compute service is running:"
"reboot compute "
"node maintenance/debugging compute node "
"reboot "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:251(para)
msgid "Also ensure that it has successfully connected to the AMQP server:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:257(para)
msgid ""
"After the compute node is successfully running, you must deal with the "
"instances that are hosted on that compute node because none of them are "
"running. Depending on your SLA with your users or customers, you might have "
"to start each instance and ensure that they start correctly."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:269(para)
msgid ""
"You can create a list of instances that are hosted on the compute node by "
"performing the following command:instances maintenance/debugging"
"secondary> maintenance/"
"debugging instances "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:282(para)
msgid ""
"After you have the list, you can use the nova command to start each instance:"
""
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:288(para)
msgid ""
"Any time an instance shuts down unexpectedly, it might have problems on boot."
" For example, the instance might require an fsck on the root "
"partition. If this happens, the user can use the dashboard VNC console to "
"fix this."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:294(para)
msgid ""
"If an instance does not boot, meaning virsh list never shows "
"the instance as even attempting to boot, do the following on the compute "
"node:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:300(para)
msgid ""
"Try executing the nova reboot command again. You should see an "
"error message about why the instance was not able to boot"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:304(para)
msgid ""
"In most cases, the error is the result of something in libvirt's XML file "
"(/etc/libvirt/qemu/instance-xxxxxxxx.xml) that no longer exists."
" You can enforce re-creation of the XML file as well as rebooting the "
"instance by running the following command:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:315(title)
msgid "Inspecting and Recovering Data from Failed Instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:317(para)
msgid ""
"In some scenarios, instances are running but are inaccessible through SSH "
"and do not respond to any command. The VNC console could be displaying a "
"boot failure or kernel panic error messages. This could be an indication of "
"file system corruption on the VM itself. If you need to recover files or "
"inspect the content of the instance, qemu-nbd can be used to mount the disk."
"data inspecting/"
"recovering failed instances "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:329(para)
msgid ""
"If you access or view the user's content and data, get approval "
"first!security issues"
"primary>failed instance data inspection "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:337(para)
msgid ""
"To access the instance's disk (/var/lib/nova/instances/instance-"
"xxxxxx /disk ), use the following steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:343(para)
msgid "Suspend the instance using the virsh command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:348(para)
msgid "Connect the qemu-nbd device to the disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:352(para) ./doc/openstack-ops/ch_ops_maintenance.xml:412(para)
msgid "Mount the qemu-nbd device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:356(para)
msgid "Unmount the device after inspecting."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:360(para)
msgid "Disconnect the qemu-nbd device."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:364(para)
msgid "Resume the instance."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:368(para)
msgid ""
"If you do not follow steps 4 through 6, OpenStack Compute cannot manage the "
"instance any longer. It fails to respond to any command issued by OpenStack "
"Compute, and it is marked as shut down."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:372(para)
msgid ""
"Once you mount the disk file, you should be able to access it and treat it "
"as a collection of normal directories with files and a directory structure. "
"However, we do not recommend that you edit or touch any files because this "
"could change the access control lists (ACLs) that are used to determine "
"which accounts can perform what operations on files and directories. "
"Changing ACLs can make the instance unbootable if it is not already."
"access control list (ACL) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:384(para)
msgid ""
"Suspend the instance using the virsh command, taking note "
"of the internal ID:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:399(para)
msgid "Connect the qemu-nbd device to the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:414(para)
msgid ""
"The qemu-nbd device tries to export the instance disk's different partitions "
"as separate devices. For example, if vda is the disk and vda1 is the root "
"partition, qemu-nbd exports the device as /dev/nbd0 and "
"/dev/nbd0p1 , respectively:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:422(para)
msgid ""
"You can now access the contents of /mnt, which correspond to "
"the first partition of the instance's disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:425(para)
msgid ""
"To examine the secondary or ephemeral disk, use an alternate mount point if "
"you want both primary and secondary drives mounted at the same time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:458(para)
msgid ""
"Once you have completed the inspection, unmount the mount point and release "
"the qemu-nbd device:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:467(para)
msgid "Resume the instance using virsh :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:485(title)
msgid "Volumes"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:487(para)
msgid ""
"If the affected instances also had attached volumes, first generate a list "
"of instance and volume UUIDs:volume"
"primary>maintenance/debugging maintenance/debugging"
"primary>volumes "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:505(para)
msgid "You should see a result similar to the following:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:515(para)
msgid ""
"Next, manually detach and reattach the volumes, where X is the proper mount "
"point:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:521(para)
msgid ""
"Be sure that the instance has successfully booted and is at a login screen "
"before doing the above."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:528(title)
msgid "Total Compute Node Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:530(para)
msgid ""
"Compute nodes can fail the same way a cloud controller can fail. A "
"motherboard failure or some other type of hardware failure can cause an "
"entire compute node to go offline. When this happens, all instances running "
"on that compute node will not be available. Just like with a cloud "
"controller failure, if your infrastructure monitoring does not detect a "
"failed compute node, your users will notify you because of their lost "
"instances.compute nodes"
"primary>failures maintenance/debugging compute node "
"total failures "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:546(para)
msgid ""
"If a compute node fails and won't be fixed for a few hours (or at all), you "
"can relaunch all instances that are hosted on the failed node if you use "
"shared storage for /var/lib/nova/instances."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:551(para)
msgid ""
"To do this, generate a list of instance UUIDs that are hosted on the failed "
"node by running the following query on the nova database:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:558(para)
msgid ""
"Next, update the nova database to indicate that all instances that used to "
"be hosted on c01.example.com are now hosted on c02.example.com:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:565(para)
msgid ""
"If you're using the Networking service ML2 plug-in, update the Networking "
"service database to indicate that all ports that used to be hosted on c01."
"example.com are now hosted on c02.example.com:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:576(para)
msgid ""
"After that, use the nova command to reboot all instances "
"that were on c01.example.com while regenerating their XML files at the same "
"time:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:582(para)
msgid ""
"Finally, reattach volumes using the same method described in the section "
"Volumes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:589(title)
msgid "/var/lib/nova/instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:591(para)
msgid ""
"It's worth mentioning this directory in the context of failed compute nodes. "
"This directory contains the libvirt KVM file-based disk images for the "
"instances that are hosted on that compute node. If you are not running your "
"cloud in a shared storage environment, this directory is unique across all "
"compute nodes./var/lib/nova/instances "
"directory maintenance/debugging /var/lib/"
"nova/instances "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:603(para)
msgid ""
"/var/lib/nova/instances contains two types of directories."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:606(para)
msgid ""
"The first is the _base directory. This contains all the cached "
"base images from glance for each unique image that has been launched on that "
"compute node. Files ending in _20 (or a different number) are "
"the ephemeral base images."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:611(para)
msgid ""
"The other directories are titled instance-xxxxxxxx. These "
"directories correspond to instances running on that compute node. The files "
"inside are related to one of the files in the _base directory. "
"They're essentially differential-based files containing only the changes "
"made from the original _base directory."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:618(para)
msgid ""
"All files and directories in /var/lib/nova/instances are "
"uniquely named. The files in _base are uniquely titled for the glance image "
"that they are based on, and the directory names instance-xxxxxxxx"
"code> are uniquely titled for that particular instance. For example, if you "
"copy all data from /var/lib/nova/instances on one compute node "
"to another, you do not overwrite any files or cause any damage to images "
"that have the same unique name, because they are essentially the same file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:627(para)
msgid ""
"Although this method is not documented or supported, you can use it when "
"your compute node is permanently offline but you have instances locally "
"stored on it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:636(title)
msgid "Storage Node Failures and Maintenance"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:638(para)
msgid ""
"Because of the high redundancy of Object Storage, dealing with object "
"storage node issues is a lot easier than dealing with compute node issues."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:645(title)
msgid "Rebooting a Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:647(para)
msgid ""
"If a storage node requires a reboot, simply reboot it. Requests for data "
"hosted on that node are redirected to other copies while the server is "
"rebooting.storage node "
"indexterm>nodes"
"primary>storage nodes maintenance/debugging storage node "
"reboot "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:665(title)
msgid "Shutting Down a Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:667(para)
msgid ""
"If you need to shut down a storage node for an extended period of time (one "
"or more days), consider removing the node from the storage ring. For example:"
"maintenance/debugging"
"primary>storage node shut down "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:682(para)
msgid "Next, redistribute the ring files to the other nodes:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:689(para)
msgid ""
"These actions effectively take the storage node out of the storage cluster."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:692(para)
msgid ""
"When the node is able to rejoin the cluster, just add it back to the ring. "
"The exact syntax you use to add a node to your swift cluster with "
"swift-ring-builder heavily depends on the original options used "
"when you originally created your cluster. Please refer back to those "
"commands."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:702(title)
msgid "Replacing a Swift Disk"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:704(para)
msgid ""
"If a hard drive fails in an Object Storage node, replacing it is relatively "
"easy. This assumes that your Object Storage environment is configured "
"correctly, where the data that is stored on the failed drive is also "
"replicated to other drives in the Object Storage environment.hard drives, replacing "
"indexterm>maintenance/debugging"
"primary>swift disk replacement "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:716(para)
msgid "This example assumes that /dev/sdb has failed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:718(para)
msgid "First, unmount the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:722(para)
msgid ""
"Next, physically remove the disk from the server and replace it with a "
"working disk."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:725(para)
msgid "Ensure that the operating system has recognized the new disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:730(para)
msgid "You should see a message about /dev/sdb."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:732(para)
msgid ""
"Because it is recommended to not use partitions on a swift disk, simply "
"format the disk as a whole:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:737(para)
msgid "Finally, mount the disk:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:741(para)
msgid ""
"Swift should notice the new disk and that no data exists. It then begins "
"replicating the data to the disk from the other existing replicas."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:750(title)
msgid "Handling a Complete Failure"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:752(para)
msgid ""
"A common way of dealing with the recovery from a full system failure, such "
"as a power outage of a data center, is to assign each service a priority, "
"and restore in order. shows an "
"example.service restoration"
"primary> maintenance/"
"debugging complete failures "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:765(caption)
msgid "Example service restoration priority list"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:769(th)
msgid "Priority"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:771(th)
msgid "Services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:777(para) ./doc/openstack-ops/ch_arch_scaling.xml:94(para) ./doc/openstack-ops/ch_arch_scaling.xml:106(para)
msgid "1"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:779(para)
msgid "Internal network connectivity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:783(para) ./doc/openstack-ops/ch_arch_scaling.xml:118(para)
msgid "2"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:785(para)
msgid "Backing storage services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:789(para)
msgid "3"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:791(para)
msgid "Public network connectivity for user virtual machines"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:796(para) ./doc/openstack-ops/ch_arch_scaling.xml:130(para)
msgid "4"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:798(para)
msgid ""
"nova-compute , nova-network , cinder "
"hosts"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:803(para)
msgid "5"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:805(para)
msgid "User virtual machines"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:809(para)
msgid "10"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:811(para)
msgid "Message queue and database services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:815(para)
msgid "15"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:817(para)
msgid "Keystone services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:821(para)
msgid "20"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:823(literal)
msgid "cinder-scheduler"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:827(para)
msgid "21"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:829(para)
msgid "Image Catalog and Delivery services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:833(para)
msgid "22"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:835(para)
msgid "nova-scheduler services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:839(para)
msgid "98"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:841(literal)
msgid "cinder-api"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:845(para)
msgid "99"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:847(para)
msgid "nova-api services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:851(para)
msgid "100"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:853(para)
msgid "Dashboard node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:858(para)
msgid ""
"Use this example priority list to ensure that user-affected services are "
"restored as soon as possible, but not before a stable environment is in "
"place. Of course, despite being listed as a single-line item, each step "
"requires significant work. For example, just after starting the database, "
"you should check its integrity, or, after starting the nova services, you "
"should verify that the hypervisor matches the database and fix any mismatches ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:872(para)
msgid ""
"Maintaining an OpenStack cloud requires that you manage multiple physical "
"servers, and this number might grow over time. Because managing nodes "
"manually is error prone, we strongly recommend that you use a configuration-"
"management tool. These tools automate the process of ensuring that all your "
"nodes are configured properly and encourage you to maintain your "
"configuration information (such as packages and configuration options) in a "
"version-controlled repository.configuration management "
"indexterm>networks"
"primary>configuration management "
"indexterm>maintenance/debugging"
"primary>configuration management "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:892(para)
msgid ""
"Several configuration-management tools are available, and this guide does "
"not recommend a specific one. The two most popular ones in the OpenStack "
"community are Puppet, "
"with available OpenStack Puppet modules; and Chef, with available OpenStack Chef recipes. "
"Other newer configuration tools include Juju, Ansible, and Salt"
"link>; and more mature configuration management tools include CFEngine and Bcfg2."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:913(title)
msgid "Working with Hardware"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:915(para)
msgid ""
"As for your initial deployment, you should ensure that all hardware is "
"appropriately burned in before adding it to production. Run software that "
"uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many "
"options are available, and normally double as benchmark software, so you "
"also get a good idea of the performance of your system.hardware maintenance/debugging"
"secondary> maintenance/"
"debugging hardware "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:933(title)
msgid "Adding a Compute Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:935(para)
msgid ""
"If you find that you have reached or are reaching the capacity limit of your "
"computing resources, you should plan to add additional compute nodes. Adding "
"more nodes is quite easy. The process for adding compute nodes is the same "
"as when the initial compute nodes were deployed to your cloud: use an "
"automated deployment system to bootstrap the bare-metal server with the "
"operating system and then have a configuration-management system install and "
"configure OpenStack Compute. Once the Compute service has been installed and "
"configured in the same way as the other compute nodes, it automatically "
"attaches itself to the cloud. The cloud controller notices the new node(s) "
"and begins scheduling instances to launch there.cloud controllers new compute "
"nodes and nodes adding "
"indexterm>compute nodes"
"primary>adding "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:959(para)
msgid ""
"If your OpenStack Block Storage nodes are separate from your compute nodes, "
"the same procedure still applies because the same queuing and polling system "
"is used in both services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:963(para)
msgid ""
"We recommend that you use the same hardware for new compute and block "
"storage nodes. At the very least, ensure that the CPUs are similar in the "
"compute nodes to not break live migration."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:971(title)
msgid "Adding an Object Storage Node"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:973(para)
msgid ""
"Adding a new object storage node is different from adding compute or block "
"storage nodes. You still want to initially configure the server by using "
"your automated deployment and configuration-management systems. After that "
"is done, you need to add the local disks of the object storage node into the "
"object storage ring. The exact command to do this is the same command that "
"was used to add the initial disks to the ring. Simply rerun this command on "
"the object storage proxy server for all disks on the new object storage node."
" Once this has been done, rebalance the ring and copy the resulting ring "
"files to the other storage nodes.Object Storage adding nodes"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:989(para)
msgid ""
"If your new object storage node has a different number of disks than the "
"original nodes have, the command to add the new node is different from the "
"original commands. These parameters vary from environment to environment."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:999(title)
msgid "Replacing Components"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1001(para)
msgid ""
"Failures of hardware are common in large-scale deployments such as an "
"infrastructure cloud. Consider your processes and balance time saving "
"against availability. For example, an Object Storage cluster can easily live "
"with dead disks in it for some period of time if it has sufficient capacity. "
"Or, if your compute installation is not full, you could consider live "
"migrating instances off a host with a RAM failure until you have time to "
"deal with the problem."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1016(para)
msgid ""
"Almost all OpenStack components have an underlying database to store "
"persistent information. Usually this database is MySQL. Normal MySQL "
"administration is applicable to these databases. OpenStack does not "
"configure the databases out of the ordinary. Basic administration includes "
"performance tweaking, high availability, backup, recovery, and repairing. "
"For more information, see a standard MySQL administration guide.databases maintenance/"
"debugging maintenance/debugging databases"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1032(para)
msgid ""
"You can perform a couple of tricks with the database to either more quickly "
"retrieve information or fix a data inconsistency error—for example, an "
"instance was terminated, but the status was not updated in the database. "
"These tricks are discussed throughout this book."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1040(title)
msgid "Database Connectivity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1042(para)
msgid ""
"Review the component's configuration file to see how each OpenStack "
"component accesses its corresponding database. Look for either "
"sql_connection or simply connection. The following "
"command uses grep to display the SQL connection string for "
"nova, glance, cinder, and keystone:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1048(emphasis)
msgid ""
"grep -hE \"connection ?=\" /etc/nova/nova.conf /etc/glance/glance-*.conf /"
"etc/cinder/cinder.conf /etc/keystone/keystone.conf"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1056(para)
msgid "The connection strings take this format:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1064(title)
msgid "Performance and Optimizing"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1066(para)
msgid ""
"As your cloud grows, MySQL is utilized more and more. If you suspect that "
"MySQL might be becoming a bottleneck, you should start researching MySQL "
"optimization. The MySQL manual has an entire section dedicated to this topic:"
" Optimization Overview."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1078(title)
msgid "HDWMY"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1080(para)
msgid ""
"Here's a quick list of various to-do items for each hour, day, week, month, "
"and year. Please note that these tasks are neither required nor definitive "
"but helpful ideas:maintenance/"
"debugging schedule of tasks "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1091(title)
msgid "Hourly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1095(para)
msgid "Check your monitoring system for alerts and act on them."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1100(para)
msgid "Check your ticket queue for new tickets."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1108(title)
msgid "Daily"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1112(para)
msgid "Check for instances in a failed or weird state and investigate why."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1117(para)
msgid "Check for security patches and apply them as needed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1125(title)
msgid "Weekly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1131(para)
msgid "User quotas"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1135(para)
msgid "Disk space"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1139(para)
msgid "Image usage"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1143(para)
msgid "Large instances"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1147(para)
msgid "Network usage (bandwidth and IP usage)"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1129(para)
msgid "Check cloud usage: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1153(para)
msgid "Verify your alert mechanisms are still working."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1161(title)
msgid "Monthly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1165(para)
msgid "Check usage and trends over the past month."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1169(para)
msgid "Check for user accounts that should be removed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1173(para)
msgid "Check for operator accounts that should be removed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1181(title)
msgid "Quarterly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1185(para)
msgid "Review usage and trends over the past quarter."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1189(para)
msgid "Prepare any quarterly reports on usage and statistics."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1193(para)
msgid "Review and plan any necessary cloud additions."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1197(para)
msgid "Review and plan any major OpenStack upgrades."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1205(title)
msgid "Semiannually"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1209(para)
msgid "Upgrade OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1213(para)
msgid ""
"Clean up after an OpenStack upgrade (any unused or new services to be aware "
"of?)."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1223(title)
msgid "Determining Which Component Is Broken"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1225(para)
msgid ""
"OpenStack's collection of different components interact with each other "
"strongly. For example, uploading an image requires interaction from "
"nova-api, glance-api, glance-registry"
"code>, keystone, and potentially swift-proxy. As a result, it "
"is sometimes difficult to determine exactly where problems lie. Assisting in "
"this is the purpose of this section.logging/monitoring tailing logs"
"secondary> maintenance/"
"debugging determining component affected "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1244(title)
msgid "Tailing Logs"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1246(para)
msgid ""
"The first place to look is the log file related to the command you are "
"trying to run. For example, if nova list is failing, try "
"tailing a nova log file and running the command again:tailing logs "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1253(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1268(para)
msgid "Terminal 1:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1257(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1272(para)
msgid "Terminal 2:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1261(para)
msgid ""
"Look for any errors or traces in the log file. For more information, see "
"."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1264(para)
msgid ""
"If the error indicates that the problem is with another component, switch to "
"tailing that component's log file. For example, if nova cannot access "
"glance, look at the glance-api log:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1276(para)
msgid "Wash, rinse, and repeat until you find the core cause of the problem."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1283(title)
msgid "Running Daemons on the CLI"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1285(para)
msgid ""
"Unfortunately, sometimes the error is not apparent from the log files. In "
"this case, switch tactics and use a different command; maybe run the service "
"directly on the command line. For example, if the glance-api "
"service refuses to start and stay running, try launching the daemon from the "
"command line:daemons"
"primary>running on CLI Command-line interface (CLI) "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1300(para)
msgid ""
"The -H flag is required when running the daemons with "
"sudo because some daemons will write files relative to the user's home "
"directory, and this write may fail if -H is left off."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1299(para)
msgid "This might print the error and cause of the problem. "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1307(title)
msgid "Example of Complexity"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1309(para)
msgid ""
"One morning, a compute node failed to run any instances. The log files were "
"a bit vague, claiming that a certain instance was unable to be started. This "
"ended up being a red herring because the instance was simply the first "
"instance in alphabetical order, so it was the first instance that "
"nova-compute would touch."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1315(para)
msgid ""
"Further troubleshooting showed that libvirt was not running at all. This "
"made more sense. If libvirt wasn't running, then no instance could be "
"virtualized through KVM. Upon trying to start libvirt, it would silently die "
"immediately. The libvirt logs did not explain why."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1321(para)
msgid ""
"Next, the libvirtd daemon was run on the command line. Finally "
"a helpful error message: it could not connect to d-bus. As ridiculous as it "
"sounds, libvirt, and thus nova-compute, relies on d-bus and "
"somehow d-bus crashed. Simply starting d-bus set the entire chain back on "
"track, and soon everything was back up and running."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1336(title)
msgid "What to do when things are running slowly"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1338(para)
msgid ""
"When you are getting slow responses from various services, it can be hard to "
"know where to start looking. The first thing to check is the extent of the "
"slowness: is it specific to a single service, or varied among different "
"services? If your problem is isolated to a specific service, it can "
"temporarily be fixed by restarting the service, but that is often only a fix "
"for the symptom and not the actual problem."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1347(para)
msgid ""
"This is a collection of ideas from experienced operators on common things to "
"look at that may be the cause of slowness. It is not, however, designed to "
"be an exhaustive list."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1355(title)
msgid "OpenStack Identity service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1356(para)
msgid ""
"If OpenStack Identity is responding slowly, it could be due to the token "
"table getting large. This can be fixed by running the "
"command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1361(para)
msgid ""
"Additionally, for Identity-related issues, try the tips in ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1370(para)
msgid ""
"OpenStack Image service can be slowed down by things related to the Identity "
"service, but the Image service itself can be slowed down if connectivity to "
"the back-end storage in use is slow or otherwise problematic. For example, "
"your back-end NFS server might have gone down."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1381(title)
msgid "OpenStack Block Storage service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1382(para)
msgid ""
"OpenStack Block Storage service is similar to the Image service, so start by "
"checking Identity-related services, and the back-end storage. Additionally, "
"both the Block Storage and Image services rely on AMQP and SQL "
"functionality, so consider these when debugging."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1392(title)
msgid "OpenStack Compute service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1393(para)
msgid ""
"Services related to OpenStack Compute are normally fairly fast and rely on a "
"couple of backend services: Identity for authentication and authorization), "
"and AMQP for interoperability. Any slowness related to services is normally "
"related to one of these. Also, as with all other services, SQL is used "
"extensively."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1404(title)
msgid "OpenStack Networking service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1405(para)
msgid ""
"Slowness in the OpenStack Networking service can be caused by services that "
"it relies upon, but it can also be related to either physical or virtual "
"networking. For example: network namespaces that do not exist or are not "
"tied to interfaces correctly; DHCP daemons that have hung or are not "
"running; a cable being physically disconnected; a switch not being "
"configured correctly. When debugging Networking service problems, begin by "
"verifying all physical networking functionality (switch configuration, "
"physical cabling, etc.). After the physical networking is verified, check to "
"be sure all of the Networking services are running (neutron-server, neutron-"
"dhcp-agent, etc.), then check on AMQP and SQL back ends."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1422(title)
msgid "AMQP broker"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1423(para)
msgid ""
"Regardless of which AMQP broker you use, such as RabbitMQ, there are common "
"issues which not only slow down operations, but can also cause real problems."
" Sometimes messages queued for services stay on the queues and are not "
"consumed. This can be due to dead or stagnant services and can be commonly "
"cleared up by either restarting the AMQP-related services or the OpenStack "
"service in question."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1435(title)
msgid "SQL back end"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1436(para)
msgid ""
"Whether you use SQLite or an RDBMS (such as MySQL), SQL interoperability is "
"essential to a functioning OpenStack environment. A large or fragmented "
"SQLite file can cause slowness when using files as a back end. A locked or "
"long-running query can cause delays for most RDBMS services. In this case, "
"do not kill the query immediately, but look into it to see if it is a "
"problem with something that is hung, or something that is just taking a long "
"time to run and needs to finish on its own. The administration of an RDBMS "
"is outside the scope of this document, but it should be noted that a "
"properly functioning RDBMS is essential to most OpenStack services."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1457(title)
msgid "Uninstalling"
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1459(para)
msgid ""
"While we'd always recommend using your automated deployment system to "
"reinstall systems from scratch, sometimes you do need to remove OpenStack "
"from a system the hard way. Here's how:uninstall operation maintenance/debugging"
"primary>uninstalling "
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1472(para)
msgid "Remove all packages."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1476(para)
msgid "Remove remaining files."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1480(para)
msgid "Remove databases."
msgstr ""
#: ./doc/openstack-ops/ch_ops_maintenance.xml:1484(para)
msgid ""
"These steps depend on your underlying distribution, but in general you "
"should be looking for \"purge\" commands in your package manager, like "
"aptitude purge ~c $package . Following this, you can look "
"for orphaned files in the directories referenced throughout this guide. To "
"uninstall the database properly, refer to the manual appropriate for the "
"product in use."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:15(title)
msgid "Scaling"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:17(para)
msgid ""
"Whereas traditional applications required larger hardware to scale "
"(\"vertical scaling\"), cloud-based applications typically request more, "
"discrete hardware (\"horizontal scaling\"). If your cloud is successful, "
"eventually you must add resources to meet the increasing demand.scaling vertical vs. "
"horizontal "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:27(para)
msgid ""
"To suit the cloud paradigm, OpenStack itself is designed to be horizontally "
"scalable. Rather than switching to larger servers, you procure more servers "
"and simply install identically configured services. Ideally, you scale out "
"and load balance among groups of functionally identical services (for "
"example, compute nodes or nova-api nodes), that "
"communicate on a message bus."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:35(title)
msgid "The Starting Point"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:37(para)
msgid ""
"Determining the scalability of your cloud and how to improve it is an "
"exercise with many variables to balance. No one solution meets everyone's "
"scalability goals. However, it is helpful to track a number of metrics. "
"Since you can define virtual hardware templates, called \"flavors\" in "
"OpenStack, you can start to make scaling decisions based on the flavors "
"you'll provide. These templates define sizes for memory in RAM, root disk "
"size, amount of ephemeral data disk space available, and number of cores for "
"starters.virtual machine (VM)"
"primary> hardware"
"primary>virtual hardware flavor scaling metrics for "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:58(para)
msgid ""
"The default OpenStack flavors are shown in ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:64(caption)
msgid "OpenStack default flavors"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:80(th)
msgid "Virtual cores"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:82(th)
msgid "Memory"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:92(para)
msgid "m1.tiny"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:96(para)
msgid "512 MB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:98(para)
msgid "1 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:100(para)
msgid "0 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:104(para)
msgid "m1.small"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:108(para)
msgid "2 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:110(para) ./doc/openstack-ops/ch_arch_scaling.xml:122(para) ./doc/openstack-ops/ch_arch_scaling.xml:134(para) ./doc/openstack-ops/ch_arch_scaling.xml:146(para)
msgid "10 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:112(para)
msgid "20 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:116(para)
msgid "m1.medium"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:120(para)
msgid "4 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:124(para)
msgid "40 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:128(para)
msgid "m1.large"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:132(para)
msgid "8 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:136(para)
msgid "80 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:140(para)
msgid "m1.xlarge"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:142(para)
msgid "8"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:144(para)
msgid "16 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:148(para)
msgid "160 GB"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:156(para)
msgid ""
"The number of virtual machines (VMs) you expect to run, ((overcommit "
"fraction "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:162(para)
msgid "How much storage is required (flavor disk size "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:153(para)
msgid ""
"The starting point for most is the core count of your cloud. By applying "
"some ratios, you can gather information about: You can use "
"these ratios to determine how much additional infrastructure you need to "
"support your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:168(para)
msgid ""
"Here is an example using the ratios for gathering scalability information "
"for the number of VMs expected as well as the storage needed. The following "
"numbers support (200 / 2) 16 = 1600 VM instances and require 80 TB of "
"storage for /var/lib/nova/instances:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:175(para)
msgid "200 physical cores."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:179(para)
msgid ""
"Most instances are size m1.medium (two virtual cores, 50 GB of storage)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:184(para)
msgid ""
"Default CPU overcommit ratio (cpu_allocation_ratio in nova."
"conf) of 16:1."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:190(para)
msgid ""
"Regardless of the overcommit ratio, an instance can not be placed on any "
"physical node with fewer raw (pre-overcommit) resources than instance flavor "
"requires."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:195(para)
msgid ""
"However, you need more than the core count alone to estimate the load that "
"the API services, database servers, and queue servers are likely to "
"encounter. You must also consider the usage patterns of your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:200(para)
msgid ""
"As a specific example, compare a cloud that supports a managed web-hosting "
"platform with one running integration tests for a development project that "
"creates one VM per code commit. In the former, the heavy work of creating a "
"VM happens only every few months, whereas the latter puts constant heavy "
"load on the cloud controller. You must consider your average VM lifetime, as "
"a larger number generally means less load on the cloud controller.cloud controllers"
"primary>scalability and "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:212(para)
msgid ""
"Aside from the creation and termination of VMs, you must consider the impact "
"of users accessing the service—particularly on nova-api "
"and its associated database. Listing instances garners a great deal of "
"information and, given the frequency with which users run this operation, a "
"cloud with a large number of users can increase the load significantly. This "
"can occur even without their knowledge—leaving the OpenStack dashboard "
"instances tab open in the browser refreshes the list of VMs every 30 seconds."
""
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:221(para)
msgid ""
"After you consider these factors, you can determine how many cloud "
"controller cores you require. A typical eight core, 8 GB of RAM server is "
"sufficient for up to a rack of compute nodes — given the above caveats."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:226(para)
msgid ""
"You must also consider key hardware specifications for the performance of "
"user VMs, as well as budget and performance needs, including storage "
"performance (spindles/core), memory availability (RAM/core), network "
"bandwidthbandwidth"
"primary>hardware specifications and (Gbps/"
"core), and overall CPU performance (CPU/core)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:236(para)
msgid ""
"For a discussion of metric tracking, including how to extract metrics from "
"your cloud, see ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:243(title)
msgid "Adding Cloud Controller Nodes"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:245(para)
msgid ""
"You can facilitate the horizontal expansion of your cloud by adding nodes. "
"Adding compute nodes is straightforward—they are easily picked up by the "
"existing installation. However, you must consider some important points when "
"you design your cluster to be highly available.compute nodes adding "
"indexterm>high availability"
"primary> configuration "
"options high availability "
"indexterm>cloud controller nodes"
"primary>adding scaling adding cloud controller "
"nodes "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:269(para)
msgid ""
"Recall that a cloud controller node runs several different services. You can "
"install services that communicate only using the message queue "
"internally—nova-scheduler and nova-console—on a "
"new server for expansion. However, other integral parts require more care."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:275(para)
msgid ""
"You should load balance user-facing services such as dashboard, nova-"
"api, or the Object Storage proxy. Use any standard HTTP load-"
"balancing method (DNS round robin, hardware load balancer, or software such "
"as Pound or HAProxy). One caveat with dashboard is the VNC proxy, which uses "
"the WebSocket protocol—something that an L7 load balancer might struggle "
"with. See also Horizon session storage."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:285(para)
msgid ""
"You can configure some services, such as nova-api and "
"glance-api, to use multiple processes by changing a flag in "
"their configuration file—allowing them to share work between multiple cores "
"on the one machine."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:291(para)
msgid ""
"Several options are available for MySQL load balancing, and the supported "
"AMQP brokers have built-in clustering support. Information on how to "
"configure these and many of the other services can be found in .Advanced Message Queuing Protocol (AMQP) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:302(title)
msgid "Segregating Your Cloud"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:304(para)
msgid ""
"When you want to offer users different regions to provide legal "
"considerations for data storage, redundancy across earthquake fault lines, "
"or for low-latency API calls, you segregate your cloud. Use one of the "
"following OpenStack methods to segregate your cloud: cells"
"emphasis>, regions , availability zones"
"emphasis>, or host aggregates .segregation methods scaling cloud segregation"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:318(para)
msgid ""
"Each method provides different functionality and can be best divided into "
"two groups:"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:323(para)
msgid ""
"Cells and regions, which segregate an entire cloud and result in running "
"separate Compute deployments."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:328(para)
msgid ""
"Availability zones and "
"host aggregates, which merely divide a single Compute deployment."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:334(para)
msgid ""
" provides a comparison view of each "
"segregation method currently provided by OpenStack Compute.endpoints API endpoint"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:343(caption)
msgid "OpenStack segregation methods"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:349(th)
msgid "Cells"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:351(th)
msgid "Regions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:353(th)
msgid "Availability zones"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:355(th)
msgid "Host aggregates"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:361(emphasis)
msgid "Use when you need"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:364(para)
msgid ""
"A single API endpoint for compute, or you require a "
"second level of scheduling."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:367(para)
msgid ""
"Discrete regions with separate API endpoints and no coordination between "
"regions."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:370(para)
msgid ""
"Logical separation within your nova deployment for physical isolation or "
"redundancy."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:373(para)
msgid "To schedule a group of hosts with common features."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:378(emphasis)
msgid "Example"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:380(para)
msgid ""
"A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a "
"particular site."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:383(para)
msgid ""
"A cloud with multiple sites, where you schedule VMs to a particular site and "
"you want a shared infrastructure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:386(para)
msgid "A single-site cloud with equipment fed by separate power supplies."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:389(para)
msgid "Scheduling to hosts with trusted hardware support."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:394(emphasis)
msgid "Overhead"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:396(para)
msgid "Considered experimental."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:396(para)
msgid "A new service, nova-cells."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:397(para)
msgid "Each cell has a full nova installation except nova-api."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:400(para)
msgid "A different API endpoint for every region."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:401(para)
msgid "Each region has a full nova installation."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:404(para) ./doc/openstack-ops/ch_arch_scaling.xml:406(para)
msgid "Configuration changes to nova.conf ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:410(emphasis)
msgid "Shared services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:413(para) ./doc/openstack-ops/ch_arch_scaling.xml:415(para) ./doc/openstack-ops/ch_arch_scaling.xml:417(para) ./doc/openstack-ops/ch_arch_scaling.xml:419(para)
msgid "Keystone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:413(code)
msgid "nova-api"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:417(para) ./doc/openstack-ops/ch_arch_scaling.xml:419(para)
msgid "All nova services"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:425(title)
msgid "Cells and Regions"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:427(para)
msgid ""
"OpenStack Compute cells are designed to allow running the cloud in a "
"distributed fashion without having to use more complicated technologies, or "
"be invasive to existing nova installations. Hosts in a cloud are partitioned "
"into groups called cells . Cells are configured in a "
"tree. The top-level cell (\"API cell\") has a host that runs the nova-"
"api service, but no nova-compute services. Each child "
"cell runs all of the other typical nova-* services found in a "
"regular installation, except for the nova-api service. Each "
"cell has its own message queue and database service and also runs nova-"
"cells, which manages the communication between the API cell and child "
"cells.scaling"
"primary>cells and regions cells cloud segregation"
"secondary> region"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:450(para)
msgid ""
"This allows for a single API server being used to control access to multiple "
"cloud installations. Introducing a second level of scheduling (the cell "
"selection), in addition to the regular nova-scheduler selection "
"of hosts, provides greater flexibility to control where virtual machines are "
"run."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:456(para)
msgid ""
"Unlike having a single API endpoint, regions have a separate API endpoint "
"per installation, allowing for a more discrete separation. Users wanting to "
"run instances across sites have to explicitly select a region. However, the "
"additional complexity of a running a new service is not required."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:462(para)
msgid ""
"The OpenStack dashboard (horizon) can be configured to use multiple regions. "
"This can be configured through the parameter."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:467(title)
msgid "Availability Zones and Host Aggregates"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:469(para)
msgid ""
"You can use availability zones, host aggregates, or both to partition a nova "
"deployment.scaling"
"primary>availability zones "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:476(para)
msgid ""
"Availability zones are implemented through and configured in a similar way "
"to host aggregates."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:479(para)
msgid "However, you use them for different reasons."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:482(title)
msgid "Availability zone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:484(para)
msgid ""
"This enables you to arrange OpenStack compute hosts into logical groups and "
"provides a form of physical isolation and redundancy from other availability "
"zones, such as by using a separate power supply or network equipment."
"availability zone "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:491(para)
msgid ""
"You define the availability zone in which a specified compute host resides "
"locally on each server. An availability zone is commonly used to identify a "
"set of servers that have a common attribute. For instance, if some of the "
"racks in your data center are on a separate power source, you can put "
"servers in those racks in their own availability zone. Availability zones "
"can also help separate different classes of hardware."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:499(para)
msgid ""
"When users provision resources, they can specify from which availability "
"zone they want their instance to be built. This allows cloud consumers to "
"ensure that their application resources are spread across disparate machines "
"to achieve high availability in the event of hardware failure."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:507(title)
msgid "Host aggregates zone"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:509(para)
msgid ""
"This enables you to partition OpenStack Compute deployments into logical "
"groups for load balancing and instance distribution. You can use host "
"aggregates to further partition an availability zone. For example, you might "
"use host aggregates to partition an availability zone into groups of hosts "
"that either share common resources, such as storage and network, or have a "
"special property, such as trusted computing hardware.scaling host aggregate"
"secondary> host aggregate"
"primary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:523(para)
msgid ""
"A common use of host aggregates is to provide information for use with the "
"nova-scheduler . For example, you might use a host "
"aggregate to group a set of hosts that share specific flavors or images."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:528(para)
msgid ""
"The general case for this is setting key-value pairs in the aggregate "
"metadata and matching key-value pairs in flavor's extra_specs"
"parameter> metadata. The AggregateInstanceExtraSpecsFilter"
"parameter> in the filter scheduler will enforce that instances be scheduled "
"only on hosts in aggregates that define the same key to the same value."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:534(para)
msgid ""
"An advanced use of this general concept allows different flavor types to run "
"with different CPU and RAM allocation ratios so that high-intensity "
"computing loads and low-intensity development and testing systems can share "
"the same cloud without either starving the high-use systems or wasting "
"resources on low-utilization systems. This works by setting "
"metadata in your host aggregates and matching "
"extra_specs in your flavor types."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:543(para)
msgid ""
"The first step is setting the aggregate metadata keys "
"cpu_allocation_ratio and "
"ram_allocation_ratio to a floating-point value. The "
"filter schedulers AggregateCoreFilter and "
"AggregateRamFilter will use those values rather than "
"the global defaults in nova.conf when scheduling to "
"hosts in the aggregate. It is important to be cautious when using this "
"feature, since each host can be in multiple aggregates but should have only "
"one allocation ratio for each resources. It is up to you to avoid putting a "
"host in multiple aggregates that define different values for the same "
"resource ."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:557(para)
msgid ""
"This is the first half of the equation. To get flavor types that are "
"guaranteed a particular ratio, you must set the extra_specs"
"parameter> in the flavor type to the key-value pair you want to match in the "
"aggregate. For example, if you define extra_specs"
"parameter>cpu_allocation_ratio to \"1.0\", then "
"instances of that type will run in aggregates only where the metadata key "
"cpu_allocation_ratio is also defined as \"1.0.\" In "
"practice, it is better to define an additional key-value pair in the "
"aggregate metadata to match on rather than match directly on "
"cpu_allocation_ratio or "
"core_allocation_ratio . This allows better abstraction."
" For example, by defining a key overcommit and "
"setting a value of \"high,\" \"medium,\" or \"low,\" you could then tune the "
"numeric allocation ratios in the aggregates without also needing to change "
"all flavor types relating to them."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:576(para)
msgid ""
"Previously, all services had an availability zone. Currently, only the "
"nova-compute service has its own availability zone. "
"Services such as nova-scheduler , nova-network"
"literal>, and nova-conductor have always spanned all "
"availability zones."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:587(para)
msgid "nova host-list (os-hosts)"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:591(para)
msgid "euca-describe-availability-zones verbose"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:583(para)
msgid ""
"When you run any of the following operations, the services appear in their "
"own internal availability zone (CONF.internal_service_availability_zone): "
" The internal availability zone is hidden in euca-describe-"
"availability_zones (nonverbose)."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:596(para)
msgid ""
"CONF.node_availability_zone has been renamed to CONF."
"default_availability_zone and is used only by the nova-api"
"literal> and nova-scheduler services."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:601(para)
msgid "CONF.node_availability_zone still works but is deprecated."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:609(title)
msgid "Scalable Hardware"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:611(para)
msgid ""
"While several resources already exist to help with deploying and installing "
"OpenStack, it's very important to make sure that you have your deployment "
"planned out ahead of time. This guide presumes that you have at least set "
"aside a rack for the OpenStack cloud but also offers suggestions for when "
"and what to scale."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:618(title)
msgid "Hardware Procurement"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:620(para)
msgid ""
"“The Cloud” has been described as a volatile environment where servers can "
"be created and terminated at will. While this may be true, it does not mean "
"that your servers must be volatile. Ensuring that your cloud's hardware is "
"stable and configured correctly means that your cloud environment remains up "
"and running. Basically, put effort into creating a stable hardware "
"environment so that you can host a cloud that users may treat as unstable "
"and volatile.servers"
"primary>avoiding volatility in hardware scalability "
"planning scaling hardware procurement"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:640(para)
msgid ""
"OpenStack can be deployed on any hardware supported by an OpenStack-"
"compatible Linux distribution."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:643(para)
msgid ""
"Hardware does not have to be consistent, but it should at least have the "
"same type of CPU to support instance migration."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:646(para)
msgid ""
"The typical hardware recommended for use with OpenStack is the standard "
"value-for-money offerings that most hardware vendors stock. It should be "
"straightforward to divide your procurement into building blocks such as "
"\"compute,\" \"object storage,\" and \"cloud controller,\" and request as "
"many of these as you need. Alternatively, should you be unable to spend "
"more, if you have existing servers—provided they meet your performance "
"requirements and virtualization technology—they are quite likely to be able "
"to support OpenStack."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:657(title)
msgid "Capacity Planning"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:659(para)
msgid ""
"OpenStack is designed to increase in size in a straightforward manner. "
"Taking into account the considerations that we've mentioned in this "
"chapter—particularly on the sizing of the cloud controller—it should be "
"possible to procure additional compute or object storage nodes as needed. "
"New nodes do not need to be the same specification, or even vendor, as "
"existing nodes.capability"
"primary>scaling and weight capacity planning scaling capacity planning"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:678(para)
msgid ""
"For compute nodes, nova-scheduler will take care of differences "
"in sizing having to do with core count and RAM amounts; however, you should "
"consider that the user experience changes with differing CPU speeds. When "
"adding object storage nodes, a weight should be "
"specified that reflects the capability of the node."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:685(para)
msgid ""
"Monitoring the resource usage and user growth will enable you to know when "
"to procure. details some useful "
"metrics."
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:691(title)
msgid "Burn-in Testing"
msgstr ""
#: ./doc/openstack-ops/ch_arch_scaling.xml:693(para)
msgid ""
"The chances of failure for the server's hardware are high at the start and "
"the end of its life. As a result, dealing with hardware failures while in "
"production can be avoided by appropriate burn-in testing to attempt to "
"trigger the early-stage failures. The general principle is to stress the "
"hardware to its limits. Examples of burn-in tests include running a CPU or "
"disk benchmark for several days.testing burn-in testing"
"secondary> troubleshooting burn-in testing"
"secondary> burn-in "
"testing scaling"
"primary>burn-in testing "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:12(title)
msgid "Customization"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:14(para)
msgid ""
"OpenStack might not do everything you need it to do out of the box. To add a "
"new feature, you can follow different paths.customization paths available"
"secondary> "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:22(para)
msgid ""
"To take the first path, you can modify the OpenStack code directly. Learn "
"how "
"to contribute, follow the code review workflow, make your changes, "
"and contribute them back to the upstream OpenStack project. This path is "
"recommended if the feature you need requires deep integration with an "
"existing project. The community is always open to contributions and welcomes "
"new functionality that follows the feature-development guidelines. This path "
"still requires you to use DevStack for testing your feature additions, so "
"this chapter walks you through the DevStack environment.OpenStack community customization "
"and "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:37(para)
msgid ""
"For the second path, you can write new features and plug them in using "
"changes to a configuration file. If the project where your feature would "
"need to reside uses the Python Paste framework, you can create middleware "
"for it and plug it in through configuration. There may also be specific ways "
"of customizing a project, such as creating a new scheduler driver for "
"Compute or a custom tab for the dashboard."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:44(para)
msgid ""
"This chapter focuses on the second path for customizing OpenStack by "
"providing two examples for writing new features. The first example shows how "
"to modify Object Storage (swift) middleware to add a new feature, and the "
"second example provides a new scheduler feature for OpenStack Compute (nova)."
" To customize OpenStack this way you need a development environment. The "
"best way to get an environment up and running quickly is to run DevStack "
"within your cloud."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:53(title)
msgid "Create an OpenStack Development Environment"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:55(para)
msgid ""
"To create a development environment, you can use DevStack. DevStack is "
"essentially a collection of shell scripts and configuration files that "
"builds an OpenStack development environment for you. You use it to create "
"such an environment for developing a new feature.customization development "
"environment creation for development environments, creating "
"indexterm>DevStack"
"primary>development environment creation "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:71(para)
msgid ""
"You can find all of the documentation at the DevStack website."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:76(title)
msgid "To run DevStack on an instance in your OpenStack cloud:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:80(para)
msgid ""
"Boot an instance from the dashboard or the nova command-line interface (CLI) "
"with the following parameters:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:85(para)
msgid "Name: devstack"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:89(para)
msgid "Image: Ubuntu 14.04 LTS"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:93(para)
msgid "Memory Size: 4 GB RAM"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:97(para)
msgid "Disk Size: minimum 5 GB"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:101(para)
msgid ""
"If you are using the nova client, specify --flavor 3"
"code> for the nova boot command to get adequate memory and disk "
"sizes."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:107(para)
msgid ""
"Log in and set up DevStack. Here's an example of the commands you can use to "
"set up DevStack on a virtual machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:114(replaceable)
msgid "username"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:114(replaceable)
msgid "my.instance.ip.address"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:112(para)
msgid "Log in to the instance: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:118(para)
msgid "Update the virtual machine's operating system: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:124(para)
msgid "Install git: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:130(para)
msgid "Clone the devstack repository: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:137(para)
msgid "Change to the devstack repository: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:145(para)
msgid ""
"(Optional) If you've logged in to your instance as the root user, you must "
"create a \"stack\" user; otherwise you'll run into permission issues. If "
"you've logged in as a user other than root, you can skip these steps:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:152(para)
msgid "Run the DevStack script to create the stack user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:158(para)
msgid ""
"Give ownership of the devstack directory to the stack "
"user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:165(para)
msgid "Set some permissions you can use to view the DevStack screen later:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:172(para)
msgid "Switch to the stack user:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:180(para)
msgid ""
"Edit the local.conf configuration file that controls "
"what DevStack will deploy. Copy the example local.conf "
"file at the end of this section ():"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:189(para)
msgid "Run the stack script that will install OpenStack: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:195(para)
msgid ""
"When the stack script is done, you can open the screen session it started to "
"view all of the running OpenStack services: "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:202(para)
msgid ""
"Press Ctrl A followed "
"by 0 to go to the first screen window."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:214(para)
msgid ""
"The stack.sh script takes a while to run. Perhaps you can take "
"this opportunity to join the OpenStack Foundation."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:221(para)
msgid ""
"Screen is a useful program for viewing many related "
"services at once. For more information, see the GNU screen quick reference."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:229(para)
msgid ""
"Now that you have an OpenStack development environment, you're free to hack "
"around without worrying about damaging your production deployment. provides a working environment for running "
"OpenStack Identity, Compute, Block Storage, Image service, the OpenStack "
"dashboard, and Object Storage as the starting point."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:236(title)
msgid "local.conf"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:253(title)
msgid "Customizing Object Storage (Swift) Middleware"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:255(para)
msgid ""
"OpenStack Object Storage, known as swift when reading the code, is based on "
"the Python Paste "
"framework. The best introduction to its architecture is A Do-It-Yourself "
"Framework. Because of the swift project's use of this framework, you "
"are able to add features to a project by placing some custom code in a "
"project's pipeline without having to change any of the core code.Paste framework Python swift swift middleware"
"secondary> Object Storage"
"primary>customization of customization Object Storage"
"secondary> DevStack"
"primary>customizing Object Storage (swift) "
"indexterm>"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:284(para)
msgid ""
"Imagine a scenario where you have public access to one of your containers, "
"but what you really want is to restrict access to that to a set of IPs based "
"on a whitelist. In this example, we'll create a piece of middleware for "
"swift that allows access to a container from only a set of IP addresses, as "
"determined by the container's metadata items. Only those IP addresses that "
"you explicitly whitelist using the container's metadata will be able to "
"access the container."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:293(para)
msgid ""
"This example is for illustrative purposes only. It should not be used as a "
"container IP whitelist solution without further development and extensive "
"security testing.security issues"
"primary>middleware example "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:302(para)
msgid ""
"When you join the screen session that stack.sh starts with "
"screen -r stack, you see a screen for each service running, "
"which can be a few or several, depending on how many services you configured "
"DevStack to run."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:307(para)
msgid ""
"The asterisk * indicates which screen window you are viewing. This example "
"shows we are viewing the key (for keystone) screen window:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:312(para)
msgid "The purpose of the screen windows are as follows:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:316(code) ./doc/openstack-ops/ch_ops_customize.xml:795(code)
msgid "shell"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:319(para) ./doc/openstack-ops/ch_ops_customize.xml:798(para)
msgid "A shell where you can get some work done"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:324(code)
msgid "key*"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:327(para) ./doc/openstack-ops/ch_ops_customize.xml:806(para)
msgid "The keystone service"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:335(para) ./doc/openstack-ops/ch_ops_customize.xml:814(para)
msgid "The horizon dashboard web application"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:340(code)
msgid "s-{name}"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:343(para)
msgid "The swift services"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:349(title)
msgid "To create the middleware and plug it in through Paste configuration:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:352(para)
msgid ""
"All of the code for OpenStack lives in /opt/stack. Go to the "
"swift directory in the shell screen and edit your middleware "
"module."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:357(para)
msgid "Change to the directory where Object Storage is installed:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:364(para)
msgid "Create the ip_whitelist.py Python source code file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:371(para)
msgid ""
"Copy the code in into "
"ip_whitelist.py . The following code is a middleware "
"example that restricts access to a container based on IP address as "
"explained at the beginning of the section. Middleware passes the request on "
"to another application. This example uses the swift \"swob\" library to wrap "
"Web Server Gateway Interface (WSGI) requests and responses into objects for "
"swift to interact with. When you're done, save and close the file."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:381(title)
msgid "ip_whitelist.py"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:478(para)
msgid ""
"There is a lot of useful information in env and conf"
"code> that you can use to decide what to do with the request. To find out "
"more about what properties are available, you can insert the following log "
"statement into the __init__ method:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:486(para)
msgid "and the following log statement into the __call__ method:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:493(para)
msgid ""
"To plug this middleware into the swift Paste pipeline, you edit one "
"configuration file, /etc/swift/proxy-server.conf :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:501(para)
msgid ""
"Find the [filter:ratelimit] section in /etc/swift/"
"proxy-server.conf , and copy in the following configuration "
"section after it:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:517(para)
msgid ""
"Find the [pipeline:main] section in /etc/swift/proxy-"
"server.conf , and add ip_whitelist after ratelimit to "
"the list like so. When you're done, save and close the file:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:527(para)
msgid ""
"Restart the swift proxy service to make swift use your "
"middleware. Start by switching to the swift-proxy screen:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:533(para)
msgid ""
"Press Ctrl A followed "
"by 3 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:541(para) ./doc/openstack-ops/ch_ops_customize.xml:1040(para)
msgid ""
"Press Ctrl C to kill "
"the service."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:549(para) ./doc/openstack-ops/ch_ops_customize.xml:1048(para)
msgid "Press Up Arrow to bring up the last command."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:553(para) ./doc/openstack-ops/ch_ops_customize.xml:1052(para)
msgid "Press Enter to run it."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:559(para)
msgid ""
"Test your middleware with the swift CLI. Start by switching to "
"the shell screen and finish by switching back to the swift-proxy"
"code> screen to check the log output:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:565(para)
msgid ""
"Press Ctrl A followed "
"by 0."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:573(para)
msgid "Make sure you're in the devstack directory:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:580(para)
msgid "Source openrc to set up your environment variables for the CLI:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:587(para)
msgid "Create a container called middleware-test :"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:594(para)
msgid ""
"Press Ctrl A followed "
"by 3 to check the log output."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:605(para)
msgid "Among the log statements you'll see the lines:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:610(para)
msgid ""
"These two statements are produced by our middleware and show that the "
"request was sent from our DevStack instance and was allowed."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:616(para)
msgid ""
"Test the middleware from outside DevStack on a remote machine that has "
"access to your DevStack instance:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:621(para)
msgid ""
"Install the keystone and swift clients on your "
"local machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:628(para)
msgid ""
"Attempt to list the objects in the middleware-test "
"container:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:641(para)
msgid ""
"Press Ctrl A followed "
"by 3 to check the log output. Look at the swift log "
"statements again, and among the log statements, you'll see the lines:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:656(para)
msgid ""
"Here we can see that the request was denied because the remote IP address "
"wasn't in the set of allowed IPs."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:661(para)
msgid ""
"Back in your DevStack instance on the shell screen, add some metadata to "
"your container to allow the request from the remote machine:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:667(para)
msgid ""
"Press Ctrl A followed "
"by 0 ."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:675(para)
msgid "Add metadata to the container to allow the IP:"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:681(para)
msgid ""
"Now try the command from Step 10 again and it succeeds. There are no objects "
"in the container, so there is nothing to list; however, there is also no "
"error to report."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:690(para)
msgid ""
"Functional testing like this is not a replacement for proper unit and "
"integration testing, but it serves to get you started.testing functional testing"
"secondary> functional "
"testing "
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:701(para)
msgid ""
"You can follow a similar pattern in other projects that use the Python Paste "
"framework. Simply create a middleware module and plug it in through "
"configuration. The middleware runs in sequence as part of that project's "
"pipeline and can call out to other services as necessary. No project core "
"code is touched. Look for a pipeline value in the project's "
"conf or ini configuration files in /etc/"
"<project> to identify projects that use Paste."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:710(para)
msgid ""
"When your middleware is done, we encourage you to open source it and let the "
"community know on the OpenStack mailing list. Perhaps others need the same "
"functionality. They can use your code, provide feedback, and possibly "
"contribute. If enough support exists for it, perhaps you can propose that it "
"be added to the official swift middleware."
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:719(title)
msgid "Customizing the OpenStack Compute (nova) Scheduler"
msgstr ""
#: ./doc/openstack-ops/ch_ops_customize.xml:721(para)
msgid ""
"Many OpenStack projects allow for customization of specific features using a "
"driver architecture. You can write a driver that conforms to a particular "
"interface and plug it in through configuration. For example, you can easily "
"plug in a new scheduler for Compute. The existing schedulers for Compute are "
"feature full and well documented at Scheduling. However, depending on your user's use cases, the "
"existing schedulers might not meet your requirements. You might need to "
"create a new scheduler.customization"
"primary>OpenStack Compute (nova) Scheduler "
"indexterm>