In Red Hat OpenStack Platform (RHOSP), a Red Hat supported overcloud is built by the undercloud (the director node) using the TripleO deployment service. A single undercloud can deploy and manage multiple overclouds.
Deployment Templates
An overcloud consists of numerous deployed nodes, each configured using a role, which specifies the services and configuration required for a node to perform that role. Predefined RHOSP role definitions are located in the /usr/share/openstack-tripleo-heat-templates/roles directory and can be listed with the openstack overcloud roles list command.
For example, the ComputeHCI role defines a compute node with hyperconverged infrastructure, combining compute and Ceph OSD functionality on a single node.
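As a quick sketch of working with roles from the undercloud (stackrc sourced; the output path ~/templates/roles_data.yaml is just an illustrative choice):

# List the predefined roles shipped in the default roles directory
openstack overcloud roles list

# Generate a roles_data.yaml containing only the roles this overcloud uses
openstack overcloud roles generate -o ~/templates/roles_data.yaml \
  Controller Compute ComputeHCI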
Components and Services
Core Services
Block Storage Service (Cinder)
Cinder -> Ceph.
The Block Storage service provides persistent block storage volumes for instances. Swift provides only object storage, so Cinder schedules volumes onto the back-end Ceph cluster, which also backs the file shares provided by Manila.
Image Service (Glance)
The Image service stores and manages the images used to deploy instances, typically keeping them in an object storage back end.
Orchestration Service (Heat)
In the undercloud, Heat templates deploy each overcloud as a stack.
In the overcloud, Heat templates deploy application workloads as stacks.
All templates are written as YAML files.
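As a brief sketch (assuming the default overcloud stack name and that stackrc or overcloudrc is sourced as appropriate; my-app.yaml is only an example template name):

# On the undercloud: the deployed overcloud is itself a Heat stack
openstack stack list
openstack stack resource list overcloud

# On the overcloud: launch an application workload from a HOT template
openstack stack create --template my-app.yaml my-app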
Dashboard Service (Horizon)
Web UI front-end dashboard. It works much like a console, but is more convenient and intuitive.
The Dashboard service provides a browser-based interface for self-service cloud users to create and configure resources and launch and manage instances and stacks.
Identity Service (Keystone)
It acts like a gatekeeper, verifying anyone who wants to come in.
The Identity service provides domain, project, and user authorization for other overcloud services.
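A minimal sketch against the overcloud (overcloudrc sourced; the demo-project/demo-user names and the password are made up, and the default member role is assumed to exist):

# Create a project and a user, then grant the user the member role on the project
openstack project create demo-project
openstack user create demo-user --project demo-project --password redhat
openstack role add --user demo-user --project demo-project member

# Confirm the assignment
openstack role assignment list --user demo-user --project demo-project --names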
OpenStack Networking Service (Neutron)
It is responsible for networking.
The OpenStack Networking service manages the virtual networking infrastructure: it provides virtual networks and manages the network interfaces attached to instances.
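A minimal sketch of building a tenant network (overcloudrc sourced; all names and the subnet range are illustrative):

# Create a network, a subnet, and a router, then attach the subnet to the router
openstack network create demo-net
openstack subnet create demo-subnet --network demo-net --subnet-range 192.168.10.0/24
openstack router create demo-router
openstack router add subnet demo-router demo-subnet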
Compute Service (Nova)
The Compute service schedules and runs on-demand virtual machines.
It is responsible for creating, starting, stopping, and removing virtual machines.
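A minimal sketch of launching an instance (overcloudrc sourced; the image, flavor, network, and server names are assumptions for this example):

# Pick a flavor and an image, then boot an instance on the demo network
openstack flavor list
openstack image list
openstack server create --image rhel8 --flavor default --network demo-net --wait demo-server1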
Messaging Service (Oslo)
It provides a common messaging framework, giving each service a compatible and consistent set of communication features.
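In RHOSP the messaging transport is RabbitMQ, which runs in the rabbitmq-bundle container shown later in the podman ps output. A quick health-check sketch from a controller node:

# Inspect the RabbitMQ cluster that oslo.messaging uses as its transport
sudo podman exec rabbitmq-bundle-podman-0 rabbitmqctl cluster_status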
Object Store (Swift)
Swift is the original and default object storage back end.
The Object Store service provides self-service cloud user object storage. Other overcloud services that collect data or objects can use the Object Store service as a back end.
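A minimal sketch of storing an object (overcloudrc sourced; the container name and the local file myfile.txt are illustrative):

# Create a container, upload an object into it, and list its contents
openstack container create demo-container
openstack object create demo-container myfile.txt
openstack object list demo-container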
Discretionary and Operational OpenStack Services
Bare Metal Service (Ironic)
Initializes and introspects nodes -> used when scaling the overcloud.
The Bare Metal service locates and prepares compute resources, including bare metal and virtual machines.
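A rough sketch of the usual node registration flow on the undercloud (stackrc sourced; instackenv.json is the conventional node inventory file name, not something shown elsewhere in this article):

# Register new bare metal nodes, introspect their hardware, and mark them available
openstack overcloud node import ~/instackenv.json
openstack overcloud node introspect --all-manageable --provide
openstack baremetal node list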
File Share Service (Manila)
It can use either the NFS or CIFS protocols to provide file sharing to instances.
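A minimal sketch with the manila client (overcloudrc sourced; the share name, size, and client subnet are illustrative, and an NFS-capable share type is assumed to be configured):

# Create a 1 GB NFS share and allow a client subnet to mount it
manila create NFS 1 --name demo-share
manila access-allow demo-share ip 192.168.10.0/24
manila list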
Load Balancing Service (Octavia)
Octavia is an HAProxy-based service. The Load Balancing service distributes network traffic, with failover, to instances in a multitier application architecture.
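A sketch of building a basic HTTP load balancer (overcloudrc sourced; demo-subnet, the member address, and the resource names are illustrative):

# Create a load balancer with an HTTP listener, a round-robin pool, and one back-end member
openstack loadbalancer create --name demo-lb --vip-subnet-id demo-subnet
openstack loadbalancer listener create --name demo-listener \
  --protocol HTTP --protocol-port 80 demo-lb
openstack loadbalancer pool create --name demo-pool \
  --listener demo-listener --protocol HTTP --lb-algorithm ROUND_ROBIN
openstack loadbalancer member create --subnet-id demo-subnet \
  --address 192.168.10.11 --protocol-port 80 demo-pool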
Managing the Overcloud
Listing all Overcloud nodes
(undercloud) [stack@director ~]$ openstack server list \
> -c 'Name' -c 'Status' -c 'Networks'
+-------------+--------+------------------------+
| Name        | Status | Networks               |
+-------------+--------+------------------------+
| controller0 | ACTIVE | ctlplane=172.25.249.56 |
| computehci0 | ACTIVE | ctlplane=172.25.249.54 |
| compute1    | ACTIVE | ctlplane=172.25.249.53 |
| compute0    | ACTIVE | ctlplane=172.25.249.59 |
| ceph0       | ACTIVE | ctlplane=172.25.249.58 |
+-------------+--------+------------------------+
High Availability
Red Hat OpenStack Platform uses Pacemaker as the cluster resource manager, HAProxy as the load balancing cluster service, and MariaDB Galera as the replicated database service.
RHOSP Director installs a duplicate set of OpenStack components on each controller node and manages them as a single service.
Pacemaker -> the cluster resource manager and scheduler.
HAProxy -> distributes all incoming network traffic across the controller nodes.
MariaDB Galera -> replicates the database across the controller nodes.
You can check the status of the cluster with the pcs command:
[heat-admin@controller0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: controller0 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
  * Last updated: Tue May  6 13:33:04 2025
  * Last change:  Mon May  5 02:29:30 2025 by root via crm_resource on controller0
  * 5 nodes configured
  * 22 resource instances configured
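To drill down further, a couple of follow-up commands (a sketch, run as heat-admin on a controller node):

# Show the state of each Pacemaker-managed resource (HAProxy, Galera, RabbitMQ bundles, and so on)
sudo pcs resource status

# Show cluster membership and quorum details
sudo pcs cluster status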
When the overcloud is first installed, the deployment creates the overcloudrc identity environment file in the stack user's home directory, containing the admin credentials for accessing the overcloud.
stackrc
It is created automatically when the undercloud is deployed and contains the environment variables for the undercloud.
It is used to deploy the overcloud.
overcloudrc
It is generated by TripleO after the overcloud is deployed.
It is the authentication environment file for accessing the overcloud.
With it sourced, you can check and manage overcloud services from the CLI.
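For example, from the stack user on the director node:

# Switch from the undercloud to the overcloud credentials, then query overcloud services
source ~/overcloudrc
openstack service list
openstack compute service list
openstack network agent list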
On a controller node, you can see the overcloud networks on the node's interfaces:
[heat-admin@controller0 ~]$ ip --br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             172.25.249.56/24 172.25.249.50/32 fe80::5054:ff:fe00:f901/64
eth1             UP             fe80::5054:ff:fe01:1/64
eth2             UP             fe80::5054:ff:fe02:fa01/64
eth3             UP             fe80::5054:ff:fe03:1/64
eth4             UP             fe80::5054:ff:fe04:1/64
ovs-system       DOWN
br-prov1         UNKNOWN        fe80::5054:ff:fe03:1/64
genev_sys_6081   UNKNOWN        fe80::3c78:cbff:feee:3bf0/64
o-hm0            UNKNOWN        172.23.3.42/16 fe80::f816:3eff:fecd:dfd4/64
br-int           UNKNOWN        fe80::40e:d7ff:fe34:2944/64
br-ex            UNKNOWN        172.25.250.1/24 172.25.250.50/32 fe80::5054:ff:fe02:fa01/64
br-prov2         UNKNOWN        fe80::5054:ff:fe04:1/64
vlan40           UNKNOWN        172.24.4.1/24 172.24.4.50/32 fe80::b4ca:55ff:fe88:4fd4/64
vlan10           UNKNOWN        172.24.1.1/24 172.24.1.51/32 172.24.1.50/32 172.24.1.52/32 fe80::f45f:9ff:feed:88a4/64
vlan20           UNKNOWN        172.24.2.1/24 fe80::8cd7:ccff:fe8a:c345/64
vlan50           UNKNOWN        172.24.5.1/24 fe80::c2c:bcff:fe88:d879/64
vlan30           UNKNOWN        172.24.3.1/24 172.24.3.50/32 fe80::14aa:e9ff:fe4e:95a2/64
br-trunk         UNKNOWN        fe80::5054:ff:fe01:1/64
The eth0 interface is on the 172.25.249.0/24 provisioning network.
The br-trunk bridge carries the internal networks.
The br-ex bridge is on the 172.25.250.0/24 public network.
Containerized services
As introduced in Chapter 1, all of these services are containerized. Let's check the services running as Podman containers.
[root@controller0 ~]# podman ps --format "table {{.Names}} {{.Status}}"
Names                              Status
openstack-manila-share-podman-0    Up 11 hours ago
openstack-cinder-volume-podman-0   Up 11 hours ago
ovn-dbs-bundle-podman-0            Up 11 hours ago
haproxy-bundle-podman-0            Up 11 hours ago
rabbitmq-bundle-podman-0           Up 11 hours ago
galera-bundle-podman-0             Up 11 hours ago
redis-bundle-podman-0              Up 11 hours ago
ceph-mgr-controller0               Up 11 hours ago
ceph-mon-controller0               Up 11 hours ago
ceph-mds-controller0               Up 11 hours ago
octavia_worker                     Up 11 hours ago
...omitted...
More detailed configuration information about a container can be viewed with the podman inspect command.
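For example (run as root on the controller; keystone is one of the container names from the listing above):

# Dump the full container configuration, or extract a single field with a Go template
podman inspect keystone | less
podman inspect keystone --format '{{.State.Status}}'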
Persistent log files for each service are written on the host under /var/log/containers:
[root@controller0 containers]# pwd
/var/log/containers
[root@controller0 containers]# ls
aodh        glance   heat     keystone   mysql    octavia      placement  stdouts
ceilometer  gnocchi  horizon  manila     neutron  openvswitch  rabbitmq   swift
cinder      haproxy  httpd    memcached  nova     panko        redis
[root@controller0 containers]# cd keystone/
[root@controller0 keystone]# ls
keystone.log        keystone.log.11.gz  keystone.log.14.gz  keystone.log.4.gz  keystone.log.7.gz
keystone.log.1      keystone.log.12.gz  keystone.log.2.gz   keystone.log.5.gz  keystone.log.8.gz
keystone.log.10.gz  keystone.log.13.gz  keystone.log.3.gz   keystone.log.6.gz  keystone.log.9.gz
Containers also retain memory-based log structures that store the container's console STDOUT activity (under /var/log/containers/stdouts). Use podman logs <container> to view a container's console activity, for example podman logs keystone.
The Ceph services are containerized as well, but they are managed by systemd units rather than by Pacemaker:
[root@controller0 stdouts]# systemctl list-units | grep ceph
ceph-mds@controller0.service    loaded active running  Ceph MDS
ceph-mgr@controller0.service    loaded active running  Ceph Manager
ceph-mon@controller0.service    loaded active running  Ceph Monitor
system-ceph\x2dmds.slice        loaded active active   system-ceph\x2dmds.slice
system-ceph\x2dmgr.slice        loaded active active   system-ceph\x2dmgr.slice
system-ceph\x2dmon.slice        loaded active active   system-ceph\x2dmon.slice
Copyright Notice: This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please attribute the original author and source when sharing.