Purpose of Destroying the Ceph Pool
A five-node Proxmox VE hyper-converged cluster was originally built to make efficient use of its hardware: high-performance NVMe disks served virtual machine system disks and database data, while large-capacity SATA mechanical disks held low-performance workloads such as images, videos, and shared data. Owing to the organization’s rapid business growth and increased revenue (a personal guess), the decision-makers now want to replace all of the mechanical disks with high-performance NVMe disks and repurpose the removed mechanical disks elsewhere.
Main Steps to Destroy the Ceph Pool
The whole procedure involves two major steps: destroying the Ceph Pool and then destroying the corresponding Ceph OSDs. If the OSD destruction step is skipped, the system will keep throwing errors whenever a cluster node is rebooted after its hard drives have been removed.
It is crucial to destroy the Ceph Pool first and the Ceph OSDs second. If the sequence is reversed, then as the Ceph OSDs are destroyed one by one, the remaining OSDs will automatically start rebalancing the data; once the number of surviving OSDs drops below the minimum the Ceph cluster requires, the system will throw errors, which can cause further problems and leave you uneasy.
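Before destroying anything, it is worth recording the current layout from any cluster node, so you know exactly which pool and which OSDs will be removed. A minimal, read-only survey (the pool name hdd_pool used later in this article is specific to this cluster; yours may differ):

    # list the pools and their usage
    ceph osd pool ls detail
    ceph df
    # show OSDs grouped by host and device class (hdd vs. ssd/nvme)
    ceph osd tree
    # overall cluster health
    ceph -s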
Detailed Operation Process
Step 1: Destroy the Ceph Pool
In the Proxmox VE hyper-converged cluster’s web management interface, select the Ceph Pool you want to destroy, then click the “Destroy” button.
To prevent accidental operations, the system thoughtfully provides a confirmation screen where you must manually input the name of the Ceph Pool you intend to destroy before the action is executed.
Note: Be careful not to destroy the default “device_health_metrics” pool, as recreating it is very cumbersome!
Once the selected Ceph Pool “hdd_pool” is destroyed, the corresponding item under the “Storage” menu in the data center will automatically disappear, without needing manual deletion.
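The same operation can also be performed from the shell of any cluster node. A hedged sketch, assuming the pool is named hdd_pool as above (exact options vary between Proxmox VE releases):

    # Proxmox VE wrapper; newer releases also accept an option to remove the
    # matching Storage entry from /etc/pve/storage.cfg at the same time
    pveceph pool destroy hdd_pool

    # Native Ceph alternative; requires the monitor setting mon_allow_pool_delete=true
    # and deliberately asks for the pool name twice
    ceph osd pool delete hdd_pool hdd_pool --yes-i-really-really-mean-it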
Step 2: Destroy the Ceph OSD
Destroying the Ceph OSDs involves three smaller steps: marking the OSD out, stopping (downing) the OSD, and destroying the OSD. A command-line sketch of the same sequence is given after step 3.
1. Mark the OSD Out: In the Proxmox VE cluster’s web management interface, select the OSD disk you wish to operate on, then click the “Out” button in the upper-right corner of the interface. Be sure to check the OSD’s status after executing the command.
2. After the OSD has been successfully marked out, click the “Stop” button in the upper-right corner to bring the OSD down.
Before going further, confirm that the Ceph cluster has finished rebalancing the data that was on the OSD. You can check this in the Proxmox VE cluster’s web management interface or by running the “ceph health detail” command on any cluster node. In the web interface the health status should be green; on the command line the output should be “HEALTH_OK.”
3. Select the OSD disk that is now in the “Out” and “Down” state, click the “More” button in the upper-right corner, and then click “Destroy” in the submenu.
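For reference, the same three sub-steps can be run from the shell of the node that hosts the disk. A sketch using the made-up ID osd.12; let the cluster settle between steps:

    # 1. mark the OSD out so Ceph migrates its placement groups elsewhere
    ceph osd out osd.12

    # wait for rebalancing to finish before continuing
    ceph health detail    # should eventually report HEALTH_OK again
    ceph -s               # shows recovery/backfill progress

    # 2. stop the OSD daemon (run on the node that owns the disk)
    systemctl stop ceph-osd@12.service

    # 3. destroy the OSD; --cleanup also wipes the disk's LVM metadata
    pveceph osd destroy 12 --cleanup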
Repeat the above three steps until every mechanical OSD disk has been taken out and destroyed. In addition to the graphical interface, these operations can also be performed with command-line instructions, as sketched below.
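If all of the mechanical disks share the hdd device class, their IDs can be listed and retired in a loop. A rough sketch rather than a turnkey script; it assumes the hdd class contains only the disks being removed, and that the stop/destroy steps are run on the node that owns each OSD:

    # list the OSD IDs that belong to the hdd device class
    ceph osd crush class ls-osd hdd

    # retire them one at a time, waiting for the cluster to recover in between
    for id in $(ceph osd crush class ls-osd hdd); do
        ceph osd out "osd.${id}"
        # block until the cluster reports HEALTH_OK again
        while ! ceph health | grep -q HEALTH_OK; do sleep 30; done
        systemctl stop "ceph-osd@${id}.service"
        pveceph osd destroy "${id}" --cleanup
    done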