DreamCompute offers multiple clusters (also often called availability zones), which are independent OpenStack installations with their own servers, storage, and control panel. Some clusters offer different features, such as SSD-backed storage, that may better suit a given data storage plan. Migrating data between clusters is not automated at this time, but this guide shows you how to accomplish it yourself.
This guide assumes that you are comfortable working with SSH as well as some command line utilities like dd and the OpenStack CLI. For more information about those utilities, see the articles below.
Things to keep in mind
Here are a few things to keep in mind and plan for when doing a migration.
This guide focuses solely on migrating volumes, which is useful if you have multiple volumes attached to a single instance and wish to maintain that storage layout in the new cluster. Please note that this method does not work with ephemeral storage and is intended only for volume-to-volume copying. If you wish to move instances, please see the article below.
Plan a maintenance window
To avoid corruption or other odd behavior, it is safest to move the data when the volume has no running services or open files. The copying of the data is generally fairly quick for smaller volumes, but for larger volumes could take some time. Remember, the copy must complete before service can be restored.
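When sizing the maintenance window, a back-of-the-envelope estimate is simply the volume size divided by your sustained transfer rate. The figures below (a 100 GB volume at roughly 30 MB/s) are illustrative assumptions, not measured values; substitute your own.

```shell
# Rough copy-time estimate. Example figures only:
# a 100 GB volume at an assumed 30 MB/s sustained rate.
size_mb=$((100 * 1024))        # 100 GB expressed in MB
rate_mb_per_s=30               # assumed sustained transfer rate
minutes=$((size_mb / rate_mb_per_s / 60))
echo "Estimated copy time: about ${minutes} minutes"
# Prints: Estimated copy time: about 56 minutes
```

Real throughput depends on network conditions between clusters and how compressible the data is, so pad the window generously.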
Migrate a Volume using SSH and dd
For this type of move, the assumption is that you have stopped all services using the data on the volume, safely unmounted any partitions on the instance using it, and detached it from the instance in the Volumes menu in the dashboard.
As an overview, the following will be set up to accomplish this task.
 SOURCE CLUSTER       DESTINATION CLUSTER
+---------------+     +---------------+
| Temp Instance |---->| Temp Instance |
+---------------+     +---------------+
        |(mount)              |(mount)
+----------------+    +--------------+
| Volume To Copy |    |  New Volume  |
+----------------+    +--------------+
- Create two new "copy machine" instances (one each in the source and destination clusters), using the smallest available flavor. This guide uses Ubuntu commands. It is also recommended that you make the instances ephemeral, since they will not be needed after the migration is complete.
- For simplicity, the two temporary instances will be connected using passwords instead of SSH keys, but you can do this any way you prefer. To set up the instances for password authentication, turn it on with the following commands.
[user@instance]$ sudo -i
[root@instance]# sed -i -e 's/PasswordAuthentication no/PasswordAuthentication yes/' \
    /etc/ssh/sshd_config
[root@instance]# sed -i -e 's/PermitRootLogin without-password/PermitRootLogin yes/' \
    /etc/ssh/sshd_config
[root@instance]# service ssh restart
[root@instance]# passwd root
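If you want to confirm that the sed expressions behave as expected before touching the real config, you can run them against a scratch file first. The file below is a two-line stand-in, not a full sshd_config:

```shell
# Scratch file containing only the two lines we intend to change
printf 'PasswordAuthentication no\nPermitRootLogin without-password\n' > sshd_test
sed -i -e 's/PasswordAuthentication no/PasswordAuthentication yes/' sshd_test
sed -i -e 's/PermitRootLogin without-password/PermitRootLogin yes/' sshd_test
cat sshd_test
# Prints:
# PasswordAuthentication yes
# PermitRootLogin yes
```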
- Create a volume in the destination cluster that is the same size as (or larger than) the source volume.
- Attach the source volume to the temporary instance in the source cluster. Attach the destination volume to the temporary instance in the destination cluster. There is no need to mount either of them.
- Determine the drive letter of the volume on both instances. Generally, /dev/vda is the boot drive of an instance, so the attached volume will typically appear as /dev/vdb or /dev/vdc. One way to check is with the fdisk command:
[root@instance]# fdisk -l /dev/vdb | grep Disk
[root@instance]# fdisk -l /dev/vdc | grep Disk
- The one that matches the size of the volume is the one to use. They may have different drive letters on each instance, so take note of that.
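The size-matching logic can be rehearsed locally if you want to see it in isolation. In this sketch, sparse files stand in for the two candidate devices, and the 80 GB source volume size is a hypothetical example:

```shell
# Sparse files standing in for /dev/vdb and /dev/vdc (no real disk used)
truncate -s 80G vdb.img
truncate -s 100G vdc.img
target=$((80 * 1024 * 1024 * 1024))   # 80 GB in bytes
for f in vdb.img vdc.img; do
    if [ "$(stat -c %s "$f")" -eq "$target" ]; then
        echo "$f matches the 80 GB source volume"
    fi
done
# Prints: vdb.img matches the 80 GB source volume
```

On the real instances, the same comparison works against the reported device sizes, just with /dev/vdb and /dev/vdc in place of the files.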
- Now the data can be copied using dd and SSH. To keep things simple, log into the instance on the destination cluster using its IPv6 address. Next, in the following command, replace IPV6-OF-SOURCE-INSTANCE with the IPv6 address of the source instance, the first /dev/vdX with the drive letter of the source volume, and the second /dev/vdX with the drive letter of the destination volume.
[root@destinstance]# ssh root@IPV6-OF-SOURCE-INSTANCE \
    "dd if=/dev/vdX | gzip -1 -" | gunzip - | dd of=/dev/vdX
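Because dd overwrites whatever target you point it at, it can be worth rehearsing the pipeline on scratch files before running it against real devices. This local sketch drops the ssh hop and uses files in place of the block devices:

```shell
# Scratch file standing in for the source volume (4 MB of random data)
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null
# Same gzip pipeline as the migration command, minus the ssh hop
dd if=src.img 2>/dev/null | gzip -1 - | gunzip - | dd of=dst.img 2>/dev/null
# The two files should be byte-for-byte identical
cmp src.img dst.img && echo "copy verified"
# Prints: copy verified
```

The gzip -1 stage trades minimal CPU time for less data on the wire; since gzip is lossless, the output of gunzip on the destination side is identical to the source.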
- Once the command has completed, detach the destination volume from the instance and check that it has the data you want by trying to boot it, or by attaching it to another instance. If everything looks correct, you are done and can destroy both temporary instances.