This article uses the new DreamObjects cluster of 'objects-us-east-1.dream.io'. If you have an older DreamObjects account and have not migrated your data yet, your hostname may need to point to 'objects-us-west-1.dream.io' instead. Please review the following migration article for further details.
Rclone is an rsync-like tool for Windows, macOS, Linux, and other operating systems, designed for syncing to and from cloud storage such as DreamObjects. Its features include:
- MD5/SHA1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- Copy mode to just copy new/changed files
- Sync (one way) mode to make a directory identical
- Check mode to check for file hash equality
- Can sync to and from network, e.g., two different cloud accounts
- Optional encryption (Crypt)
- Optional FUSE mount (rclone mount)
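As a rough sketch of how a few of these modes look on the command line (assuming a remote named 'dreamobjects', which this guide configures below, and a hypothetical bucket name 'mybucket'):

```shell
# Copy mode: transfer only new or changed files into the bucket
rclone copy /home/user/photos dreamobjects:mybucket/photos

# Sync (one way) mode: make the bucket path identical to the local
# directory -- files present only in the bucket are deleted
rclone sync /home/user/photos dreamobjects:mybucket/photos

# Check mode: verify that file hashes match on both sides
rclone check /home/user/photos dreamobjects:mybucket/photos
```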
- Download and install rclone for your computer's operating system. You can download it from rclone's site here:
- Once downloaded, navigate to the directory where the file was saved. (NOTE: On Windows, you can navigate to the Downloads directory and extract the rclone files. Then open your command prompt and navigate to the directory where the files were extracted.)
- In your terminal or command prompt, run the following command to begin configuring rclone with the interactive configuration tool:
[server]$ rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter 'n' to set up a new configuration profile (i.e., a remote).
name> dreamobjects
Pick a name you want to assign to this connection. This example uses the name 'dreamobjects'.
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Box
   \ "box"
 5 / Cache a remote
   \ "cache"
 6 / Dropbox
   \ "dropbox"
 7 / Encrypt/Decrypt a remote
   \ "crypt"
 8 / FTP Connection
   \ "ftp"
 9 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
10 / Google Drive
   \ "drive"
11 / Hubic
   \ "hubic"
12 / Local Disk
   \ "local"
13 / Microsoft Azure Blob Storage
   \ "azureblob"
14 / Microsoft OneDrive
   \ "onedrive"
15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
16 / Pcloud
   \ "pcloud"
17 / QingCloud Object Storage
   \ "qingstor"
18 / SSH/SFTP Connection
   \ "sftp"
19 / Webdav
   \ "webdav"
20 / Yandex Disk
   \ "yandex"
21 / http Connection
   \ "http"
Storage> 2
- Next, set the storage type to Amazon S3 (option 2).
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
- Select option 1.
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> your_access_key
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> your_secret_key
Replace your_access_key and your_secret_key with your actual Access Key and its corresponding Secret Key. These can be found in your DreamHost panel on the 'Cloud Services' > 'DreamObjects' page.
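If you had instead chosen option 2 at the env_auth prompt, rclone would read the keys from the standard AWS environment variables rather than from its config file. A minimal sketch (the key values are placeholders; use your own from the DreamObjects panel):

```shell
# Export the credentials rclone picks up when env_auth is true
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
```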
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
   \ "sa-east-1"
   / If using an S3 clone that only understands v2 signatures
15 | eg Ceph/Dreamhost
   | set this and make sure you set the endpoint.
   \ "other-v2-signature"
   / If using an S3 clone that understands v4 signatures set this
16 | and make sure you set the endpoint.
   \ "other-v4-signature"
region>
Leave this option blank and press [Enter] to continue.
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> objects-us-east-1.dream.io
- Enter DreamObjects' hostname as the endpoint for the S3 API.
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint>
- As with the Region, leave this option blank and press [Enter] to continue.
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> private
- Set the canned ACL based on how you want to use rclone. This example uses "private".
For more information about DreamObjects' S3-compatible API, read DreamHost's article here:
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
- DreamObjects (more specifically, Ceph) does not currently support server-side encryption, so select option 1.
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
- Leave this option blank and press [Enter] to continue.
Remote config
--------------------
[dreamobjects]
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-east-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
- Review the settings shown; if everything looks correct, enter 'y' to save the remote.
Current remotes:

Name                 Type
====                 ====
dreamobjects         s3

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q> q
- Finally, enter 'q' to quit the configuration wizard.
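The wizard writes these settings to rclone's configuration file. In recent rclone versions you can confirm where that file lives and review the saved remote (output paths will vary by system):

```shell
# Print the path of the config file rclone is using
rclone config file

# Display the saved settings for all remotes
# (note: the secret key is stored in this file, so protect it)
rclone config show
```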
View the following for a list of commands you can use with rclone:
With a remote configured, you can list the buckets in it with this command:
[user@localhost]$ rclone lsd dreamobjects:
          -1 2016-03-04 02:19:25        -1 samplebucket
          -1 2016-05-16 22:06:53        -1 anotherbucket
          -1 2015-10-15 21:33:25        -1 greatbucket
Make a new bucket:
[user@localhost]$ rclone mkdir dreamobjects:mynewbucket
Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket:
[user@localhost]$ rclone sync /home/local/directory dreamobjects:mynewbucket
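Because sync deletes destination files that are absent from the source, it can be worth previewing the operation first. A cautious pattern, using the same example paths:

```shell
# Show what would be transferred or deleted, without changing anything
rclone sync --dry-run /home/local/directory dreamobjects:mynewbucket

# Once the plan looks right, run it for real with verbose output
rclone sync -v /home/local/directory dreamobjects:mynewbucket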
List the contents of a bucket:
[user@localhost]$ rclone ls dreamobjects:mynewbucket
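After an upload, you can also verify integrity and see how much data a bucket holds; for example, with the same example bucket:

```shell
# Compare file hashes between the local directory and the bucket
rclone check /home/local/directory dreamobjects:mynewbucket

# Report the total object count and size of the bucket
rclone size dreamobjects:mynewbucket
```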
For more examples on how to use the software, see rclone's official documentation here: