S3cmd is a command-line tool for uploading, retrieving, and managing data in cloud storage providers that use the S3 protocol, such as DreamObjects. It is ideal for scripts, automated backups triggered from cron, and similar tasks.
The following instructions help you install and configure s3cmd to work with DreamObjects.
These instructions were performed with s3cmd v2.0.2. If you’d like to install a different version, you’ll need to modify the file names appropriately.
Log in to your server via SSH.
Create a bin directory in your home directory if you don’t have one already:
[server]$ mkdir ~/bin
Download the latest release of s3cmd from GitHub:
[server]$ curl -O -L https://github.com/s3tools/s3cmd/archive/v2.0.2.tar.gz
Untar the file:
[server]$ tar xzf v2.0.2.tar.gz
You should now have a directory called s3cmd-2.0.2. Change into that directory:
[server]$ cd s3cmd-2.0.2
Copy the s3cmd executable and S3 folder into the bin directory created earlier:
[server]$ cp -R s3cmd S3 ~/bin
Add the bin directory to your path so that you can execute the newly installed script:
This assumes you’re using the default bash shell. If you use a different shell, add the directory to your PATH in that shell’s own startup file instead.
[server]$ echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile
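A note on quoting here: double quotes make the shell expand $HOME and $PATH at the moment the line is written into the profile, while single quotes keep the variables literal so they are expanded fresh at every login. Both forms put ~/bin on your PATH; the single-quoted form simply stays correct if your PATH changes later. A quick comparison:

```shell
# Double quotes expand variables immediately; the current values get
# baked into the profile line.
echo "export PATH=$HOME/bin:$PATH"
# Single quotes keep the text literal; expansion happens when the
# profile is sourced at login.
echo 'export PATH=$HOME/bin:$PATH'
```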
Execute your bash profile for it to take effect:
[server]$ . ~/.bash_profile
This article uses the new DreamObjects cluster of 'objects-us-east-1.dream.io'. If you have an older DreamObjects account and have not migrated your data yet, your hostname may need to point to 'objects-us-west-1.dream.io' instead. Please review the following migration article for further details.
Instead of following the instructions on the s3cmd site to configure it, just do the following:
- Create a file in your home directory called .s3cfg (note the leading “dot”):
[server]$ cd ~
[server]$ touch .s3cfg
- Copy the content of the code block below into it:
[default]
access_key = Your_DreamObjects_Access_Key
secret_key = Your_DreamObjects_Secret_Key
host_base = objects-us-east-1.dream.io
host_bucket = %(bucket)s.objects-us-east-1.dream.io
enable_multipart = True
multipart_chunk_size_mb = 15
use_https = True
- Replace the placeholder values with your Access Key and Secret Key from the DreamObjects control panel.
Additional configuration settings
You can optionally add the following to your .s3cfg as well. 'website_endpoint' sets the URL template s3cmd uses for static website hosting, and 'verbosity = ERROR' quiets output down to errors only:
website_endpoint = %(bucket)s.objects-website-us-east-1.dream.io
verbosity = ERROR
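For reference, a complete ~/.s3cfg combining the required settings above with these optional ones might look like the following (the two keys are placeholders for your own):

```ini
[default]
access_key = Your_DreamObjects_Access_Key
secret_key = Your_DreamObjects_Secret_Key
host_base = objects-us-east-1.dream.io
host_bucket = %(bucket)s.objects-us-east-1.dream.io
enable_multipart = True
multipart_chunk_size_mb = 15
use_https = True
website_endpoint = %(bucket)s.objects-website-us-east-1.dream.io
verbosity = ERROR
```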
Listing all buckets
[server]$ s3cmd ls
2018-06-28 16:28  s3://my-bucket
Making a bucket
[server]$ s3cmd mb s3://my-bucket-name
Bucket 's3://my-bucket-name/' created
Uploading a file into a bucket
[server]$ s3cmd put testfile.txt s3://my-bucket-name
testfile.txt -> s3://my-bucket-name/testfile.txt  [1 of 1]
 127 of 127   100% in    0s  1522.87 B/s  done
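Because s3cmd reads its credentials from .s3cfg and needs no interaction, uploads like this are easy to schedule from cron, as the introduction suggests. A hypothetical crontab entry (the dump path and bucket name are placeholders) that pushes a nightly database dump at 02:30 might look like:

```shell
# m  h  dom mon dow  command
# Hypothetical nightly backup upload; adjust paths to your setup.
30 2 * * * $HOME/bin/s3cmd put $HOME/backups/db.sql.gz s3://my-bucket-name/
```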
Listing the contents of a bucket
[server]$ s3cmd ls s3://my-bucket-name
2018-06-28 16:29       127   s3://my-bucket-name/testfile.txt
Downloading a file from a bucket
[server]$ s3cmd get s3://my-bucket-name/testfile.txt
s3://my-bucket-name/testfile.txt -> ./testfile.txt  [1 of 1]
 127 of 127   100% in    0s   3.46 kB/s  done
Deleting a file in a bucket
[server]$ s3cmd del s3://my-bucket-name/testfile.txt
File s3://my-bucket-name/testfile.txt deleted
Listing the size of a bucket
[server]$ s3cmd du -H s3://my-bucket-name
40G      s3://my-bucket-name
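In a monitoring or quota script you usually want the raw number rather than the human-readable size; `s3cmd du` without -H prints bytes, and awk can isolate the first field. The echo below stands in for real s3cmd output so the pipeline can be shown without network access:

```shell
# Sample `s3cmd du` output piped through awk to isolate the byte count.
# In a real script, replace the echo with: s3cmd du s3://my-bucket-name
echo "42949672960 s3://my-bucket-name/" | awk '{print $1}'
# prints 42949672960
```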
Recursively making every object in a bucket public
[server]$ s3cmd setacl s3://my-bucket-name --acl-public --recursive
Recursively making every object in a bucket private
[server]$ s3cmd setacl s3://my-bucket-name --acl-private --recursive
Disabling directory listing in a bucket
Setting the bucket's ACL to private prevents anonymous users from listing its contents; objects that are individually public remain accessible at their direct URLs:
[server]$ s3cmd setacl s3://my-bucket-name --acl-private
Working with multiple accounts
It’s possible to use a separate configuration file for each account on DreamObjects. By default, s3cmd reads its configuration from ~/.s3cfg, but you can point it at a different file with the -c option.
[server]$ s3cmd -c .s3cfg-another-identity ls
For convenience, you can use aliases in the ~/.bash_profile file:
# s3cmd aliases for different s3 accounts
alias s3my='s3cmd -c ~/.s3cfg-main-identity'
alias s3alt='s3cmd -c ~/.s3cfg-another-identity'
How to encrypt your data
S3cmd can encrypt your data while uploading to DreamObjects. To use this functionality, you must first configure your .s3cfg file.
Configuring your .s3cfg file
If you followed the instructions above, you've already created your .s3cfg file. You must now add a few lines so you can use encryption. Open the file:
[server]$ nano ~/.s3cfg
Add the following lines. Make sure to create your own password for 'gpg_passphrase':
check_ssl_certificate = True
check_ssl_hostname = True
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = yourpassword
Your configuration is now set up to encrypt data.
Encrypting data while uploading
To encrypt your data while uploading, use the -e flag. In the following example, 'bucket-name' is the name of the bucket you're uploading content to, and testfile.txt is the name of the single file you're uploading:
[server]$ s3cmd -e put testfile.txt s3://bucket-name
upload: '/tmp/tmpfile-zFGwbLHMVEINdHh3615n' -> 's3://bucket-name/testfile.txt'  [1 of 1]
 63 of 63   100% in    0s   129.96 B/s  done
To confirm it is encrypted, navigate to the (Panel > 'Cloud Services' > 'DreamObjects') page. Click the 'View Objects' button under your username. A prompt opens for you to view objects in your bucket.
Click the object, copy the URL, and then paste it into a browser.
You'll see the data in the file is encrypted.
Decrypting a file
When you use s3cmd to download an encrypted file, it's automatically decrypted for you:
[server]$ s3cmd get s3://bucket-name/encrypted-file.txt
download: 's3://bucket-name/encrypted-file.txt' -> './encrypted-file.txt'  [1 of 1]
 70 of 70   100% in    0s   323.19 B/s  done
If you read the contents of the file, you'll see it's no longer encrypted:
[server]$ cat encrypted-file.txt
Testing a file
Signing an S3 URL to provide limited public access
You can manually set a date until which a file can be accessed via a signed URL. To do this, you must first convert the date into a Unix epoch timestamp.
- Decide on the date until which the URL should remain accessible.
- Visit Epoch Converter to convert your date to an Epoch Timestamp. It will look like a string of numbers. For example: 1540232086
- Run the following command to sign the URL until this timestamp. (Make sure to change the bucket name and file name to your actual information.)
[server]$ s3cmd signurl s3://my-bucket/my_file.png 1540232086
http://my-bucket.objects-us-east-1.dream.io/my_file.png?AWSAccessKeyId=DHDPTCQ3WFGHPSS5FAXG&Expires=1540232086&Signature=9nf8f9kG%2FqDa76rmET4R%2FpbtaGM%3D
- This outputs the signed URL. You can now share this URL so anyone can access that file until the date you have specified.
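If your server has GNU date, you can compute the timestamp without a web converter; here 2018-10-22 18:14:46 UTC yields the example value used above:

```shell
# Convert a UTC date to a Unix epoch timestamp (GNU date syntax).
date -u -d "2018-10-22 18:14:46" +%s
# prints 1540232086
```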
You can also sign it for 1 week using this format:
[server]$ s3cmd signurl s3://my-bucket/my_file.png $(echo "`date +%s` + 3600 * 24 * 7" | bc)
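The same one-week offset can be computed with the shell's built-in arithmetic, avoiding the dependency on bc (the bucket and file names are the placeholders from the examples above):

```shell
# 3600 * 24 * 7 = 604800 seconds in a week; $(( )) does the math in-shell.
expires=$(( $(date +%s) + 3600 * 24 * 7 ))
echo "$expires"
# then sign until that moment:
# s3cmd signurl s3://my-bucket/my_file.png "$expires"
```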