For dumping MySQL databases to a DigitalOcean Spaces bucket (or any S3-compatible bucket).
The dump script uses the `--single-transaction` flag to create a consistent backup without locking the database. This only works for InnoDB tables.
The script also assumes you want to dump a single database from your server. If you want to dump all of them, modify the script to call mysqldump with the `--databases` or `--all-databases` option, as sketched below.
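For reference, a minimal sketch of what that dump invocation looks like. The `DB_USER`/`DB_PASS`/`DB_NAME` variables are placeholders (the script's actual flags and variable names may differ):

```sh
# Sketch of the dump step: a consistent InnoDB snapshot, gzipped.
# DB_USER, DB_PASS, and DB_NAME are placeholders for your own values.
mysqldump --single-transaction -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > dump.sql.gz

# To dump every database instead, swap the database name for --all-databases:
# mysqldump --single-transaction --all-databases -u "$DB_USER" -p"$DB_PASS" | gzip > dump.sql.gz
```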
We're using an S3 bucket for storage. If you are not careful, it is possible to expose data in buckets, so make sure all of the following precautions are taken.
- Disable publicly listing file contents on the bucket. This should be the default setting. This does not make files private; it only prevents listing them.
- Save files with a random, unguessable name component (e.g. "20240101-e2cb9d0f252ec0adf2ca4f829b418ab7", not "20240101"). This alone is not secure; it just adds another safety layer.
- Set permissions to "Private" on all uploaded files. This ensures only authenticated users can access the file. This is quite secure.
- Enable encryption and encrypt stored files with a secure encryption key. This ensures that, even in the event a file is compromised, it cannot be read. (An upload sketch combining the last three points follows this list.)
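As one illustration of applying those precautions with s3cmd, here is a hedged sketch of an upload using a random name component, a private ACL, and client-side encryption. The bucket name and file names are placeholders, not values from this repo:

```sh
# Sketch only: upload one dump with the precautions above applied.
SUFFIX=$(openssl rand -hex 16)             # random, unguessable name component
KEY="db/$(date +%Y%m%d)-${SUFFIX}.sql.gz"
# --acl-private keeps the object private; -e encrypts it client-side with
# the GPG passphrase saved during `s3cmd --configure`.
s3cmd put --acl-private -e dump.sql.gz "s3://BUCKET_NAME/${KEY}"
```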
- Make a bucket. Save the name.
- Ensure "File Listing" is set to Restricted (This should be the default.)
- Go to API > Spaces Keys and generate an API key. Save the access and secret keys.
- Create a `.env` file with all the fields shown in the `.env.sample` file.
- `apt install s3cmd` to install s3cmd, the command-line tool for interacting with S3 buckets.
- `s3cmd --configure` to save the configuration.
  - Enter the Access key and Secret key.
  - Select the region (the default may be fine).
  - Enter the Spaces endpoint for the region (e.g. nyc3.digitaloceanspaces.com). This is not the name of the bucket.
  - Enter `%(bucket)s.nyc3.digitaloceanspaces.com` as the template.
  - Set a secure encryption password.
  - Path to GPG program will be the default on Linux.
  - HTTPS is required.
  - You can probably leave the proxy server blank.
  - The installer will test the settings. The test should be successful.
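Beyond the wizard's built-in test, a couple of read-only commands can confirm the configuration works end to end. A small sketch (BUCKET_NAME is a placeholder):

```sh
# Optional sanity checks after `s3cmd --configure`.
s3cmd ls                       # lists your buckets; confirms keys and endpoint
s3cmd info s3://BUCKET_NAME    # shows bucket details; confirms the bucket template
```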
- Set a lifecycle rule on the bucket: `s3cmd setlifecycle lifecycle.xml s3://bucket-name`. The provided `lifecycle.xml` will delete files starting with "db/" after 90 days.
- Ensure mysqldump is available (`which mysqldump`). If it is not, you will need to install mysql-server.
- Now that configuration is done, set up a cron job for `backup.sh` to run (example entry below).
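A minimal sketch of such a cron entry, assuming a daily schedule. The repo path, schedule, and log path are all placeholders; adjust them to your setup:

```sh
# Hypothetical crontab entry (edit with `crontab -e`): run backup.sh
# daily at 03:00 and append its output to a log file.
0 3 * * * /opt/db-backup/backup.sh >> /var/log/db-backup.log 2>&1
```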
- Find the latest backup in your bucket file explorer.
- From your server, run `s3cmd get <s3 url> <output file>`. The s3 url looks like `s3://BUCKET_NAME/FILEPATH`. Decryption is automatic using your stored s3cmd config (if you're recovering from another machine, you'll have to set the encryption password first).
- Decompress it: `gzip -d <file>`
- mysqldumps are just a big ol' list of queries. Plop that in your database: `mysql [connection params] < [dump file]`.
Note: You should restore to an empty database. Create a new database, then pass its name in your mysql params with `-D database_name`. Also, the dump will not create the database user; you'll have to configure that yourself.
If you dumped with `--databases` or `--all-databases` rather than a single database, you do not need to create the database first; the CREATE DATABASE statements are included in the dump.
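Putting the restore steps together, a minimal end-to-end sketch. The bucket name, object key, and database name are all placeholders:

```sh
# Hypothetical restore of a single-database dump; all names are placeholders.
s3cmd get s3://BUCKET_NAME/db/20240101-e2cb9d0f252ec0adf2ca4f829b418ab7.sql.gz dump.sql.gz
gzip -d dump.sql.gz                                  # decryption already happened during s3cmd get
mysql -u root -p -e "CREATE DATABASE restored_db;"   # restore target: a fresh, empty database
mysql -u root -p -D restored_db < dump.sql
```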
It takes about 1 minute per 1.5GB of raw database to create and upload the dump. YMMV.