I want to back up my data to Amazon Glacier (because it's very cheap). I found a Bash script that gathers all the files that need to be backed up and encrypts them before sending them to Glacier, but as a non-programmer it's hard for me to evaluate it.
Please look at the script below and tell me whether it seems good, or whether you can recommend something better.
From http://www.triatechnology.com/encryp...cier-in-linux/
Code:
#!/bin/bash
#
# Note: to pull a file from S3, use "s3cmd get s3://bucket/file destinationfile".
# You must have the proper .s3cfg file in place to decrypt the file.
# You may also use "gpg encryptedfile" and supply the encryption passphrase if you download
# from the web interface. Good luck.
# The bucket should be set to transition to Glacier. To retrieve, you need to initiate a
# retrieval request from the S3 web interface. To retrieve an entire folder, there is a
# Windows program called S3 Browser that can transfer entire folders out of Glacier.
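# A rough, untested sketch of the restore steps described above ("myhost" and
# "example.txt" are placeholders; the bucket and config path match the settings
# used further down):
#   s3cmd -c /home/owner/.s3cfg get s3://MyBucket/myhost/home/owner/Documents/example.txt example.txt
# or, for a copy pulled down through the web interface:
#   gpg downloadedfile    # prompts for the passphrase and writes out the decrypted file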
# Define the folders of files to be backed up in SOURCE
SOURCE=(
"/home/owner/Documents"
"/home/owner/Pictures"
"/mnt/files/Photographs"
"/mnt/files/Documents"
"/mnt/files/Home Movies"
)
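# Set IFS to a newline so that paths containing spaces ("Home Movies" above)
# are not split into separate words; the trailing \b merely keeps the newline
# from being stripped by the command substitution.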
IFS=$(echo -en "\n\b")
logFile=/mnt/files/scripts/backupmanifest.log
bucket=MyBucket
s3cfg=/home/owner/.s3cfg
touch $logFile
echo Finding files and performing backup: Please wait...
# for loop to go through each item of array SOURCE which should contain the
# directories to be backed up
for i in "${SOURCE[@]}"
do
# nested for loop to run find command on each directory in SOURCE
for x in `find "$i"`
do
# x is each file or dir found by 'find'. if statement determines if it is a regular file
if [ -f "$x" ]
then
# create a hash to mark the time and date of the file being backed up to compare later for
# incremental backups
fileSize=`stat -c %s "$x"`
modTime=`stat -c %Y "$x"`
myHash=`echo "$x" $fileSize $modTime | sha1sum`
# If statement to see if the hash is found in log, meaning it is already backed up.
# If not found proceed to backup
if ! grep -q "$myHash" "$logFile"
then
echo Currently uploading $x
# s3cmd command to put an encrypted file in the s3 bucket
# s3out var should capture anything in stderr in case of file transfer error or some other
# problem. If s3out is blank, the transfer occurred without incident. if an error occurs
# no output is written to the log file but output is written to an error log and s3out is
# written to the screen.
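# (Note the order of redirections: "2>&1 > /dev/null" duplicates stderr into the
# captured output first, then discards stdout, so s3out holds only errors/warnings.)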
s3out=$(s3cmd -c $s3cfg -e put "$x" "s3://$bucket/$HOSTNAME$x" 2>&1 > /dev/null)
if [ "$s3out" = "" ]
then
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
else
# s3out had content, but it was possibly a warning and not an error. Check whether
# the uploaded file exists with a timestamp within the last 2 minutes. If so, the file
# will be considered uploaded. Two minutes allows for variance between local and remote time signatures.
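# ("s3cmd ls" prints "DATE TIME SIZE s3://..."; awk pulls out the first two
# fields and appends "+0000" so "date" treats the remote timestamp as UTC.)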
date1=$(date --date="$(s3cmd ls "s3://$bucket/$HOSTNAME$x" | awk '{print $1" "$2" +0000"}')" +%s)
date2=$(date +%s)
datediff=$(($date2-$date1))
if [[ $datediff -ge -120 ]] && [[ $datediff -le 120 ]]
then
echo There was a possible error but the time of the uploaded file was written within
echo the last 2 minutes. File will be considered uploaded and recorded as such.
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
echo `date`: $x had warnings but seemed to be successfully uploaded and was logged to main log file >> $logFile.err
else
echo $s3out
echo `date`: $s3out >> $logFile.err
fi
echo ------------------------------------------------------------------------------------
fi
fi
fi
done
done
# Processed all files in SOURCE. Now upload the manifest log and this script itself. They are not encrypted.
echo Uploading $logFile
s3cmd put $logFile s3://Linux-Backup > /dev/null
echo Uploading $0
s3cmd put $0 s3://Linux-Backup > /dev/null
echo
echo Backup to S3 has been completed. You may proceed with life.