When running a website, it is important to keep a backup of all site files in case anything forces you to reload your data (file corruption, moving hosts, etc.). After building my Ubuntu file server, I knew I had to find a way to mirror this website so files could be recovered if necessary.
I looked at rsync and curlftpfs, but that combination was complex to set up. Soon after, I stumbled on FTPcopy, which has proven a good solution.
Here is a bash script to automate the process and create a daily mirror of whatever FTP server you want to back up.
- Download and install FTPcopy from the repositories.
sudo apt-get install ftpcopy
- Change to a directory that will store the script and open a new text file in vi.
- Press i to enter insert mode, then copy and paste the following text.
#!/bin/bash
# Change into the backup folder, then issue the FTPcopy command
cd /path/to/backup
ftpcopy --no-delete -l 1 -u $USER -p $PASS $HOST $REMOTE .
- Be sure to change the values for website, storage directory, remote directory, host, username and password.
- Save and exit the text file by pressing Esc, then typing :wq.
- Make the script executable by typing
chmod a+x /path/to/script
- Add the script to the crontab so it will be executed on a regular basis. I use Webmin for this type of administration work, but it is possible to use the command line. Use this example to sort out the format. Mine runs daily at 12 AM.
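For reference, a crontab entry that runs the script daily at midnight would look something like this (the script path is a placeholder):

```shell
# minute hour day-of-month month day-of-week command
0 0 * * * /path/to/script
```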
To clarify, this code changes directories into the backup folder, then issues the ftpcopy command. The remote directory public_html is common on many web servers, but be sure to confirm yours before running the script. The --no-delete option means that files are not removed from the backup if they’ve been removed from the web server. The -l option sets the log level, providing feedback on which files are being copied — this output can be viewed in your user mail.
After the first scheduled run has completed, check the folder where your backups reside to make sure files are being added as planned.
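If you want to check from the command line, find can list anything that landed in the last 24 hours. The sketch below is self-contained for illustration — it creates a scratch "backup" directory first; swap in your real backup path:

```shell
# Create a scratch backup directory with one fresh file (illustration only).
backup=$(mktemp -d)
touch "$backup/index.html"

# List files modified within the last 24 hours.
find "$backup" -type f -mtime -1
```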
UPDATE: TV.com (which I use to find the show titles) has since changed their format, so I cannot guarantee that this script works as advertised. I will confirm it in the near future.
UPDATE 2: This AppleScript has been replaced by a more efficient and lightweight Bash script.
After picking up a 750 GB hard drive for my Airport Extreme, I’ve started to rip my DVDs so that I can watch them on my computer. The problem is that the files don’t come out with descriptive episode titles, leaving you guessing when you want to pick a show to watch. If you decide to add the titles yourself, the process can be tedious and time consuming, not to mention downright annoying.
That is why I’ve created a combination Automator/AppleScript workflow to do the job automatically. After you enter the name of the TV show you’re looking for, the program takes the selected files and searches TV.com’s vast episode guide for the correct titles. Once the data has been collected, the titles are added to the selected files. The only preparation you must do is rename the files in the format SxxExx, where Sxx is the season number and Exx is the episode number (S02E05, for example). This step is very easy with a program like NameMangler, which can apply sequential filenames in one pass.
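As an aside, the SxxExx convention is easy to match programmatically. Here is a quick bash sketch (the filename is made up) showing how the season and episode numbers can be pulled out:

```shell
# Extract season and episode numbers from an SxxExx-style filename.
f="MyShow.S02E05.m4v"
if [[ $f =~ S([0-9]{2})E([0-9]{2}) ]]; then
  season=${BASH_REMATCH[1]}
  episode=${BASH_REMATCH[2]}
  echo "Season $season, Episode $episode"   # prints "Season 02, Episode 05"
fi
```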
- Collect files and rename them in the SxxExx format. Using NameMangler seems to work best, but there are other applications out there to do the same thing.
- Select all the files you wish to add titles to.
- Start the program in whichever form you’ve downloaded it: directly from Automator, as the standalone application, or from the contextual right-click menu by navigating to More > Automator > Get TV Titles.
- Type in the name of the TV show you’re looking for episodes from.
- If you’re running it via the contextual menu, you’ll see a status message in the Finder menu bar while it works.
- Once the computer has processed the data, you’ll see all the episodes neatly named.
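Under the hood, the renaming step boils down to something like the bash sketch below. The lookup file, episode code, and title are all hypothetical stand-ins for the data the workflow fetches from TV.com:

```shell
#!/bin/bash
# Work in a scratch directory with one dummy episode file (illustration only).
dir=$(mktemp -d)
cd "$dir" || exit 1
touch "S02E05.m4v"
printf 'S02E05\tExample Title\n' > titles.txt

# For each code/title pair, rename any matching episode file.
while IFS=$'\t' read -r code title; do
  for f in "$code"*.m4v; do
    [ -e "$f" ] && mv "$f" "$code - $title.m4v"
  done
done < titles.txt

ls ./*.m4v   # -> ./S02E05 - Example Title.m4v
```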
So far I’ve been able to edit more than 20 files at once. Of course, your mileage may vary, so please comment with feedback and results.