Raspbian – NFS w/Synology

Synology stuff first…

  • Enabled NFS file sharing on my Synology
  • For each share on my Synology NAS, had to go into NFS Permissions and create a rule
    • Disabled “Asynchronous”

Then followed these instructions…

First, install the necessary packages. I use aptitude, but you can use apt-get instead if you prefer; just put apt-get where you see aptitude:

sudo aptitude install nfs-common portmap

(nfs-common may already be installed, but this will ensure that it installs if it isn’t already)

On the current version of Raspbian, rpcbind (part of the portmap package) does not start by default. This is presumably done to control memory consumption on our small systems. However, it isn’t very big and we need it to make an NFS mount. To enable it manually, so we can mount our directory immediately:

sudo service rpcbind start

To make rpcbind start automatically at boot:

sudo update-rc.d rpcbind enable

Now, let’s mount our share:

Make a directory for your mount. I want mine at /public, so:

sudo mkdir /public

Mount manually to test:

sudo mount -t nfs 192.168.222.34:/public /public

  • (the first path is my server’s share; the second is where I want it mounted locally)

Now, to make it permanent, you need to edit /etc/fstab (as sudo) to make the directory mount at boot. I added a line to the end of /etc/fstab:

192.168.222.34:/public /public nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

The rsize and wsize entries are not strictly necessary but I find they give me some extra speed.
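To sanity-check the new fstab entry without rebooting (the mount point below is just mine; adjust to yours):

sudo mount -a       # mounts everything listed in /etc/fstab, including the new line
mount | grep nfs    # the share should show up in the list of mounted filesystems
df -h /public       # and the NAS should show up as the filesystem backing /public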


Notes:

  • Everything basically needs to be done via sudo
  • With multiple shares, each needs to be mounted independently (see the example fstab lines after this list).  Such as…
    • /public/dir1
    • /public/dir2
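For example, with two shares the /etc/fstab entries might look like this (the export paths after the IP are placeholders; use whatever your NAS actually exports, and make sure each local mount point exists first):

192.168.222.34:/dir1 /public/dir1 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0
192.168.222.34:/dir2 /public/dir2 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0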


Source: http://www.raspbian.org/RaspbianFAQ

Synology Old Backup Cleanup Script

Since I created my own rsync backup solution for my Synology DS’s, I knew I’d need a script to clean up and remove the older incremental backups.

The good thing is that since I’m using rsync’s “--link-dest” option, it should be pretty straightforward to clean up old backups.  Every incremental backup is really a full backup, so I can just delete the oldest directories.

At least, that’s my theory…

I now have over 30 days of backups, so I was able to create and test a script that follows my theory.

I’ve only run this script manually at this point.  Since I’m actually deleting entire directories, I’m somewhat cautious in creating a new cron job to run this automatically.

#!/bin/sh
#
# Updated:
#
#
# Changes
#
# -
#

# Set to the max number of backups to keep
vMaxBackups=30

# Base directory of backups
dirBackups="/volume1/<directory containing all your backups>"

# Command to sort backups
# sorts based on creation time, oldest
# at the top of the list
cmdLs='ls -drc1'

# Count the number of total current backups
# All my backups start with "In"
cntBackups=`$cmdLs ${dirBackups}/In* | wc -l`

# Create the list of all backups
# All backups start with "In"
vBackupDirs=`$cmdLs ${dirBackups}/In*`

vCnt=0

for myD in $vBackupDirs
do
    # Safety mechanism: never drop below 14 remaining backups,
    # even if the vMaxBackups check below fails for some reason
    tmpCnt=`$cmdLs ${dirBackups}/In* | wc -l`

    if [ $tmpCnt -le 14 ]; then
        exit
    fi

    # Main removal code
    # If wanting to test first,
    # comment the "rm" line and uncomment the "echo" line
    # echo "$myD"
    rm -rf "$myD"

    # Track how many directories have been deleted
    vCnt=$((vCnt+1))

    # Exit once the backup count is back down to vMaxBackups
    if [ $((cntBackups-vCnt)) -le $vMaxBackups ]; then
        exit
    fi
done
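If/when I do get comfortable automating it, a crontab line along these lines should do the trick (the script path and name are placeholders for wherever you keep the cleanup script, and the fields must be tab-separated, as covered in the Custom Daily Cron section below):

30      3       *       *       *       root    /<your directory path>/cleanup_backups.sh > /dev/null 2>&1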

Synology crond Restart Script

Restarting crond on a Synology NAS is not hard… Once you learn that Synology has a custom service… service.

The key:  synoservice

Run “synoservice --help” to get educated.

I got tired of typing out multiple commands to restart crond, so I created a quick script to do it for me.

#!/bin/sh

echo
echo Restarting crond
echo

echo Stopping crond
synoservice --stop crond

sleep 1
echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
echo Starting crond
synoservice --start crond

sleep 1

echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
cat /etc/crontab
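Nothing special to use it; save it somewhere handy, make it executable, and run it (the filename is just whatever you called it):

chmod 755 restart_crond.sh
./restart_crond.sh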

Synology Custom Daily Cron

Synology DSM 4.0/4.1 appears to come with only a single crontab file.  There is no implementation for a daily cron directory (that I can find).

So I created my own.

Configure /etc/crontab

  • Open /etc/crontab for editing
  • Add:  “0       1       *       *       *       root    for myCsh in `ls /<your directory path>/cron/cron.daily/*.sh`; do $myCsh;done > /dev/null 2>&1”
    • Don’t forget to put in your real directory path above
    • Must use tabs (NOT spaces) between fields.
    • Spaces in the final field (the one that starts with “for myCsh”) are OK
    • The above has the “for” loop being run every day at 1AM
  • Save
  • Restart cron for changes to take effect
    • synoservice --stop crond
    • synoservice --start crond

Configure Cron Daily Directories

  • These are the directories referenced by the above “for” loop
  • Example:  /<something>/cron/cron.daily
  • cd cron.daily
  • Create empty.sh
    • Needed so that if no real scripts are in the directory, the cron “for” loop will not throw any errors
    • Just having an executable file is enough; it does not even need anything in it
    • touch empty.sh
    • chmod 755 empty.sh

Done!

The cron.daily directory is where I put a script that calls my incremental rsync backup script.
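The wrapper itself can be tiny; something like this (the path and script name are placeholders for my actual ones):

#!/bin/sh
# Lives in cron.daily; just hands off to the real backup script
/<your directory path>/backup_scripts/incremental_backup.sh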

I also created a “cron.test” directory for testing different things.  I usually have its line commented out in crontab, but it follows the same concept as above.

Extra Credit

When I was creating this, I wanted some reassurance that the crontab was actually working as expected, so I created a simple script that, when run, creates a file in the cron.daily directory whose name carries the date and time of the last run.

_lastrun.sh

  • I put the “_” in the name so the file should be the first executed by the “for” loop
#!/bin/sh

shDir="$( cd "$( dirname "$0" )" && pwd )"

rm -f $shDir/*.lastrun   # -f so the first run does not complain when no .lastrun file exists yet

touch $shDir/`date +%Y-%m-%dT%H%M`.lastrun


Synology Custom Remote Backup Solution

Created a personalized rsync incremental remote backup solution.

  • Tested with DSM 4.1 (and a little with 4.0)
  • Uses default rsync
  • Does not require any additional packages (e.g. ipkg)
  • Utilizes SSH with key-based authentication for secure transport

Still To Do
  • Create age-off script
  • Combine Full and Incremental backup scripts

Some details

The below scripts utilize rsync’s “--link-dest” option, so each incremental update takes up very little space compared to the first.  It does require a full backup initially, which is why I currently have 2 scripts.  I believe they can easily be combined, but this is the initial solution.

Hard links are pretty nifty.  Google them and how the rsync “--link-dest” option works.

I do not consider myself an advanced Linux user, so there are probably a number of best practices that this solution ignores, not deliberately, but simply because I don’t know them.
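If you want to see the hard link effect for yourself, a quick throwaway experiment on any Linux box shows it (nothing Synology-specific here):

mkdir full incr
echo "some data" > full/file.txt
ln full/file.txt incr/file.txt        # hard link: both names point at the same data on disk
ls -li full/file.txt incr/file.txt    # same inode number, link count of 2
du -sh full incr                      # the "second copy" costs essentially nothing

That is what rsync’s “--link-dest” does for every unchanged file, which is why each incremental backup looks like a full backup while only storing what actually changed.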

This system should work in both directions between 2 Synology boxes.  I’ve only implemented it in a single direction thus far, but reversing it should be pretty simple.

Full Backup Script
  • Needs to be run the first time to do the initial full backup
  • If possible, I recommend doing this initial backup locally across the LAN and then doing the incremental backups over the WAN
#!/bin/sh

#
# Updated:
#

#
# Changes
#
#
# Future Ideas
# - If there's an rsync error, automatic retry
#

RSYNC=/usr/syno/bin/rsync

#
# Need the directory where the script runs
#
shDir="$( cd "$( dirname "$0" )" && pwd )"

#
# Config files of interest
#
confRemHost="rem_host.sh"
confSrcDirs="src_dirs.conf"
confLastFile="last_backup.conf"

#
# Read in remote host info
#
. ${shDir}/${confRemHost}

#
# Misc Variables
#
vDate=`date +%Y-%m-%dT%H%M`
dirLogBackup="/volume1/<your directory path>/backup_logs"
dirBckupName="Initial_${vDate}"
vRsyncOpts="--archive --partial"
vLogLvl="--verbose"
dirLogs="$shDir"
vLogF=$dirLogs/rsync_${vDate}.log
vErr="no"
vMaxRetries=5

if [ -f "${shDir}/is.test" ]; then
    dirLogBackup=${dirLogBackup}/test
    rDir_base=${rDir_base}/test
fi

if [ ! -d "$dirLogBackup" ]; then
    mkdir -p $dirLogBackup
    chmod 777 $dirLogBackup
fi

exec > $vLogF 2>&1

if [ ! -f "${shDir}/${confSrcDirs}" ]; then
    echo ---
    echo --- $shDir/$confSrcDirs not found. Exiting...
    echo ---
    exit 1
fi

#
# Loop through each directory to backup
# (There may be a better way to do this, but this works)
#
for myD in `cat $shDir/$confSrcDirs`
do
    echo ---
    echo "--- Starting directory backup: $myD"
    echo ---

    if [ -d "$myD" ]; then
        $RSYNC $vRsyncOpts $vLogLvl -e "ssh -i ${rUserKey}" \
            $myD $rUser@$rHost:$rDir_base/$dirBckupName

        # Capture rsync's exit code right away; "$?" inside the "if" below
        # would otherwise reflect the test command, not rsync
        vRC=$?
        if [ "$vRC" -ne "0" ]; then
            vErr="yes"
            echo "ERR ($vRC) : $myD" >> $dirLogs/rsync_${vDate}.err

            # Put a test here for exit code 23, Partial Transfer Error
            # And then retry?
            # a while loop could work
        fi
    else
        echo
        echo "--- WARN: Directory $myD does not exist"
        echo
    fi

    echo ---
    echo "--- Completed directory backup: $myD"
    echo ---
    echo
done

#
# Some cleanup / completion stuff
#
if [ $vErr = "no" ]; then
    echo $dirBckupName > $shDir/$confLastFile   # track last backup dir
else
    chmod 733 $shDir/*.err
    mv $shDir/rsync_*.err $dirLogBackup         # save off err file
fi

#
# Want to move log file to new location
#
chmod 733 $shDir/*.log
mv $shDir/rsync_*.log $dirLogBackup

Incremental Backup Script
  • Added the “--temp-dir” & “--link-dest” options for rsync when compared to the Full Backup script
#!/bin/sh

#
# Updated:
#

#
# Changes
#
# - Added ulimit change to 30720
#

#
# Future Ideas
# - If there's an rsync error, automatic retry
#

RSYNC=/usr/syno/bin/rsync

#
# Need the directory where the script runs
#
shDir="$( cd "$( dirname "$0" )" && pwd )"

#
# Config files of interest
#
confRemHost="rem_host.sh"
confSrcDirs="src_dirs.conf"
confLastFile="last_backup.conf"

#
# Read in remote host info
#
. ${shDir}/${confRemHost}

#
# Misc Variables
#
vDate=`date +%Y-%m-%dT%H%M`
dirLogBackup="/volume1/<your directory path>/backup_logs"
dirBckupName="Incremental_${vDate}"
vRsyncOpts="--archive --partial --delete"
vLogLvl="--verbose"
dirLogs="$shDir"
vLogF=$dirLogs/rsync_${vDate}.log
vErr="no"
vMaxRetries=5
dirTemp="/volume1/meister_backup/backup_temp"

if [ -f "${shDir}/is.test" ]; then
    dirLogBackup=${dirLogBackup}/test
    rDir_base=${rDir_base}/test
fi

if [ ! -d "$dirLogBackup" ]; then
    mkdir -p $dirLogBackup
    chmod 777 $dirLogBackup
fi

exec > $vLogF 2>&1

if [ ! -f "${shDir}/${confLastFile}" ]; then
    echo ---
    echo --- $confLastFile not found. Exiting...
    echo ---
    exit 42
fi

if [ ! -f "${shDir}/${confSrcDirs}" ]; then
    echo ---
    echo --- $shDir/$confSrcDirs not found. Exiting...
    echo ---
    exit 1
fi

vLastDir=`cat $shDir/$confLastFile`

#
# Loop through each directory to backup
# (There may be a better way to do this, but this works)
#
for myD in `cat $shDir/$confSrcDirs`
do
    echo ---
    echo "--- Starting directory backup: $myD"
    echo ---

    if [ -d "$myD" ]; then
        $RSYNC $vRsyncOpts $vLogLvl -e "ssh -i ${rUserKey}" \
            --temp-dir=$dirTemp \
            --link-dest=$rDir_base/$vLastDir \
            $myD $rUser@$rHost:$rDir_base/$dirBckupName

        # Capture rsync's exit code right away; "$?" inside the "if" below
        # would otherwise reflect the test command, not rsync
        vRC=$?
        if [ "$vRC" -ne "0" ]; then
            vErr="yes"
            echo "ERR ($vRC) : $myD" >> $dirLogs/rsync_${vDate}.err

            # Put a test here for exit code 23, Partial Transfer Error
            # And then retry?
            # a while loop could work
        fi
    else
        echo
        echo "--- WARN: Directory $myD does not exist"
        echo
    fi

    echo ---
    echo "--- Completed directory backup: $myD"
    echo ---
    echo
done

#
# Some cleanup / completion stuff
#
if [ $vErr = "no" ]; then
    echo $dirBckupName > $shDir/$confLastFile   # track last backup dir
else
    chmod 733 $shDir/*.err
    mv $shDir/rsync_*.err $dirLogBackup         # save off err file
fi

#
# Want to move log file to new location
#
chmod 733 $shDir/*.log
mv $shDir/rsync_*.log $dirLogBackup

Additional Required Files

  • src_dirs.conf
    • Located in the same directory as both scripts
    • Used by both scripts
    • Simple text file (example contents below)
    • One directory per line
      • These are the directories that you want backed up
      • Sync’d individually via the “for” loop in each script
  • last_backup.conf
    • Located in the same directory as both scripts
    • Used by both scripts
    • Simple text file (example contents below)
    • Should only have 1 line
      • Name of the directory where your last backup was placed
      • Only updated if rsync completes with zero errors
    • Used by the Incremental backup script as the value for rsync’s “--link-dest” option
  • rem_host.sh
##
#
# Variables Defining the remote host / destination server
#
##

rHost=<ip/hostname>
rUser=<remote user, needs to exist on remote box, should not be root/admin>
rDir_base=/volume1/<base backup directory> # target base directory
rUserKey="/<directory tree to private key>/.ssh/<private-key>"
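For reference, src_dirs.conf is just a list of source directories, one per line (these paths are made up; use your real shares):

/volume1/photo
/volume1/video
/volume1/homes

And last_backup.conf ends up holding a single directory name written by the scripts, for example:

Incremental_2013-01-15T0100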


Additional Optional Files

  • There is a little test / debug code in each script.
  • File:  is.test
    • If it exists, the backup scripts put backups and logs into separate “test” directories relative to the normal directories
    • If used, make sure both “test” locations exist first, or an error will occur
    • Made it easier for me to develop and test the scripts in one directory and then just do a “cp” to a “live” directory.
    • All I had to do in the dev directory was “touch is.test”

FYI

  • Private key authentication
    • Used openssl to create the private key.  Nothing special in the creation of the key, so a quick Google search on the process should work fine
    • When setting up, I had some problems getting things to work for the user I created to run the rsyncs
      • It’s entirely possible that, in my troubleshooting and Synology / DSM ignorance, I did (or did not) do something I was (or was not) supposed to do
    • The key thing to remember, though, is to make sure the “.ssh/backup-key” file is placed in the backup user’s home directory (a sketch of the key placement follows this list)
      • In my case, for some reason the user’s home in “/etc/passwd” was not pointing where I expected, so I changed it and restarted SSH
        • Should be able to use “synoservice” to restart SSH
    • Once I figured this out, the rest of the configuration went smoothly.  Luckily I’d had some experience in this area at work, so that may have helped.
  • This entire solution works for me and my system setup.  I imagine there are a number of configurations where it will not work “out of the box”.
    • This may have problems if trying to rsync directories that are located on external devices or mounted from remote hosts or …. other stuff.
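For what it’s worth, the part that tripped me up boils down to standard SSH public key placement on the destination box. A rough sketch, assuming a backup user named “backupuser” with a home under /volume1/homes (the name and paths are just illustrative; the private key referenced by rUserKey stays on the source box):

# On the destination Synology, as root:
grep backupuser /etc/passwd                    # confirm (or fix) the home directory listed here
mkdir -p /volume1/homes/backupuser/.ssh
cat backup-key.pub >> /volume1/homes/backupuser/.ssh/authorized_keys
chmod 700 /volume1/homes/backupuser/.ssh
chmod 600 /volume1/homes/backupuser/.ssh/authorized_keys
chown -R backupuser /volume1/homes/backupuser/.ssh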

Synology Suggestions

Time Backup
  • Potential out-of-the-box backup solution
  • Cannot change the user that is used for syncing.  Forces use of the default admin account.  Unacceptable
CloudStation
  • White and Black Lists
  • Originally envisioned because I don’t want the Windows thumbs.db file to be synced so much
  • Advanced idea:  Enable regular expressions for both lists
    • Could accomplish the original goal with a *.jpg white list
Photo Viewer
  • I’m unable to use it.  Requires pictures to be in specific directories.
  • Would require duplicating all pictures between Photo Viewer directory and sync directory
  • Too much trouble to find a work around

Synology Configuration Items

Tested on DS412+ and DS411 from www.synology.com

  • Followed many of the items in this tutorial: http://www.synology.com/tutorials/how_to_protect_your_Synology.php?lang=us
    • Created new admin account and then disabled default admin account
    • Enabled auto-IP block
  • Used no-ip.com free DDNS service
  • Enabled local SSH
  • Configured with static LAN IPs
  • Enabled CloudStation
    • Limited number of folders able to be sync’d  🙁
    • Created 1 sync shared folder, placed picture and video folders in here