Preface: Common ownCloud Path

(This is part of My ownCloud Adventure)

For any adventure to come to a successful conclusion, the proper preparations must first be made.

With my previous experience working with the Raspberry Pi I was able to quickly get a dedicated server setup and connected to my Synology NAS via NFS.

I should mention here, to plant a seed of thought, that throughout my endeavors the security posture of my system has been a constant consideration.  As an example, with my NFS configuration there are mounts available on my network that I did not give my ownCloud host access to… I am just not comfortable with some files being remotely accessible.

While not exhaustive, there are some common tasks that should probably be performed when setting up a new Raspbian instance:

SD Card Images

Throughout my adventure I made extensive use of Win32 Disk Imager to create images of the SD card.  This allowed me to configure common features once and just reload an image to start over if needed.

For example, I have an image that I created after performing my basic Raspbian updates and configurations.  After that I have an image with the SSL certs and MySQL setup already completed.  This definitely made it much easier to go from Apache2, to lighttpd, and finally end up at nginx with a “clean” system.

SSL Certs

To allow any of the webservers to utilize HTTPS, generating SSL certificates is the first task.  There are MANY resources available out there, but here are the basic commands I performed.

  1. cd /etc/ssl
  2. sudo mkdir localcerts
  3. sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/ssl/localcerts/myServer.fqdn.pem -keyout /etc/ssl/localcerts/myServer.fqdn.key
  4. sudo chmod 600 localcerts/myServer.fqdn*

These commands result in 2 files as output:  a PEM certificate & a key.  Both are used by any webserver to enable HTTPS.

You will be asked a number of questions during key generation.  Since this results in a self-signed key, answer them however you like.  Except for the FQDN question, I’m not sure any of them even technically matter.  And in the case of the FQDN question, I didn’t care if its value matched my dynamic DNS name or not.

The one important technical detail is that if you do not want to enter a password every time your webserver starts, then do not enter a password when prompted.
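
If you want to sanity-check what was generated, openssl can print the certificate back out.  A quick inspection sketch using the same paths as above:

openssl x509 -in /etc/ssl/localcerts/myServer.fqdn.pem -noout -subject -dates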

Good Resources:

MySQL

ownCloud supports multiple database backends, but I chose MySQL since it’s familiar to me (although I do wish MariaDB were available in the Raspbian repository).

  1. sudo aptitude
    1. Install MySQL server
    2. The install will ask for a ‘root’ password for your new database server
  2. mysql_secure_installation
    • A script that performs a number of standard best-practice configurations.  Be sure to follow its recommendations!
  3. mysql -u root -p
    • No need to put your password in as an option, you will be prompted
  4. At the “mysql>” prompt
    • create database myOcDbase;
    • create user 'myOcUser'@'localhost' identified by 'myUserPass';
    • create user 'myOcUser'@'127.0.0.1' identified by 'myUserPass';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'localhost';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'127.0.0.1';
    • exit
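
As an optional sanity check (a sketch using the same placeholder names as above), the new user should now be able to connect and see the empty database:

mysql -u myOcUser -p -h 127.0.0.1
mysql> show databases;    # myOcDbase should appear in the list
mysql> exit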

Good Resource:  http://dev.mysql.com/doc/refman/5.5/en/index.html

Acquiring ownCloud

Getting a hold of ownCloud is not difficult and can be accomplished via various means.

I originally dabbled with manually adding an ownCloud repository to my system’s repo list.  I just followed the instructions found for Debian off ownCloud’s Linux packages install link.

  1. cd /etc/apt/sources.list.d
  2. sudo nano owncloud.list
    • Enter:  “deb http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/ /”
    • save and exit
  3. cd
  4. wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
  5. sudo apt-key add - < Release.key
  6. sudo apt-get update
  7. sudo apt-get install owncloud

While this method did work and is not a bad way to go, especially considering its many advantages… I was unsure of how quickly the repository would be updated with new versions, so I instead elected to go with the manual install.

  • cd
  • wget http://download.owncloud.org/community/owncloud-5.0.10.tar.bz2
    • As versions change, this link will change.  So be sure to get the latest Tar link.
  • tar -xjvf owncloud-5.0.10.tar.bz2
  • mv owncloud owncloud_5.0.10
  • cp -r owncloud_5.0.10 /var/www
  • cd /var/www
  • sudo chown -R www-data:www-data owncloud_5.0.10
  • sudo ln -s owncloud_5.0.10 owncloud
    • Using a symbolic link in this fashion can help in the future with manual updates.  Just follow ownCloud’s manual update instructions and pre-position the latest version’s directory under /var/www and re-do the symlink for a quick and easy upgrade
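
To illustrate that last point, here is a rough sketch of what a future manual upgrade could look like with the symlink approach (the new version directory name is a placeholder, and ownCloud’s own manual update instructions still apply):

cd /var/www
sudo cp -r ~/owncloud_<newVersion> .
sudo chown -R www-data:www-data owncloud_<newVersion>
sudo rm owncloud
sudo ln -s owncloud_<newVersion> owncloud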

And that seems to wrap up the common activities across each of the volumes in my adventure.

My ownCloud Adventure

I recently came across a project that very quickly caught my interest.

It’s called ownCloud.

It’s open source and gives you your own personal Dropbox-like cloud.  It has a number of available features that could give one the ability to move off Google… If they work properly, which I cannot say at this time as I have yet to dive into those features.

If that all sounds interesting, I do encourage you to go take a look.

As the title says, getting my own ownCloud instance up and running has been an adventure.  As concerning as that may initially sound, I suppose I should clarify from the beginning that the adventure is ALL of my own doing.  Overall, the actual installation and configuration of my personal ownCloud instance has gone exceptionally smoothly… With only one, not so minor, issue.

The one “not so minor” issue is that the provided Gallery application does not work for me.  This may have something to do with the fact that I’m running ownCloud off a dedicated, but still resource limited, Raspberry Pi connected via NFS to a Synology NAS and (at this point) only have PHP configured to use 256MB of memory per script… But (without any evidence) I do believe there is an underlying issue with the gallery app.  I also believe it will be fixed in time.  So for me, patience is required…

Unless an alternative gallery app, such as Gallery2, fills the void.  Unfortunately, while it showed some initial promise by actually displaying my root picture folder, my inability to go below the root level found a bug…

I guess I’m just a natural at this software testing stuff.

Back to my adventure though…

I now have an ownCloud 5.x instance adequately running on a dedicated Raspberry Pi, using Raspbian for the OS and nginx as the webserver.  It’s configured with SSL, PHP APC and MySQL.

(For philosophical reasons I would have preferred to go with MariaDB, but at this time it is not available via the repositories, so my philosophical reasons are apparently limited: I preferred the easy install route over building MariaDB myself.)

It seems I took a cue from many great adventures, though, and given away the ending.  So I suppose it’s time to start at the beginning…

I did not start with nginx.  My adventure actually started with Apache, as that’s what I’ve had the most experience using.  I also did not go from Apache straight to nginx.  I had a brief fling with lighttpd.

So for proper documentation purposes, I plan to detail my adventure across several “volumes”.

I did mention that I started with the ending, but I think I’ll continue following the popular adventure recipe and consider the ending more of a new beginning…

New rsync Remote, Incremental Backup Script – n2nBackup

With my Pi I took the opportunity to re-write my rsync backup scripts.

This new setup does everything my first shot did, especially incremental backups via rsync’s “--link-dest” option, but I also believe it is more modular, even though I have not had the need to use all its (perceived) capabilities… Or completed them all.
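
For context, the core of an incremental rsync backup with “--link-dest” looks roughly like this; the paths here are placeholders, not my actual profile values:

# unchanged files become hard links into the previous backup,
# so each new backup looks complete while only storing what changed
rsync --archive --partial --delete \
      --link-dest=/backups/<previous_backup> \
      /data/to/backup/ user@backuphost:/backups/<new_backup>/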

Some Basic Capabilities

  • Does incremental local or remote rsync’ing
  • Able to use private certs for authentication
  • Accounts for labeling backups daily or monthly
  • Uses rsync profile directories
  • (future) Allows pre & post command executions

Setup Structure

n2nBackup:
total 20
drwxrwxr-- 2 4096 May 7 22:10 logs
-rwxrwxr-- 1 6857 May 7 22:10 n2nRsync.sh
-rwxrwxr-- 1 245 May 7 22:10 n2nWrapper.sh
drwxrwxr-- 3 4096 May 7 22:11 profiles

n2nBackup/logs:
total 0

n2nBackup/profiles:
total 4
drwxr-xr-- 2 4096 May 7 22:10 template

n2nBackup/profiles/template:
total 24
-rw-r--r-- 1 367 May 7 22:10 dest.conf
-rw-r--r-- 1 78 May 7 22:10 excludes.conf
-rwxr-xr-- 1 21 May 7 22:10 n2nPost.sh
-rwxr-xr-- 1 21 May 7 22:10 n2nPre.sh
-rw-r--r-- 1 585 May 7 22:10 rsync.conf
-rw-r--r-- 1 126 May 7 22:10 src_dirs.conf

Script Usage

Usage: n2nRsync.sh [-p profile] [-c] [-l] [-t] [-h]

 -p profile              rsync profile to execute
 -c                      use when running via cron
                         when used, will output to log file
                         otherwise, defaults to stdout
 -l                      list profiles available
 -t                      runs with --dry-run enabled
 -h                      shows this usage info

Sample crontab Usage

00 00 * * * /dir/n2nBackup/n2nRsync.sh -p profName -c

Some Details

Right now I’m running everything with n2nRsync.sh.  I have not implemented the n2nWrapper or the pre & post command execution stuff.  In my previous backup script, which ran directly on my Synology NAS, I had a need for some pre-backup commands, but for whatever reason… Past bad coding… Ignorance… Synology quirks… Accessing the data via NFS now… The issues I had to work around are no longer being experienced.

I still need to create cleanup scripts that will age off data based on specified criteria.  My plan right now, since this backup scheme relies on hard links and thus takes up far less space than independent daily full backups would, is to keep a minimum of 30 daily backups… And since this new setup also labels one backup a month as “monthly”… The last 6 monthly backups, which are really just differently named daily backups.

I may post the actual script text in the future, but for now I’ll just provide a tgz for download.

n2nBackup.tar.gz

Raspbian – NFS w/Synology

Synology stuff first…

  • Enabled NFS file sharing on my Synology
  • For each share on my Synology NAS, had to go into NFS Permissions and create a rule
    • Disabled “Asynchronous”

Then followed these instructions…

First, install the necessary packages. I use aptitude, but you can use apt-get instead if you prefer; just put apt-get where you see aptitude:

sudo aptitude install nfs-common portmap

(nfs-common may already be installed, but this will ensure that it installs if it isn’t already)

On the current version of Raspbian, rpcbind (part of the portmap package) does not start by default. This is presumably done to control memory consumption on our small systems. However, it isn’t very big and we need it to make an NFS mount. To enable it manually, so we can mount our directory immediately:

sudo service rpcbind start

To make rpcbind start automatically at boot:

sudo update-rc.d rpcbind enable

Now, let’s mount our share:

Make a directory for your mount. I want mine at /public, so:

mkdir /public

Mount manually to test:

sudo mount -t nfs 192.168.222.34:/public /public

  • (my server share path) (where I want it mounted locally)
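
If the mount complains, it can help to first confirm what the NAS is actually exporting.  showmount (which should come along with nfs-common) can do that; the IP matches the example above:

showmount -e 192.168.222.34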

Now, to make it permanent, you need to edit /etc/fstab (as sudo) to make the directory mount at boot. I added a line to the end of /etc/fstab:

192.168.222.34:/public /public nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

The rsize and wsize entries are not strictly necessary but I find they give me some extra speed.

 

Notes:

  • Everything basically needs to be done via sudo
  • With multiple shares, each needs to be mounted independently (see the fstab sketch after this list).  Such as…
    • /public/dir1
    • /public/dir2
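
As a sketch, the /etc/fstab entries for two shares could look like this; the export paths on the NAS side are placeholders:

192.168.222.34:/<share1> /public/dir1 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0
192.168.222.34:/<share2> /public/dir2 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0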

 

Source: http://www.raspbian.org/RaspbianFAQ

Synology My Old Backup Cleanup Script

Since I created my own rsync backup solution for my Synology DS’s, I knew I’d need a script to cleanup and remove the older incremental backups.

The good thing is that since I’m using rsync’s “--link-dest” option, it should be pretty straightforward to clean up old backups.  Every incremental backup presents itself as a full backup, so just delete the oldest directories.

At least, that’s my theory…

I now have over 30 days of backups, so I was able to create and test a script that follows my theory.

I’ve only run this script manually at this point.  Since I’m actually deleting entire directories, I’m somewhat cautious in creating a new cron job to run this automatically.

#!/bin/sh
#
# Updated:
#
#
# Changes
#
# -
#

# Set to the max number of backups to keep
vMaxBackups=30

# Base directory of backups
dirBackups="/volume1/<directory containing all your backups>"

# Command to sort backups
# sorts based on creation time, oldest
# at the top of the list
cmdLs='ls -drc1'

# Count the number of total current backups
# All my backups start with "In"
cntBackups=`$cmdLs ${dirBackups}/In* | wc -l`

# Create the list of all backups
# All backups start with "In"
vBackupDirs=`$cmdLs ${dirBackups}/In*`

vCnt=0

for myD in $vBackupDirs
do
    # Meant to be a safety mechanism
    # that will hopefully kick in if the other
    # test fails for some reason
    tmpCnt=`$cmdLs ${dirBackups}/In* | wc -l`

    if [ $tmpCnt -le 14 ]; then
        exit
    fi

    # Main removal code
    # If wanting to test first,
    # comment the "rm" line and uncomment the "echo" line
    # echo $myD
    rm -rf $myD

    # Track how many directories have been deleted
    vCnt=$((vCnt+1))

    # Check to see if the script should exit
    if [ $((cntBackups-vCnt)) -le $vMaxBackups ]; then
        exit
    fi
done

Synology crond Restart Script

Restarting crond on a Synology NAS is not hard… Once you learn that Synology has a custom service… service.

The key:  synoservice

Run “synoservice --help” to get educated.

I got tired of typing out multiple commands to restart crond, so I created a quick script to do it for me.

#!/bin/sh

echo
echo Restarting crond
echo

echo Stopping crond
synoservice --stop crond

sleep 1
echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
echo Starting crond
synoservice --start crond

sleep 1

echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
cat /etc/crontab

Synology Custom Daily Cron

Synology DSM 4.0/1 appears to only come with a single crontab file.  There is no implementation for a daily cron directory (that I can find).

So I created my own.

Configure /etc/crontab

  • Open /etc/crontab for editing
  • Add:  “0       1       *       *       *       root    for myCsh in `ls /<your directory path>/cron/cron.daily/*.sh`; do $myCsh; done > /dev/null 2>&1”
    • Don’t forget to put in your real directory path above
    • Must use tabs (NOT spaces) between fields.
    • Spaces in the final field (the one that starts with “for myCsh”) are OK
    • The above has the “for” loop being run every day at 1AM
  • Save
  • Restart cron for changes to take effect
    • synoservice --stop crond
    • synoservice --start crond

Configure Cron Daily Directories

  • These are the directories referenced by the above “for” loop
  • Example:  /<something>/cron/cron.daily
  • cd cron.daily
  • Create empty.sh
    • Needed so that if no real scripts are in the directory, the cron “for” loop will not throw any errors
    • Just having an executable file is enough, it does not even need anything in it
    • touch empty.sh
    • chmod 755 empty.sh

Done!

The cron.daily directory is where I put a script that calls my incremental rsync backup script.
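
As a minimal sketch (the directory path and the backup script name are placeholders, not my actual ones), that caller can be as simple as:

#!/bin/sh
# cron.daily wrapper: just hands off to the real incremental backup script
/volume1/<your directory path>/backup/incrementalBackup.sh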

I also created a “cron.test” directory for testing different things.  I usually have its line commented out in crontab, but it follows the same concept as above.

Extra Credit

When I was creating this, I wanted some reassurance that the crontab was actually working as expected, so I created a simple script that, when run, creates a file in the cron.daily directory named with the date and time of the last run.

_lastrun.sh

  • I put the “_” in the name so the file should be the first executed by the “for” loop
#!/bin/sh

shDir="$( cd "$( dirname "$0" )" && pwd )"

rm -f $shDir/*.lastrun    # -f so the very first run does not error when no .lastrun file exists yet

touch $shDir/`date +%Y-%m-%dT%H%M`.lastrun

 

Synology Custom Remote Backup Solution

Created a personalized rsync incremental remote backup solution.

  • Tested with DSM 4.1 (and a little with 4.0)
  • Uses default rsync
  • Does not require any additional packages (e.g. ipkg)
  • Utilizes SSH transport and private keys for authentication

Still To Do

  • Create an age-off script
  • Combine the Full and Incremental backup scripts

Some Details

The scripts below utilize rsync’s “--link-dest” option, so each incremental update takes up very little space compared to the first.  It does require a full backup initially, which is why I currently have 2 scripts.  I believe they can easily be combined, but this is the initial solution.

Hard links are pretty nifty.  Google them and how the rsync “--link-dest” option works.

I do not consider myself an advanced Linux user, so there are probably a number of best practices this solution misses, not because they were ignored, but because I simply don’t know them.
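
As a tiny, generic illustration of why hard links make this cheap (nothing Synology-specific here):

echo hello > a.txt
ln a.txt b.txt        # b.txt is a second name for the same data, not a copy
ls -li a.txt b.txt    # both names show the same inode number, so no extra space is used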

This system should work in both directions between 2 Synology boxes.  I’ve only implemented in a single direction thus far, but reversing should be pretty simple.

Full Backup Script

  • Needs to be run the first time to do the initial full backup
  • If able, recommend doing this locally across a LAN and then doing incremental backups over the WAN

#!/bin/sh

#
# Updated:
#

#
# Changes
#
#
# Future Ideas
# - If there's an rsync error, automatic retry
#

RSYNC=/usr/syno/bin/rsync

#
# Need the directory where the script runs
#
shDir="$( cd "$( dirname "$0" )" && pwd )"

#
# Config files of interest
#
confRemHost="rem_host.sh"
confSrcDirs="src_dirs.conf"
confLastFile="last_backup.conf"

#
# Read in remote host info
#
. ${shDir}/${confRemHost}

#
# Misc Variables
#
vDate=`date +%Y-%m-%dT%H%M`
dirLogBackup="/volume1/<your directory path>/backup_logs"
dirBckupName="Initial_${vDate}"
vRsyncOpts="--archive --partial"
vLogLvl="--verbose"
dirLogs="$shDir"
vLogF=$dirLogs/rsync_${vDate}.log
vErr="no"
vMaxRetries=5

if [ -f "${shDir}/is.test" ]; then
    dirLogBackup=${dirLogBackup}/test
    rDir_base=${rDir_base}/test
fi

if [ ! -d "$dirLogBackup" ]; then
    mkdir -p $dirLogBackup
    chmod 777 $dirLogBackup
fi

exec > $vLogF 2>&1

if [ ! -f "${shDir}/${confSrcDirs}" ]; then
    echo ---
    echo --- $shDir/$confSrcDirs not found. Exiting...
    echo ---
    exit 1
fi

#
# Loop through each directory to backup
# (There may be a better way to do this, but this works)
#
for myD in `cat $shDir/$confSrcDirs`
do
    echo ---
    echo "--- Starting directory backup: $myD"
    echo ---

    if [ -d "$myD" ]; then
        $RSYNC $vRsyncOpts $vLogLvl -e "ssh -i ${rUserKey}" \
            $myD $rUser@$rHost:$rDir_base/$dirBckupName

        # Capture rsync's exit code right away, before another command overwrites $?
        vRc=$?

        if [ "$vRc" -ne "0" ]; then
            vErr="yes"
            echo "ERR ($vRc) : $myD" >> $dirLogs/rsync_${vDate}.err

            # Put a test here for exit code 23, Partial Transfer Error
            # And then retry?
            # a while loop could work
        fi
    else
        echo
        echo "--- WARN: Directory $myD does not exist"
        echo
    fi

    echo ---
    echo --- Completed directory backup: $myD
    echo ---
    echo
done

#
# Some cleanup / completion stuff
#
if [ $vErr = "no" ]; then
    echo $dirBckupName > $shDir/$confLastFile  # track last backup dir
else
    chmod 733 $shDir/*.err
    mv $shDir/rsync_*.err $dirLogBackup  # save off err file
fi

#
# Want to move log file to new location
#
chmod 733 $shDir/*.log
mv $shDir/rsync_*.log $dirLogBackup

Incremental Backup Scripts

  • Added the “--temp-dir” & “--link-dest” options for rsync when compared to the Full backup script

#!/bin/sh

#
# Updated:
#

#
# Changes
#
# - Added ulimit change to 30720
#

#
# Future Ideas
# - If there's an rsync error, automatic retry
#

RSYNC=/usr/syno/bin/rsync

#
# Need the directory where the script runs
#
shDir="$( cd "$( dirname "$0" )" && pwd )"

#
# Config files of interest
#
confRemHost="rem_host.sh"
confSrcDirs="src_dirs.conf"
confLastFile="last_backup.conf"

#
# Read in remote host info
#
. ${shDir}/${confRemHost}

#
# Misc Variables
#
vDate=`date +%Y-%m-%dT%H%M`
dirLogBackup="/volume1/<your directory path>/backup_logs"
dirBckupName="Incremental_${vDate}"
vRsyncOpts="--archive --partial --delete"
vLogLvl="--verbose"
dirLogs="$shDir"
vLogF=$dirLogs/rsync_${vDate}.log
vErr="no"
vMaxRetries=5
dirTemp="/volume1/meister_backup/backup_temp"

if [ -f "${shDir}/is.test" ]; then
    dirLogBackup=${dirLogBackup}/test
    rDir_base=${rDir_base}/test
fi

if [ ! -d "$dirLogBackup" ]; then
    mkdir -p $dirLogBackup
    chmod 777 $dirLogBackup
fi

exec > $vLogF 2>&1

if [ ! -f "${shDir}/${confLastFile}" ]; then
    echo ---
    echo --- $confLastFile not found. Exiting...
    echo ---
    exit 42
fi

if [ ! -f "${shDir}/${confSrcDirs}" ]; then
    echo ---
    echo --- $shDir/$confSrcDirs not found. Exiting...
    echo ---
    exit 1
fi

vLastDir=`cat $shDir/$confLastFile`

#
# Loop through each directory to backup
# (There may be a better way to do this, but this works)
#
for myD in `cat $shDir/$confSrcDirs`
do
    echo ---
    echo "--- Starting directory backup: $myD"
    echo ---

    if [ -d "$myD" ]; then
        $RSYNC $vRsyncOpts $vLogLvl -e "ssh -i ${rUserKey}" \
            --temp-dir=$dirTemp \
            --link-dest=$rDir_base/$vLastDir \
            $myD $rUser@$rHost:$rDir_base/$dirBckupName

        # Capture rsync's exit code right away, before another command overwrites $?
        vRc=$?

        if [ "$vRc" -ne "0" ]; then
            vErr="yes"
            echo "ERR ($vRc) : $myD" >> $dirLogs/rsync_${vDate}.err

            # Put a test here for exit code 23, Partial Transfer Error
            # And then retry?
            # a while loop could work
        fi
    else
        echo
        echo "--- WARN: Directory $myD does not exist"
        echo
    fi

    echo ---
    echo --- Completed directory backup: $myD
    echo ---
    echo
done

#
# Some cleanup / completion stuff
#
if [ $vErr = "no" ]; then
    echo $dirBckupName > $shDir/$confLastFile  # track last backup dir
else
    chmod 733 $shDir/*.err
    mv $shDir/rsync_*.err $dirLogBackup  # save off err file
fi

#
# Want to move log file to new location
#
chmod 733 $shDir/*.log
mv $shDir/rsync_*.log $dirLogBackup

Additional Required Files

  • src_dirs.conf (example contents are sketched after this list)
    • Located in the same directory as both scripts
    • Used by both scripts
    • Simple text file
    • Separate directory on each line
      • These are the directories that you want backed up
      • Sync’d individually via the “for” loop in each script
  • last_backup.conf
    • Located in the same directory as both scripts
    • Used by both scripts
    • Simple text file
    • Should only have 1 line
      • Name of the last directory where your last backup was placed
      • Only updated if rsync completes with zero / no errors
    • Used primarily by the Incremental backup script to do the incremental backup with rsync, which is implemented using the “--link-dest” option
  • rem_host.sh (contents below)

##
#
# Variables Defining the remote host / destination server
#
##

rHost=<ip/hostname>
rUser=<remote user, needs to exist on remote box, should not be root/admin>
rDir_base=/volume1/<base backup directory> # target base directory
rUserKey="/<directory tree to private key>/.ssh/<private-key>"
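
For illustration, here is roughly what the two plain text config files could contain; the directories and backup name below are placeholders, not my actual values.

src_dirs.conf:

/volume1/<share to back up>
/volume1/<another share to back up>

last_backup.conf:

Incremental_2013-05-07T0100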

 

Additional Optional Files

  • There is a little test / debug code in each script.
  • File:  is.test
    • If exists, then backup scripts put backups and logs into separate “test” directories relative to the normal directories
    • If used, need to make sure both “test” locations exist first or else an error will happen
    • Made it easier for me to develop and test the scripts in one directory and then just do a “cp” to a “live” directory.
    • All I had to do in the dev directory was “touch is.test”

FYI

  • Private key authentication
    • Used openssl to create the private key.  Nothing special in the creation of the key, so a quick google on the process should work fine (a generic key-setup sketch follows this list)
    • When setting up, I had some problems getting things to work for the user I created to do the rsync’s
      • It’s totally possible that in my troubleshooting and Synology / DSM ignorance I did (or did not) do something I was (or was not) supposed to do
    • The key thing to remember though, is to make sure that the backup user’s home directory is where you place the “.ssh/backup-key” file
      • In my case, for some reason the user’s home in “/etc/passwd” was not pointing to where I was expecting.  So I changed it and restarted SSH
        • Should be able to use “synoservice” to restart SSH
    • Once I figured this out, everything else with configuring this went smoothly.  Luckily I’d had some experience in this area at work.  So that may have helped.
  • This entire solution works for me and my system setup.  I imagine there are a number of configurations where it will not work “out of the box”.
    • This may have problems if trying to rsync directories that are located on external devices or mounted from remote hosts or …. other stuff.
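
For reference, a generic sketch of the key setup (not necessarily exactly how I did it; the user, host and paths below are placeholders):

# on the source box: create a key pair with no passphrase
ssh-keygen -t rsa -b 2048 -f /root/.ssh/backup-key -N ""

# append the public key to the backup user's authorized_keys on the destination
cat /root/.ssh/backup-key.pub | ssh backupuser@<remote host> "cat >> ~/.ssh/authorized_keys"

# quick test: should log in without a password prompt
ssh -i /root/.ssh/backup-key backupuser@<remote host>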

Synology Suggestions

Time Backup

  • Potential out-of-the-box backup solution
  • Cannot change the user that is used for sync’ing.  It forces use of the default admin account.  Unacceptable

CloudStation

  • White and black lists
  • Originally envisioned because I don’t want the Windows thumbs.db file to be sync’d over and over
  • Advanced idea:  Enable regular expressions for both lists
    • Could accomplish the original goal with a *.jpg white list

Photo Viewer

  • I’m unable to use it.  It requires pictures to be in specific directories.
  • Would require duplicating all pictures between the Photo Viewer directory and the sync directory
  • Too much trouble to find a workaround