Volume 3: nginx: A Successful Foundation

(This is part of My ownCloud Adventure)

The Final Fron… Errrr… Wrong adventure.

In the end I have chosen to use nginx to host my personal ownCloud… At least until I have a reason to change my mind, probably an arbitrary or subjective one.

nginx is rather new to me, but generally speaking, remembering how to pronounce its name was more difficult than getting things working.

Just like with lighttpd, I took advantage of Win32 Disk Imager to write an image to the SD card with all of the prep work covered by the Preface already completed.

  1.  sudo apt-get update
  2. sudo apt-get install nginx php5-cgi php5-fpm curl libcurl3 php5-curl php5-common php5-gd php-xml-serializer php5-mysql
  3. cd /etc/nginx/sites-available
  4. sudo nano siteName-ssl
    1. Copy/paste the nginx config from ownCloud’s docs
    2. Edits (a condensed sketch of the final result follows this list)
      • “root /var/www/owncloud”
      • Find:  “location ~ ^/(data|config|\.ht|db_structure\.xml|README)”
        • Change to (Thanks Dave!):
          • “location ~ ^.*/?(data|config|\.ht|db_structure\.xml|README)”
      • comment out:  fastcgi_pass 127.0.0.1:9000;
      • uncomment:  fastcgi_pass unix:/var/run/php5-fpm.sock;
    3. save and exit
  5. cd ../sites-enabled
  6. sudo rm default
  7. sudo ln -s ../sites-available/siteName-ssl
  8. sudo service nginx restart
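
For reference, here is a condensed sketch of what the edited server block ends up looking like.  This is not the full config from ownCloud’s docs (which includes more rules than shown here); the cert paths and server name are the placeholders used elsewhere in this series:

server {
    listen 443 ssl;
    server_name myServer.fqdn;

    ssl_certificate /etc/ssl/localcerts/myServer.fqdn.pem;
    ssl_certificate_key /etc/ssl/localcerts/myServer.fqdn.key;

    root /var/www/owncloud;
    index index.php;

    # Dave's regex tweak, so the protected dirs match even in a subdirectory
    location ~ ^.*/?(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}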

At this point a choice needs to be made.  One can continue installing the php-apc module, or one can go ahead and do the initial ownCloud configuration… Getting things to work and experiencing the load times as-is… Providing a great before-and-after comparison to fully appreciate what php-apc accomplishes.

PHP-APC

Regardless of which choice is made, the php-apc module is a required addition… And a very quick install.

  1. sudo apt-get install php-apc
  2. sudo service php5-fpm restart
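
If you want to confirm the module actually loaded, the PHP CLI (which the cron setup later on relies on anyway) can list it:

php -m | grep -i apc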

And it is now ready and working!

ownCloud Initial Configuration

The first time you bring up ownCloud, which with the above steps can be found at https://server.ip.or.fqdn/owncloud, you are presented with a pretty simple initial configuration page.

For anyone following this path, I believe it’s pretty self-explanatory.  It only needs a few sets of information…

  • Initial username & password
  • ownCloud’s data directory
  • Database details

Some details may be hidden via an Advanced Options link.

The data directory configuration item is the one that may need the most consideration depending on circumstances.  At this early point, with little experience, I did not have a need to divert from the default.

A Little Tuning

There are a couple of options I would recommend changing.

So let’s go into the ownCloud Admin section.

The most important configuration change is under the “Cron” section.  While the default is set to “Ajax”, it’s recommended to use the “Cron” option.

Configuring the cron is not difficult, but should be done under the default webserver user.  (ownCloud’s cron doc)

  1. sudo -u www-data crontab -e
  2. Add to the bottom:  */15 * * * * php -f /var/www/owncloud/cron.php
  3. save and exit
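
To double-check that the entry landed under the correct user:

sudo -u www-data crontab -l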

Beyond the cron configuration, you can also check “Enforce HTTPS”.  While the webserver should be doing this task, a little backup probably won’t hurt.

App Thoughts

External storage support:  Probably one of the most important available apps.  This lets you connect to Dropbox, Google Drive or other accounts.  It also lets you define local storage directories, such as NFS mounts.

ownCloud Dependencies Info:  I do not believe this one is normally needed, but it is a good tool for checking that you have the appropriate dependencies.

Unfortunately I had to disable the Pictures app at this time since it is not working properly for me. I hope it’ll be resolved in the future.

 

And with that… For now… Until next time… The End


Preface: Common ownCloud Path

(This is part of My ownCloud Adventure)

For any adventure to come to a successful conclusion, the proper preparations must first be made.

With my previous experience working with the Raspberry Pi I was able to quickly get a dedicated server setup and connected to my Synology NAS via NFS.

I should mention here, to plant a seed of thought, that throughout my endeavors the security posture of my system has been a constant consideration.  As an example, with my NFS configuration there are mounts available on my network that I did not give my ownCloud host access to… I am just not comfortable with some files being remotely accessible.

While not exhaustive, there are some common tasks that should probably be performed when setting up a new Raspbian instance:

SD Card Images

Throughout my adventure I made extensive use of Win32 Disk Imager to create images of the SD card.  This allowed me to configure common features once and just reload an image to start over if needed.

For example, I have an image that I created after performing my basic Raspbian updates and configurations.  After that, I have an image with the SSL certs and MySQL setup already completed.  This definitely made it much easier to go from Apache2, to lighttpd, and finally end up at nginx with a “clean” system.

SSL Certs

To allow any of the webservers to utilize HTTPS, generating SSL certificates is the first task.  There are MANY resources available out there, but here are the basic commands I performed.

  1. cd /etc/ssl
  2. sudo mkdir localcerts
  3. sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/ssl/localcerts/myServer.fqdn.pem -keyout /etc/ssl/localcerts/myServer.fqdn.key
  4. sudo chmod 600 localcerts/myServer.fqdn*

These commands result in 2 files as output:  a PEM certificate & a key.  Both are used by any webserver to enable HTTPS.

You will be asked a number of questions during key generation.  Since this results in a self-signed certificate, answer them however you like.  Except for the FQDN question, I’m not sure any of them even technically matter.  And in the case of the FQDN question, I didn’t care if its value matched my dynamic DNS name or not.

The one important technical detail is that if you do not want to enter a password every time your webserver starts, then do not enter a password when prompted.
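
To sanity-check what was generated, openssl can print the certificate’s subject and validity window back out:

sudo openssl x509 -in /etc/ssl/localcerts/myServer.fqdn.pem -noout -subject -dates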


MySQL

ownCloud supports multiple database backends, but I chose MySQL since it’s familiar to me (although I do wish MariaDB were available in the Raspbian repository).

  1. sudo aptitude
    1. Install MySQL server
    2. The install will ask for a ‘root’ password for your new database server
  2. mysql_secure_installation
    • A script that performs a number of standard best-practice configurations.  Be sure to follow its recommendations!
  3. mysql -u root -p
    • No need to put your password in as an option, you will be prompted
  4. At the “mysql>” prompt
    • create database myOcDbase;
    • create user 'myOcUser'@'localhost' identified by 'myUserPass';
    • create user 'myOcUser'@'127.0.0.1' identified by 'myUserPass';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'localhost';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'127.0.0.1';
    • exit
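
Before moving on, it is worth confirming the new account actually works; this should drop you straight into the new (empty) database:

mysql -u myOcUser -p myOcDbase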

Good Resource:  http://dev.mysql.com/doc/refman/5.5/en/index.html

Acquiring ownCloud

Getting a hold of ownCloud is not difficult and can be accomplished via various means.

I originally dabbled with manually adding an ownCloud repository to my system’s repo list.  I just followed the instructions found for Debian off ownCloud’s Linux packages install link.

  1. cd /etc/apt/sources.list.d
  2. sudo nano owncloud.list
    • Enter:  “deb http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/ /”
    • save and exit
  3. cd
  4. wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
  5. sudo apt-key add - < Release.key
  6. sudo apt-get update
  7. sudo apt-get install owncloud

While this method did work and is not a bad way to go, especially considering its many advantages… I was unsure of how quickly the repository would be updated with new versions, so I instead elected to go with the manual install.

  • cd
  • wget http://download.owncloud.org/community/owncloud-5.0.10.tar.bz2
    • As versions change, this link will change.  So be sure to get the latest Tar link.
  • tar -xjvf owncloud-5.0.10.tar.bz2
  • mv owncloud owncloud_5.0.10
  • sudo cp -r owncloud_5.0.10 /var/www
  • cd /var/www
  • sudo chown -R www-data:www-data owncloud_5.0.10
  • sudo ln -s owncloud_5.0.10 owncloud
    • Using a symbolic link in this fashion can help in the future with manual updates.  Just follow ownCloud’s manual update instructions, pre-position the latest version’s directory under /var/www, and re-do the symlink for a quick and easy upgrade (a sketch follows this list)
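
As a sketch of that future upgrade flip (the version number below is purely hypothetical), it looks something like:

cd /var/www
# stage the new version alongside the old one
sudo cp -r ~/owncloud_5.0.11 .
sudo chown -R www-data:www-data owncloud_5.0.11
# point the symlink at the new directory
sudo rm owncloud
sudo ln -s owncloud_5.0.11 owncloud

ownCloud’s own update instructions still apply for migrating the config and data directories.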

And that seems to wrap up the common activities across each of the volumes in my adventure.

New rsync Remote, Incremental Backup Script – n2nBackup

With my Pi I took the opportunity to re-write my rsync backup scripts.

This new setup does everything my first shot did, especially incremental backups via rsync’s “--link-dest” option, but I also believe it is more modular, even though I have not had the need to use all of its (perceived) capabilities… Or completed them all.
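
For anyone unfamiliar with “--link-dest”, the heart of the incremental trick boils down to something like this (paths and dates purely hypothetical): files unchanged since the previous backup become hard links into it, so every daily directory looks like a full backup while only changed files consume new space.

rsync -a --delete \
    --link-dest=/volume1/backups/In_2013-05-06 \
    remoteHost:/data/ \
    /volume1/backups/In_2013-05-07/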

Some Basic Capabilities

  • Does incremental local or remote rsync’ing
  • Able to use private certs for authentication
  • Accounts for labeling backups daily or monthly
  • Uses rsync profile directories
  • (future) Allows pre & post command executions

Setup Structure

n2nBackup:
total 20
drwxrwxr-- 2 4096 May 7 22:10 logs
-rwxrwxr-- 1 6857 May 7 22:10 n2nRsync.sh
-rwxrwxr-- 1 245 May 7 22:10 n2nWrapper.sh
drwxrwxr-- 3 4096 May 7 22:11 profiles

n2nBackup/logs:
total 0

n2nBackup/profiles:
total 4
drwxr-xr-- 2 4096 May 7 22:10 template

n2nBackup/profiles/template:
total 24
-rw-r--r-- 1 367 May 7 22:10 dest.conf
-rw-r--r-- 1 78 May 7 22:10 excludes.conf
-rwxr-xr-- 1 21 May 7 22:10 n2nPost.sh
-rwxr-xr-- 1 21 May 7 22:10 n2nPre.sh
-rw-r--r-- 1 585 May 7 22:10 rsync.conf
-rw-r--r-- 1 126 May 7 22:10 src_dirs.conf

Script Usage

Usage: n2nRsync.sh [-p profile] [-c] [-l] [-t] [-h]

 -p profile              rsync profile to execute
 -c                      use when running via cron
                         when used, will output to log file
                         otherwise, defaults to stdout
 -l                      list profiles available
 -t                      runs with --dry-run enabled
 -h                      shows this usage info

Sample crontab Usage

00 00 * * * /dir/n2nBackup/n2nRsync.sh -p profName -c

Some Details

Right now I’m running everything with n2nRsync.sh.  I have not implemented the n2nWrapper or the pre & post command execution stuff.  In my previous backup script, which ran directly on my Synology NAS, I had a need for some pre-backup commands, but for whatever reason… Past bad coding… Ignorance… Synology quirks… Accessing the data via NFS now… The issues I had to work around are no longer being experienced.

I still need to create cleanup scripts that will age-off data based on specified criteria.  My plan right now, since this backup scheme relies on hard links and thus takes up far less space than independent daily full backups would, is to keep a minimum of 30 daily backups… And since this new setup also labels a single backup as “monthly”… The last 6 monthly backups, which are really nothing more than differently named daily backups.

I may post the actual script text in the future, but for now I’ll just provide a tgz for download.

n2nBackup.tar.gz

My Old Synology Backup Cleanup Script

Since I created my own rsync backup solution for my Synology DS’s, I knew I’d need a script to clean up and remove the older incremental backups.

The good thing is that since I’m using rsync’s “--link-dest” option, it should be pretty straightforward to clean up old backups.  Every incremental backup is really a full backup, so just delete the oldest directories.

At least, that’s my theory…

I now have over 30 days of backups, so I was able to create and test a script that follows my theory.

I’ve only run this script manually at this point.  Since I’m actually deleting entire directories, I’m somewhat cautious in creating a new cron job to run this automatically.

#!/bin/sh
#
# Updated:
#
#
# Changes
#
# -
#

# Set to the max number of backups to keep
vMaxBackups=30

# Base directory of backups
dirBackups="/volume1/<directory containing all your backups>"

# Command to sort backups
# sorts based on creation time, oldest
# at the top of the list
cmdLs='ls -drc1'

# Count the number of total current backups
# All my backups start with "In"
cntBackups=`$cmdLs ${dirBackups}/In* | wc -l`

# Create the list of all backups
# All backups start with "In"
vBackupDirs=`$cmdLs ${dirBackups}/In*`

vCnt=0

for myD in $vBackupDirs
do
    # Meant to be a safety mechanism
    # that will hopefully kick in if the other
    # test fails for some reason
    tmpCnt=`$cmdLs ${dirBackups}/In* | wc -l`

    if [ $tmpCnt -le 14 ]; then
        exit
    fi

    # Main removal code
    # If wanting to test first
    # comment the "rm" line and uncomment the "echo" line
    # echo $myD
    rm -rf $myD

    # Track how many directories have been deleted
    vCnt=$((vCnt+1))

    # Check to see if the script should exit
    if [ $((cntBackups-vCnt)) -le $vMaxBackups ]; then
        exit
    fi
done

Synology crond Restart Script

Restarting crond on a Synology NAS is not hard… Once you learn that Synology has a custom service… service.

The key:  synoservice

Run “synoservice --help” to get educated.

I got tired of typing out multiple commands to restart crond, so I created a quick script to do it for me.

#!/bin/sh

echo
echo Restarting crond
echo

echo Stopping crond
synoservice --stop crond

sleep 1
echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
echo Starting crond
synoservice --start crond

sleep 1

echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
cat /etc/crontab

Synology Custom Daily Cron

Synology DSM 4.0/4.1 appears to come with only a single crontab file.  There is no implementation of a daily cron directory (that I can find).

So I created my own.

Configure /etc/crontab

  • Open /etc/crontab for editing
  • Add:  “0       1       *       *       *       root    for myCsh in `ls /<your directory path>/cron/cron.daily/*.sh`; do $myCsh;done > /dev/null 2>&1”
    • Don’t forget to put in your real directory path above
    • Must use tabs (NOT spaces) between fields.
    • Spaces in the final field (the one that starts with “for myCsh”) are OK
    • The above has the “for” loop being run every day at 1AM
  • Save
  • Restart cron for changes to take effect
    • synoservice --stop crond
    • synoservice --start crond

Configure Cron Daily Directories

  • These are the directories referenced by the above “for” loop
  • Example:  /<something>/cron/cron.daily
  • cd cron.daily
  • Create empty.sh
    • Needed so that if no real scripts are in the directory, the cron “for” loop will not throw any errors
    • Just having an executable file is enough; it does not even need anything in it
    • touch empty.sh
    • chmod 755 empty.sh

Done!

The cron.daily directory is where I put a script that calls my incremental rsync backup script.

I also created a “cron.test” directory for testing different things.  I usually have its line commented out in crontab, but it follows the same concept as above.

Extra Credit

When I was creating this, I wanted some reassurance that the crontab was actually working as expected, so I created a simple script that, when run, creates another file in the cron.daily directory with a date/time stamp of the last run.

_lastrun.sh

  • I put the “_” in the name so the file should be the first executed by the “for” loop
#!/bin/sh

shDir="$( cd "$( dirname "$0" )" && pwd )"

# -f so the first run does not error out when no marker exists yet
rm -f "$shDir"/*.lastrun

touch "$shDir"/`date +%Y-%m-%dT%H%M`.lastrun
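
After the 1 AM run, the directory ends up holding a single marker file like 2013-05-07T0100.lastrun (timestamp obviously varying), so a quick ls shows when the loop last fired.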