Synology, Raspbian, NFS and Nobody…

It took me a couple of weeks (or months) to… Not fix a problem… Even discover that I had a problem.

Once I discovered that I even had a problem… And I have a feeling this issue has been causing me a number of permission annoyances… I was able to use some “Google-fu” and resolve it in less than an hour.

With the actual fix taking < 5 mins.

Problem

From one of my Raspberry Pis running Raspbian I was seeing files with weird ownership info… Specifically, files were owned by “nobody”.  I suppose I originally considered this a quirk of NFS… But then I noticed that the same thing was not happening on my other Pi.

The problem presented itself via many annoying “access denied” or “operation not permitted” failure messages when working in NFS mounted directories.

The final straw was when I could not run dos2unix successfully on a text file.  Such a simple operation… Very interesting, especially when I stopped watching TV and paid more attention to the error message.

dos2unix: Failed to change the owner and group of temporary output file ./d2utmpte0dzV: Operation not permitted

Resolution

The fix is very simple… Force my clients to connect via NFSv3 instead of v4.

In /etc/fstab I added the option “nfsvers=3” to all my mounts.

192.168.222.34:/public /public nfs nfsvers=3,rsize=8192,wsize=8192,timeo=14,intr 0 0

Once added and re-mounted… Either manually or via a quick and easy reboot… Things started working as expected.
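
Manually, re-mounting each share is just an unmount followed by a mount (the single-argument mount re-reads the updated /etc/fstab entry):

sudo umount /public
sudo mount /public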

Additional Thoughts

This issue was the result of new behavior.  I recently updated my Synology DSM from v4.2 to v4.3.  I have a feeling that this enabled NFSv4 on my Synology NAS.  This could explain why I was seeing things differently between my Pis… I’ve been working on and rebooting one often lately, but the other has been up for many weeks, and once an NFS mount is established I’m pretty sure the protocol version does not change.

I believe I saw results for bugs and issues related to NFSv4 along this same vein that were not Synology related… So this issue may not be specific to Synology’s NFS support / implementation.

During my troubleshooting I had a need to confirm that the NFS version could be the issue… So I obviously needed to confirm what version of the NFS protocol was being used.  The quick and easy command to determine that is “nfsstat”.
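
For per-mount detail, “nfsstat -m” lists each NFS mount along with its flags, and the “vers=” value in that output shows which protocol version is actually in use:

nfsstat -m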

Preface: Common ownCloud Path

(This is part of My ownCloud Adventure)

For any adventure to come to a successful conclusion, the proper preparations must first be made.

With my previous experience working with the Raspberry Pi I was able to quickly get a dedicated server set up and connected to my Synology NAS via NFS.

I should mention here, to plant a seed of thought, that throughout my endeavors the security posture of my system has been a constant consideration.  As an example, with my NFS configuration there are mounts available on my network that I did not give my ownCloud host access to… I am just not comfortable with some files being remotely accessible.

While not exhaustive, there are some common tasks that should probably be performed when setting up a new Raspbian instance:

SD Card Images

Throughout my adventure I made extensive use of Win32 Disk Imager to create images of the SD card.  This allowed me to configure common features once and just reload an image to start over if needed.

For example, I have an image that I created after performing my basic Raspbian updates and configurations.  After that I have an image with SSL certs and MySQL already taken care of.  This definitely made it much easier to go from Apache2, to lighttpd and finally end up at nginx with a “clean” system.

SSL Certs

To allow any of the webservers to utilize HTTPS, generating SSL certificates is the first task.  There are MANY resources available out there, but here are the basic commands I performed.

  1. cd /etc/ssl
  2. sudo mkdir localcerts
  3. sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/ssl/localcerts/myServer.fqdn.pem -keyout /etc/ssl/localcerts/myServer.fqdn.key
  4. sudo chmod 600 localcerts/myServer.fqdn*

These commands result in 2 files as output:  a PEM certificate & a key.  Both are used by any webserver to enable HTTPS.

You will be asked a number of questions during key generation.  Since this results in a self-signed certificate, answer them however you like.  Except for the FQDN question, I’m not sure any of them even technically matter.  And in the case of the FQDN question, I didn’t care if its value matched my dynamic DNS name or not.

The one important technical detail is that if you do not want to enter a password every time your webserver starts, then do not enter a password when prompted.
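
To sanity check what was generated, openssl can print the certificate’s subject and validity dates (adjust the path to wherever your PEM file landed):

sudo openssl x509 -in /etc/ssl/localcerts/myServer.fqdn.pem -noout -subject -dates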


MySQL

ownCloud supports multiple database backends, but I chose MySQL since it’s familiar to me (although I do wish MariaDB were available in the Raspbian repository).

  1. sudo aptitude
    1. Install MySQL server
    2. The install will ask for a ‘root’ password for your new database server
  2. mysql_secure_installation
    • A script that performs a number of standard best-practice configurations.  Be sure to follow its recommendations!
  3. mysql -u root -p
    • No need to put your password in as an option, you will be prompted
  4. At the “mysql>” prompt
    • create database myOcDbase;
    • create user 'myOcUser'@'localhost' identified by 'myUserPass';
    • create user 'myOcUser'@'127.0.0.1' identified by 'myUserPass';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'localhost';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'127.0.0.1';
    • exit
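
As a quick sanity check, logging in as the new account and selecting the new database should now work without any privilege errors (same placeholder names as above):

mysql -u myOcUser -p myOcDbase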

Good Resource:  http://dev.mysql.com/doc/refman/5.5/en/index.html

Acquiring ownCloud

Getting a hold of ownCloud is not difficult and can be accomplished via various means.

I originally dabbled with manually adding an ownCloud repository to my system’s repo list.  I just followed the instructions found for Debian off ownCloud’s Linux packages install link.

  1. cd /etc/apt/sources.list.d
  2. sudo nano owncloud.list
    • Enter:  “deb http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/ /”
    • save and exit
  3. cd
  4. wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
  5. sudo apt-key add - < Release.key
  6. sudo apt-get update
  7. sudo apt-get install owncloud

While this method did work and is not a bad way to go, especially considering its many advantages… I was unsure of how quickly the repository would be updated with new versions, so I instead elected to go with the manual install.

  • cd
  • wget http://download.owncloud.org/community/owncloud-5.0.10.tar.bz2
    • As versions change, this link will change.  So be sure to get the latest Tar link.
  • tar -xjvf owncloud-5.0.10.tar.bz2
  • mv owncloud owncloud_5.0.10
  • sudo cp -r owncloud_5.0.10 /var/www
  • cd /var/www
  • sudo chown -R www-data:www-data owncloud_5.0.10
  • sudo ln -s owncloud_5.0.10 owncloud
    • Using a symbolic link in this fashion can help in the future with manual updates.  Just follow ownCloud’s manual update instructions, pre-position the latest version’s directory under /var/www, and re-point the symlink for a quick and easy upgrade (see the sketch below)
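
As a rough sketch of that future upgrade (the version placeholder is hypothetical, and ownCloud’s manual update instructions still cover the config and data details), the swap itself is just:

cd /var/www
sudo rm owncloud
sudo ln -s owncloud_<newVersion> owncloud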

And that seems to wrap up the common activities across each of the volumes in my adventure.

My ownCloud Adventure

I recently came across a project that very quickly caught my interest.

It’s called ownCloud.

It’s open source and gives you your own personal Dropbox-like cloud.  It has a number of available features that could give one the ability to move off Google… If they work properly, which I cannot say at this time as I have yet to dive into those features.

If that all sounds interesting, I do encourage you to go take a look.

As the title says, getting my own ownCloud instance up and running has been an adventure.  As concerning as that may initially sound, I suppose I should clarify from the beginning that the adventure is ALL of my own doing.  Overall, the actual installation and configuration of my personal ownCloud instance has gone exceptionally smoothly… With only one, not so minor, issue.

The one “not so minor” issue is that the provided Gallery application does not work for me.  This may have something to do with the fact that I’m running ownCloud off a dedicated, but still resource limited, Raspberry Pi connected via NFS to a Synology NAS and (at this point) only have PHP configured to use 256MB of memory per script… But (without any evidence) I do believe there is an underlying issue with the gallery app.  I also believe it will be fixed in time.  So for me, patience is required…

Unless an alternative gallery app, such as Gallery2, fills the void.  Unfortunately, while it showed some initial promise by actually displaying my root picture folder, my inability to go below the root level found a bug…

I guess I’m just a natural at this software testing stuff.

Back to my adventure though…

I now have an ownCloud 5.x instance adequately running on a dedicated Raspberry Pi, using Raspbian for the OS and nginx as the webserver.  It’s configured with SSL, PHP APC and MySQL.

(For philosophical reasons I would have preferred to go with MariaDB, but at this time it is not available via the repositories… And my philosophy apparently has its limits, since I preferred the easy install route over building MariaDB myself.)

It seems I’ve taken a cue from many great adventures though, and given away the ending.  So I suppose it’s time to start at the beginning…

I did not start with nginx.  My adventure actually started with Apache, as that’s what I’ve had the most experience using.  I also did not go from Apache straight to nginx.  I had a brief fling with lighttpd.

So for proper documentation purposes, I plan to detail my adventure across several “volumes”.

I did mention that I started with the ending, but I think I’ll continue following the popular adventure recipe and consider the ending more of a new beginning…

New rsync Remote, Incremental Backup Script – n2nBackup

With my Pi I took the opportunity to re-write my rsync backup scripts.

This new setup does everything my first shot did, especially incremental backups via rsync’s “--link-dest” option, but I also believe it is more modular, even though I have not yet needed to use all its (perceived) capabilities… Or completed them all.
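
At its core each run still boils down to a single rsync call; roughly speaking (the paths are placeholders, not the script’s actual variables):

rsync -a --link-dest=/backups/latest /data/ /backups/<today>/

Unchanged files in the new dated directory get hard-linked against the previous backup, so every directory looks like a full backup while only changed files consume new space.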

Some Basic Capabilities

  • Does incremental local or remote rsync’ing
  • Able to use private certs for authentication
  • Accounts for labeling backups daily or monthly
  • Uses rsync profile directories
  • (future) Allows pre & post command executions

Setup Structure

n2nBackup:
total 20
drwxrwxr-- 2 4096 May 7 22:10 logs
-rwxrwxr-- 1 6857 May 7 22:10 n2nRsync.sh
-rwxrwxr-- 1 245 May 7 22:10 n2nWrapper.sh
drwxrwxr-- 3 4096 May 7 22:11 profiles

n2nBackup/logs:
total 0

n2nBackup/profiles:
total 4
drwxr-xr-- 2 4096 May 7 22:10 template

n2nBackup/profiles/template:
total 24
-rw-r--r-- 1 367 May 7 22:10 dest.conf
-rw-r--r-- 1 78 May 7 22:10 excludes.conf
-rwxr-xr-- 1 21 May 7 22:10 n2nPost.sh
-rwxr-xr-- 1 21 May 7 22:10 n2nPre.sh
-rw-r--r-- 1 585 May 7 22:10 rsync.conf
-rw-r--r-- 1 126 May 7 22:10 src_dirs.conf

Script Usage

Usage: n2nRsync.sh [-p profile] [-c] [-l] [-t] [-h]

 -p profile              rsync profile to execute
 -c                      use when running via cron
                         when used, will output to log file
                         otherwise, defaults to stdout
 -l                      list profiles available
 -t                      runs with --dry-run enabled
 -h                      shows this usage info

Sample crontab Usage

00 00 * * * /dir/n2nBackup/n2nRsync.sh -p profName -c

Some Details

Right now I’m running everything with the n2nRsync.sh.  I have not implemented the n2nWrapper or pre & post command execution stuff.  In my previous backup script, that was run directly on my Synology NAS, I had a need for some pre-backup commands, but for whatever reason… Past bad coding… Ignorance… Synology quirks… Accessing the data via NFS now… The issues I had to work around are no longer being experienced.

I still need to create cleanup scripts that will age-off data based on specified criteria.  My plan right now, since this backup scheme relies on hard links and thus takes up far less space than independent daily full backups would, is to keep a minimum of 30 daily backups… And since this new setup also labels a single backup as “monthly”… The last 6 monthly backups.  Which are really just daily backups with a different name.

I may post the actual script text in the future, but for now I’ll just provide a tgz for download.

n2nBackup.tar.gz

Raspbian – NFS w/Synology

Synology stuff first…

  • Enabled NFS file sharing on my Synology
  • For each share on my Synology NAS, had to go into NFS Permissions and create a rule
    • Disabled “Asynchronous”

Then followed these instructions…

First, install the necessary packages. I use aptitude, but you can use apt-get instead if you prefer; just put apt-get where you see aptitude:

sudo aptitude install nfs-common portmap

(nfs-common may already be installed, but this will ensure that it installs if it isn’t already)

On the current version of Raspbian, rpcbind (part of the portmap package) does not start by default. This is presumably done to control memory consumption on our small systems. However, it isn’t very big and we need it to make an NFS mount. To enable it manually, so we can mount our directory immediately:

sudo service rpcbind start

To make rpcbind start automatically at boot:

sudo update-rc.d rpcbind enable

Now, let’s mount our share:

Make a directory for your mount. I want mine at /public, so:

mkdir /public

Mount manually to test:

sudo mount -t nfs 192.168.222.34:/public /public

  • (my server share path) (where I want it mounted locally)
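
A quick way to confirm the manual mount took is to check that df reports the NAS export as the source for that path:

df -h /public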

Now, to make it permanent, you need to edit /etc/fstab (as sudo) to make the directory mount at boot. I added a line to the end of /etc/fstab:

192.168.222.34:/public /public nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

The rsize and wsize entries are not strictly necessary but I find they give me some extra speed.

 

Notes:

  • Everything basically needs to be done via sudo
  • With multiple shares, each needs to be mounted independently.  Such as…
    • /public/dir1
    • /public/dir2

 

Source: http://www.raspbian.org/RaspbianFAQ

Raspbian – Misc To-Done

Misc things I’ve done to configure my Pi for my personal usage.

Most have to be done via “sudo”

  • Create a new user, separate from “pi”
    • Command:  ‘adduser’
    • Appears to be a Debian-specific command, different from the usual Linux ‘useradd’
  • Make new user’s primary group be “users”
    • Since I’m connecting to my Synology NAS over NFS, this allows any files I create as the new user to be part of a common group between the Pi and NAS
    • Command: ‘usermod -g users <newuser>’
  • As “pi” user, give new user ‘sudo’ access
    • Command: ‘visudo’
  • Create RSA key for authentication
    • Command: ‘ssh-keygen’
    • Be sure to keep your key safe and retrievable so that access is not lost… Don’t lose your key!
  • Add pub key to “~/.ssh/authorized_keys” file for new user (or use ssh-copy-id; see the note after this list)
  • After achieving access via key authentication, disable SSH password authentication
    • Edit /etc/ssh/sshd_config
    • “PasswordAuthentication no”
  • Optional: restrict SSH access to specific accounts
    • Edit /etc/ssh/sshd_config
    • At bottom of the file, add:
      • AllowUsers newUser1 newUser2
    • Good way to leave default “pi” user “active”, but not directly accessible via SSH
  • Changed hostname
    • nano /etc/hosts
    • nano /etc/hostname
    • reboot
  • Install rsync
    • aptitude install rsync
  • rsync backup script caused an error
    • Error: Too many open files
    • Testing solution: edit /etc/security/limits.conf
      • @users     hard     nofile     32768
  • Configure NTP to sync with NAS
    • Edit /etc/ntp.conf
    • Comment out existing lines starting with “server” that look like “server 0.debian.pool.ntp.org”
    • Add line like “server <nas IP>”
    • Save
    • service ntp restart
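
As a convenience for the key setup above, if the key pair lives on the machine you will be connecting from, ssh-copy-id can append the public key to the new user’s authorized_keys in one step (user and host names are placeholders)… And sshd needs a restart after the sshd_config edits:

ssh-copy-id newUser1@<pi hostname>
sudo service ssh restart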

To be continued…

Synology My Old Backup Cleanup Script

Since I created my own rsync backup solution for my Synology DS’s, I knew I’d need a script to clean up and remove the older incremental backups.

The good thing is that since I’m using rsync’s “--link-dest” option, it should be pretty straightforward to clean up old backups.  Every incremental backup is really a full backup, so just delete the oldest directories.

At least, that’s my theory…

I now have over 30 days of backups, so I was able to create and test a script that follows my theory.

I’ve only run this script manually at this point.  Since I’m actually deleting entire directories, I’m somewhat cautious in creating a new cron job to run this automatically.

#!/bin/sh
#
# Updated:
#
#
# Changes
#
# -
#

# Set to the max number of backups to keep
vMaxBackups=30

# Base directory of backups
dirBackups="/volume1/<directory containing all your backups>"

# Command to sort backups
# sorts based on creation time, oldest
# at the top of the list
cmdLs='ls -drc1'

# Count the number of total current backups
# All my backups start with "In"
cntBackups=`$cmdLs ${dirBackups}/In* | wc -l`

# Create the list of all backups
# All backups start with "In"
vBackupDirs=`$cmdLs ${dirBackups}/In*`

vCnt=0

for myD in $vBackupDirs
do
    # Meant to be a safety mechanism
    # that will hopefully kick in if the other
    # test fails for some reason
    tmpCnt=`$cmdLs ${dirBackups}/In* | wc -l`

    if [ $tmpCnt -le 14 ]; then
        exit
    fi

    # Main removal code
    # If wanting to test first,
    # comment the "rm" line and uncomment the "echo" line
    # echo "$myD"
    rm -rf "$myD"

    # Track how many directories have been deleted
    vCnt=$((vCnt+1))

    # Check to see if the script should exit
    if [ $((cntBackups-vCnt)) -le $vMaxBackups ]; then
        exit
    fi
done

Synology crond Restart Script

Restarting crond on a Synology NAS is not hard… Once you learn that Synology has a custom service… service.

The key:  synoservice

Run “synoservice --help” to get educated.

I got tired of typing out multiple commands to restart crond, so I created a quick script to do it for me.

#!/bin/sh

echo
echo Restarting crond
echo

echo Stopping crond
synoservice --stop crond

sleep 1
echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
echo Starting crond
synoservice --start crond

sleep 1

echo
ps | grep crond | grep -v grep
#synoservice --detail crond

echo
cat /etc/crontab

Synology Custom Daily Cron

Synology DSM 4.0/1 appears to only come with a single crontab file.  There is no implementation for a daily cron directory (that I can find).

So I created my own.

Configure /etc/crontab

  • Open /etc/crontab for editing
  • Add:  “0       1       *       *       *       root    for myCsh in `ls /<your directory path>/cron/cron.daily/*.sh`; do $myCsh;done > /dev/null 2>&1”
    • Don’t forget to put in your real directory path above
    • Must use tabs (NOT spaces) between fields.
    • Spaces in the final field (that starts with “for myCsh”) are OK
    • The above has the “for” loop being run every day at 1AM
  • Save
  • Restart cron for changes to take effect
    • synoservice --stop crond
    • synoservice --start crond

Configure Cron Daily Directories

  • These are the directories referenced by the above “for” loop
  • Example:  /<something>/cron/cron.daily
  • cd cron.daily
  • Create empty.sh
    • Needed so that if no real scripts are in the directory, the cron “for” loop will not throw any errors
    • Just having an executable file is enough, it does not even need anything in it
    • touch empty.sh
    • chmod 755 empty.sh

Done!

The cron.daily directory is where I put a script that calls my incremental rsync backup script.
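
That script can be as small as a one-liner wrapping the same call from the crontab example in the n2nBackup post (path and profile name are the same placeholders used there), made executable just like empty.sh above:

#!/bin/sh
/dir/n2nBackup/n2nRsync.sh -p profName -c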

I also created a “cron.test” directory for testing different things.  I usually have its line commented out in crontab, but it follows the same concept as above.

Extra Credit

When I was creating this, I wanted some reassurance that the crontab was actually working as expected, so I created a simple script that, when run, creates a file in the cron.daily directory stamped with the date and time of the last run.

_lastrun.sh

  • I put the “_” in the name so the file should be the first executed by the “for” loop
#!/bin/sh

shDir="$( cd "$( dirname "$0" )" && pwd )"

rm -f "$shDir"/*.lastrun

touch $shDir/`date +%Y-%m-%dT%H%M`.lastrun