Volume 1: Apache2: A Heavy Duty Companion

(This is part of My ownCloud Adventure)

My adventure with ownCloud started out well, focused on the goal of using Apache for my webserver, but it appears that some of my records were lost along the way… The actual commands I used to install Apache are now unavailable (i.e. they were never recorded to begin with).

So I’ll be providing what I believe to be the best reconstruction (i.e. guess) that I can.

  1. sudo apt-get update
  2. sudo apt-get install apache2 php5-gd php5-curl  php5-cgi libapache2-mod-php5 php5-mysql libcurl3 php5-common php-xml-serializer
  3. cd /etc/apache2/sites-available
  4. sudo cp default mySite
  5. sudo cp default-ssl mySite-ssl
  6. sudo nano mySite-ssl
    1. edit:  SSLCertificateFile /etc/ssl/localcerts/mySite.fqdn.pem
    2. edit:  SSLCertificateKeyFile /etc/ssl/localcerts/mySite.fqdn.key
    3. save and exit
  7. sudo service apache2 stop
  8. sudo a2dissite default
  9. sudo a2ensite mySite
  10. sudo a2ensite mySite-ssl
  11. sudo service apache2 start
  12. Test:  https://server.ip/owncloud
    1. If the certificates are working, you’ll have to tell your browser that you accept the risk of a self-signed cert
  13. Assuming it worked…
    1. cd /etc/apache2/sites-available
    2. sudo nano mySite
      1. After the “DocumentRoot” line, add one of the following (see the config sketch after this list)…
        1. Redirect permanent / https://site.ip/
        2. or:  Redirect permanent / https://fqdn/
      2. save and exit
    3. sudo service apache2 reload
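
For reference, here’s roughly what the relevant pieces of the two site files ended up looking like (everything else stays at the stock Debian defaults; “mySite” and the cert paths are the same placeholders used above, and if the SSL module isn’t already enabled a “sudo a2enmod ssl” may also be needed… I honestly don’t recall whether I had to run it):

# mySite-ssl (inside the <VirtualHost _default_:443> block)
SSLEngine on
SSLCertificateFile    /etc/ssl/localcerts/mySite.fqdn.pem
SSLCertificateKeyFile /etc/ssl/localcerts/mySite.fqdn.key

# mySite (just after the DocumentRoot line)
Redirect permanent / https://fqdn/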

These instructions make a few assumptions that should be mentioned before progressing further.

  • The Preface has been followed
  • apache’s default root directory is “/var/www”
  • an owncloud directory (or symlink) exists at “/var/www/owncloud”

At this point, going to “https://site.ip.or.fqdn/owncloud” should bring one to the initial configuration page for ownCloud.  On a Raspberry Pi, with its limited hardware, it may take more than a few seconds to appear.

One last parting thought… Apache2 is a good webserver.  It has served me well over the years, but as the years have passed it’s put on some weight.  During this initial ownCloud endeavor… It hit me when loading ownCloud for the first time… I learned that there are other, less weighty (i.e. light) webserver options.

So in an effort not to repeat myself, it’s at this point that Volume 1 will wrap up, as I plan to go into more post-installation details at the end of Volume 3.

Until next time…  Volume 2:  lighttpd:  An Easy Fling

Preface: Common ownCloud Path

(This is part of My ownCloud Adventure)

For any adventure to come to a successful conclusion, the proper preparations must first be made.

With my previous experience working with the Raspberry Pi, I was able to quickly get a dedicated server set up and connected to my Synology NAS via NFS.

I should mention here, to plant a seed of thought, that throughout my endeavors the security posture of my system has been a constant consideration.  As an example, with my NFS configuration there are mounts available on my network that I did not give my ownCloud host access to… I am just not comfortable with some files being remotely accessible.

While not exhaustive, there are some common tasks that should probably be performed when setting up a new Raspbian instance:

SD Card Images

Throughout my adventure I made extensive use of Win32 Disk Imager to create images of the SD card.  This allowed me to configure common features once and just reload an image to start over if needed.

For example, I have an image that I created after performing my basic Raspbian updates and configurations.  After that I have an image with the SSL certs and MySQL already completed.  This definitely made it much easier to go from Apache2, to lighttpd, and finally to nginx with a “clean” system.

SSL Certs

To allow any of the webservers to utilize HTTPS, generating SSL certificates is the first task.  There are MANY resources available out there, but here are the basic commands I performed.

  1. cd /etc/ssl
  2. sudo mkdir localcerts
  3. sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/ssl/localcerts/myServer.fqdn.pem -keyout /etc/ssl/localcerts/myServer.fqdn.key
  4. sudo chmod 600 localcerts/myServer.fqdn*

These commands result in 2 files as output:  a PEM certificate & a key.  Both are used by any webserver to enable HTTPS.

You will be asked a number of questions during key generation.  Since this results in a self-signed key, answer them however you like.  Except for the FQDN question, I’m not sure any of them even technically matter.  And in the case of the FQDN question, I didn’t care if its value matched my dynamic DNS name or not.

The one important technical detail is that if you do not want to enter a password every time your webserver starts, then do not enter a password when prompted.
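
If you want to double-check what was generated (purely optional), openssl can print the certificate back out:

sudo openssl x509 -in /etc/ssl/localcerts/myServer.fqdn.pem -noout -subject -dates

That shows the subject details you answered during the questions, plus the 365-day validity window from the -days option above.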

MySQL

ownCloud supports multiple database backends, but I chose MySQL since it’s familiar to me (although I do wish MariaDB were available in the Raspbian repository).

  1. sudo aptitude
    1. Install MySQL server
    2. The install will ask for a ‘root’ password for your new database server
  2. mysql_secure_installation
    • A script that performs a number of standard best-practice configurations.  Be sure to follow its recommendations!
  3. mysql -u root -p
    • No need to put your password in as an option, you will be prompted
  4. At the “mysql>” prompt
    • create database myOcDbase;
    • create user 'myOcUser'@'localhost' identified by 'myUserPass';
    • create user 'myOcUser'@'127.0.0.1' identified by 'myUserPass';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'localhost';
    • grant all privileges on myOcDbase.* to 'myOcUser'@'127.0.0.1';
    • exit
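
Before pointing ownCloud at the database, a quick sanity check of the new account and grants doesn’t hurt (same placeholder names as above; the database will simply be empty):

mysql -u myOcUser -p -h 127.0.0.1 myOcDbase -e "select 1;"

If that logs in and returns a result without complaint, the ownCloud installer should have no trouble either.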

Good Resource:  http://dev.mysql.com/doc/refman/5.5/en/index.html

Acquiring ownCloud

Getting a hold of ownCloud is not difficult and can be accomplished via various means.

I originally dabbled with manually adding an ownCloud repository to my system’s repo list.  I just followed the instructions found for Debian off ownCloud’s Linux packages install link.

  1. cd /etc/apt/sources.list.d
  2. sudo nano owncloud.list
    • Enter:  “deb http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/ /”
    • save and exit
  3. cd
  4. wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
  5. sudo apt-key add - < Release.key
  6. sudo apt-get update
  7. sudo apt-get install owncloud

While this method did work and is not a bad way to go, especially considering its many advantages… I was unsure of how quickly the repository would be updated with new versions, so I instead elected to go with the manual install.

  • cd
  • wget http://download.owncloud.org/community/owncloud-5.0.10.tar.bz2
    • As versions change, this link will change.  So be sure to get the latest Tar link.
  • tar -xjvf owncloud-5.0.10.tar.bz2
  • mv owncloud owncloud_5.0.10
  • sudo cp -r owncloud_5.0.10 /var/www
  • cd /var/www
  • sudo chown -R www-data:www-data owncloud_5.0.10
  • sudo ln -s owncloud_5.0.10 owncloud
    • Using a symbolic link in this fashion can help in the future with manual updates.  Just follow ownCloud’s manual update instructions, pre-position the latest version’s directory under /var/www, and re-do the symlink for a quick and easy upgrade (a rough sketch follows this list)
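
As an illustration of what such a future upgrade might look like (the version number is a placeholder, and ownCloud’s manual update instructions still apply for carrying over config.php and the data directory… this only shows the symlink swap):

cd ~
wget http://download.owncloud.org/community/owncloud-x.y.z.tar.bz2
tar -xjf owncloud-x.y.z.tar.bz2
sudo mv owncloud /var/www/owncloud_x.y.z
sudo chown -R www-data:www-data /var/www/owncloud_x.y.z
cd /var/www
sudo rm owncloud                      # removes only the old symlink
sudo ln -s owncloud_x.y.z owncloud    # point the link at the new version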

And that seems to wrap up the common activities across each of the volumes in my adventure.

My ownCloud Adventure

I recently came across a project that very quickly caught my interest.

It’s called ownCloud.

It’s open source and gives you your own personal Dropbox-like cloud.  It has a number of available features that could give one the ability to move off Google… If they work properly, which I cannot say at this time, as I have yet to dive into those features.

If that all sounds interesting, I do encourage you to go take a look.

As the title says, getting my own ownCloud instance up and running has been an adventure.  As concerning as that may initially sound, I suppose I should clarify from the beginning that the adventure is ALL of my own doing.  Overall, the actual installation and configuration of my personal ownCloud instance has gone exceptionally smoothly… With only one, not so minor, issue.

The one “not so minor” issue is that the provided Gallery application does not work for me.  This may have something to do with the fact that I’m running ownCloud off a dedicated, but still resource limited, Raspberry Pi connected via NFS to a Synology NAS and (at this point) only have PHP configured to use 256MB of memory per script… But (without any evidence) I do believe there is an underlying issue with the gallery app.  I also believe it will be fixed in time.  So for me, patience is required…

Unless an alternative gallery app, such as Gallery2, fills the void.  Unfortunately, while it showed some initial promise by actually displaying my root picture folder, my inability to go below the root level found a bug…

I guess I’m just a natural at this software testing stuff.

Back to my adventure though…

I now have an ownCloud 5.x instance adequately running on a dedicated Raspberry Pi, using Raspbian for the OS and nginx as the webserver.  It’s configured with SSL, PHP APC and MySQL.

(For philosophical reasons I would have preferred to go with MariaDB, but at this time it is not available via the repositories… and my philosophical reasons are apparently limited, since I preferred the easy install route over building MariaDB myself.)

It seems I took a cue from many great adventures, though, and have given away the ending.  So I suppose it’s time to start at the beginning…

I did not start with nginx.  My adventure actually started with Apache, as that’s what I’ve had the most experience using.  I also did not go from Apache straight to nginx.  I had a brief fling with lighttpd.

So for proper documentation purposes, I plan to detail my adventure across several “volumes”.

I did mention that I started with the ending, but I think I’ll continue following the popular adventure recipe and consider the ending more of a new beginning…

Simple Child’s Stool or Bench

I wanted to create a simple child’s bench, but one that allowed a fabric storage cube to fit in the middle for storage.  They seem to be the perfect size for a toddler 18+ mos. old (and still good for a 2 1/2 yr old).

Here’s what I created…

[Image: childsBench]

Be sure to countersink the screws… Which I apparently did not notate.  I did not do anything fancy with the screws.  Just…

  • 3 into the legs from the top
  • 3 through the feet into the legs
  • 2 into the shelf through the legs

They can easily be stained… But I did not do this.  I’ll use the excuse that I like the look of crayons and other toddler markings on the wood and absolutely not because I was lazy.

They seem very sturdy at this point.  I feel like I can sit on them with no problem… And stand on them.  I’m not sure that, as designed, an adult should stand on them much, but it probably would not be difficult to slightly modify the design to reinforce them.

New rsync Remote, Incremental Backup Script – n2nBackup

With my Pi I took the opportunity to re-write my rsync backup scripts.

This new setup does everything my first attempt did, especially incremental backups via rsync’s “--link-dest” option, but I also believe it is more modular, even though I have not had the need to use all of its (perceived) capabilities… Or completed them all.
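
At its core, the incremental part boils down to rsync’s --link-dest behavior.  A stripped-down sketch (the paths are made up here… the real script builds them from the profile configs):

rsync -a --delete \
      --link-dest=/backups/daily.2013-05-06 \
      /source/data/  /backups/daily.2013-05-07/

Files that haven’t changed since the previous snapshot become hard links into it, so each “full” backup only costs the space of whatever actually changed.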

Some Basic Capabilities

  • Does incremental local or remote rsync’ing
  • Able to use private certs for authentication
  • Accounts for labeling backups daily or monthly
  • Uses rsync profile directories
  • (future) Allows pre & post command executions

Setup Structure

n2nBackup:
total 20
drwxrwxr-- 2 4096 May 7 22:10 logs
-rwxrwxr-- 1 6857 May 7 22:10 n2nRsync.sh
-rwxrwxr-- 1 245 May 7 22:10 n2nWrapper.sh
drwxrwxr-- 3 4096 May 7 22:11 profiles

n2nBackup/logs:
total 0

n2nBackup/profiles:
total 4
drwxr-xr-- 2 4096 May 7 22:10 template

n2nBackup/profiles/template:
total 24
-rw-r--r-- 1 367 May 7 22:10 dest.conf
-rw-r--r-- 1 78 May 7 22:10 excludes.conf
-rwxr-xr-- 1 21 May 7 22:10 n2nPost.sh
-rwxr-xr-- 1 21 May 7 22:10 n2nPre.sh
-rw-r--r-- 1 585 May 7 22:10 rsync.conf
-rw-r--r-- 1 126 May 7 22:10 src_dirs.conf

Script Usage

Usage: n2nRsync.sh [-p profile] [-c] [-l] [-t] [-h]

 -p profile              rsync profile to execute
 -c                      use when running via cron
                         when used, will output to log file
                         otherwise, defaults to stdout
 -l                      list profiles available
 -t                      runs with --dry-run enabled
 -h                      shows this usage info

Sample crontab Usage

00 00 * * * /dir/n2nBackup/n2nRsync.sh -p profName -c

Some Details

Right now I’m running everything with the n2nRsync.sh.  I have not implemented the n2nWrapper or pre & post command execution stuff.  In my previous backup script, that was run directly on my Synology NAS, I had a need for some pre-backup commands, but for whatever reason… Past bad coding… Ignorance… Synology quirks… Accessing the data via NFS now… The issues I had to work around are no longer being experienced.

I still need to create cleanup scripts that will age off data based on specified criteria.  Since this backup scheme relies on hard links, and thus takes up far less space than independent daily full backups would, my plan right now is to keep a minimum of 30 daily backups… And, since this new setup also labels one backup each month as “monthly”, the last 6 monthly backups… Which are really nothing more than differently named daily backups.

I may post the actual script text in the future, but for now I’ll just provide a tgz for download.

n2nBackup.tar.gz

Raspbian using SD card & USB thumb drive

Sub-Topic: Creating a backup image of your Raspbian SD card

Decided to try moving my actual linux partition from the standard SD card location to a USB memory stick.

I did this without fully understanding the benefits (or cons).  🙂

I knew people had done this previously with USB hard drives.  It’s possible this will reduce the likelihood of the SD card becoming corrupted.  I’m not sure if it provides any performance (read/write) improvements.

So in summary… I just did it… To do it.

Notes:

  • My primary resource for getting this done was: Raspbian on Raspberry Pi using SD card + USB memory stick
  • I primarily use a Win7 laptop, so I’ll be differing from the above link in the details, but the overall concepts remain the same.
  • When working with any images I used the Win32 Disk Imager program (v0.7).
  • I did all this after already having Raspbian working and configured with an SD card

Process:

  1. Determine the device name of the USB memory stick
    1. This can be accomplished via “dmesg” as seen at the above link.
    2. An alternative method is to “sudo tail -f /var/log/messages” prior to putting the memory stick in.  When you put the memory stick in log messages will appear similar to the “dmesg” format.
    3. My memory stick was at “/dev/sda”
    4. Remove the USB memory stick once done
  2. Create a backup image of your SD card
    1. This accomplishes a couple of things.
      1. Creates a nice backup of a working Raspbian image
      2. Makes it so that when you’re successful, the Raspbian image running off your USB stick is already configured and working immediately.  Yes… This was as awesome as it sounds.
    2. With Win32 Disk Imager
      1. Select the SD card drive letter
      2. Type in the name and full path for the backup image (e.g. c:\users\john doe\desktop\rasp_backup.img).  I had to type it in; bringing up the file window only allows selecting an existing .img file
      3. Click Read
      4. And wait…
  3. Write the SD card backup image to the thumb drive
    1. Again, using Win32 Disk Imager
      1. Select the thumb drive’s drive letter
      2. Select the backup img file you just created.  You can use the file dialog box this time
      3. Double check that you did select the thumb drive’s drive letter
      4. Click Write
      5. And wait…
  4. Prepare the boot SD card
    1. I had a smaller 256MB SD card lying around, so I used it for this purpose
    2. I used Window’s format capability to format the boot SD card
      1. File system:  FAT32 (this is not the default)
      2. Allocation unit size: 1024 (not sure if this is needed, but its what I did… Default was 2048, which probably would have worked)
  5. Copy boot files from USB memory stick to boot SD
    1. Open the USB memory stick via Windows Explorer
    2. Open the boot SD via Windows Explorer (in a separate window)
    3. Copy all displayed files from the USB memory stick to the boot SD
  6. Modify the boot files to point to the USB memory stick
    1. On the boot SD, open “cmdline.txt”  (notepad worked for me)
    2. There is only a single line in the file
    3. Change “root=/dev/mmcblk0p2” to “root=/dev/sda2”
      1. This tells Raspbian to look at the 2nd partition on the USB memory stick instead of the 2nd partition on the SD card
      2. Save.  There should be no issues saving with notepad since you’re editing the middle of the line, so no carriage return issues should occur… Possibly… I really don’t know in this case.  🙂
      3. It’s possible your SD card and USB memory stick have different device names, but the above is how it looked for me (see the example cmdline.txt after this list)
  7. Put the boot SD and USB thumb drive into your Pi and plug in the power
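
For reference, the entire cmdline.txt stays a single line and ends up looking something like this (the other options are whatever was already on your card and may differ… the only piece that changes is root=):

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/sda2 rootfstype=ext4 elevator=deadline rootwait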

After I did the above, everything worked for me the first time and with the configuration I had previously working with just an SD card.  Which means I did not have to move my headless Pi to reconnect it to a TV and keyboard and mouse to verify and configure.

Multiple Windows 7 Crashes (Possibly Resolved)

My Asus G73J laptop with Win7 has crashed 3 times in the last 24 hours.

  • I usually put it into hibernate over night, but left it awake and in a power saving mode to allow it to finish syncing with a server.  The next morning it was off.
  • Put it to sleep while not in use.  Came back and it was off.
  • Don’t recall the 3rd incident… Probably similar to the 2nd.

So now I’m trying to determine what’s happening and have come across some interesting tools and links.

NirSoft BlueScreenView

  • Got this to analyze the “.dmp” files being produced.  Gives me some info, but after just a few minutes of using it (and obviously becoming a quick expert), it does not appear to provide a root cause of my problem.
  • Looks like it can call a “DumpChk” utility.  Vaguely familiar with this.  I believe it’s part of a Windows SDK package

Windows SDK Package

  • Found the listed pages to help go further than BlueScreenView
    • http://mikemstech.blogspot.com/2011/11/how-to-troubleshoot-0x9f.html
    • http://mikemstech.blogspot.com/2011/11/windows-crash-dump-analysis.html
  • Installed the Windows SDK via the instructions on the 2nd link
    • I did get an error on my first install; it was resolved with this page: http://support.microsoft.com/kb/2717426
    • I had two 2010 versions installed.  I only uninstalled the newer one and it worked.
    • Not sure I need to reinstall yet though… Also, after a quick look I could not find it on MS’s website.
  • Now to keep following the first link’s instructions.
    • Instructions from the links above could have been better.  It looked like he was running a Linux command, but he was really in the WinDbg program already.
    • Found this link as helpful:  http://social.technet.microsoft.com/wiki/contents/articles/6302.windows-bugcheck-analysis.aspx
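
To save some future searching, the rough flow once a .dmp file is open in WinDbg is just a few debugger commands, typed at WinDbg’s own prompt rather than a regular command prompt:

.symfix        (point the symbol path at Microsoft’s public symbol server)
.reload        (reload symbols for the dump)
!analyze -v    (verbose analysis… this is what names the suspect module)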

Windows Autoruns

  • In my searching, I found this utility from MS.  Looks like it may be useful… Maybe not for this issue, but still useful.  It analyzes all “startup” programs.  One potential and interesting use is that it supposedly can tell you which programs are set to auto-run but no longer exist, cannot be found, or otherwise have a configured path that does not work…
  • http://technet.microsoft.com/en-us/sysinternals/bb963902

Possible Resolution

  • So my analysis basically circled back to a driver that I already knew about… I guess I learned some new stuff, so that’s good.
  • Everything pointed to my crashes being caused by:  NETw5s64.sys
    • Googling suggests this is an Intel Wireless driver; the advice was to make sure it’s updated, using this Intel page:
      • http://www.intel.com/p/en_US/support/detect?iid=dc_iduu
  • Now time will tell if this resolved my issue… I’m done with troubleshooting for the day. 🙂

Raspbian – NFS w/Synology

Synology stuff first…

  • Enabled NFS file sharing on my Synology
  • For each share on my Synology NAS, had to go into NFS Permissions and create a rule
    • Disabled “Asynchronous”

Then followed these instructions…

First, install the necessary packages. I use aptitude, but you can use apt-get instead if you prefer; just put apt-get where you see aptitude:

sudo aptitude install nfs-common portmap

(nfs-common may already be installed, but this will ensure that it installs if it isn’t already)

On the current version of Raspbian, rpcbind (part of the portmap package) does not start by default. This is presumably done to control memory consumption on our small systems. However, it isn’t very big and we need it to make an NFS mount. To enable it manually, so we can mount our directory immediately:

sudo service rpcbind start

To make rpcbind start automatically at boot:

sudo update-rc.d rpcbind enable

Now, let’s mount our share:

Make a directory for your mount. I want mine at /public, so:

mkdir /public

Mount manually to test:

sudo mount -t nfs 192.168.222.34:/public /public

  • (my server share path) (where I want it mounted locally)

Now, to make it permanent, you need to edit /etc/fstab (as sudo) to make the directory mount at boot. I added a line to the end of /etc/fstab:

192.168.222.34:/public /public nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

The rsize and wsize entries are not strictly necessary but I find they give me some extra speed.

 

Notes:

  • Everything basically needs to be done via sudo
  • With multiple shares, each needs to be mounted independently.  Such as…
    • /public/dir1
    • /public/dir2
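
For example, the mkdir and fstab entries for two such shares might look like this (the server-side paths are whatever your NAS actually exports… mine are just placeholders here):

sudo mkdir -p /public/dir1 /public/dir2

192.168.222.34:/dir1 /public/dir1 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0
192.168.222.34:/dir2 /public/dir2 nfs rsize=8192,wsize=8192,timeo=14,intr 0 0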

 

Source: http://www.raspbian.org/RaspbianFAQ

Raspbian – VNC

Installing VNC on the Pi

We’re going to use Tight VNC here (server on the Raspberry Pi and Viewer on Windows).

There’s an excellent tutorial over at Penguin Tutor if you need more information.

First of all install the Tight VNC Server from the command prompt:

sudo apt-get install tightvncserver

Let it finish installing (if you’re asked to confirm anything, just hit ‘y’ on the keyboard). When complete start the server:

vncserver

You’ll be asked to create a password, enter one and confirm. I used raspberry for ease of use, but probably not the most secure!

When asked to create a view only password, say No.

Every time you start VNC you’ll see something like:

New 'X' desktop is raspberrypi:1

Note the :1. This is the desktop session created. You can add more by running VNC again.
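
One extra command worth knowing (standard tightvncserver behavior): a session can be shut down by its number when you’re done with it.

vncserver -kill :1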

Head over to TightVNC on your Windows box and install the viewer.

 

Source: http://www.neil-black.co.uk/raspberry-pi-beginners-guide#.UTk0TDC9t8F

Raspbian – Misc To-Done

Misc things I’ve done to configure my Pi for my personal usage.

Most have to be done via “sudo”

  • Create a new user, separate from “pi”
    • Command:  ‘adduser’
    • Appears to be a Debian-specific command, different from the usual Linux ‘useradd’
  • Make new user’s primary group be “users”
    • Since I’m connecting to my Synology NAS over NFS, this allows any files I create as the new user to be part of a common group between the Pi and NAS
    • Command: ‘usermod -g users <newuser>’
  • As “pi” user, give new user ‘sudo’ access
    • Command: ‘visudo’
  • Create RSA key for authentication
    • Command: ‘ssh-keygen’
    • Be sure to keep your key safe and retrievable so that access is not lost… Don’t lose your key!
  • Add pub key to “~/.ssh/authorized_keys” file for new user
  • After achieving access via key authentication, disable SSH password authentication
    • Edit /etc/ssh/sshd_config
    • “PasswordAuthentication no”
  • Optional, specify SSH access for accounts
    • Edit /etc/ssh/sshd_config
    • At bottom of the file, add:
      • AllowUsers newUser1 newUser2
    • Good way to leave the default “pi” user “active”, but not directly accessible via SSH (see the combined config snippet after this list)
  • Changed hostname
    • nano /etc/hosts
    • nano /etc/hostname
    • reboot
  • Install rsync
    • aptitude install rsync
  • rsync backup script caused an error
    • Error: Too many open files
    • Testing solution: edit /etc/security/limits.conf
      • @users     hard     nofile     32768
  • Configure NTP to sync with NAS
    • Edit /etc/ntp.conf
    • Comment out existing lines starting with “server” that look like “server 0.debian.pool.ntp.org”
    • Add line like “server <nas IP>”
    • Save
    • service ntp restart
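
As a recap of the SSH and NTP edits above, the relevant lines in the two files end up looking roughly like this (user names and the NAS IP are placeholders):

/etc/ssh/sshd_config
PasswordAuthentication no
AllowUsers newUser1 newUser2

/etc/ntp.conf
# server 0.debian.pool.ntp.org
server 192.168.1.10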

To be continued…