btsync: How To Download Older Versions

I’m not sure how long these will work, but here are sample links.  They’re based on released versions.

I’m going to give the new v2.0 a try, but since they are now trying to monetize that version… and with a yearly license too!!!!  🙁    …I’m afraid it will limit my options.

P.S.  I would even consider paying for btsync if it were a one-time deal… But not yearly.

Fail: Burris Fastfire III

I wanted to like the Burris Fastfire III.

I did like the sight.

It was at a pretty good price point for me. It felt nice in the hand and was much smaller than I expected based on the picture. I have a TacSol Picatinny rail on my Browning Buckmark.  And it fit perfectly.

I had high hopes.

I’m guessing it was not properly calibrated at the factory. It was WAY off zero from the start and the available adjustment could not compensate. It was still 3+ inches high at the point that it could no longer be adjusted.

Now I’m returning it.

Very sad.

G.P.S. Handgunner Backpack, Meet Kaizen Foam

I decided to get a new range bag, and after a bit of searching and uncertainty, ordered the GPS Handgunner Backpack.  (And of course its price has dropped since I last looked!)

GPS Handgunner Backpack

It has really good reviews and has pockets for all the basics.  The one part I was not sure about was whether I’d really be able to get my handguns into the compartment and its (very sturdy feeling) foam tray comfortably.

GPS Foam Tray

And once I got it, I found that my concerns were justified.  5″+ guns do not fit well.  And add in an optical sight (even a relatively small one) or the extra width of a revolver’s cylinder and things just did not fit.

But I decided to try another solution before returning the bag.  It won’t fit 4 handguns as the bag was designed to, but I think I’ll ultimately be able to squeeze in 3 (is that a good excuse to buy another gun?), and it looks like it’s going to work out well.

I bought some Kaizen foam, cut it into rectangles for the guns to sit horizontally (instead of vertically) and then cut out space for each gun.

Just finished all that.  I haven’t taken it to the range to give things a real whirl yet, but I’m happy with how things turned out.

The one thing I still want to do is glue a very thin piece of wood to the bottom of each piece of foam.  I think that could be good for taking the foam blocks out and setting them down on things at the range.

My 3 day weekend can now be considered successful.  😉

The Idea: Garage Storage Re-Do

Not long after we moved into our home we realized that shelving was needed for the garage.  So we put up some very simple, but very sturdy shelving.  It needed to hold way too many plastic tubs as well as whatever else we could fit up there.

They were good and have served us well and would continue to serve us well, but we’re of the belief that we can do better.  Especially since we barely have enough room to walk through… usually.

And so the research started.  There are many options out there and I was almost leaning toward simple metal free-standing shelves.  Most of the wall mount options either did not give a good estimate for how much weight they could hold or they were not deep enough.

Then I came across a DIY plan for some garage shelving and since it includes sliding doors, of course my significant other voted for the option of being able to hide things.

Source:  The Family Handyman

The design described and illustrated was very close to what we would need and so we moved out of the research phase and into the design.

It was close; 90% of the design is exactly what we need, but ours does change a couple of things beyond some simple dimension adjustments.

  • Depth is increased by about 6″.  This will allow more tub storage, as the tubs can be stored depth-wise.
  • Not wall mounted.  Building on top of cap blocks has a couple of benefits.
    1. Moves the weight from the wall/studs to the floor
    2. Allows even the top of the unit to be used for storage
  • (Possible) Plywood backing

And so, here are the initial drawings that I’m planning to use…

(Initial drawings: garage_cabinet_plans_1, _2, _3, _4 and _7)

Issue: Chrome 37 Update Looks Bigger

Normally I’m a big fan of Chrome.  And its automatic updates are also normally pretty harmless.

Then there was Chrome 37 and it made me think I was crazy.  It just looked wrong.  I thought my computer’s resolution somehow got messed up, but I checked and it was good.

And then I did a little googling and came across the answer!

“XXXXXXXX\chrome.exe” /high-dpi-support=1 /force-device-scale-factor=1

After making the update to my Chrome shortcut and re-pinning it to my taskbar in Win7, everything went back to normal, which is all that matters in the end.

Hopefully this is not a permanent change from Google.

SSH, VNC & Tunnelling (updated)

I’ve been meaning to implement what turned out to be a simple security improvement for connecting to my Pi over VNC.

Specifically, tunnelling my VNC connection over an SSH connection using Putty.

Prior to this change I used my router to forward an external non-standard port to the standard VNC port used internally. In addition, I was only running my VNC service as needed and strictly killing it once it was no longer needed, thus limiting external exposure.

Now, if someone gets access to my system via SSH, I have far bigger issues to be concerned with, so while killing the service to reduce resource usage may be a good practice, it’s not the same level of concern.

Now, not only do I get to remove a port forwarding rule from my router, but my entire VNC session is encrypted.

The Method (minus Madness)

Starting with a Putty profile that I can already use to establish a successful SSH connection…

  1. Load the “Saved Session”
  2. Modify “Saved Session” name (e.g. add “- vnc” to the end) and save
  3. Category -> Connection -> SSH -> Tunnels
    1. Source Port: Set to an open local port (e.g. 5901)
    2. Destination:  Set to the VNC server’s address and port (e.g. localhost:5901)
    3. Click “Add”
  4. Go back to the “Saved Sessions” section and save again
  5. Open the VNC saved session in Putty

Now you should be able to connect to the VNC server with your VNC Viewer by going to “localhost:1” (change as appropriate if not using the default settings).
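
For anyone who’d rather skip the Putty GUI, the same tunnel can be built with a plain OpenSSH client.  A minimal sketch, assuming the Pi is reachable as pi@raspberrypi (user and hostname are placeholders):

ssh -N -L 5901:localhost:5901 pi@raspberrypi   # -L localPort:destHost:destPort, -N = no remote command

With that running, the VNC Viewer still points at “localhost:1”.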

Source: http://martybugs.net/smoothwall/puttyvnc.cgi

Update: Connecting to a separate Windows box using Windows Remote Desktop

After successfully doing the above, I did not see a reason why it would not work similarly for a host other than ‘localhost’.

And I was right, with the above instructions only changing slightly.

  • I set the Source Port to “3388”
  • And for the Destination, I set the value to the remote machine’s IP & port (e.g. 10.0.10.10:3389)

Since they do not conflict, I was even able to add this forwarded port to the same profile as my VNC tunneling.

Once configured, I only have to connect to “localhost:3388” with Windows Remote Desktop.

Note:  Port 3389 is the standard Windows Remote Desktop port.
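
For completeness, the OpenSSH sketch of this forward (the user, hostname and IP are placeholders, matching the examples above):

ssh -N -L 3388:10.0.10.10:3389 pi@raspberrypi   # forward local 3388 to the Windows box’s RDP port

mstsc /v:localhost:3388   # then launch Remote Desktop against the tunnel from a command prompt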

Synology, Raspbian, NFS and Nobody…

It took me a couple of weeks (or months) to… not fix a problem… but to even discover that I had a problem.

Once I discovered that I even had a problem… And I have a feeling this issue has been causing me a number of permission annoyances… I was able to use some “Google-fu” and resolve it in less than an hour.

With the actual fix taking < 5 mins.

Problem

From one of my Raspberry Pis running Raspbian I was seeing files with weird ownership info… Specifically, files were owned by “nobody”.  I suppose I originally considered this a quirk of NFS… But then I noticed that the same thing was not happening on my other Pi.

The problem presented itself via many annoying “access denied” or “operation not permitted” failure messages when working in NFS mounted directories.

The final straw was when I could not run dos2unix successfully on a text file.  Such a simple operation… Very interesting, especially when I stopped watching TV and paid more attention to the error message.

dos2unix: Failed to change the owner and group of temporary output file ./d2utmpte0dzV: Operation not permitted

Resolution

The fix is very simple… Force my clients to connect via NFSv3 instead of v4.  (The short version of why: NFSv4 maps file owners by name via idmapd, and when the client and server don’t agree on the idmap domain, ownership falls back to “nobody”.  NFSv3 just uses the numeric UIDs/GIDs.)

In /etc/fstab I added the option “nfsvers=3” to all my mounts.

192.168.222.34:/public /public nfs nfsvers=3,rsize=8192,wsize=8192,timeo=14,intr 0 0

Once added and re-mounted… Either manually or via a quick and easy reboot… Things started working as expected.
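
For the manual route, a re-mount is just an unmount followed by a mount, since mount re-reads /etc/fstab.  Using my /public mount as the example:

sudo umount /public
sudo mount /public   # picks up the new nfsvers=3 option from /etc/fstab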

Additional Thoughts

This issue was the result of new behavior.  I recently updated my Synology DSM from v4.2 to v4.3, and I have a feeling that this enabled NFSv4 on my Synology NAS.  This could explain why I was seeing things differently between my Pis… I’ve been working on and rebooting one often lately, but the other has been up for many weeks, and once an NFS mount is established I’m pretty sure the version does not change.

I believe I saw bug reports and issues along this same vein related to NFSv4 that were not Synology related… So this issue may not be specific to Synology’s NFS support / implementation.

During my troubleshooting I had a need to confirm that the NFS version could be the issue… So I obviously needed to confirm what version of the NFS protocol was being used.  The quick and easy command to determine that is “nfsstat”.
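
Run on the client, it lists each NFS mount along with the options in effect, including the protocol version:

nfsstat -m   # shows each mount point and its options (look for vers=3 or vers=4)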

Volume 3: nginx: A Successful Foundation

(This is part of My ownCloud Adventure)

The Final Fron… Errrr… Wrong adventure.

In the end I have chosen to utilize Nginx to host my personal ownCloud… At least until I have a reason to change my mind, probably for an arbitrary or subjective reason.

Nginx is rather new to me, but generally speaking, remembering how to pronounce its name was more difficult than getting things working.

Just like getting lighttpd working, I took advantage of Win32 Disk Imager to write an image to the SD card with all of the prep work covered by the Preface completed.

  1.  sudo apt-get update
  2. sudo apt-get install nginx php5-cgi php5-fpm curl libcurl3 php5-curl php5-common php5-gd php-xml-serializer php5-mysql
  3. cd /etc/nginx/sites-available
  4. sudo nano siteName-ssl
    1. copy/paste the Nginx config from ownCloud’s docs (a condensed sketch of the end result follows this list)
    2. Edits
      • “root /var/www/owncloud”
      • Find:  “location ~ ^/(data|config|\.ht|db_structure\.xml|README)”
        • Change to (Thanks Dave!):
          • “location ~ ^.*/?(data|config|\.ht|db_structure\.xml|README)”
      • comment out:  fastcgi_pass 127.0.0.1:9000;
      • uncomment:  fastcgi_pass unix:/var/run/php5-fpm.sock;
    3. save and exit
  5. cd ../sites-enabled
  6. sudo rm default
  7. sudo ln -s ../sites-available/siteName-ssl
  8. sudo service nginx restart
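
For orientation, here is a condensed sketch of what my siteName-ssl file ends up containing after those edits.  It is nowhere near the complete config from ownCloud’s docs, and the server name and certificate paths are placeholders to swap for your own:

server {
    listen 443 ssl;
    server_name server.ip.or.fqdn;                   # placeholder
    ssl_certificate /etc/nginx/owncloud.crt;         # placeholder cert paths
    ssl_certificate_key /etc/nginx/owncloud.key;

    root /var/www/owncloud;
    index index.php;

    # the modified deny rule (thanks Dave!)
    location ~ ^.*/?(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fastcgi_pass 127.0.0.1:9000;               # commented out
        fastcgi_pass unix:/var/run/php5-fpm.sock;    # uncommented
    }
}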

At this point a choice needs to be made.  One can continue installing the php-apc module, or one can go ahead and do the initial ownCloud configuration… Getting things to work and experiencing the load times as-is… Providing a great before-and-after comparison to fully appreciate what php-apc accomplishes.

PHP-APC

Regardless of which choice is made, the php-apc module is a required addition… And a very quick install.

  1. sudo apt-get install php-apc
  2. sudo service php5-fpm restart

And it is now ready and working!

ownCloud Initial Configuration

The first time you bring up ownCloud, which with the above steps can be found at https://server.ip.or.fqdn/owncloud, you are presented with a pretty simple initial configuration page.

For anyone following this path, I believe it’s pretty self-explanatory.  It only needs a few sets of information…

  • Initial username & password
  • ownCloud’s data directory
  • Database details

Some details may be hidden via an Advanced Options link.

The data directory configuration item is the one that may need the most consideration depending on circumstances.  At this early point, with little experience, I did not have a need to divert from the default.

A Little Tuning

There are a couple of settings that I would recommend changing.

So let’s go into the ownCloud Admin section.

The most important configuration change to make is under the “Cron” section.  While the default is set to “Ajax”, it’s recommended to use the “Cron” option.

Configuring the cron is not difficult, but should be done under the default webserver user.  (ownCloud’s cron doc)

  1. sudo -u www-data crontab -e
  2. Add to the bottom:  */15 * * * * php -f /var/www/owncloud/cron.php
  3. save and exit
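
To double-check that the entry landed under the right user, you can list that user’s crontab:

sudo -u www-data crontab -l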

Beyond the cron configuration, you can also check “Enforce HTTPS”.  While the webserver should be doing this task, a little backup probably won’t hurt.

App Thoughts

External storage support:  Probably one of the most important available apps.  This lets you connect to Dropbox, Google Drive or other accounts.  It also lets you define local storage directories, such as NFS mounts.

ownCloud Dependencies Info:  I do not believe this one is normally needed, but it’s a good tool to check that you have the appropriate dependencies.

Unfortunately I had to disable the Pictures app at this time since it is not working properly for me. I hope it’ll be resolved in the future.

 

And with that… For now… Until next time… The End


Volume 2: lighttpd: An Easy Fling

(This is part of My ownCloud Adventure)

While my initial foray into ownCloud was successful… Even if I did not see it all the way through…  I felt I needed to investigate a solution that would be better suited for the limited hardware of my Raspberry Pi.

The path chosen was a brief dalliance with lighttpd.

Taking advantage of Win32 Disk Imager, I was able to write an image to the SD card with all of the prep work covered in the Preface already completed.  This did save a good bit of time.

  1. sudo apt-get update
  2. sudo apt-get install lighttpd php5-cgi curl libcurl3 php5-curl php5-common php5-gd php-xml-serializer php5-mysql
  3. cd /etc/php5/cgi
  4. sudo nano php.ini
    • uncomment:  “cgi.fix_pathinfo = 1”
    • I’m not actually sure this is required; a re-read of the description says that “1” is now the default value
  5. cd /etc/lighttpd
  6. sudo lighty-enable-mod fastcgi-php
  7. cd conf-available
  8. sudo cp 10-expire.conf 10-expire-myHost.conf
  9. sudo nano 10-expire-myHost.conf
    • Append
      • $HTTP["url"] =~ "(css|js|png|jpg|ico|gif)$" {
          expire.url = ( "" => "access 7 days" )
        }
      • etag.use-inode = "enable"
      • etag.use-mtime = "enable"
      • etag.use-size = "enable"
      • static-file.etags = "enable"
  10. sudo service lighttpd restart

And with that, the ownCloud first-run webpage should be accessible at https://server.ip.or.fqdn/owncloud, which leaves things in basically the same spot as the end of Volume 1.
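
If you want to confirm the expire and etag settings actually took effect, peek at the response headers for a static file (the URL here is purely illustrative):

curl -kI https://server.ip.or.fqdn/owncloud/core/img/logo.png   # -I = headers only, -k = accept a self-signed cert

and look for the Expires and ETag lines in the output.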

After describing everything needed to get lighttpd up and running with ownCloud, I’m hoping a detail was not left out… It does appear to live up to the “Easy Fling” description though.

It’s possible this would have been the end of my adventure, except for one detail.  In the Preface I mentioned security… And while I have no particular knowledge or specific security concern that makes me doubt lighttpd… I was a little put off by its last update being over 9 months ago, in November 2012.

And similarly, while I cannot speak to nginx’s security posture being better than lighttpd’s, it appears to be more actively maintained… Not to mention its quickly growing popularity.

Which leads us to… Volume 3:  nginx:  A Successful Foundation
