All posts by Daniel Karni

Creating a Permanent SSH Tunnel from Linux

I often find myself having to create long-running SSH tunnels from one machine to another to maintain access to whatever service I happen to need.

The following script is heavily based on this article by Brandon Checketts (the general script structure and the auth setup required for this to work) and this Stack Exchange answer by Chuss (checking the tunnel itself instead of relying on an additional SSH connection).

In my use case I wanted to maintain tunnels from my development box sitting at home to DBs in the office: both an Oracle and a MySQL connection. I also wanted to be able to duplicate this for whatever other tunnel I might need without having to edit the script.

#!/bin/bash

LOCAL_TUNNEL_PORT=$1
REMOTE_TUNNEL_PORT=$2
REMOTE_TUNNEL_SERVER=$3
SSH_SERVER=$4
SSH_USER=$5

createTunnel() {
    /usr/bin/ssh -f -N -L "$LOCAL_TUNNEL_PORT:$REMOTE_TUNNEL_SERVER:$REMOTE_TUNNEL_PORT" "$SSH_USER@$SSH_SERVER"
    rc=$?

    if [[ $rc -eq 0 ]]; then
#        echo Tunnel to $REMOTE_TUNNEL_SERVER created successfully
        exit
    else
        echo "An error occurred creating a tunnel to $REMOTE_TUNNEL_SERVER. RC was $rc"
    fi
}

## Check if the tunnel is already active by opening its local port; if that fails, create a new connection
r=$(bash -c "exec 3<> /dev/tcp/localhost/$LOCAL_TUNNEL_PORT; echo \$?" 2>/dev/null)
if [ "$r" = "0" ]; then
    exit
else
    echo "Creating new tunnel connection"
    createTunnel
fi

In my crontab I added the following lines:

* * * * * /usr/local/scripts/maintain_ssh_tunnel.sh 1521 1521 oracle-db.my-office.com ssh-server.my-office.com root
* * * * * /usr/local/scripts/maintain_ssh_tunnel.sh 3306 3306 mysql-db.my-office.com ssh-server.my-office.com root

These re-run every minute, checking whether each tunnel is still up and reconnecting if it isn’t.
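
For the tunnels to come up unattended, the SSH connection has to authenticate without a password prompt, as covered by the auth setup in Brandon’s article. A minimal sketch, assuming the cron user’s default key location and the same SSH server as in the crontab above:

# Generate a key pair with an empty passphrase so cron can connect non-interactively
ssh-keygen -t rsa -N ""

# Push the public key to the SSH server the tunnels go through
ssh-copy-id root@ssh-server.my-office.com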

Wriggling out of Google’s embrace – part 1

Google is watching you

I’ve been using Google’s various services as much as the next person, but around the time I decided to buy my FairPhone, I also realized I didn’t want gapps (the base package of Google apps, including your Google account on Android, the Play Store and Gmail) on my new phone and that I’d have to find some alternatives.

There are many good reasons to limit your exposure to Google. If you’re reading this, you’ve probably already decided it’s worth thinking about, so I’ll focus on the how, rather than the why.

First of all, my goal isn’t to stop using Google, but to limit its access to me, especially on my phone. Using gapps, Google has access to practically every aspect of my phone’s state. The first step was to not install gapps on my new FairPhone. Thankfully, gapps isn’t preinstalled on the FairPhone’s default ROM.

Based on several articles, with varying degrees of OSS purism, I’ve set up the following:

  • App Repository: 1Mobile Market
    • 1Mobile Market contains the vast majority of free Android apps, but doesn’t make any promises with regard to compatibility the way Google Play does.
    • It also doesn’t let you buy the pro versions of many apps (like Titanium Backup).
    • It does manage version updates and manages your APKs reasonably well.
    • F-Droid is another alternative, but it focuses solely on OSS free software (as in both beer and speech). I find it a bit limiting in its purism, but it’s still a very useful resource, listing apps’ anti-features (example).
  • Backup: Titanium Backup Pro (on 1Mobile Market)
    • Since I’m not using Google Play, I bought the license for the pro version directly off the author’s website and paid with PayPal. Expect about a 30-minute wait between payment and receiving the license file by email.
  • Contacts: Fruux (on 1Mobile Market)

    • Export the Google contacts from your phone (exporting from the website doesn’t include the avatars) to a VCF file, which you can then upload to Fruux.
    • I had a bit of trouble with DOB records, which I had to manually remove from the file with a text editor (Notepad++ FTW).
    • After merging all my contact info manually on the Fruux website (I exported all contacts, so many of them had duplicates from Skype and other accounts), I had a decent contact list, ready to be used on my phone.
    • Note that the VCF export on the phone uses low-quality avatar images. I haven’t figured out how to export these correctly, so I replaced the LQ images with better ones for the contacts I cared enough about (close friends and family) on my phone. The LQ avatars look just as good as the originals as thumbnails.
  • Calendar: Fruux (on 1Mobile Market)

    • Export your Google calendar to a file and import it on Fruux. Nothing much to it.
    • Right now Fruux doesn’t support consuming CalDAV calendars from other providers, like your usual religious and national holiday calendars, but you can import them into Google and export them as a file, so it should be good enough for the upcoming year or so, until they hopefully release that feature.
  • IM: Xabber (on 1Mobile Market)

    • OSS XMPP/Jabber client, simple and to the point.
    • Allows multiple accounts
  • Google Maps (POIs and public transit navigation)
    • You can actually use Google Maps even without a Google account set up on your phone, and I do. It simply has the best POIs for traveling abroad, and coupled with public transit navigation it’s really hard to beat.
    • Moovit (on 1Mobile Market) claims to be a viable alternative for public transit navigation, but doesn’t seem to have predefined POIs or the ability to save your own POIs.
  • Google Maps Engine (My Maps): oruxmaps (on 1Mobile Market)
    • I like planning trips abroad in advance, mapping out points of interest like restaurants, museums and such.
    • This is a real treat, full featured map application able to hook onto openstreetmap and other sources.
    • For Google maps with full-featured navigation, use this hack; you can also find the onlinemapsources.xml file here.
    • You can easily export maps from Google’s My Maps into KML file format, which oruxmaps supports. It’s not as dynamic as what you’d get using native Google My Maps (save on desktop, load on mobile), but it’s definitely close enough.
  • Other apps I use (links to 1Mobile Market):

Part 2 of this post will focus on migrating away from Gmail and GoogleTalk (now the insufferable “Hangouts”).

Automatically pin and tag builds in TeamCity

I’m using TeamCity for both building and deploying my artifacts to the different environments (DEV, CI, PROD, etc…).
I’m also using the daily Build History Clean-up feature to conserve disk space.
This means I might end up deploying to QA for manual testing, and by the time the tests are done (in some cases this can take over a couple of days), the clean-up may have already removed the release candidate build (and its artifacts) from disk, forcing a compromise between uploading a release that’s not fully tested and delaying the release on account of unplanned testing.

The solution is build pinning, which is equivalent to Jenkins’ “keep this build forever” button.

But you can’t ask a build configuration to pin your build automatically on success, and in my case I want to pin the build I’m depending on (I have one build configuration that builds the artifacts and other build configurations that deploy those artifacts to the various environments).

It’s pretty simple to do with the TeamCity REST API (on a Linux box with curl).

buildmanager is a user I’ve created to make it clear that the pinning or tagging was done automatically; otherwise it would appear to have been done by whoever ran the build.

Pin this build:

curl -v --basic --user buildmanager:buildmanager --request PUT "http://localhost/TeamCity/httpAuth/app/rest/builds/id:%teamcity.build.id%/pin/"

Pin the BuildArtifacts build, on which I’m depending:

curl -v --basic --user buildmanager:buildmanager --request PUT "http://localhost/TeamCity/httpAuth/app/rest/builds/id:%dep.BuildArtifacts.teamcity.build.id%/pin/"

Also tag it (%environment.name% is a configuration parameter I set on my build configurations that deploy to use the environment name in scripts):

curl -v --basic --user buildmanager:buildmanager --request POST --header "Content-Type: application/xml" --data '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><tags><tag>%environment.name%</tag></tags>' "http://localhost/TeamCity/httpAuth/app/rest/builds/id:%dep.BuildArtifacts.teamcity.build.id%/tags/"

I’ve simply used these one-liners as custom scripts in a command line type build step at the end of my deployment build configurations.
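
For reference, here is a rough sketch of what such a build step can look like, assuming the same buildmanager credentials and parameter names as above (the --fail flag is my addition; it makes curl exit non-zero on an HTTP error so the build step fails visibly if pinning or tagging didn’t work):

#!/bin/bash
set -e  # abort the build step if any REST call fails

# Base REST URL of the BuildArtifacts build this deployment depends on
BASE="http://localhost/TeamCity/httpAuth/app/rest/builds/id:%dep.BuildArtifacts.teamcity.build.id%"

# Pin the artifact build so the daily clean-up won't remove it
curl --fail --basic --user buildmanager:buildmanager --request PUT "$BASE/pin/"

# Tag it with the environment this configuration deploys to
curl --fail --basic --user buildmanager:buildmanager --request POST \
     --header "Content-Type: application/xml" \
     --data '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><tags><tag>%environment.name%</tag></tags>' \
     "$BASE/tags/"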


Formatted Date Parameter – A Plugin for TeamCity

I’ve been using TeamCity for a short while now at my new job. It seemed like there’s nothing it can’t do for me.

I decided I wanted to get a timestamp generated and incorporated into the build number. However, that turned out to be a bit more difficult than I thought.

A quick search shows a couple of options:

  • Date Build Number Plugin
    • It sets the entire build number to the timestamp, so you can’t put anything else there, like the VCS revision, build counter or Maven version number.
    • Installation is kinda funky, requiring you to fiddle with TC configuration files.
  • Groovy Plug
    • This plugin provides the build start timestamp as a parameter, which is great
    • The timestamp format isn’t easily configurable
    • Installation is again non-standard
    • The plugin notes say it’s a bit of a memory hog, and it was meant as a demo anyway

So I just wrote my own.

Formatted Date Parameter provides a configuration parameter (named build.formatted.timestamp), which during the build will contain the build start timestamp. The format is ISO 8601 by default (“yyyy-MM-dd’T’HH:mmZ”) and can be changed through another configuration parameter (named build.timestamp.format), which uses standard SimpleDateFormat syntax.

Installation:

Just like the guide says

  • Copy the zip (see link at bottom of page) to the .BuildServer/plugins dir (for me it was /home/username/.BuildServer/plugins)
  • Restart the Tomcat instance your TeamCity WAR is deployed to.


Usage:

In my case, I wanted the week-in-year (01-52) to appear in the build number, so I

  1. Added build.formatted.timestamp to the build number format field (build configuration -> general settings page)

    Using the %build.formatted.timestamp% parameter in the build number field

  2. Then I set the value of build.timestamp.format to show only the week number in 2 digits “ww” (build configuration -> build parameters page -> Add configuration parameter button)

    Setting the timestamp format

  3. Now my build number contains the current week number. Hurray.


FormattedDateParameter v1.1 plugin for TeamCity (download) (source on github)

My top 10 JavaOne 2013 talks


JavaOne 2013 was pretty cool and the sessions just became available online.

Here are my top 10 sessions:

  1. Jim Manico – Top 10 Web Application Defenses for Java Developers
  2. Matt Raible – The Modern Java Web Developer
  3. Kevin Nilson – Seeing Through the Clouds
  4. Ian Robertson – The Science and Art of Backward Compatibility
  5. Nicolas De Loof – Cloud Patterns
  6. Peter Hendriks – Practices and Tools for Building Better APIs
  7. Simon Maple – The Adventurous Developer’s Guide to Application Servers
  8. Ram Lakshmanan – Seven Secrets of Wells Fargo SOA Platform’s 99.99 Percent Availability
  9. Nikita Salnikov-Tarnovski – I Bet You Have a Memory Leak in Your Application
  10. Simon Maple – The Adventurous Developer’s Guide to JVM Languages


Wake up your computers after a power outage with a Raspberry Pi


Even a snazzy networked NUT setup, covering all your Windows and Linux boxes at home and making sure they all shut down when your overkill UPS goes into the Low Battery state, can seem like a waste of time if you have to come back home and start everything manually before your batcave is back online.

Enter the Raspberry Pi: a Linux box that will start booting the moment your power comes back on, assuming you’ve connected it directly to the outlet and not through your UPS. All that’s left is to wake the rest of your boxes once the Pi has booted.

In my case I wrote a short script and put it in /usr/local/scripts/wakeEmUp.sh

#!/bin/sh

sleep 600

echo "Waking up everybody"
wakeonlan -f /etc/homeLan.wol

The sleep in there makes sure you won’t start your other computers before a full 10 minutes have passed after the Pi finished booting. This helps prevent “flapping” if the power is still unstable.

/etc/homeLan.wol is a file in the following format:

#
# This an example of a text file containing hardware addresses
#
# File structure
# --------------
# - blank lines are ignored
# - comment lines are ignored (lines starting with a hash mark '#')
# - other lines are considered valid records and can have 3 columns:
#
#       Hardware address, IP address, destination port
#
#   the last two are optional, in which case the following defaults
#   are used:
#
#       IP address: 255.255.255.255 (the limited broadcast address)
#       port:       9 (the discard port)
#

#Computer1
00:11:22:33:44:55

#Computer2
AA:BB:CC:DD:EE:FF
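
The wakeonlan command itself comes from the Debian package of the same name, so if it isn’t already on your Pi (an assumption about your Raspbian image), install it first:

sudo apt-get install wakeonlan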

Finally, we need to actually run this when the Pi boots; this can easily be done by adding the invocation to /etc/rc.local:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi

/usr/local/scripts/wakeEmUp.sh &

exit 0
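
One small thing that is easy to forget: the script needs to be executable for rc.local to be able to run it (adjust the path if yours differs):

chmod +x /usr/local/scripts/wakeEmUp.sh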

UPS monitoring with NUT, Nagios and Check_MK

Nagios, Check_MK and NUT

I have two UPS units at home, powering my batcave (which will get its own post sometime), and I wanted to monitor them in Nagios through NUT. Check_MK’s agent architecture really appeals to me, so I naturally wanted to leverage it for the solution.

Basically, the Check_MK agent is a small bash script that dumps a whole lot of plain-text info to STDOUT. This output is usually exposed over the network using xinetd, so when you open a socket to the port (6556 by default), with a telnet command for example, you get the whole output dumped right out.
This means that adding output is very simple, especially as the agent will try to run any executable file in its plugins dir (just read the agent script to see where that is on your installation). Creating the following script in the plugins dir simply adds its output to the text dumped by the agent whenever it’s invoked.

#!/bin/sh
if which upsc > /dev/null 2>&1 ; then
    echo '<<<nut>>>'
    for ups in $(upsc -l)
    do
         upsc "$ups" | sed "s,^,$ups ,"
    done
fi
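
To verify that the agent picks the plugin up, you can query the agent directly from another machine, assuming the default port 6556 and with my-nut-host standing in for the monitored box:

telnet my-nut-host 6556
# or, non-interactively:
nc my-nut-host 6556 | grep -A 5 '<<<nut>>>'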

On my box, the output looks like this:

<<<nut>>>
VT650 battery.voltage: 14.10
VT650 battery.voltage.high: -1.08
VT650 battery.voltage.low: -0.87
VT650 device.type: ups
VT650 driver.name: blazer_usb
VT650 driver.parameter.bus: 005
VT650 driver.parameter.pollinterval: 2
VT650 driver.parameter.port: auto
VT650 driver.version: 2.6.4
VT650 driver.version.internal: 0.08
VT650 input.frequency: 49.9
VT650 input.voltage: 237.3
VT650 input.voltage.fault: 140.0
VT650 output.voltage: 238.3
VT650 ups.beeper.status: enabled
VT650 ups.delay.shutdown: 30
VT650 ups.delay.start: 180
VT650 ups.load: 25
VT650 ups.productid: 0000
VT650 ups.status: OL
VT650 ups.temperature: 30.0
VT650 ups.type: offline / line interactive
VT650 ups.vendorid: ffff
JP2000 battery.voltage: 28.10
JP2000 battery.voltage.high: -1.08
JP2000 battery.voltage.low: -0.87
JP2000 device.type: ups
JP2000 driver.name: blazer_usb
JP2000 driver.parameter.bus: 004
JP2000 driver.parameter.pollinterval: 2
JP2000 driver.parameter.port: auto
JP2000 driver.version: 2.6.4
JP2000 driver.version.internal: 0.08
JP2000 input.frequency: 49.9
JP2000 input.voltage: 231.8
JP2000 input.voltage.fault: 150.0
JP2000 output.voltage: 235.2
JP2000 ups.beeper.status: enabled
JP2000 ups.delay.shutdown: 30
JP2000 ups.delay.start: 180
JP2000 ups.load: 20
JP2000 ups.productid: 0000
JP2000 ups.status: OL
JP2000 ups.temperature: 30.0
JP2000 ups.type: offline / line interactive
JP2000 ups.vendorid: ffff

I’ll go into further detail on my NUT setup in a later post.

Now we have the data, which is great, but Check_MK can’t make heads or tails of it; we need to parse it on the server side. Since Check_MK uses a Python API, we have to use Python to parse our data. The parser should go in your Check_MK server’s checks directory.

Since the Python script itself is a bit on the long side, you can find it in my Check_MK plugin repo on GitHub. This script was created for Check_MK v1.1.12p7, but if you’re interested in adapting it to the latest (and more elegant) API, you’re welcome to fork it on GitHub or ask me to do it.

The end result is quite pleasing and allows me to easily track my UPS load, get live email alerts about power outages at home and have all this data collected into RRDtool graphs through PNP4Nagios.

A green metric is a happy metric.

Low load percentages will allow your UPS to keep your equipment up longer

Making an audio mixing box

A few months ago, I got a new desktop and handed down my previous desktop to my wife as a gaming rig to complement her day-to-day laptop.
Ending up with two computers, but with the same single set of speakers we had before, raised the question of how to connect the audio from both computers to the speakers.

One’s immediate instinct is to just connect all the wires and let ‘er rip. However, that’s a very bad idea.

Your science is bad and you should feel bad!

This post explains exactly why this is a bad idea and describes a simple mixer circuit that sums together two inputs into one output. However, it doesn’t allow any mixing control.

Simple, useful, yet lacking.

In case you’re not familiar with circuits and whatnots, check out sparkfun.com for some tutorial goodness.

This doesn’t deal with a very common situation where the volumes on the two inputs aren’t set to comparable levels, causing the output to effectively feel one-sided. With no means to control the mixing ratio, I felt it wasn’t good enough. Additionally, if one of the inputs is connected to a computer that’s shut down at the moment, it can add annoying static noise, so an option to temporarily disconnect one of the inputs would be nice.

Complete with switches and double pots.

My design required a few specific components. For example, the two pots are actually a double potentiometer (two pots on a shared axis) affecting both the left and right channels with the same intensity. The switches are also double switches (123circuits only had triple switches), so both U1_S1 and U1_S2 should connect and disconnect together without actually touching one another electrically (crossing the streams, sword-fighting if you will).

It took me a while to collect everything I needed, including buying solder for the first time in like a decade. But last week I was finally ready to build.

Here’s the final result:

A very “sound” design

Using it is as intuitive as pointing the knob in the direction of the input you want to be more dominant. I skipped the switches, as the ones I had from dx.com were a bit flimsy and messed with the audio a bit. I will probably post an update once those are added.

Plugs go here.

The actual inputs are aligned with the labels and directionality of the pot, so it should all just “make sense” to the user. Output is in the middle.

A quick look inside

This was my first soldering work in over a decade so I won’t be posting any close ups of that poor poor PCB.