11 Mar 2014, 17:50

Installing Ruby with RVM on Archlinux

I’ve started doing some Ruby development. The commonly used platform among the development team(s) tends to be OS X, Ubuntu, or Mint. I do not fall into that category.

Getting RVM to install Ruby under Archlinux was fraught with issues. Most of them were trivial and just a matter of getting the right dependencies installed. There was, though, one issue that was particularly misleading. When trying to rvm install 2.1.1, I received this error:

Error running '__rvm_make -j2',
showing last 15 lines of /home/sbarker/.rvm/log/1394556125_ruby-2.1.1/make.log
make[2]: Leaving directory '/home/sbarker/.rvm/src/ruby-2.1.1/ext/readline'
exts.mk:199: recipe for target 'ext/readline/all' failed
make[1]: *** [ext/readline/all] Error 2
make[1]: *** Waiting for unfinished jobs....
compiling ossl_x509crl.c
compiling ossl_digest.c
compiling ossl_pkey_rsa.c
compiling ossl_engine.c
compiling ossl_ssl.c
installing default openssl libraries
linking shared-object openssl.so
make[2]: Leaving directory '/home/sbarker/.rvm/src/ruby-2.1.1/ext/openssl'
make[1]: Leaving directory '/home/sbarker/.rvm/src/ruby-2.1.1'
uncommon.mk:180: recipe for target 'build-ext' failed
make: *** [build-ext] Error 2
There has been an error while running make. Halting the installation.

A similar-ish error occurred when trying to install Ruby 2.0.0.

Initially, this looked to be an error with the OpenSSL dependencies. I spent longer than I care to admit going down that road, trying several different options, including rvm autolibs enable, all to no avail. I finally admitted defeat and spun up an Ubuntu VM. And behold, the same error occurred. It wasn’t an OS issue.

Back to the drawing board, I dove into ~/.rvm/src/ruby-2.1.1 to ./configure and make the package by hand to see if I could discover anything new. As it turns out, seeing the full context around the error pointed me in the right direction.
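
For the record, the by-hand build was nothing exotic:

cd ~/.rvm/src/ruby-2.1.1
./configure
make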

The first bit of the error reported by RVM is the key.

Error running '__rvm_make -j2',
showing last 15 lines of /home/sbarker/.rvm/log/1394556125_ruby-2.1.1/make.log
make[2]: Leaving directory '/home/sbarker/.rvm/src/ruby-2.1.1/ext/readline'
exts.mk:199: recipe for target 'ext/readline/all' failed
make[1]: *** [ext/readline/all] Error 2
make[1]: *** Waiting for unfinished jobs....

The issue lies with readline, not with the OpenSSL dependencies. RVM’s autolibs setting evidently didn’t trigger, since a readline was already present on the system. I told RVM to install its own readline with rvm pkg install readline. When that finished building, I attempted to install Ruby 2.1.1 again with rvm install 2.1.1 -C --with-readline-dir=$HOME/.rvm/usr. The result: great success. RVM happily downloaded, built, and installed 2.1.1 for me. I repeated this for 2.0.0, with the same great success. 1.9.3 was never an issue, and it continued to not be an issue.
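
For anyone hitting the same wall, here’s the whole fix in one place:

# build RVM's bundled readline, then point the Ruby builds at it
rvm pkg install readline
rvm install 2.1.1 -C --with-readline-dir=$HOME/.rvm/usr
rvm install 2.0.0 -C --with-readline-dir=$HOME/.rvm/usr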

29 Jan 2014, 02:33

Another VPN update, Private Internet Access

So a friend of mine noticed my previous post about updating all of my VPN configuration and asked, “Why did you decide to go with HideMyAss when they keep logs?” I had a couple of reasons: the primary one was that they weren’t the provider I was using before, and the secondary one was that I didn’t really care that much.

The more that I thought about it, the more I decided that I did care that much. To that end, I’ve taken his indirect advice and switched providers yet again. This time, I’ve gone with PrivateInternetAccess.com, who, according to their privacy policy, don’t collect any logs.

Due to the changes that I outlined in my previously mentioned post, switching providers was trivial. I only needed to add a new peer file, which I symlinked to vpn, update my chap-secrets with the relevant information, and make a small change to the ip-up.d and ip-down.d scripts responsible for the iptables rules that route all of my traffic through the ppp0 interface, as well as to my fastest_ip.py script that finds the fastest route.
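
In concrete terms, the peer swap boils down to a couple of commands (pia here is a hypothetical filename; use whatever you named the new peer file):

cd /etc/ppp/peers
ln -sf pia vpn    # "pia" is hypothetical; relink the generic vpn peer to the new provider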

I know I can still improve on this process, but for now, done and done.

13 Jan 2014, 09:22

Follow-up: Squid, Sick-Beard, Deluge and a VPN, now with 100% more HideMyAss

So, it’s been a little bit over half a year since I published the article about how to set up an always-on seed-box/VPN using Squid, Sick-Beard, and Deluge. A little bit has changed since then.

First, I no longer use IPVanish. I had an issue with them where they double-charged me for a month, and gave me a little bit of a run-around trying to resolve it. Specifically, after contacting their support, they told me that only one of the transactions was successful and the other had failed. My PayPal account and my financial institution disagreed. Then they told me I’d have to take it up with PayPal. I took this as a sign that it was time to switch providers. To their (mild) credit, after I pressed them for more information, they just went ahead and reversed the charge. Unfortunately for them (not that they probably care that much), I had already switched providers. I now use HideMyAss Pro VPN (disclosure: that’s an affiliate link).

In addition to having switched to HideMyAss Pro VPN, I’ve updated the infrastructure in a couple of different ways to be a bit easier to work with and a bit more flexible.

First, there’s no longer an ipvanish config file, since that’s been replaced with a hidemyass file. That, in turn, has been symlinked to vpn via ln -s hidemyass vpn. That file, just as with the previous one for ipvanish, contains the necessary config bits to connect to HideMyAss. The options.pptp file isn’t referenced, so I just left that alone and it’s ignored. I updated chap-secrets to contain the credentials that I use for HideMyAss. Of note, HideMyAss uses a different password for PPTP and L2TP connections than your normal password; find it in your dashboard.

The ipvanish.service systemd unit has been renamed to vpn.service so that it’ll stand up semantically to provider changes, and it’s been updated to remove any ipvanish references in favor of the more generic term vpn. It also no longer calls pon directly to turn the VPN on and off; I created a couple of scripts to manage this for me.
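
A minimal sketch of what vpn.service amounts to, assuming it just delegates to the togglevpn.sh script described below:

[Unit]
Description=VPN
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/togglevpn.sh on
ExecStop=/opt/togglevpn.sh off

[Install]
WantedBy=multi-user.target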

The togglevpn.sh script is what’s called by the vpn.service unit. It just passes on or off along, just as they’d be passed directly to pon or poff. On the way up, the script first calls update_vpn_to_fastest_ip.sh, which calls fastest_ip.py to retrieve the IP of the fastest VPN node near me (this is just a local-ish subset of the IPs that HideMyAss provides) and updates the /etc/ppp/peers/vpn link (which points to /etc/ppp/peers/hidemyass) to use that IP. After that, pon is called to turn the VPN on. Finally, Squid is updated with update_squid_outgoing_ip_to_interface.sh and then restarted.

/opt/togglevpn.sh:

#!/bin/sh
# Bring the VPN up or down, then point Squid's outgoing address
# at the appropriate interface.
case "$1" in
on)
        echo "Finding fastest IP..."
        /opt/update_vpn_to_fastest_ip.sh
        sleep 2s
        echo "Turning VPN on..."
        /usr/bin/pon vpn
        sleep 2s
        /opt/update_squid_outgoing_ip_to_interface.sh ppp0
        sleep 2s
;;

off)
        echo "Turning VPN off..."
        /usr/bin/poff vpn
        sleep 2s
        /opt/update_squid_outgoing_ip_to_interface.sh eth0
        sleep 2s
;;

restart)
        $0 off
        $0 on
;;
esac

# pick up the new tcp_outgoing_address
systemctl restart squid

/opt/fastest_ip.py:

#!/usr/bin/python2.7
# Finds the fastest Seattle IP for HMA by pinging each candidate once
# and keeping the lowest round-trip time.

import re
from subprocess import Popen, PIPE

ips = [
    "173.208.32.98",
    "216.6.236.34",
    "108.62.61.26",
    "216.6.228.42",
    "173.208.32.66",
    "173.208.32.74",
    "208.43.175.43",
    "70.32.34.90",
    "108.62.62.18",
    "173.208.33.66",
    "23.19.35.2"
]

fastest_ip = ""
lowest_ping = 100  # doubles as a cutoff: anything slower than 100 ms is ignored
for ip in ips:
    # one ping per candidate; '-c' and '1' must be separate arguments
    p = Popen(['/usr/bin/ping', '-c', '1', ip], stdout=PIPE)
    output = p.stdout.read()
    m = re.search("time=([0-9.]+) ms", output)
    if m:
        ms = float(m.group(1))
        if ms < lowest_ping:
            lowest_ping = ms
            fastest_ip = ip
        #print("%s is alive.  round trip time: %f ms" % (ip, ms))

#print("Fastest ip is %s at %s" % (fastest_ip, lowest_ping))
print(fastest_ip)

/opt/update_vpn_to_fastest_ip.sh:

#!/bin/bash
# Rewrite the pty line in the vpn peer file to point at the fastest IP.
ipaddy=`/opt/fastest_ip.py`

echo "Updating VPN to $ipaddy..."
sed -i -e "s/^pty.*/pty \"pptp $ipaddy --nolaunchpppd\"/g" /etc/ppp/peers/vpn

/opt/update_squid_outgoing_ip_to_interface.sh:

#!/bin/bash
# Point Squid's tcp_outgoing_address at the address of the given interface.
case "$1" in

ppp0)
        ipaddy=`ip addr | grep ppp0 | grep inet | cut -d' ' -f6`
;;

eth0)
        ipaddy=`ip addr | grep eth0 | grep inet | cut -d' ' -f6 | sed 's/\/24//g'`
;;

esac

echo "Updating squid to $ipaddy..."
sed -i -e "s/^tcp_outgoing_address.*/tcp_outgoing_address $ipaddy/g" /etc/squid/squid.conf

All in all, this works rather well for me. I have occasional issues with ppp0 dropping out. I’m not sure if this is my problem or theirs, but I just log in, systemctl restart vpn, and I’m back to the races. I’ve considered setting up a cron job to do this for me every hour or so, but it hasn’t been that much of a problem.
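
If it ever becomes enough of a problem, the cron job would be a one-liner in root’s crontab (untested sketch):

0 * * * * /usr/bin/systemctl restart vpn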

19 Jun 2013, 10:14

Installing Squid, Sick-Beard, Deluge, and an always-on VPN (IPVanish) on Archlinux for an automated seed box

I recently signed up for VPN service through IPVanish (well, several providers, but that’s the one that stuck). While I like their client software, I was mildly annoyed with having to start and stop the thing whenever I wanted to run traffic through it, and with having it run ALL my traffic when that’s not necessarily what I wanted.

My solution was to spin up an Archlinux Hyper-V virtual machine on a Windows Server 2012 host and configure it to be a Squid caching proxy and VPN gateway. Then I just pointed the applications that I wanted at it and let it proxy my traffic through the VPN. I went one step further by abandoning uTorrent and installing Deluge and BrickyBox’s Sick-Beard clone for torrent management, saving the data to my Drobo-FS.

Note: I have removed all of the comments from these configuration files, since most of them were in the default files to begin with (so you can still read them there if you want) and they aren’t relevant to the configuration itself. I encourage you to understand what these files are actually doing, rather than just pasting them into your configs.

Configuring PPP for the IPVanish VPN

Dependencies

pacman -S ppp pptpclient

Configuration

/etc/ppp/chap-secrets:

# Secrets for authentication using CHAP
YOUR_USER_NAME	SERVER_ALIAS	PASSWORD    BIND_IPS

Obviously, replace YOUR_USER_NAME, SERVER_ALIAS, and PASSWORD with your specific information. For BIND_IPS, I used an asterisk to accept any IP address. You can be more specific here if you’d like.
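
As an example, a filled-in line might look like this (the values are entirely made up):

jdoe	ipvanish	s3cretpassword    *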

/etc/ppp/peers/ipvanish:

persist
maxfail 0
pty "pptp sea-a01.ipvanish.com --nolaunchpppd"
name YOUR_USER_NAME
remotename SERVER_ALIAS
require-mppe-128
file /etc/ppp/options.pptp
ipparam SERVER_ALIAS
updetach

Again, change YOUR_USER_NAME to reflect your IP Vanish username, make sure that SERVER_ALIAS matches what you put in chap-secrets, and use the server that you want to connect to for the pty parameter.

/etc/ppp/options.pptp:

lock
noauth
nobsdcomp
nodeflate

Enable traffic routing

Now that we have a functioning VPN, we want to route all of our traffic through it. Be sure to chmod +x both of these.

/etc/ppp/ip-up.d/10-start-all-to-tunnel-routing.sh:

PRIMARY=eth0
SERVER=$5
GATEWAY="192.168.1.1"
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
if [ "${CONNECTION}" = "ipvanish" ] ; then
 ip route del ${SERVER} dev ${TUNNEL}
 if [ "${GATEWAY}" = "" ] ; then
   ip route add -host ${SERVER} dev ${PRIMARY}
 else
   ip route add -host ${SERVER} gw ${GATEWAY} dev ${PRIMARY}
 fi
 ip route del default ${PRIMARY}
 ip route add default dev ${TUNNEL}
fi

/etc/ppp/ip-down.d/80-stop-all-to-tunnel-routing.sh:

#!/bin/sh
# Restore the default route to the real interface when the tunnel drops.
PRIMARY=eth0
SERVER=$5
GATEWAY="192.168.1.1"
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
if [ "${CONNECTION}" = "ipvanish" ] ; then
 # direct packets back to the original interface
 ip route del default dev ${TUNNEL}
 ip route del ${SERVER} dev ${PRIMARY}
 if [ "${GATEWAY}" = "" ] ; then
   ip route add default dev ${PRIMARY}
 else
   ip route add default via ${GATEWAY} dev ${PRIMARY}
 fi
fi
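
Once the tunnel comes up, it’s worth sanity-checking that the default route actually moved:

ip route show default    # should show "default dev ppp0" while the VPN is up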

Creating a custom systemd unit

To help facilitate automation, I created a custom systemd unit for the VPN so I wouldn’t have to manually start and stop it all the time.

/usr/lib/systemd/system/ipvanish.service:

[Unit]
Description=IPVanish Proxy
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
PIDFile=/run/ipvanish.pid
ExecStart=/usr/bin/pon ipvanish
ExecStop=/usr/bin/poff ipvanish

[Install]
WantedBy=multi-user.target

After you create the unit, you can bring the VPN up and down with systemctl start ipvanish and systemctl stop ipvanish respectively. You can also make it start at boot with systemctl enable ipvanish.

Installing Squid

Dependencies

pacman -S squid

Configuration

The Squid package will automatically create a proxy user for you, as well as the necessary systemd units. The only changes that are necessary are in the /etc/squid/squid.conf file, and a lot of those changes are going to be predicated on your caching needs. I’m not going to go into too much detail here; I’ll just show the two lines that you need in your squid config to make this work. The rest of the stuff for actually storing objects and ACLs and the like, I’ll leave as an exercise for the reader.

/etc/squid/squid.conf:

tcp_outgoing_address 172.20.0.3 # This is the IP of your VPN
http_port 192.168.1.126:3128 # This is the IP of your machine
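
A quick way to verify that traffic is actually egressing through the VPN (icanhazip.com is just one of many what’s-my-IP services):

curl --proxy http://192.168.1.126:3128 http://icanhazip.com    # should print the VPN's IP, not yours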

Mounting network shares

Dependencies

pacman -S smbclient autofs

I am mounting my network shares with AutoFS so they’ll come up as soon as someone (deluged) tries to use them. This will mount the one share specified in my auto.media file as /mnt/MOUNT_NAME. Be sure to change NAS_IP, NAS_PATH, and MOUNT_NAME to reflect your setup. In the credentials file, set the USERNAME and PASSWORD for the user that you’ll be connecting to the NAS as. The dir_mode and file_mode options set the permission modes for the mount points; mine are set to 0777 so that everybody has write access to them, specifically the deluged user.

/etc/autofs/auto.master:

/mnt /etc/autofs/auto.media

/etc/autofs/auto.media:

MOUNT_NAME -fstype=cifs,file_mode=0777,dir_mode=0777,credentials=/etc/samba/credentials,workgroup=WORKGROUP ://NAS_IP/NAS_PATH

/etc/samba/credentials:

username=USERNAME
password=PASSWORD
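
After (re)starting autofs, the first access to the path is what triggers the mount:

systemctl restart autofs
ls /mnt/MOUNT_NAME    # first access mounts the share on demand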

Installing Deluge

Dependencies

pacman -S deluge python-mako

This was incredibly trivial. Just install the packages from the Arch repository. I did some light configuration through the web-ui to point deluged at my mounted NAS shares, and I was off to the races.
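
Assuming the Arch package ships deluged and deluge-web systemd units, getting everything running at boot is just:

systemctl enable deluged deluge-web
systemctl start deluged deluge-web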

26 Jan 2013, 22:17

Adding custom units/services to systemd on Arch Linux

I recently migrated my Arch Linux VMs from sysvinit to systemd, as the newer Arch install media all use systemd and support for sysvinit has been discontinued. The migration itself was an incredibly trivial and straightforward process. I had only one hang-up with the migration in general: every time I’d reboot the machine, eth0 wouldn’t come back up. That was fixed with a simple systemctl enable dhcpcd@eth0.

After I had the migration complete, sshd (and everything else) enabled, and eth0 finally coming up, I had two services (or, as systemd refers to them, units) that I needed to configure: Chiliproject and git-daemon. Adding these custom units to systemd was incredibly easy.

Your custom service files live in /etc/systemd/system and end with the extension .service. Here are the contents of the two that I created:

sdbarker.com-chiliproject.service:

[Unit]
Description=sdbarker.com Chiliproject
Requires=mysqld.service nginx.service
Wants=mysqld.service nginx.service

[Service]
User=www-data
WorkingDirectory=/path/to/chiliproject/install
ExecStart=/usr/bin/bundle exec /usr/bin/unicorn_rails -E production -c config/unicorn.rb
PIDFile=/path/to/chiliproject/install/tmp/pids/server.pid

[Install]
WantedBy=multi-user.target

git-daemon.service:

[Unit]
Description=Git daemon

[Service]
Type=forking
User=git
WorkingDirectory=/path/to/git/home
ExecStart=/usr/lib/git-core/git-daemon --pid-file=/path/to/git/home/temp/pids/git-daemon.pid --detach --syslog --verbose --base-path=/path/to/git/home/repositories
PIDFile=/path/to/git/home/temp/pids/git-daemon.pid

[Install]
WantedBy=multi-user.target
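
Once the files are in place, they behave like any other unit:

systemctl daemon-reload
systemctl enable sdbarker.com-chiliproject git-daemon
systemctl start sdbarker.com-chiliproject git-daemon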

Overall, it was an incredibly easy migration. My startup time (however rarely I need it) is greatly reduced, service management is much, much easier, journalctl is very convenient for viewing logs, and dropping all of the boilerplate initscript code, which was always a pain in the ass and honestly always a bit flaky, was very welcome.

04 Jan 2013, 08:13

Archlinux logs are all empty? Here's the fix.

I noticed the other day that my Archlinux logs were all zero bytes. Empty logs don’t do much good, so I spent a while trawling around the internets and had a difficult time finding anything valuable.

I did find one valuable thing in my boot log at /var/log/boot:

Starting Syslog-NG [BUSY] Error binding socket; addr='AF_UNIX(/run/systemd/journal/syslog)', error='No such file or directory (2)'

Which was enough to lead me to this post, https://bbs.archlinux.org/viewtopic.php?id=151132 (which had an unrelated title). The solution is to update /etc/syslog-ng/syslog-ng.conf, changing the line that sets unix-dgram to read unix-dgram("/dev/log");, and then restarting (or starting, since it probably never started in the first place) syslog-ng with rc.d start syslog-ng. If you log out and log back in after that, you should see that, at the very least, auth.log has content.
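
For context, the relevant source stanza ends up looking something like this (the rest of the block is whatever your config already had):

source src {
    unix-dgram("/dev/log");
    internal();
};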

Of note, I also had a permissions problem with a lot of the log files, so I had to do a quick chmod --recursive g+w /var/log/* to give the log group permissions to write to the logs.