13 Jan 2014, 09:22

Follow-up: Squid, Sick-Beard, Deluge and a VPN, now with 100% more HideMyAss

So, it’s been a bit over half a year since I published the article about how to set up an always-on seed-box/VPN using Squid, Sick-Beard, and Deluge. A few things have changed since then.

First, I no longer use IPVanish. I had an issue where they double-charged me for a month and gave me a bit of a run-around trying to resolve it. Specifically, after contacting their support, they told me that only one of the transactions was successful and the other had failed. My PayPal account and my financial institution disagreed. Then they told me I’d have to take it up with PayPal. I took this as a sign that it was time to switch providers. To their (mild) credit, after I pressed them for more information, they just went ahead and reversed the charge. Unfortunately for them (not that they probably care that much), I had already switched providers. I now use HideMyAss Pro VPN (disclosure: that’s an affiliate link).

In addition to having switched to HideMyAss Pro VPN, I’ve updated the infrastructure in a couple of different ways to be a bit easier to work with and a bit more flexible.

First, there’s no longer an ipvanish config file; it’s been replaced with a hidemyass file, which is symlinked to vpn via ln -s hidemyass vpn. That file, just like the previous ipvanish one, contains the necessary config bits to connect to HideMyAss. The options.pptp file isn’t referenced, so I left it alone and it’s ignored. I updated chap-secrets to contain the credentials I use for HideMyAss. Of note, HideMyAss uses a different password for PPTP and L2TP connections than your normal account password; you can find it in your dashboard. A rough sketch of the layout is below.
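In case it helps, here’s roughly what that looks like. The hidemyass peers file isn’t reproduced in this post, so the one below is a sketch modeled on the ipvanish file from the original article further down; HMA_SERVER_IP, YOUR_HMA_USER_NAME, and the hidemyass alias are placeholders for your own values (and the pty line gets rewritten by the scripts below anyway):

cd /etc/ppp/peers
ln -s hidemyass vpn

/etc/ppp/peers/hidemyass (sketch):

persist
maxfail 0
pty "pptp HMA_SERVER_IP --nolaunchpppd"
name YOUR_HMA_USER_NAME
remotename hidemyass
require-mppe-128
ipparam hidemyass
updetach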

The ipvanish.service systemctl unit has been renamed to vpn.service so that it’ll hold up semantically to provider changes, and the ipvanish references inside it have been replaced with the more generic term vpn. It also no longer calls pon directly to turn the VPN on and off; I created a couple of scripts to manage that for me.
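The renamed unit isn’t shown in full here, but modeled on the original ipvanish.service further down, with pon/poff swapped out for the toggle script, it looks roughly like this:

/usr/lib/systemd/system/vpn.service (roughly):

[Unit]
Description=VPN
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/togglevpn.sh on
ExecStop=/opt/togglevpn.sh off

[Install]
WantedBy=multi-user.target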

The togglevpn.sh script is what’s called by the vpn.service systemctl unit. It takes on or off as an argument, just as you’d pass to pon or poff. For on, it first calls update_vpn_to_fastest_ip.sh, which calls fastest_ip.py to find the IP of the fastest VPN node near me (a local-ish subset of the IPs that HideMyAss provides) and updates the /etc/ppp/peers/vpn link (which points to /etc/ppp/peers/hidemyass) to use that IP. After that, pon is called to bring the VPN up. Finally, Squid’s outgoing address is updated with update_squid_outgoing_ip_to_interface.sh and Squid is restarted.

/opt/togglevpn.sh:

#!/bin/sh
case "$1" in
on)
        echo "Finding fastest IP..."
        /opt/update_vpn_to_fastest_ip.sh
        sleep 2s
        echo "Turning VPN on..."
        /usr/bin/pon vpn
        sleep 2s
        /opt/update_squid_outgoing_ip_to_interface.sh ppp0
        sleep 2s
;;

off)
        echo "Turning proxy off..."
        /usr/bin/poff vpn
        sleep 2s
        /opt/update_squid_outgoing_ip_to_interface.sh eth0
        sleep 2s
;;

restart)
        $0 off
        $0 on
;;
esac

systemctl restart squid

/opt/fastest_ip.py:

#!/usr/bin/python2.7
# Finds the fastest Seattle IP for HMA

import re
from subprocess import Popen, PIPE

ips = [
        "173.208.32.98",
        "216.6.236.34",
        "108.62.61.26",
        "216.6.228.42",
        "173.208.32.66",
        "173.208.32.74",
        "208.43.175.43",
        "70.32.34.90",
        "108.62.62.18",
        "173.208.33.66",
        "23.19.35.2"
]

fastest_ip = ""
lowest_ping = 100  # ignore anything slower than 100 ms
for ip in ips:
        # ping each candidate once and parse the round-trip time out of the output
        p = Popen(['/usr/bin/ping', '-c', '1', ip], stdout=PIPE)
        output = p.stdout.read()
        m = re.search("time=([0-9.]+) ms", output)
        if m:
                ms = float(m.group(1))
                if ms < lowest_ping:
                        lowest_ping = ms
                        fastest_ip = ip
                #print("%s is alive.  round trip time: %f ms" % (ip, ms))

#print("Fastest ip is %s at %s" % (fastest_ip, lowest_ping))
print(fastest_ip)

/opt/update_vpn_to_fastest_ip.sh:

#!/bin/bash
ipaddy=`/opt/fastest_ip.py`

echo "Updating VPN to $ipaddy..."
sed -i -e "s/^pty.*/pty \"pptp $ipaddy --nolaunchpppd\"/g" /etc/ppp/peers/vpn</pre>

/opt/update_squid_outgoing_ip_to_interface.sh:

#!/bin/bash
case "$1" in

ppp0)
        # ppp0's inet line has no prefix length, so the sixth field is the bare IP
        ipaddy=`ip addr | grep ppp0 | grep inet | cut -d' ' -f6`
;;

eth0)
        # eth0's address includes a /24 suffix, so strip it off
        ipaddy=`ip addr | grep eth0 | grep inet | cut -d' ' -f6 | sed 's/\/24//g'`
;;

esac

echo "Updating squid to $ipaddy..."
sed -i -e "s/^tcp_outgoing_address.*/tcp_outgoing_address $ipaddy/g" /etc/squid/squid.conf</pre>

All in all this works rather well for me. I have occasional issues with ppp0 dropping out. I’m not sure whether the problem is on my end or theirs, but I just log in, run systemctl restart vpn, and I’m back in business. I’ve considered setting up a cron job to do this for me every hour or so (something like the line below), but it hasn’t been that much of a problem.
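If you did want to automate that, a root crontab entry along these lines should do the trick (untested; you might also want something smarter that only restarts when ppp0 is actually down):

# restart the VPN at the top of every hour
0 * * * * /usr/bin/systemctl restart vpn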

19 Jun 2013, 10:14

Installing Squid, Sick-Beard, Deluge, and an always-on VPN (IPVanish) on Archlinux for an automated seed box

I recently signed up for VPN service through IP Vanish (well, several providers, but that’s the one that stuck). While I like their client software, I was mildly annoyed with having to start and stop the thing when I wanted to run traffic through it, and with having it run ALL my traffic when that’s not necessarily what I wanted.

My solution was to spin up an Archlinux Hyper-V virtual machine on a Windows Server 2012 host, and configure it to be a Squid caching proxy and VPN. Then I just pointed the applications that I wanted at it and let it proxy my traffic through the VPN. I went one step further by abandoning uTorrent and installing Deluge and BrickyBox’s Sick-Beard clone for torrent management and saving data to my Drobo-FS.

Note: I have removed all of the comments from these configuration files, since most of them are in the default files to begin with (so you can still read them there if you want) and they aren’t relevant to the configuration itself. I encourage you to understand what these files are actually doing rather than just pasting them into your configs.

Configuring PPP for the IPVanish VPN

Dependencies

pacman -S ppp pptpclient

Configuration

/etc/ppp/chap-secrets:

# Secrets for authentication using CHAP
YOUR_USER_NAME	SERVER_ALIAS	PASSWORD    BIND_IPS

Obviously, replace YOUR_USER_NAME, SERVER_ALIAS, and PASSWORD with your specific information. For BIND_IPS, I used an asterisk to match all IP addresses. You can be more specific here if you’d like; a filled-in example is below.
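For example, with made-up values (the alias just has to match what you use in the peers file):

johndoe	ipvanish	hunter2	*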

/etc/ppp/peers/ipvanish:

persist
maxfail 0
pty "pptp sea-a01.ipvanish.com --nolaunchpppd"
name YOUR_USER_NAME
remotename SERVER_ALIAS
require-mppe-128
file /etc/ppp/options.pptp
ipparam SERVER_ALIAS
updetach

Again, change YOUR_USER_NAME to reflect your IP Vanish username, make sure that SERVER_ALIAS matches what you put in chap-secrets, and use the server that you want to connect to for the pty parameter.

/etc/ppp/options.pptp:

lock
noauth
nobsdcomp
nodeflate

Enable traffic routing

Now that we have a functioning VPN, we want to route all of our traffic through it. Be sure to chmod +x both of these.
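In other words:

chmod +x /etc/ppp/ip-up.d/10-start-all-to-tunnel-routing.sh
chmod +x /etc/ppp/ip-down.d/80-stop-all-to-tunnel-routing.sh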

/etc/ppp/ip-up.d/10-start-all-to-tunnel-routing.sh:

#!/bin/sh
PRIMARY=eth0
SERVER=$5
GATEWAY="192.168.1.1"
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
if [ "${CONNECTION}" = "ipvanish" ] ; then
 # keep a host route to the VPN server itself on the physical interface
 ip route del ${SERVER} dev ${TUNNEL}
 if [ "${GATEWAY}" = "" ] ; then
   ip route add ${SERVER} dev ${PRIMARY}
 else
   ip route add ${SERVER} via ${GATEWAY} dev ${PRIMARY}
 fi
 # then send everything else through the tunnel
 ip route del default dev ${PRIMARY}
 ip route add default dev ${TUNNEL}
fi

/etc/ppp/ip-down.d/80-stop-all-to-tunnel-routing.sh:

#!/bin/sh
PRIMARY=eth0
SERVER=$5
GATEWAY="192.168.1.1"
CONNECTION=$6
if [ "${CONNECTION}" = "" ]; then CONNECTION=${PPP_IPPARAM}; fi
TUNNEL=$1
if [ "${TUNNEL}" = "" ]; then TUNNEL=${PPP_IFACE}; fi
if [ "${CONNECTION}" = "ipvanish" ] ; then
 # direct packets back to the original interface
 ip route del default dev ${TUNNEL}
 ip route del ${SERVER} dev ${PRIMARY}
 if [ "${GATEWAY}" = "" ] ; then
   ip route add default dev ${PRIMARY}
 else
   ip route add default via ${GATEWAY} dev ${PRIMARY}
 fi
fi

Creating a custom systemctl unit

To help facilitate automation, I created a custom systemctl unit for the VPN so I wouldn’t have to manually start and stop it all the time.

/usr/lib/systemd/system/ipvanish.service:

[Unit]
Description=IPVanish Proxy
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
PIDFile=/run/ipvanish.pid
ExecStart=/usr/bin/pon ipvanish
ExecStop=/usr/bin/poff ipvanish

[Install]
WantedBy=multi-user.target

After you create the unit, you can start and stop the proxy with systemctl start ipvanish and systemctl stop ipvanish respectively. You can also make it start at boot with systemctl enable ipvanish.

Installing Squid

Dependencies

pacman -S squid

Configuration

The Squid package will automatically create a proxy user for you, as well as the necessary systemd units. The only changes necessary are in the /etc/squid/squid.conf file, and a lot of them are going to be predicated on your caching needs. I’m not going to go into too much detail here; I’ll just show the two lines you need in your Squid config to make this work. The rest of the stuff for actually storing objects, ACLs, and the like, I’ll leave as an exercise for the reader.

/etc/squid/squid.conf:

tcp_outgoing_address 172.20.0.3 # This is the IP of your VPN
http_port 192.168.1.126:3128 # This is the IP of your machine
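Once the VPN is up and Squid has been restarted, a quick sanity check is to compare your apparent IP with and without the proxy from another machine on your LAN, using any IP-echo service (assuming the 192.168.1.126:3128 address from the config above):

# apparent IP without the proxy
curl http://icanhazip.com
# apparent IP through the proxy -- should be the VPN endpoint's address
curl -x http://192.168.1.126:3128 http://icanhazip.com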

Mounting network shares

Dependencies

pacman -S smbclient autofs

I am mounting my network shares with AutoFS so they’ll come up as soon as someone (deluged) tries to use them. This will mount the one share specified in my auto.media file as /mnt/MOUNT_NAME. Be sure to change NAS_IP, NAS_PATH, and MOUNT_NAME to reflect your setup. In the credentials file, set the USERNAME and PASSWORD for the user that you’ll be connecting to the NAS as. The dir_mode and file_mode options set the permissions on the mount points; mine are set to 777 so that everybody has write access to them, specifically the deluged user.

/etc/autofs/auto.master:

/mnt /etc/autofs/auto.media

/etc/autofs/auto.media:

MOUNT_NAME -fstype=cifs,file_mode=0777,dir_mode=0777,credentials=/etc/samba/credentials,workgroup=WORKGROUP ://NAS_IP/NAS_PATH

/etc/samba/credentials:

username=USERNAME
password=PASSWORD
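One step worth calling out explicitly: autofs won’t pick up these maps until its daemon is running. Assuming the Arch package ships the standard autofs.service unit, that’s just:

systemctl enable autofs
systemctl start autofs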

Installing Deluge

Dependencies

pacman -S deluge python-mako

This was incredibly trivial. Just install the packages from the Arch repository. I did some light configuration through the web-ui to point deluged at my mounted NAS shares, and I was off to the races.
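The one extra bit worth mentioning is getting the daemon and web UI running at boot. Assuming the Arch package ships the usual deluged and deluge-web units (check with systemctl list-unit-files if you’re not sure):

systemctl enable deluged deluge-web
systemctl start deluged deluge-web

The web UI should then be reachable on port 8112 by default.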