# Lisp (SBCL + emacs + slime) on Hardened-ish Gentoo on Xen (take 2) #
*Sep 16, 2010*
A while ago I tried, with mixed success, to get Lisp onto my Gentoo Hardened server. I had to go a binary-only route and kind of stopped there, not taking it any further. Now, 2 years later, I need the full meal deal, lisp + emacs + slime, on my server, which is now a Xen VPS with as much hardening as I could get (much less kernel-based hardening, since it's the VPS's kernel). It was still too much for SBCL to compile in portage, so here's what I did to get it all working.
So you need an out-of-tree binary copy of SBCL. Live with it. It works. The problem with going with out-of-tree software, especially for a language, is that whatever binary you get isn't supported and hasn't been tested against all the software in-tree. For instance, I initially tried the newest version of SBCL (1.0.42) but ran into problems with portage's stable slime.
Ultimately I went with the closest I could get to portage's stable version. Portage has 1.0.19 marked as the most recent stable version so I went out and downloaded the binary of that version
$ wget
$ tar -xjf sbcl-1.0.19-x86-linux-binary.tar.bz2
So change into the directory and check out INSTALL. Basically, installation is easy. Binary SBCL is configured around installing into /usr/local, but that can be worked around. So we'll go with a more traditional install into /usr
*Note*: My test box is a VPS with a Xen kernel, not a hardened kernel, so I didn't have any PaX problems, but my notes from the last time I tried this on a full hardened install mention that you need to disable some PaX features before SBCL will work:
$ paxctl -p -e -m -r -x -s src/runtime/sbcl
Install to /usr
# INSTALL_ROOT=/usr sh install.sh
Now SBCL is installed but it won't work because the binary is preconfigured to look for the core in /usr/local. So we'll borrow the gentoo SBCL config files to get that setup properly.
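The borrowed env.d file itself isn't shown above; a minimal sketch, with the filename and variable names assumed from what Gentoo's sbcl package installs, would be something like:

```shell
# /etc/env.d/50sbcl -- filename and variable names assumed, not from the original post
SBCL_HOME=/usr/lib/sbcl
SBCL_SOURCE_ROOT=/usr/lib/sbcl/src
```

After `env-update` regenerates /etc/profile.env from the env.d entries, SBCL can find its core outside /usr/local.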
# env-update
The above file and command set up the system environment variables to tell SBCL where it's really installed. Now is as good a time as any to `source /etc/profile` to pick up those changes.
Now that SBCL is installed and working, we need to let portage know that. There used to be an `emerge --inject` method, but that's been deprecated in favor of a new provides file
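Assuming the standard Portage location for that provides file, an entry matching the version installed above would look like:

```shell
# /etc/portage/profile/package.provided -- path assumed from standard Portage docs;
# one category/package-version per line
dev-lisp/sbcl-1.0.19
```

With that in place, portage treats dev-lisp/sbcl-1.0.19 as installed and will satisfy dependencies against it without trying to build it.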
Now portage knows about our SBCL so we can start installing things that depend on it like the rest of our tool chain
# emerge cl-asdf emacs slime -va
So now we have all the pieces; they just need some gluing together. Again we'll borrow from the Gentoo SBCL files.
;;; The following is required if you want source location functions to
;;; work in SLIME, for example.
(setf (logical-pathname-translations "SYS")
'(("SYS:SRC;**;*.*.*" #p"/usr/lib/sbcl/src/**/*.*")
("SYS:CONTRIB;**;*.*.*" #p"/usr/lib/sbcl/**/*.*"))) ; use /usr/lib64/ on 64-bit systems
;;; Setup ASDF
(load "/etc/gentoo-init.lisp")
(in-package #:cl-user)
#+(or sbcl ecl) (require :asdf)
#-(or sbcl ecl) (load #p"/usr/share/common-lisp/source/asdf/asdf.lisp")
(push #p"/usr/share/common-lisp/systems/" asdf:*central-registry*)
(asdf:oos 'asdf:load-op :asdf-binary-locations)
(setf asdf:*centralize-lisp-binaries* t)
(setf asdf:*source-to-target-mappings* '((#p"/usr/lib/sbcl/" nil) (#p"/usr/lib64/sbcl/" nil)))
Now everything should work. You just need to set up your emacs and slime
; your SLIME directory
(add-to-list 'load-path "/usr/share/emacs/site-lisp/slime/")
; your Lisp system
(setq inferior-lisp-program "/usr/bin/sbcl")
(require 'slime)
(global-set-key (kbd "C-c C-q") 'slime-close-all-parens-in-sexp)
Now it's all glued together, give it a go
$ emacs
M-x slime
If you don't get any compilation errors you should be in emacs + slime.
And there you have it: SBCL, emacs and slime on Gentoo Hardened.
## Caveats ##
**1)** For some reason this approach adds some annoying extra text to vanilla SBCL startup that I can't seem to get rid of
$ sbcl
This is SBCL 1.0.19, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http: //>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
; loading system definition from
; /usr/share/common-lisp/systems/asdf-binary-locations.asd into
; #<package "ASDF0">
**2)** The system I tested this on is a VPS so the kernel is a Xen kernel, not a hardened kernel, so there may be additional complications on a full hardened install. Please let me know if you have any, and especially any working solutions.
## Comments ##
**Lisper** Says:
September 17th, 2010
What version of Xen are you running?
**Dan Ballard** Says:
September 17th, 2010
Whatever version uses. The kernel I’m using is #1 SMP, their Paravirt version of the kernel.
**Stelian Ionescu** Says:
September 18th, 2010
In order to get up-to-date CL packages on Gentoo you need to use the lisp overlay – of which I maintain the CL packages – and to keyword all packages as ~arch, since they won’t be stabilized any time soon given the shortage of manpower


# Liberating Flash Video From an RTMP Server #
*Jan 17, 2011*
Let's say you did a presentation that was recorded and you'd like to post it to your website. Sadly, let's now say there are some problems: your 5 minute presentation is part of a nearly 2 hour video, only available in a flash player that doesn't even have a time display, so you can't even point people to the video and say "jump to 1 hour and 15 minutes to see me". It sucks. Technically your presentation is available online, but it's not really accessible. So here is how you might rescue it!
It turns out there are two ways flash players serve videos these days. The first and easiest is that a simple flash player loads in your browser and uses your browser to make a GET request to the server to load a .flv file (Flash Video). This is relatively easy to intercept; there are lots of tools and plugins for Firefox that do this automatically for you. Even better, on Linux for example, these videos are usually stored in /tmp, so your browser does the whole job and gives them to you. No work required.
The other more complicated but more secure option is that the flash player connects to a dedicated rtmp server that streams flash video. The flash plugin does the networking and there is no file to save, it's a stream.
If you are lucky enough to have a player using the first option, you are done. Assuming you have the second option, then your fun has just begun.
First we need to figure out where the server hosting your flash video is.
My first approach was to use wireshark to sniff the traffic. Through this I discovered the basics, like the address of the server and the port, 1935.
Next I installed [rtmpdump]( RTMP is the Real Time Messaging Protocol and rtmpdump is a program that can connect to an RTMP server, get a stream and save it to a file. Sadly the data I got from wireshark didn't have all the parameters I needed to get the file. Or I couldn't read it properly. So while I knew where the server was and could now connect to it, I still didn't know how to ask for the video I wanted.
Thankfully rtmpdump comes with several other utilities. After reading its README I went the rtmpsuck route. I set local redirecting of all port 1935 requests to localhost with iptables and ran the rtmpsuck proxy server. In theory it was supposed to intercept all calls from the flash player to the rtmp server, decode and spit them out, and then forward them along. Even better, it would try to save the stream on the way back as it passed through it.
# iptables -t nat -A OUTPUT -p tcp --dport 1935 -m owner --uid-owner OWNER_UID -j REDIRECT
$ ./rtmpsuck
Where OWNER_UID is the uid of the user running rtmpsuck. With this running I just reloaded the page with the player (twice, it's a bit glitchy) and then tried to skip to where my part was so it would save the stream from there.
It was partially successful. It spat out on the console all the pertinent path parameters about the video on the server, but it kept choking on bad packets of data and stopped recording. Also, for some reason, the video it did store took up a surprising amount of disk space.
Armed with the right parameters though I was able to use rtmpdump to suck down the whole video from the server surprisingly quickly and in a reasonably sized format.
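One detail worth noting: if rtmpdump runs as the same user the redirect rule matches, its own connection would get bounced back into the (now stopped) proxy, so the NAT rule from earlier should come out first. The matching delete just swaps `-A` for `-D`:

```shell
# remove the port-1935 redirect added earlier (same rule spec, -D instead of -A)
iptables -t nat -D OUTPUT -p tcp --dport 1935 -m owner --uid-owner OWNER_UID -j REDIRECT
```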
$ ./rtmpdump -r rtmp:// -o video.flv
Now that the video was liberated from its flash interface and in my possession, I just had to cut out my small part and then convert it to a more common format.
$ mencoder -ss 1:15:50 -endpos 0:05:57 -ovc copy -oac copy video.flv -o result.flv
$ ffmpeg -i result.flv result.avi
And voila. I now have just my part of the video, and in a common format. I mean you hypothetically do! Yes...
Completely unrelatedly, you can expect to see my presentation on my project Cortex from the BCNet Broadband Innovation Challenge (where I got second place) online soon.
## Comments ##
**Alun Jones** Says:
May 31st, 2011
Silly question – since you’re using ffmpeg to convert the FLV to AVI, why not use ffmpeg to read the FLV from the RTMP server in the first place?
**Dan Ballard** Says:
May 31st, 2011
Can it do that for streams? If so, then because I didn’t know it could and my clearly flawed google search didn’t reveal that :(
And glancing at their website it looks like it can. Shame on me.
**Alun Jones** Says:
June 1st, 2011
ffmpeg -i “rtmp://site:port/app/file ” -vcodec copy -acodec copy -f flv output.flv
This is working for me, although I do find that often the stream gets interrupted, and there’s no “resume”. [If you figure that out, I’d appreciate you repeating how!]
My target server requires a number of options to be given, such as the swfUrl, swfVfy, pageUrl, etc. These can each be set with options.
You have, of course, to have an ffmpeg compiled with rtmp support in – this is a compile-time option. The Windows downloads seem to have it.
I note that the start and end time are also options (specified in milliseconds), so you should be able to make the whole operation one single ffmpeg command.
To get a list of RTMP options, run the command:
ffmpeg -i “rtmp:/// =”
**Alun Jones** Says:
June 1st, 2011
There’s a lot about ffmpeg that is implemented, but not documented. I’m still just discovering what I can use from it. I’d love to know, for instance, how to add a subtitle stream.
**Dan Ballard** Says:
June 2nd, 2011
Well thank you very much for the info anyways. As does presumably anyone else stumbling upon this article :)
Yeah, mplayer, like many crufty old and massively powerful open source projects, does a poor job of documenting all the awesome tucked away inside it. It’s a bit of a shame, but oh well, and good luck to anyone spelunking into it!


# Finding "lost" computers on the web the homebrew way #
*Jan 23, 2012*
During the course of updating my home computer I rebooted it because of a kernel update. Later that week, at work, I went to connect to my home computer and discovered that its dynamic IP had changed and its DNS name was invalid.
So following common advice to "fix a problem two ways to prevent it in the future" I fixed the DNS, but I also wanted an automated way to track my computers when and if their IPs changed.
So the first thing I needed was a shared place to store the IP information. Thinking about it, I realized that Dropbox would work well for that. So all I needed was a simple script.
So the solution was to put a script that determined the IP of the computer in Dropbox and have cron on all the computers run it. Each user can call cron with
$ crontab -e
And I created a crontab directory that I could add more scripts to later if need be, and run them hourly with the following entry
0 * * * * cd /home/dan && run-parts Dropbox/cron
The script itself was a file called `getip` and it used whatismyip's automation detection script.
tmp_file=/tmp/`hostname`.ip
dst_file=$HOME/Dropbox/var/log/`hostname`.ip
wget -q -O ${tmp_file}
if ! diff -q ${tmp_file} ${dst_file} > /dev/null ; then
    cp ${tmp_file} ${dst_file}
fi
Then I just created `Dropbox/var/log` and installed the crontab on all my computers, and voila, homebrew IP tracking for all my computers, accessible to me from anywhere.
## Comments ##
**garza** Says:
January 25th, 2012
or just use gnudip?
**Dan Ballard** Says:
January 27th, 2012
@garza That assumes you have a server to run it from, which I do but not everyone may, and that's exactly what this Dropbox and whatismyip setup replaces.


# Adding DKIM to my Gentoo Postfix mail server #
*Oct 24, 2012*
So after being alerted to the existence of DKIM by [this article]( posted on [HackerNews]( I wanted to implement it immediately on my server. DKIM is DomainKeys Identified Mail, a crypto signing protocol where a public key sits in your DNS and your mail servers sign your mail as it passes through. It seems a little stronger than SPF from a few years ago for authenticating mail's origin, so I was keen to adopt it.
So I found the [freshest instructions on the Gentoo wiki]( and followed them. They were a bit spartan so I went looking for a bit more material and found this [Ubuntu tutorial]( which had some helpful suggestions like the testing section.
After giving the OpenDKIM instructions a first run through I gave the testing a try.
First, using []( I found that the Gentoo OpenDKIM config tool had generated an invalid TXT record. It had spat out
v=DKIM1;=rsa; p=MIGfM......
And after some quick internet consultation I found out I needed to fix it to
v=DKIM1; k=rsa; p=MIGfM.....
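To double-check what actually got published, the record can be queried directly; the selector `default` and the domain here are placeholders, not the ones from my zone:

```shell
# query the DKIM TXT record for a given selector and domain (both placeholders here)
dig TXT default._domainkey.example.com +short
```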
The second test from the Ubuntu docs was an auto-responding test email system, through which (along with wikipedia) I learned about [ADSP]( So I added IN TXT "dkim=discardable"
to my Bind config as well. (I'm still not 100% sure about the final '.'.) Also, it seems the autoresponder email tool doesn't update its DNS too often, so I may have to wait a bit to retest.
So now it seems I should have DKIM signed/valid email! :) Just another step to make sure my email is valid, slightly less spoofable and liked/accepted by the big email providers.
Also, seeing results like this from Gmail after receiving my email seems good:
Received-SPF: pass ( domain of designates as permitted sender) client-ip=;
Authentication-Results:; spf=pass ( domain of designates as permitted sender); dkim=pass


# Getting started with my softkinetic DepthSense 325 #
So a bit ago I bought a DepthSense 325 camera. I've been pretty busy since then, but today I finally sat down to get started with it. First thing: it was on my netbook, so I had to re-set up the software stack and SDK. The SDK is free from softkinetic and works on Linux (which is awesome, and also a big reason I bought this camera), but I think it's aimed more at Ubuntu 12.04, so there were one or two extra steps to make it go on 13.10.
First, regardless of Ubuntu version, you need to add the DepthSense libraries to the LD_LIBRARY_PATH, and the now recommended way is adding a file to /etc/ like this
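The file contents didn't survive above; a minimal sketch, with the filename assumed and the library path taken from the SDK install prefix used later in this post, is just the library directory on a line of its own:

```shell
# /etc/ld.so.conf.d/depthsense.conf -- filename assumed; one library directory per line
/opt/softkinetic/DepthSenseSDK/lib
```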
Then run `sudo ldconfig` to regenerate the cache or whatever. Now you can link against the libraries.
Next, at least for Ubuntu 13.10, you need to fake having Thankfully worked fine so run
sudo ln -s /lib/x86_64-linux-gnu/ /lib/x86_64-linux-gnu/
At this point DepthSenseViewer that comes with the SDK should work and you are good to go.
So today's mission after getting set up was to get some code pulling from the camera and displaying it using opencv (because I ultimately want to feed it through [ROS]( filters, and as was suggested on a [forum post](, the best way to hook the DS325 into ROS was through openCV and then the ros opencv bridge). Thankfully I found what I needed on the softkinetic forum in [Example Linux/OpenCV Code to display/store DS325 data]( The first code needed some slight fixes, as detailed in the second (but slightly corruptly formatted) post. With a little poking and prodding I had it compiling and working.
g++ ds_show.cxx -I /opt/softkinetic/DepthSenseSDK/include/ -L /opt/softkinetic/DepthSenseSDK/lib -lDepthSense -lopencv_core -lopencv_highgui
Not actually that much coding today, but a lot of pieces in place.
See ds_show.cxx in [references/ds_show.cxx](references/ds_show.cxx)


#'s Guide to Setting up a Git Server #
*Last Updated 2009 07*
I've found documentation on the setup of git servers and public repositories kind of lacking, so here is my best attempt at documenting what works for me. Feel free to comment with bugs or enhancements please.
1. [Setting Up A Local Repository](#git.1)
1. [From Scratch](#git.1.1)
2. [From An Existing Project](#git.1.2)
2. [Setting Up A Remote Repository](#git.2)
1. [Remote Repository For Developer Only (ssh)](#git.2.1)
2. [Remote Repository For Public Access (git://)](#git.2.2)
3. [Shared Multi-Developer Public Repository](#git.2.3)
3. [Managing Multiple Developers, Repositories and Branches](#git.3)
4. [Comments](#comments)
<a name="git.1" ></a>
## 1. Setting Up A Local Repository ##
Alice is going to start developing a project and she wants to add source control to it. There are a couple of reasons Alice likes having a local repository, including branch control: she can revert her code to previous releases; fix, patch or merge a bug fix; roll a release; and then pop back to the current development branch.
<a name="git.1.1" ></a>
### 1.1 From Scratch ###
To set up a git repository for her project, Alice does the following:
alice@home $ mkdir proj
alice@home $ cd proj
alice@home $ git init
The project directory is now an empty git repository. As she creates files, she can add them to the repository with
alice@home $ git add newfile.src
And when she's done work or at least reached some break point, she can commit the new files, and all changes with
alice@home $ git commit -a
<a name="git.1.2" ></a>
### 1.2 From An Existing Project ###
Also, occasionally Alice gets excited and starts coding before creating a repository. To create a repository from an already started project is as simple as
alice@home $ cd ~/proj
alice@home $ git init
and either
alice@home $ git add .
to add all the files, or
alice@home $ git add file1 file2 file3
to add just some of the files, both followed by
alice@home $ git commit -a
for the initial commit of the code to the new repository.
<a name="git.2"></a>
## 2. Setting Up A Remote Repository ##
Sometimes Alice needs her repositories to be remote and internet accessible. Sometimes she needs to work on them from several locations, and sometimes she wants her project's code to always be accessible to the public.
There are two primary methods for making remote git repositories accessible online. The first method is over ssh, which developers can use to both read and write to the repository and the second is through a dedicated git server which the public can use for read only access.
<a name="git.2.1"></a>
### 2.1 Remote Repository For Developer Only (ssh) ###
If Alice's project is personal and she just needs a central repository to access from a few locations like both work and home, she can set up a repository on any unix machine she has access to as follows.
Alice needs to create a bare repository clone of her working code and then transfer it to the server she will be using as the repository host
alice@home $ git clone --bare ~/proj proj.git
alice@home $ tar -czf proj.git.tar.gz proj.git
alice@home $ scp proj.git.tar.gz
Then, on the server
alice@server $ tar -xzf proj.git.tar.gz
alice@server $ mv proj.git proj
Now Alice can create working copies of the repository from anywhere, like work, and work on the code as normal as follows
alice@work $ git clone ssh://
alice@work $ cd proj
alice@work $ git commit -a
However, all this does is create a local clone of the repository and commit the changes to that clone. To push the local changes back to the central repository, Alice does
alice@work $ git push
(As a note, Alice will also need to perform this clone of the remote repository at home so that her repository is aware of the remote repository, or she can use 'git remote add' to make her current original repository aware of the remote one)
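The 'git remote add' alternative mentioned above can be sketched like this (the server address and path are hypothetical, stand-ins for wherever the bare repository lives); it points Alice's original home repository at the central one without re-cloning:

```shell
# from the original working copy at home; server and path are hypothetical
git remote add origin ssh://server.example.com/home/alice/proj
git push origin master
```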
When Alice gets home she can check out the latest changes with a simple
alice@home $ git pull
which pulls all the latest changes from the remote repository. Then she can develop, commit and push her changes and then the next day at work she can pull all those changes.
<a name="git.2.2"></a>
### 2.2 Remote Repository For Public Access (git://) ###
Now, to allow public read-only access to the repository over the git:// protocol, the steps for setting up a remote repository are all the same; however, there are a few additional steps to take.
At a minimum, Alice needs to set up the git daemon on the server and flag each git repository that she wants to be publicly accessible.
Setting up a basic git daemon is up to Alice and her server's distribution, but once it is installed and running, it will try to export any directory on the server filesystem that is a) a git repository, and b) flagged to be publicly accessible.
To make her repositories accessible, Alice does the following
alice@server $ touch ~/proj/git-daemon-export-ok
Now when Bob hears about Alice's project, he can check out a copy of the repository himself as follows
bob@home $ git clone git://
Bob actually ends up with a full clone of the repository and can work with the code, and if he wants he can make changes and commit them to his local clone of the repository as normal. However, the one thing Bob cannot do is 'push' his changes back to the central repository.
He can, however, stay up to date with the repository with git pull
bob@home $ git pull
and he'll always get the latest changes.
<a name="git.2.3"></a>
### 2.3 Shared Multi-Developer Public Repository ###
*(Note: This is for those more used to CVS and Subversion style source control. De facto and "proper" git style is outlined in section [3. Managing Multiple Developers, Repositories and Branches](#git.3).)*
Alice happens to have root access to her server and wants to set up a multiple developer git repository.
First she creates a git user group and makes a root git directory.
root@server # groupadd git
root@server # mkdir /git
Then Alice configures the git daemon to only export repositories in /git in the git-daemon's config file
GITDAEMON_OPTS="--syslog --verbose /git"
Now Alice creates a shared repository. She untars the git repository like normal, but sets its group to git, makes sure new files will keep that group by setting the setgid bit, and then makes the repository "shared", which means all the files are writable by the group git.
root@server # cd /git
root@server # tar -xzf proj.git.tar.gz
root@server # mv proj.git proj
root@server # chgrp -R git proj
root@server # chmod g+ws proj -R
root@server # cd proj
root@server # git config core.sharedRepository true
And of course if Alice wants it to be publically viewable
root@server # touch git-daemon-export-ok
Now Alice has a git repository that several developers on the server can all use. Anyone in the git group can commit to the repository.
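As an aside, newer versions of git can do most of the permissions dance above in one step: `git init` takes a `--shared=group` flag that sets up group-writable, setgid-propagating permissions from the start. A sketch using the same /git layout as above:

```shell
# create a group-shared bare repository in one step, then hand it to the git group
git init --bare --shared=group /git/proj
chgrp -R git /git/proj
```

This avoids the manual chgrp/chmod/core.sharedRepository sequence, though importing Alice's existing history would still mean pushing into it from a working copy.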
Alice's friend Charlie wants to develop for the project so Alice gives him an account on the server. Charlie can then start developing just like normal
charlie@home $ git clone ssh://
charlie@home $ cd proj
charlie@home $ git commit -a
charlie@home $ git push
Alice can get these changes at home, and any she's made from work with a simple
alice@home $ git pull
And if the repository was made public and exportable then Bob can checkout the code and keep up to date too
bob@home $ git clone git://
bob@home $ cd proj
bob@home $ git pull
<a name="git.3"></a>
## 3. Managing Multiple Developers, Repositories and Branches ##
The proper way to use git with multiple developers is for each developer to have their own repository and branches, and to have a central manager who pulls from all the other branches and merges the code together before release. This is how Linux development works (git was created by Linux's creator, Linus Torvalds).
*Note: I know this is the proper way but I haven't really had any experience with it, so until I get time to play with it unfortunately this part of the document will be empty. Check out the official git manual for a good idea of how this should be managed, especially chapter 4 [Sharing Development](*.
<a name="comments"></a>
## Comments ##
**Anon posted on 2010 10**
Section 2.3:
root@server # chmod g+ws proj -R
this makes all files setgid when i think you really only wanted directories to be g+s
root@server # find proj -type f -print0 | xargs -0 chmod g+w
root@server # find proj -type d -print0 | xargs -0 chmod g+ws
is probably better…
<a name="git.ref"></a>
## References ##
* [Git User's Manual (for version 1.5.3 or newer)](
* [Git Ref](
* [Pro Git](
* (appears dead now) [Setting up your Git repositories for open source projects at GitHub](
* [Google](


# Notes on installing Ubuntu on a Lenovo Q190 #
*Aug 25, 2013*
So I like running a full computer on my TV. It's just convenient to be able to easily get to Youtube, torrent stuff directly on it, copy files to it over sftp, play any media file I can find, etc etc. Our last "tv box" was a small nettop that was a little underpowered: it choked on high def video files and full screen youtube. So I've been waiting for a replacement that fit the following parameters: cheap (less than $400), small nettop form factor, light on power consumption, and more powerful. The [Lenovo Q190]( hit the mark with a dual core and 4gb of ram. My only concern was that it was a Windows 8 box, so it'd be my first time installing Linux on a secure boot machine.
The good news is it went really well. First note: the Windows 8 partition resizer may be the best thing about Windows 8. I remember being stoked when Windows got its own partition resizer back in the Vista or Windows 7 days. The only slight con was that it was pretty greedy about space: if you were using only 20GB on a fresh install, it wouldn't shrink much lower than 60GB... But the new Windows 8 one is hilarious and will let you shrink down to 100% full. It's also unbelievably fast; I had to reload it to double check it had actually done the shrink.
After that, I rebooted the machine and jammed F1 (ok, actually I jammed F1, F2, F8, F9, F10, F11, F12, and ESC) to bring up the "BIOS". There I turned off Secure Boot in the security menu, and turned off Quick Boot. Then the USB stick booted Ubuntu fine. The install ran fine too. I did have to go back into the BIOS afterwards to reorder the boot drives. Ubuntu managed to name its partition in a way the BIOS could recognize, so I just moved that in priority above Windows 8 and on the next reboot got Grub!
The device runs fine; it handled all the high def files I could throw at it, and so far even full screen youtube at higher def seems to be ok.
The box is supposed to have Wifi, but that doesn't seem to have been recognized out of the box. That's ok, I can either poke at it or leave it plugged into ethernet; not a deal breaker. It also came with an adorable handheld keyboard/mouse device, but its bluetooth also doesn't seem to be supported. Still, it fulfils what I need well, so I'm pleased, and secure boot was less of a pain to work around than I'd worried, so yeah!


# StrongSwan VPN (and ufw) #
*Jan 26, 2015*
I make ample use of SSH tunnels. They are easy, which is the primary reason. But sometimes you need something a little more powerful, like for a phone, so all your traffic can't be snooped out of the air around you, or so that all your traffic, not just SOCKS-proxy-aware apps, gets sent over it. For that reason I decided to delve into VPN software over the weekend. After a pretty rushed survey I ended up going with [StrongSwan]( OpenVPN brings back nothing but memories of complexity, and OpenSwan seemed a bit abandoned, so I had to pick one of its descendants, and StrongSwan seemed a bit more popular than LibreSwan. Unscientific and rushed, like I said.
So there are several scripts floating around that will just auto set it up for you, but where's the fun (and the understanding that allows tweaking) in that? So I found two guides and smashed them together to give me what I wanted:
[strongSwan 5: How to create your own private VPN]( [[local ref](references/strongswan/] is the much more comprehensive one, but it also sets up a cert-style login system. I wanted passwords initially.
[strongSwan 5 based IPSec VPN, Ubuntu 14.04 LTS and PSK/XAUTH]([[local ref](references/strongswan/] has a few more details on a password based setup
Additional notes: I pretty much ended up doing the first one straight through, except for creating client certs. Also, the XAUTH / IKEv1 setup of the password tutorial seems incompatible with the Android StrongSwan client, so I used EAP / IKEv2, pretty much straight out of the first one. Also it seems like you still need to install the CA cert and vpnHost cert on the phone, unless I was missing something.
Also, as an aside, and a curve ball to make things more difficult, this was done on a new server I am playing with. Ever since I played with OpenBSD's pf, I've been ruined for iptables. It's just not as nice. So I'd been hearing about ufw from the Ubuntu community for a while and was curious if it was nicer and better. I figured after several years maybe it was mature enough to use on a server. I think maybe I misunderstood its point. Uncomplicated maybe meant not-featureful. Sure, for unblocking ports for an app it's cute and fast, and even for straight unblocking a port its syntax is a bit clearer, I guess? But as I delved into it I realized I might have made a mistake. It's built on top of the same system iptables uses, but it creates all new tables, so iptables isn't really compatible with it. The real problem, however, is that the ufw command has no way to set up NAT masquerading. None. The interface cannot do that. Whoops. There is a hacky workaround I found at [OpenVPN – forward all client traffic through tunnel using UFW]( which involves editing config files in pretty much iptables-style code. Not uncomplicated or easier or less messy like I'd been hoping for.
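For reference, that workaround boils down to pasting a small nat table into /etc/ufw/before.rules, above the existing *filter section, then reloading ufw. This is only a sketch; the VPN client subnet and the outbound interface name are assumptions, not values from my setup:

```shell
# added near the top of /etc/ufw/before.rules (subnet and eth0 are assumptions)
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s -o eth0 -j MASQUERADE
COMMIT
```

You also need DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw for forwarded VPN traffic to pass at all, which is exactly the kind of raw-config editing ufw was supposed to save you from.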
So I was a little unimpressed with ufw (but I learned a bunch about it, so that's good, and I guess what I was going for) and had to add "remove ufw and replace with iptables on that server" to my todo list. But after a Sunday's messing around I was able to get my phone to work over the VPN, through my server, to the internet. So a productive time.