
Nextcloud and Docker with Apache Proxy

I decided I wanted to try migrating away from Dropbox, Amazon Drive, and Google Drive to my own server using open source tools. After a bit of research I determined Nextcloud would be the best fit for my current needs, with some optional features I could grow into later. Nextcloud can be installed via packages in the major distributions, but I wanted to use this opportunity to test drive Docker at the same time. One of the reasons I wanted to use a container is that the installation is mostly isolated from the host, which is good for security but also makes it easier to migrate the whole thing to a different host later on. The host I want to use already has a web server, Apache, listening on ports 80 and 443, so we’ll configure it to act as a proxy between the client and the web server inside the Nextcloud Docker image. This also lets us reuse the SSL certificate already installed on the host.

The first step is getting Docker installed and running. This is pretty easy for most distributions and is covered in detail on the official Docker Community Edition site for Ubuntu and CentOS; the Gentoo Wiki has instructions as well.
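
For a quick test setup, Docker’s convenience script covers most distributions; review the script before running it, and prefer your distribution’s documented method for anything long-lived:

# download and run Docker's install script, then enable the service
curl -fsSL https://get.docker.com/ -o get-docker.sh
sh get-docker.sh
systemctl enable --now docker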

Once you’ve got Docker up and running, let’s test it out first:

docker run hello-world


This should download the hello-world image and run it.
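
If the installation is healthy, the container prints a greeting that includes lines like these:

Hello from Docker!
This message shows that your installation appears to be working correctly.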

Now, let’s get to Docker. First let me mention that I already have a database (PostgreSQL), so I’ll skip that step here; if you don’t already have a database available, now is the time to pause and get that resolved. Since a container’s data does not survive the container being removed or recreated, we need to instruct Docker to create a volume for the Nextcloud image that will keep our data safe. This can be accomplished with the ‘-v’ flag. We also need to tell Docker what port to open up, but in my case I don’t want it using port 80 or 443, so we’ll further instruct Docker to forward the port with the ‘-p’ flag.

docker run -d -v nextcloud:/var/www/html -p 8181:80 nextcloud


In the above command I have selected port 8181 on the host to be forwarded to port 80 in the Nextcloud container. Once the container loads completely you should be able to access it via http://your_ip:8181 and see the setup page, but before we do that let’s set up our proxy.
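
You can confirm the container is up and see the port mapping with ‘docker ps’; abbreviated output looks something like this (the container ID and auto-generated name will differ on your system):

$ docker ps
CONTAINER ID   IMAGE       ...   PORTS                  NAMES
16765e565e25   nextcloud   ...   0.0.0.0:8181->80/tcp   quirky_tesla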

On the host side we will create an Apache vhost with a few lines like this:

<VirtualHost *:80>
    ServerName nextcloud.my_domain.com
    Redirect permanent / https://nextcloud.my_domain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName nextcloud.my_domain.com

    <Proxy *>
        Require host localhost
        Require all granted
    </Proxy>

    ProxyPass / http://localhost:8181/
    ProxyPassReverse / http://localhost:8181/

    ServerEnvironment apache apache

    Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"

    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
    SSLCertificateFile /etc/my_certificate.pem
    SSLCertificateKeyFile /etc/my_certificate-private.pem
    SSLCertificateChainFile /etc/my_certificate-full.pem
</VirtualHost>

This configuration forces all non-SSL connections over to SSL via the permanent redirect. The SSL vhost then proxies all connections to port 8181 on localhost, where Nextcloud is running, and the Strict-Transport-Security header tells browsers to keep using HTTPS on future visits. Finally, we use our SSL certificate from the host. Don’t forget to set up DNS for your Nextcloud domain!
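
Note that the ProxyPass directives require Apache’s proxy modules to be loaded, and the Header line needs mod_headers. On Debian/Ubuntu-style layouts, enabling them looks roughly like this (module management differs on other distributions):

a2enmod proxy proxy_http headers ssl
systemctl restart apache2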

At this point we should be ready to continue with the setup via the web. Load https://nextcloud.my_domain.com in your web browser and follow the on-screen instructions. One of the pages should detect the proxy setup and ask for additional domain(s) to be configured, so be sure to add https://nextcloud.my_domain.com to the list, in addition to http://host.my_domain.com:8181 (if desired).
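
For reference, the domains end up in the ‘trusted_domains’ array of Nextcloud’s configuration, which lives in the container volume at /var/www/html/config/config.php; a minimal sketch of the relevant section:

'trusted_domains' =>
array (
  0 => 'nextcloud.my_domain.com',
  1 => 'host.my_domain.com:8181',
),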

One final step that I suggest is setting up a cron job on the host (not inside the Nextcloud Docker container) to run Nextcloud’s background tasks. I have mine set to run every 15 minutes. In order for this to work we need to install sudo in the container, so first enter the container:

docker exec -it 16765e565e25 bash


Update apt sources:

apt-get update

Install sudo:

apt-get install sudo


Now finally, on the host (NOT the container), create a crontab entry with this line:

*/15 * * * * docker exec -d <container_id> /usr/bin/sudo -u www-data /usr/local/bin/php /var/www/html/cron.php


Be sure to replace <container_id> with the real one, which you can find by running ‘docker ps’ on the host.
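
Before trusting cron, you can run the same command by hand (without ‘-d’, so any errors print to your terminal) and then check Nextcloud’s admin page, which reports when background jobs last ran:

docker exec 16765e565e25 /usr/bin/sudo -u www-data /usr/local/bin/php /var/www/html/cron.php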

At this point you should have a fully functional Nextcloud server!

Client PPtP Connection From A VM

I encountered an issue recently when trying to make a PPtP connection from a Linux VM, as the client, to a remote commercial device or server: the GRE packets were being dropped. The same PPtP credentials worked on another server that is bare metal. This led me to speculate that the issue might be something between the routing devices and the client. After a bit of investigative work with Wireshark I discovered the GRE packets were in fact getting to the virtualization host but not to the guest VM. I suspect this issue may be present with other types of virtualization software, but to be clear, this particular VM host is running KVM/QEMU.

It has been a while (read: years) since I’ve done much with PPtP beyond just using it. After adding a configuration that was working on another server to this particular system, I discovered, much to my dismay, that the connection would not complete. Looking at what ppp logged to the system log revealed it never got a proper GRE reply. Well, there were a lot of things in the log, but the one that stood out looked like this:

warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error


After a bit of Googling and reading the documentation for pptp-client, I decided to retry the setup on the previously mentioned working system and watch the log closely for further clues. Where the second system was failing, the original system sailed right past and worked fine. My next attempt was to look at what connections the first system had open, which led me to connect that with what the documentation and searching had revealed: PPtP uses TCP port 1723 for the control channel and IP protocol 47 (GRE) for the tunnel itself. Watching another attempt on the second system showed the outgoing GRE request but nothing coming back. Repeating the last test while watching for incoming GRE on the host showed that it was being received but not being passed on to the guest VM. Looking at my options, I discovered that there is a whole set of kernel modules and a sysctl option to allow forwarding of PPtP.
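
For anyone who wants to repeat the packet watching, a couple of tcpdump invocations on the host are enough (the interface name here is an assumption; substitute your own):

# watch the PPtP control channel
tcpdump -ni eth0 tcp port 1723

# watch for the GRE tunnel packets (IP protocol 47)
tcpdump -ni eth0 ip proto 47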

The missing pieces of the puzzle: first, add this line to your sysctl.conf:

net.netfilter.nf_conntrack_helper=1


Then load these kernel modules:

nf_conntrack_proto_gre
nf_nat_proto_gre
nf_conntrack_pptp
nf_nat_pptp
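
To apply everything without a reboot, and to make the modules persist across one, something like this works on a systemd-based host (the modules-load.d path is the common default; verify it for your distribution):

# load the modules now
modprobe -a nf_conntrack_proto_gre nf_nat_proto_gre nf_conntrack_pptp nf_nat_pptp

# apply the sysctl change (requires nf_conntrack to be loaded first)
sysctl -p

# load the modules automatically at boot
printf '%s\n' nf_conntrack_proto_gre nf_nat_proto_gre nf_conntrack_pptp nf_nat_pptp > /etc/modules-load.d/pptp.conf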


As soon as these were in place, PPtP started working as expected in the guest VM. What started out as a mystery turned out to have a fairly simple solution. While there are probably not a lot of people still using PPtP these days, it is a better alternative than a proprietary VPN client.

Managing Multiple Machines Simultaneously With Ansible

If I have to do it more than once, it’s probably going to get scripted. That has been my general attitude toward mundane system administration tasks for many years, and it’s shared by many others. How about taking that idea a little further and applying it to multiple machines? Well, there’s a tool for that too, and it’s named Ansible.

We need Ansible installed on the system we will be using as the client/bastion. This machine needs to be able to SSH into all of the remote systems we want to manage, so stop and make sure that works unhindered before continuing. On the remote machines the requirements are fairly low and typically revolve around Python 2. On Gentoo, Python 2 is already installed, as it is required by several things including emerge itself. On Ubuntu 16.04 LTS, Python 2 is not installed by default and you will need to install the package ‘python-minimal’ to regain it.

Once we have Python installed on the remote machines and Ansible installed on the local machine, we can move on to editing the Ansible configuration with a list of our hosts. This file is fairly simple and there are lots of examples available, but here is a snippet of my /etc/ansible/hosts file:

[ubuntu-staging]
ubuntu-staging-dev
ubuntu-staging-www
ubuntu-staging-db


Here you can see I have three hosts listed under a group named ubuntu-staging.
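
Before running real commands, the ping module gives a quick end-to-end connectivity test; each host should come back with something like this:

$ ansible ubuntu-staging -m ping
ubuntu-staging-www | SUCCESS => {
    "changed": false,
    "ping": "pong"
}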

Once we have hosts defined we can do a simple command line test:

ansible ubuntu-staging -m command -a "w"


The ‘-m’ tells Ansible we wish to use the module named ‘command’ and ‘-a’ passes the arguments for that module, here simply ‘w’. The output from this command should be similar to this:

$ ansible ubuntu-staging -m command -a "w"
ubuntu-staging-www | SUCCESS | rc=0 >>
10:25:57 up 8 days, 12:29, 1 user, load average: 0.22, 0.31, 0.35
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/2 192.168.13.221 10:25 1.00s 0.25s 0.01s w

ubuntu-staging-dev | SUCCESS | rc=0 >>
10:25:59 up 8 days, 12:17, 1 user, load average: 0.16, 0.03, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:25 0.00s 0.37s 0.00s w

ubuntu-staging-db | SUCCESS | rc=0 >>
10:26:02 up 8 days, 12:25, 1 user, load average: 0.17, 0.09, 0.09
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
canuteth pts/0 192.168.13.221 10:26 0.00s 0.28s 0.00s w


Okay, that shows promise right? Let’s try something a little more complicated:

$ ansible ubuntu-staging -s -K -m command -a "apt-get update"
SUDO password:
[WARNING]: Consider using apt module rather than running apt-get

ubuntu-staging-db | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Fetched 306 kB in 5s (59.3 kB/s)
Reading package lists…

ubuntu-staging-www | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:4 https://apt.dockerproject.org/repo ubuntu-xenial InRelease
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [544 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [528 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [220 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [471 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [456 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [185 kB]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [276 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [263 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [118 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [124 kB]
Get:16 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [111 kB]
Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [64.2 kB]
Fetched 3,666 kB in 6s (598 kB/s)
Reading package lists…

ubuntu-staging-dev | SUCCESS | rc=0 >>
Hit:1 http://us.archive.ubuntu.com/ubuntu zesty InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu zesty-updates InRelease [89.2 kB]
Get:3 http://security.ubuntu.com/ubuntu zesty-security InRelease [89.2 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu zesty-backports InRelease [89.2 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu zesty-updates/main i386 Packages [94.4 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 Packages [96.2 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu zesty-updates/main Translation-en [43.0 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu zesty-updates/main amd64 DEP-11 Metadata [41.8 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu zesty-updates/main DEP-11 64×64 Icons [14.0 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe i386 Packages [53.4 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 Packages [53.5 kB]
Get:12 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe Translation-en [31.1 kB]
Get:13 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe amd64 DEP-11 Metadata [54.1 kB]
Get:14 http://us.archive.ubuntu.com/ubuntu zesty-updates/universe DEP-11 64×64 Icons [43.5 kB]
Get:15 http://us.archive.ubuntu.com/ubuntu zesty-updates/multiverse amd64 DEP-11 Metadata [2,464 B]
Get:16 http://us.archive.ubuntu.com/ubuntu zesty-backports/universe amd64 DEP-11 Metadata [3,980 B]
Get:17 http://security.ubuntu.com/ubuntu zesty-security/main amd64 Packages [67.0 kB]
Get:18 http://security.ubuntu.com/ubuntu zesty-security/main i386 Packages [65.5 kB]
Get:19 http://security.ubuntu.com/ubuntu zesty-security/main Translation-en [29.6 kB]
Get:20 http://security.ubuntu.com/ubuntu zesty-security/main amd64 DEP-11 Metadata [5,812 B]
Get:21 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 Packages [28.8 kB]
Get:22 http://security.ubuntu.com/ubuntu zesty-security/universe i386 Packages [28.7 kB]
Get:23 http://security.ubuntu.com/ubuntu zesty-security/universe Translation-en [19.9 kB]
Get:24 http://security.ubuntu.com/ubuntu zesty-security/universe amd64 DEP-11 Metadata [5,040 B]
Fetched 1,049 kB in 6s (168 kB/s)
Reading package lists…


This time we passed Ansible the parameter ‘-s’, which tells it we want to use sudo on the remote hosts, and ‘-K’, which tells it to prompt us for the sudo password. You’ll also notice that it warns us to use the ‘apt’ module, which is a better choice for interacting with apt.
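
Taking that advice, the same refresh via the apt module looks like this; ‘update_cache=yes’ is the module’s equivalent of running apt-get update:

ansible ubuntu-staging -s -K -m apt -a "update_cache=yes"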

The command module will work with pretty much any command that is non-interactive and doesn’t use pipes or redirection. I often use it for checking things on multiple machines quickly. For example, if I need to install updates and I want to know if anyone is using a particular machine, I can use w, who, users, etc. to see who is logged in before proceeding.

If we need to interact with one or a few hosts rather than an entire group, we can name the hosts, separated by commas, in the same fashion: ‘ansible ubuntu-staging-www,ubuntu-staging-db …’

Now let’s look at something a bit more complicated. Say we need to copy a configuration file, /etc/ssmtp/ssmtp.conf, to all of our hosts. For this we will write an Ansible playbook, which I named ssmtp.yml:


---
# copy ssmtp.conf to all ubuntu-staging hosts

- hosts: ubuntu-staging
  user: canutethegreat
  sudo: yes

  tasks:
    - copy: src=/home/canutethegreat/staging/conf/etc/ssmtp/ssmtp.conf
            dest=/etc/ssmtp/ssmtp.conf
            owner=root
            group=ssmtp
            mode=0640


We can invoke the playbook with ‘ansible-playbook ssmtp.yml’ and it will do as directed. The syntax is fairly straightforward and there are quite a number of examples available.

There are lots of examples for a wide range of tasks in the Ansible GitHub repo, and be sure to take a look at the intro to playbooks page. Just remember that you are doing things to multiple servers at once, so if you do something dumb it’ll be carried out on all of the selected servers! Testing on staging servers and using check (dry-run) mode, as shown below, are always good ideas anyway.
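
For example, a dry run of the playbook above, showing a diff of what would change without actually changing it:

ansible-playbook ssmtp.yml --check --diff -K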