
Portable DevOps Platform — GitLab In An LXD Container

GitLab is a complete DevOps lifecycle tool with a web-based interface. The platform provides a Git repository manager, wiki, issue tracking, and integration and deployment pipelines, and best of all it is open source software! It also makes for a great self-hosted replacement for GitHub.

Plenty of organizations run GitLab on bare metal, in a dedicated virtual machine (VM) on VMware, Hyper-V, or Linux KVM, or on cloud instances from providers such as AWS, Azure, and GCP. The cloud platforms have some built-in scalability and some form of high availability for migrating to a new host if there is a failure at a lower level. Those of us hosting locally on bare metal or in a VM have to deal with high availability and scalability ourselves.

There are official GitLab Docker images that can be used if Docker is your container runtime of choice. However, I prefer to use Linux containers (LXD) as it provides more of a standalone container that more closely mimics a dedicated machine. Spinning up new LXD containers is fast and easy, and lets me use them as lightweight VMs.

In order to get started you will need to have LXD installed on your host machine. On Ubuntu this is as simple as:
sudo snap install lxd

Followed by creating an initial configuration with:
sudo lxd init


The defaults should be fine; however, I prefer to use “dir” as the storage backend as it allows me to easily move the container around the filesystem with symlinks.
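If you would rather skip the interactive questions, `lxd init` can also read a preseed file. Here is a minimal sketch of a preseed that selects the “dir” backend; the pool, bridge, and profile names are illustrative, and the exact keys can vary between LXD versions:

```yaml
# Feed to: lxd init --preseed < preseed.yaml
storage_pools:
- name: default
  driver: dir          # "dir" keeps containers as plain directories on disk
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```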

If you want to make sure LXD is installed and configured, you can run this quick command:
lxc info


You will be presented with a bunch of output about the host system, the LXD version, and the available features. The rest of this guide assumes you have a working LXD install.

Now that we have LXD installed and configured, let’s go ahead and create a new container for dedicated GitLab use on our host system:
lxc launch ubuntu: gitlab


The above command will create a new container using the latest stable Ubuntu image and name it gitlab.

Now, before we proceed with installing GitLab prerequisites, we need to tweak a few things on the host and with the container’s configuration.
lxc stop gitlab


With the container stopped, we need to temporarily give the container privileged access to the host. This is needed in order for the GitLab installer to set some kernel configuration options.
lxc config set gitlab security.privileged=true


If the host system is running Ubuntu with apparmor, we will need to tell apparmor not to restrict the container (you can change it back later after the install if you wish.)
lxc config set gitlab raw.lxc "lxc.apparmor.profile=unconfined"


Next, let’s make a backup of our current kernel settings:
sudo sysctl -a > sysctl.bak


Now that we have a backup, we can go ahead and start our new container up:
lxc start gitlab


Give it a few seconds and check the status with:
lxc list


If it shows the gitlab container as running, we can go ahead and move our work to inside the container:
lxc exec gitlab -- /bin/bash


We should now be presented with a root shell that is running inside the container! Since we are running inside the container as root, from here on we do not need to prefix commands with “sudo” while inside the container.

Our first task from inside the container is to remount the kernel’s proc filesystem from read-only to read-write mode. This is required; otherwise the GitLab installer will not be able to make any changes to the kernel’s configuration. Once we finish the install, you will not need to execute the below command for normal operation.
mount -o remount,rw /proc/sys


Now we can move on to checking for updates:
apt update
apt upgrade


If there are any available updates now would be a good time to install. This will ensure we are working with the latest security and bug fixes.

Next up, we need to make sure we have some basic packages installed:
apt install -y curl openssh-server ca-certificates tzdata perl


Now we will want to install postfix in order for GitLab to have the ability to send notifications.
apt install -y postfix


You will be prompted to select a configuration type for the Postfix mail server. Select “Internet Site” from the list, and if there happen to be any additional questions just hit enter to accept the defaults.

We are now ready to add the official GitLab apt repository for the community edition (note: if using the enterprise edition, change “ce” to “ee”) with this curl command:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | bash


Now we can finally get to installing GitLab itself. Again, we are going to install the community edition (ce); if you are instead installing the enterprise edition (ee), you will need to change the package name. Also, if you are not using a real domain name, or you have your own SSL certificates, you can add LETSENCRYPT="false" before “apt” to disable the automatic SSL certificate creation. We also need to tell the installer what our intended URL is, even if it is not a real DNS name. For example, in testing I often use “https://gitlab.lxd” as the EXTERNAL_URL.
EXTERNAL_URL="https://gitlab.example.com" LETSENCRYPT="false" apt install gitlab-ce


The above step will take a while, so go grab some coffee and a bagel.
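For reference, the EXTERNAL_URL value we passed on the command line ends up in GitLab’s main configuration file, so it can be changed later without reinstalling. A sketch of the relevant excerpt, using the example hostname from above:

```ruby
# /etc/gitlab/gitlab.rb (excerpt)
# Change this and re-run "gitlab-ctl reconfigure" to move to a new URL.
external_url 'https://gitlab.example.com'

# The LETSENCRYPT="false" environment variable corresponds to this setting:
letsencrypt['enable'] = false
```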

When the install finishes you will be given some info about the state of things and returned to the container’s bash prompt. If you encounter any errors, you can always run the above apt command again and it will perform an upgrade; no data (if any) will be lost. I sometimes encounter errors with network connections during the install, such is the life of working from home, and re-running the install allows me to work through them.

In order to access the GitLab web interface, we will need the container’s IP address from the lxc output:
canutethegreat@diagonalley:~$ lxc info gitlab
Name: gitlab
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/04 19:00 UTC
Status: Running
Type: container
Profiles: default
Pid: 72313
Ips:
lo: inet 127.0.0.1
lo: inet6 ::1
eth0: inet 10.12.206.95 veth4544f711
eth0: inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5 veth4544f711
eth0: inet6 fe80::216:3eff:fe30:9fe5 veth4544f711
Resources:
Processes: 384
CPU usage:
CPU usage (in seconds): 1382
Memory usage:
Memory (current): 2.83GB
Memory (peak): 2.84GB
Swap (current): 79.26MB
Swap (peak): 79.20MB
Network usage:
eth0:
Bytes received: 890.94kB
Bytes sent: 476.43kB
Packets received: 808
Packets sent: 391
lo:
Bytes received: 415.07MB
Bytes sent: 415.07MB
Packets received: 87005
Packets sent: 87005
or from the ip command inside the container:
root@gitlab:~# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
13: eth0@if14: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:30:9f:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.12.206.95/24 brd 10.12.206.255 scope global dynamic eth0
valid_lft 2961sec preferred_lft 2961sec
inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3492sec preferred_lft 3492sec
inet6 fe80::216:3eff:fe30:9fe5/64 scope link
valid_lft forever preferred_lft forever


As you can see from both of the outputs above, the container has an IPv4 address of “10.12.206.95.”

We will enter the container’s IP address prefixed with “https://” (“https://10.12.206.95” in my case) in our web browser running on the host, and we will be presented with the initial login screen:



Screenshot after initial install of gitlab
Go ahead and type in a secure password for the root user, and type it in a second time to confirm. We will then be asked to log in; enter “root” as the username with the password we just set:




That’s it, we now have a running GitLab install inside an LXD container! Where you go from here depends on what you need. I typically create a user account for myself so that I am not using the root account for everything, even if I’m just testing.

Now that we have completed the initial setup, we can optionally undo one of the steps related to allowing the container to access the kernel as it is only needed for installation and upgrades. From the host run:
lxc config set gitlab security.privileged=false

to remove the container’s privileged access to the host.

It is also worth mentioning that if you ever remove the gitlab container and wish to revert the kernel changes, you can do so on the host if you have your backup file handy:
cat sysctl.bak | sudo sysctl -e -p -


I find it interesting to check on the resource consumption of the gitlab container. You can get info from LXD on the host about the container by running the same command we used to get the IP address previously:
lxc info gitlab


The output on my test machine after logging into the GitLab web interface looks like this:
Name: gitlab
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/04 19:00 UTC
Status: Running
Type: container
Profiles: default
Pid: 72313
Ips:
lo: inet 127.0.0.1
lo: inet6 ::1
eth0: inet 10.12.206.95 veth4544f711
eth0: inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5 veth4544f711
eth0: inet6 fe80::216:3eff:fe30:9fe5 veth4544f711
Resources:
Processes: 370
CPU usage:
CPU usage (in seconds): 94
Memory usage:
Memory (current): 2.59GB
Memory (peak): 2.80GB
Swap (current): 41.24MB
Network usage:
eth0:
Bytes received: 797.37kB
Bytes sent: 469.80kB
Packets received: 342
Packets sent: 327
lo:
Bytes received: 5.60MB
Bytes sent: 5.60MB
Packets received: 1501
Packets sent: 1501


I find that my GitLab containers typically use around 4GB of memory with only a couple of users.

With GitLab up and running in an LXD container, we can use the power of LXD to do cool things such as snapshots, live migrations, and cloning, as well as take advantage of security by design, advanced resource control, and more! If you have multiple machines set up with LXD clustering, you can self-host and also not have to worry about downtime due to node failures.

The multitude of features provided by GitLab and the flexibility of LXD make for an awesome combination to use in both my development environment and production. I can spin up test containers in dev, deploy new versions in prod, and migrate to new hardware with ease. I hope this article finds you well and has provided some useful information on what it looks like to run GitLab inside LXD and how to get started with it. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!

Stop Hogging All The Bandwidth!

It’s like a scene from an American western: you have a gigabit connection (The Good), a couple of heavy users (The Bad), and a monthly bandwidth limit (The Ugly.) In this strange new world of working from home you need an Internet connection that has good bandwidth (speed) and good latency (response time), and it needs to work with multiple devices simultaneously. There are several ways we could solve this, ranging from paying for top-tier connections at a premium price to some fancy do-it-yourself routing. We will be focusing on the latter in this article because, well, simply put, we don’t have deep pockets!

First, let us take a moment to go over a few things. What exactly are bandwidth and latency in relation to network connections? Whenever I get asked to explain network bandwidth and network latency, I like to use a water pipe analogy. Imagine a big water pipe with a shutoff valve on each end and two workers, one on each end, who have to communicate about when it is time to send more water. Think of bandwidth as the diameter of the water pipe and latency as the speed at which the two workers communicate, and you should be able to imagine how they work together to affect the volume of water moving through the pipe over time. The bigger the water pipe (bandwidth) we have, the more water (data) can be transferred at a given time. Latency is a bit trickier, and that’s why I use workers in my analogy: the workers need to be able to communicate quickly so that water is sent when it needs to be sent and stopped when it needs to stop.

So in terms of latency, if the workers were using a chalkboard to write “start” or “stop” and holding the board up for the other worker to see, we would have a functional communication system, but it would be very slow. Now instead, let us upgrade the workers to walkie-talkies, and suddenly the communication is much quicker. If we can communicate quicker (low latency), the water (data) flow control becomes quicker, resulting in more responsiveness.
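To put rough numbers on the analogy, here is a small back-of-the-envelope calculation; all the figures are made up purely for illustration:

```shell
#!/bin/sh
# Rough transfer-time estimate: time = size / bandwidth, plus an overhead
# proportional to latency for all the back-and-forth "worker" chatter.
SIZE_MB=100        # file size in megabytes (illustrative)
BW_MBIT=50         # bandwidth in megabits per second (illustrative)
LATENCY_MS=40      # round-trip latency in milliseconds (illustrative)
ROUND_TRIPS=30     # e.g. handshakes and acknowledgement exchanges

TRANSFER_S=$(( SIZE_MB * 8 / BW_MBIT ))           # megabytes -> megabits
OVERHEAD_S=$(( ROUND_TRIPS * LATENCY_MS / 1000 ))
echo "payload time: ${TRANSFER_S}s, latency overhead: ${OVERHEAD_S}s"
```

Notice that a fatter pipe shrinks the payload time, but the latency overhead stays the same; that is why a high-bandwidth, high-latency link can still feel sluggish.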

Bandwidth and latency need to work hand-in-hand to provide “fast” internet. Latency affects response times and can be very noticeable in real-time communication and online video games. A connection with low bandwidth transfers data slowly, which is why your Netflix and video conferencing will drop in quality or even stutter. Alright, enough with that, let’s get on with bandwidth throttling!

It is not too difficult to set up a Linux router and have it throttle all the bandwidth by rate limiting. The downside is that every user and every device gets throttled. While the kids do not need to watch Netflix in 4K at all hours of the day (and night), some of us do need a good Internet connection for work. So how can we work from home and give select devices (i.e. the work laptop) the full pipe while still limiting everything else? On the other side of things, if we let the kids watch 4K videos 24/7, we will hit our monthly bandwidth cap. We can handle both of these problems by dividing our users and devices into two groups: those of us on the lease that pay the bills (the parents) and the freeloaders (the kids.) For some it might be more important to slow down Netflix, Disney+, Hulu, etc. while still allowing their own computer full speed access.

If you take a moment to Google various related keywords you will find there are several guides out there on how to do basic rate limiting/bandwidth throttling, but here we are interested in allowing some devices to be in an “unlimited” group while others get rate limited. What we will be doing is controlling devices by their assigned IP addresses. This will allow us to decide who gets full speed and who gets rate limited without having to list out every device individually.

I feel the need to take a moment to describe my home network so that you can understand the lay of the land. On my network, IP addresses are assigned both statically and dynamically by my custom Linux router/server. My Linux router provides DHCPv4, DHCPv6 (my ISP gives a /64 block), DNS, firewall, and some other services. If you do not have a machine that can be dedicated as a router a good alternative is to buy a router (https://openwrt.org/supported_devices) that can run OpenWRT (https://openwrt.org/).

The part that really matters here is that my router runs Linux and that IP addresses are consistent in my network. This is important because the way we will be handling the rate limiting is by IP address. If we have short-lived IP addresses we will have to update the config often for both the IPs that we want in the “unlimited” group and those in the rate limited group. That just sounds like a lot of busy work and I don’t have time for such things.
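Since group membership is decided purely by the address, the logic boils down to checking the last octet. Here is a tiny sketch of that decision; the 100-199 dynamic pool matches my setup, so adjust the range for yours:

```shell
#!/bin/sh
# Classify an IPv4 address into the "limited" or "unlimited" group based
# on its last octet: 100-199 is the dynamically assigned (limited) pool.
group_for_ip() {
    last=${1##*.}    # strip everything through the final dot
    if [ "$last" -ge 100 ] && [ "$last" -le 199 ]; then
        echo limited
    else
        echo unlimited
    fi
}

group_for_ip 192.168.1.42     # a statically assigned, trusted device
group_for_ip 192.168.1.150    # a dynamically assigned guest device
```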

The behind-the-scenes magic is handled in the Linux kernel. The way we tell it what we want to happen is by interacting through the traffic control (tc) tool. I wrote a small GNU Bash script that sets up two network groups, one unlimited and one limited, and configures the traffic control options.
Here is the entire script for your enjoyment:
#!/bin/bash
# To check the status try something like: tc class show dev $NETDEV

# The network device we are throttling
NETDEV=eno1

# reinit: clear any existing queueing discipline on the device
tc qdisc del dev $NETDEV root 2>/dev/null

# create the root qdisc, with class 1:1 as the default
tc qdisc add dev $NETDEV root handle 1: htb default 1

# 1:1 is the "unlimited" class at a full gigabit; note that tc's "mbit"
# means megabits per second, while "mbps" means megaBYTES per second
tc class add dev $NETDEV parent 1: classid 1:1 htb rate 1gbit
tc class add dev $NETDEV parent 1: classid 1:10 htb rate 250mbit ceil 500mbit

# put 192.168.1.100-199 into the limited class, in both directions
for IP in {100..199}
do
    tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.$IP flowid 1:10
    tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip dst 192.168.1.$IP flowid 1:10
done


Let’s go over each line so you can understand what is happening. The first line that does not start with a hash/pound “#” symbol is “NETDEV=eno1”. This line contains the name of the network interface card (NIC) that is connected internally to our local area network (LAN.) Then we have the “tc qdisc del” line, which removes any existing traffic control configuration from the root of the device so we start from a clean slate. This is followed by “tc qdisc add dev $NETDEV root handle 1: htb default 1”, which sets up the root queueing discipline to use hierarchy token bucket (HTB), allowing us to control bandwidth through classes. The following two lines set the speed (rate) of the “unlimited” class (1:1) to 1 gigabit per second (the maximum download speed my ISP provides), and then set the “limited” class (1:10) to a base rate of 250 megabits per second with a burstable ceiling of 500 megabits per second. The last section is a scripted way of putting the IP addresses between 192.168.1.100 and 192.168.1.199 into the “limited” class. These are the addresses my router assigns dynamically. The line with “src” matches the source address and the “dst” line matches the destination address; this way we control the rate for both incoming and outgoing traffic on a given IP address. Any IP address that is not between 192.168.1.100 and 192.168.1.199 automatically falls into the “unlimited” class. I have my DHCP server set up to statically assign 192.168.1.10 to 192.168.1.99 to the devices and machines that I do not want rate limited. This group includes my work laptop, my home servers, and some of my projects.

The above script can be run directly or you could drop it in somewhere to be executed at system boot. Another option, for systemd users, is to put it in /etc/networkd-dispatcher/routable.d/09-my-traffic-controller.sh which will allow it to run as soon as your network devices reach a routable state.
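For those who prefer a plain systemd unit over networkd-dispatcher, something like this would also work; the unit name and script path here are placeholders of my own choosing:

```ini
# /etc/systemd/system/traffic-shaping.service (hypothetical name and path)
[Unit]
Description=Apply tc bandwidth classes
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/traffic-shaping.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with "systemctl enable --now traffic-shaping.service".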

As you can see by the example script, bandwidth limiting is fairly easy and does not require a specialized degree from your local technical college. It allows everyone to have internet access but keeps the kids from sucking down the whole internet watching Disney+. We can even expand the script further and add additional groups. Maybe we want to add a new group for the Xbox that is neither in the “unlimited” group nor the “limited” group because we don’t want game updates to take hours. We can accomplish that by adding a new group, say 1:11, that is defined with more bandwidth. It is also worth noting that we could increase the ceiling for this new group so that if nobody is using the internet except the Xbox it can have a larger burst. This can come in handy if the kids have gone off to bed and you are on the couch with your significant other and the two of you want to play a quick match in Fortnite, but darn it, there’s an update you have to install first — burstable ceiling to the rescue!
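A sketch of that extra group, written as a dry run that only prints the tc commands so nothing is applied by accident; the class id 1:11 and the rates are my assumptions, and 192.168.1.50 is a made-up static address for the Xbox:

```shell
#!/bin/sh
# Dry-run sketch: print the tc commands for a middle-tier "xbox" class.
# Change TC="echo tc" to TC="tc" to actually apply the rules.
NETDEV=eno1
TC="echo tc"

# A class between "unlimited" and "limited": decent base rate, with a
# generous burst ceiling for off-hours game updates.
$TC class add dev $NETDEV parent 1: classid 1:11 htb rate 500mbit ceil 900mbit

# Pin the console's (static) address to the new class, in both directions.
$TC filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.50 flowid 1:11
$TC filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip dst 192.168.1.50 flowid 1:11
```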

The options that tc provides are quite flexible, and it is worth the time to take a quick peek at the man page. It certainly has made the quality of my work-at-home life better and kept me from pulling out all my hair — only some! I hope this article finds you well and has provided some useful information on what traffic control looks like in Linux and how to get started with it. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!

How to keep your ports private — Port knocking with knockd

In this article we will be looking at knockd, a port knocking tool, and how we can utilize it on our bastion/firewall server. This guide can be used for a production Linux server or our home Linux-based router. The concepts are the same as well as the purpose.

There are many cogs in the proverbial wheel within the computer security tool kit that all work together to make a more complete security solution for us. Computer security is important for servers and for our home networks.

One of these cogs is to hide things, or otherwise make them inaccessible until we want access to them. How can we hide things? I’m glad you asked! Enter knockd, an open source port-knock server that blocks access to a port unless it receives a special “knock” on the door, so to speak. This knock is actually a series of connections to specific ports in a specific order. One of the great features is that we can have it open access only to the IP address that we are connecting from!

What is this magic, you ask? Well, knockd accomplishes all of this by listening to all the traffic on a machine’s network interface, be it ethernet, ppp, etc., for a sequence of port hits. We can configure knockd to listen for TCP, UDP, or both types of packets. In addition, we can configure the port numbers used in the port knocking sequence, and we can even define how much time is allowed to pass before it stops considering a sequence to be valid.
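Conceptually, the daemon just compares the ordered list of ports a client has hit against the configured secret sequence. Here is a toy sketch of that comparison, using the ports from knockd's default config; this is only a model, not how knockd is actually implemented:

```shell
#!/bin/sh
# Toy model of sequence matching: the knock only "opens" when the ports
# were hit in exactly the configured order within the timeout window.
SEQUENCE="7000 8000 9000"

knock_result() {
    if [ "$1" = "$SEQUENCE" ]; then
        echo open
    else
        echo closed
    fi
}

knock_result "7000 8000 9000"    # correct knock
knock_result "9000 8000 7000"    # right ports, wrong order
```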

So you might be thinking “why would anyone want to hide a port?” or “isn’t the point of a server to provide services?” or even “we already have authentication, why do we need to hide the service at all?” These are all valid questions, and answering them requires us to step back for a moment and remember that sometimes there are vulnerabilities in protocols and services, as well as compromised user accounts. There is also the annoyance of tons of bots, script kiddies, and the like looking for vulnerabilities in our system and spamming our logs with their connection attempts. Anyway, enough of that… let’s move on to installing and configuring knockd.

We need to install knockd on our server before we can configure and make use of it. In this article I will use Ubuntu in the examples as it is a commonly used operating system, but it should be straightforward to convert the steps to work with a different Linux distribution.

Install knockd from the distribution repositories:
sudo apt install knockd


The configuration file location is /etc/knockd.conf and using your favorite editor open that file:
sudo vim /etc/knockd.conf


The defaults will look something similar to this:
[options]
logfile = /var/log/knockd.log
[openSSH]
sequence = 7000,8000,9000
seq_timeout = 10
tcpflags = syn
command = /usr/sbin/iptables -A INPUT -s %IP% -j ACCEPT
[closeSSH]
sequence = 9000,8000,7000
seq_timeout = 10
tcpflags = syn
command = /usr/sbin/iptables -D INPUT -s %IP% -j ACCEPT


The first section titled “options” controls where the logfile for knockd is stored. This default is fine unless you have a specific requirement.

The remaining two sections are an example of setting up knockd to open and close access to SSH (Secure SHell) as two separate sequences. Let’s go ahead and comment out everything starting with the “openSSH” line by adding a pound/hash “#” to the beginning of each line. Alternatively you can delete these lines, but I like to keep them as a quick reference when making edits. Your config should now look like this:
[options]
logfile = /var/log/knockd.log
#[openSSH]
# sequence = 7000,8000,9000
# seq_timeout = 10
# tcpflags = syn
# command = /usr/sbin/iptables -A INPUT -s %IP% -j ACCEPT
#[closeSSH]
# sequence = 9000,8000,7000
# seq_timeout = 10
# tcpflags = syn
# command = /usr/sbin/iptables -D INPUT -s %IP% -j ACCEPT


Now we will want to decide the port knocking sequence we want to utilize. The ports used in TCP/IP range up to 65535, and ports below 1024 are reserved for privileged services, so we will pick from 1024 to 65535. For this article’s example, I am going to use 1941, 1942, and 1945. These numbers represent dates when events took place in history: 1941 is the year that the code the German Enigma machine used was broken, 1942 is the year that the Japanese Navy’s cipher (designated JN-25) was broken, and 1945 is the year that World War II ended. Yes, I am a history nerd… among other things!
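Part of what makes a knock hard to stumble upon blindly is the sheer size of the search space. A quick calculation for a three-port sequence drawn from the unprivileged port range:

```shell
#!/bin/sh
# Number of ordered 3-port knock sequences from ports 1024-65535.
PORTS=$(( 65535 - 1024 + 1 ))       # 64512 usable ports
SPACE=$(( PORTS * PORTS * PORTS ))  # ordered triples, repeats allowed
echo "possible 3-port sequences: $SPACE"
```

That is over 2.6 x 10^14 combinations, and an attacker would have to hit one of them in order, within the sequence timeout.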

Now we need to decide what port or ports we would like to hide. In my environment I typically hide access to SSH on port 22 and RDP (Remote Desktop Protocol) on port 3389. This scenario applies to both a business and a home setup: I need to allow SSH access for developers and RDP access for GUI users, whether that is a Windows Terminal Server (multiuser business) or a single-user desktop at home. Also, as our society as a whole is generally a forgetful and lazy bunch, it seems like a good idea to have opened ports automatically be blocked again after a certain amount of time has elapsed. Note that this will not affect active connections (i.e. it will not break a running SSH session) and only applies to new connection attempts.

Our updated config will now look like this:
[options]
logfile = /var/log/knockd.log
[openSSHandRDP]
sequence = 1941,1942,1945
seq_timeout = 5
start_command = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT; /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 3389 -j ACCEPT
cmd_timeout = 500
tcpflags = syn
stop_command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT; /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 3389 -j ACCEPT


Let’s break down what each line is doing. Under “options” we have our logfile location. Under “openSSHandRDP” we have our knock sequence, followed by the sequence timeout. The sequence timeout is how long, in seconds, knockd will consider knocks on the ports to be part of one attempt. Next we have our start_command, and this is where the opening of the ports happens in the kernel’s firewall. The cmd_timeout is how long, in seconds, knockd waits after running the start_command before it runs the stop_command. That brings us to the stop_command, where we delete the two iptables rules added by start_command and thereby close the ports back up.

So bringing it all together what we now have is the ability to knock on ports 1941, 1942, and 1945 in that specific order and knockd will do its thing.

Now we need to set a default rule to be in place that denies access to SSH and RDP by default. The place to put this command depends on your firewall. On my bastion I write the firewall rules myself and interact via the iptables command line tool. So on my machine I edit /etc/iptables/rules.v4 which contains two lines that looks like this:
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 3389 -j DROP


Here “eth0” is the primary network connection. Without these rules knockd is rather pointless, as the ports would always be open. If you want these changes to take effect immediately without a reboot, you can simply run each of the lines above with the iptables command:
sudo iptables -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j DROP
sudo iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3389 -j DROP


At this point you should be wondering “this is all fine and dandy, but how do I actually initiate a knock?” This part is easy! We can use a variety of tools: nmap, telnet, knock, and others — basically any utility/command that is capable of opening either a TCP or UDP connection to a given port. The easiest is probably going to be knock since that is its sole purpose.

Let’s get knock installed:
sudo apt install knock


Now we can run it from the command line and send our secret sequence to our remote bastion machine (substitute your own hostname or IP address):
knock bastion.example.com 1941 1942 1945


If you have a slower connection (i.e. a tethered cell phone) you can also add the “-d” command line flag to knock. This will add a delay in milliseconds between the knocks on each port. I often use 100 as the option to the “-d” flag. This, in all its glory, would look like:
knock -d 100 bastion.example.com 1941 1942 1945


We now find ourselves with hidden SSH and RDP services on ports 22 and 3389, which helps reduce our attack surface. These ports are only open when we instruct knockd to open them, and only to the IP address we are connecting from. There are many more things you can do with knockd; while in my example I put both 22 and 3389 in the same section (openSSHandRDP), we could have put them into separate sections, each with a different port sequence.

I hope this article finds you well and has provided some useful information on what knockd is and how to get started with it. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!