
Checking For Outdated Ciphers

Keeping the software on your machine up to date is important, even more so for security reasons. However, some people forget to update their configurations when they update their software. Running an old config can be just as dangerous as running old software!

I am going to show how to check a network-listening service for outdated ciphers. First, make sure you have nmap installed. Second, make sure you have the nmap script named 'ssl-enum-ciphers.nse'; it ships with standard nmap installs, and it is also available from the official nmap website.

Example checking a webserver (substitute your server's hostname or IP for <target>):

nmap --script ssl-enum-ciphers -p 443 <target>


I ran this against an internal webserver that is running Ubuntu 16.04:

Starting Nmap 7.91 ( https://nmap.org ) at 2021-08-06 12:38 PDT
Nmap scan report for 10.53.209.159
Host is up (0.00015s latency).

PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 2048) - C
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_SEED_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_RC4_128_SHA (secp256r1) - C
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_RC4_128_MD5 (rsa 2048) - C
| TLS_RSA_WITH_RC4_128_SHA (rsa 2048) - C
| TLS_RSA_WITH_SEED_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
| Ciphersuite uses MD5 for message integrity
| TLSv1.1:
| ciphers:
| TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 2048) - C
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_SEED_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_RC4_128_SHA (secp256r1) - C
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_RC4_128_MD5 (rsa 2048) - C
| TLS_RSA_WITH_RC4_128_SHA (rsa 2048) - C
| TLS_RSA_WITH_SEED_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
| Ciphersuite uses MD5 for message integrity
| TLSv1.2:
| ciphers:
| TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 2048) - C
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_SEED_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_RC4_128_SHA (secp256r1) - C
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_RC4_128_MD5 (rsa 2048) - C
| TLS_RSA_WITH_RC4_128_SHA (rsa 2048) - C
| TLS_RSA_WITH_SEED_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
| Ciphersuite uses MD5 for message integrity
|_ least strength: C

Nmap done: 1 IP address (1 host up) scanned in 0.43 seconds
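When you have several hosts or ports to audit, the full script output gets long. A small wrapper like the following can surface just the least-strength summary and any suite graded below A; the target and port list here are placeholders for your own:

```shell
# Hypothetical target and ports; substitute your own.
TARGET=10.53.209.159
PORTS=443,8443

# Capture the scan and keep only the least-strength summary line and
# any cipher suite graded B through F.
RESULT=$(nmap --script ssl-enum-ciphers -p "$PORTS" "$TARGET" 2>/dev/null \
  | grep -E 'least strength|- [B-F]$')
echo "$RESULT"
```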


We want our target to report a least strength of "A", and we do not want any NULL cipher suites offered. This particular host is running Apache2, so we need to edit /etc/apache2/mods-enabled/ssl.conf and look for, or add, a line like this:

SSLCipherSuite HIGH:!aNULL
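If your clients are all reasonably modern, a stricter variant (an optional hardening step, not something the scan above requires) also drops 3DES and RC4 explicitly and disables the legacy protocol versions:

```
SSLCipherSuite HIGH:!aNULL:!3DES:!RC4
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
```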


Then restart apache2 and retest:
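On Ubuntu the restart is a single command, after which we re-run the same scan (the host shown is the internal webserver from the example; substitute your own):

```shell
HOST=10.53.209.159   # the example webserver; substitute yours
sudo systemctl restart apache2
nmap --script ssl-enum-ciphers -p 443 "$HOST"
```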

Starting Nmap 7.91 ( https://nmap.org ) at 2021-08-06 12:55 PDT
Nmap scan report for 10.53.209.159
Host is up (0.00015s latency).

PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| TLSv1.1:
| ciphers:
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| TLSv1.2:
| ciphers:
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
|_ least strength: A

Nmap done: 1 IP address (1 host up) scanned in 0.47 seconds


Now we can see the lowest-grade cipher in use is an "A", which is what we want to see.

This was just a basic intro to cipher checking with nmap and I hope this article is helpful to someone. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!

Portable DevOps Platform — GitLab In An LXD Container

GitLab is an entire DevOps lifecycle tool with a web-based interface. Within the GitLab platform we are provided with a git-repository manager, wiki, issue tracking, integration, and deployment pipelines, and best of all, it is open source software! It also makes for a great self-hosted replacement for GitHub.

There are plenty of organizations that run GitLab on bare metal, in a dedicated virtual machine (VM) on VMware, Hyper-V, or Linux KVM, or on cloud instances from providers such as AWS, Azure, and GCP. The cloud platforms have some built-in scalability and some form of high availability for migrating to a new host if there is a failure at a lower level. Those of us who host locally on bare metal or in a VM have to deal with high availability and scalability ourselves.

There are official GitLab Docker images that can be used if Docker is your container of choice. However, I prefer Linux containers (LXD), as LXD provides more of a standalone container that can more closely mimic a dedicated machine. Spinning up new LXD containers is fast and easy, and lets me use them as lightweight VMs.

In order to get started you will need to have LXD installed on your host machine. On Ubuntu this is as simple as:
sudo snap install lxd

Followed by creating an initial configuration with:
sudo lxd init


The defaults should be fine, however, I prefer to use “dir” as the storage backend as it allows me to easily move the container around the filesystem with symlinks.
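If you would rather skip the interactive prompts, lxd init also has a non-interactive mode; assuming a reasonably recent LXD release, picking the dir backend looks like this:

```shell
BACKEND=dir   # plain directory storage backend
sudo lxd init --auto --storage-backend "$BACKEND"
```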

If you want to make sure LXD is installed and configured, you can run this quick command:
lxc info


You will be presented with a bunch of output about the host system, the LXD version, and the available features. The rest of this guide assumes you have a working LXD install.

Now that we have LXD installed and configured, let’s go ahead and create a new container for dedicated GitLab use on our host system:
lxc launch ubuntu: gitlab


The above command will create a new container named gitlab, using the latest stable Ubuntu image.

Now, before we proceed with installing GitLab prerequisites, we need to tweak a few things on the host and with the container’s configuration.
lxc stop gitlab


With the container stopped, we need to temporarily give the container privileged access to the host. This is needed in order for the GitLab installer to set some kernel configuration options.
lxc config set gitlab security.privileged=true


If the host system is running Ubuntu with apparmor, we will need to tell apparmor not to restrict the container (you can change it back later after the install if you wish.)
lxc config set gitlab raw.lxc "lxc.apparmor.profile=unconfined"


Next, let’s make a backup of our current kernel settings:
sudo sysctl -a > sysctl.bak


Now that we have a backup, we can go ahead and start our new container up:
lxc start gitlab


Give it a few seconds and check the status with:
lxc list


If it shows the gitlab container as running, we can go ahead and move our work to inside the container:
lxc exec gitlab -- /bin/bash


We should now be presented with a root shell that is running inside the container! Since we are running inside the container as root, from here on we do not need to prefix commands with “sudo” while inside the container.

Our first task from inside the container is to remount the kernel’s /proc/sys filesystem from read-only to read-write mode; otherwise the GitLab installer will not be able to make any changes to the kernel’s configuration. Once we finish the install, this command is not needed for normal operation.
mount -o remount,rw /proc/sys


Now we can move on to checking for updates:
apt update
apt upgrade


If there are any available updates, now would be a good time to install them. This ensures we are working with the latest security and bug fixes.

Next up, we need to make sure we have some basic packages installed:
apt install -y curl openssh-server ca-certificates tzdata perl


Now we will install postfix so that GitLab is able to send email notifications.
apt install -y postfix


You will be prompted to select a configuration type for the postfix mail server; select “Internet Site” from the list, and if there happen to be any additional questions, just hit Enter to accept the defaults.

We are now ready to add the official GitLab apt repository for the Community Edition (note: if using the Enterprise Edition, change “ce” to “ee”) with this curl command:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | bash


Now we can finally get to installing GitLab itself. Again, we are going to install the Community Edition (ce); if you are instead installing the Enterprise Edition (ee), you will need to change the package name. Also, if you are not using a real domain name, or you have your own SSL certificates, you can add LETSENCRYPT="false" before "apt" to disable the automatic SSL certificate creation. We also need to tell the installer what our intended URL is, even if it is not a real DNS name. For example, in testing I often use "https://gitlab.lxd" as the EXTERNAL_URL.
EXTERNAL_URL="https://gitlab.example.com" LETSENCRYPT="false" apt install gitlab-ce


The above step will take a while, so go grab some coffee and a bagel.

When the install finishes, you will be given some info about the state of things and returned to the container’s bash prompt. If you encounter any errors, you can always run the above apt command again; it will perform an upgrade and no data (if any) will be lost. I sometimes encounter network errors during the install (such is the life of working from home), and re-running the install lets me work through them.
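One related tip: if you later need to change the URL GitLab serves (for example, moving from a test name to a real domain), the omnibus packages keep it in /etc/gitlab/gitlab.rb; edit the external_url line and reconfigure. The hostname below is a placeholder:

```shell
# Inside the container: point GitLab at the new URL, then apply it.
GITLAB_CONF=/etc/gitlab/gitlab.rb
sed -i "s|^external_url .*|external_url 'https://gitlab.example.com'|" "$GITLAB_CONF"
gitlab-ctl reconfigure
```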

In order to access the GitLab web interface, we will need the IP address from the lxc output:
canutethegreat@diagonalley:~$ lxc info gitlab
Name: gitlab
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/04 19:00 UTC
Status: Running
Type: container
Profiles: default
Pid: 72313
Ips:
lo: inet 127.0.0.1
lo: inet6 ::1
eth0: inet 10.12.206.95 veth4544f711
eth0: inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5 veth4544f711
eth0: inet6 fe80::216:3eff:fe30:9fe5 veth4544f711
Resources:
Processes: 384
CPU usage:
CPU usage (in seconds): 1382
Memory usage:
Memory (current): 2.83GB
Memory (peak): 2.84GB
Swap (current): 79.26MB
Swap (peak): 79.20MB
Network usage:
eth0:
Bytes received: 890.94kB
Bytes sent: 476.43kB
Packets received: 808
Packets sent: 391
lo:
Bytes received: 415.07MB
Bytes sent: 415.07MB
Packets received: 87005
Packets sent: 87005
or from the ip command run inside the container:
root@gitlab:~# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
13: eth0@if14: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:30:9f:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.12.206.95/24 brd 10.12.206.255 scope global dynamic eth0
valid_lft 2961sec preferred_lft 2961sec
inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3492sec preferred_lft 3492sec
inet6 fe80::216:3eff:fe30:9fe5/64 scope link
valid_lft forever preferred_lft forever


As you can see from both of the outputs above, the container has an IPv4 address of 10.12.206.95.

We will enter “https://” followed by the container’s IP address (“https://10.12.206.95” in my case) in a web browser running on the host, and we will be presented with the initial login screen:



[Screenshot: password setup screen after the initial GitLab install]
Go ahead and type in a secure password for the root user, and type it in a second time to confirm. We will then be asked to log in; type in “root” as the username and the password we just set:




That’s it, we now have a running GitLab install inside an LXD container! Where you go from here depends on what you need. I typically create a user account for myself so that I am not using the root account for everything, even if I’m just testing.

Now that we have completed the initial setup, we can optionally undo one of the steps related to allowing the container to access the kernel as it is only needed for installation and upgrades. From the host run:
lxc config set gitlab security.privileged=false

to remove the container’s privileged access to the host.

It is also worth mentioning that if you ever remove the gitlab container and wish to revert the kernel changes, you can do so on the host if you have your backup file handy:
cat sysctl.bak | sudo sysctl -e -p -


I find it interesting to check on the resource consumption of the gitlab container. You can get info from LXD on the host about the container by running the same command we used to get the IP address previously:
lxc info gitlab


The output on my test machine, after logging into the GitLab web interface, looks like this:
Name: gitlab
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/04 19:00 UTC
Status: Running
Type: container
Profiles: default
Pid: 72313
Ips:
lo: inet 127.0.0.1
lo: inet6 ::1
eth0: inet 10.12.206.95 veth4544f711
eth0: inet6 fd42:197c:de0f:bd1e:216:3eff:fe30:9fe5 veth4544f711
eth0: inet6 fe80::216:3eff:fe30:9fe5 veth4544f711
Resources:
Processes: 370
CPU usage:
CPU usage (in seconds): 94
Memory usage:
Memory (current): 2.59GB
Memory (peak): 2.80GB
Swap (current): 41.24MB
Network usage:
eth0:
Bytes received: 797.37kB
Bytes sent: 469.80kB
Packets received: 342
Packets sent: 327
lo:
Bytes received: 5.60MB
Bytes sent: 5.60MB
Packets received: 1501
Packets sent: 1501


I find that my GitLab containers typically use around 4GB of memory with only a couple of users.

With GitLab up and running in an LXD container, we can use the power of LXD to do cool things such as snapshots, live migrations, and cloning, as well as take advantage of security by design, advanced resource control, and more! If you have multiple machines set up with LXD clustering, you can self-host and also not have to worry about downtime due to node failures.

The multitude of features provided by GitLab and the flexibility of LXD make for an awesome combination to use in both my development environment and production. I can spin up test containers in dev, deploy new versions in prod, and migrate to new hardware with ease. I hope this article finds you well and has provided some useful information on what it looks like to run GitLab inside LXD and how to get started with it. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!

Stop Hogging All The Bandwidth!

It’s like a scene from an American western: you have a gigabit connection (The Good), a couple of heavy users (The Bad), and a monthly bandwidth limit (The Ugly.) In this strange new world of working from home you need an Internet connection that has good bandwidth (speed) and good latency (response time), and it needs to work with multiple devices simultaneously. There are several ways we could solve this, ranging from paying a premium for a top-tier connection to some fancy do-it-yourself routing. We will be focusing on the latter in this article because, well, simply put, we don’t have deep pockets!

First, let us take a moment to go over a few things. What exactly are bandwidth and latency in relation to network connections? Whenever I am asked to explain them, I like to use a water pipe analogy. Imagine a big water pipe with a shutoff valve on each end, and two workers, one on each end, who have to communicate when it is time to send more water. If you think of bandwidth as the diameter of the water pipe, and latency as the speed at which the two workers communicate, you can see how the two work together to affect the volume of water moving through the pipe over time. The bigger the pipe (bandwidth), the more water (data) can be transferred at a given time. Latency is a bit trickier, and that’s why I use workers in my analogy: the workers need to be able to communicate quickly so that water is sent when it needs to be sent and stopped when it needs to stop.

So in terms of latency, if the workers were using a chalkboard to write “start” or “stop” and holding the board up so the other worker could see it, we’d have a functional communication system, but it would be very slow. Now instead, let us upgrade the workers to walkie-talkies, and suddenly the communication is much quicker. If we can communicate quicker (low latency), we can control the water (data) flow more quickly, resulting in more responsiveness.

Bandwidth and latency need to work hand-in-hand to provide “fast” internet. Latency affects response times and can be very noticeable in real-time communication and online video games. A connection with low bandwidth transfers data slowly, which is why your Netflix and video conferencing will drop in quality or even stutter. Alright, enough with that, let’s get on with bandwidth throttling!

It is not too difficult to set up a Linux router and have it throttle all the bandwidth by rate limiting. The downside is that every user and every device gets throttled. While the kids do not need to watch Netflix in 4K at all hours of the day (and night), some of us do need a good Internet connection for work. So how can we work from home and give select devices (i.e. the work laptop) the full pipe while still limiting everything else? On the other side of things, if we let the kids watch 4K videos 24/7, we will hit our monthly bandwidth cap. We can handle both of these problems by dividing our users and devices into two groups: those of us on the lease that pay the bills (the parents) and the freeloaders (the kids.) For some it might be more important to slow down Netflix, Disney+, Hulu, etc. while still allowing their own computer full-speed access.

If you take a moment to Google various related keywords you will find there are several guides out there on how to do basic rate limiting/bandwidth throttling, but here we are interested in allowing some devices to be in an “unlimited” group while others get rate limited. What we will be doing is controlling devices by their assigned IP addresses. This will allow us to decide who gets full speed and who gets rate limited without having to list out every device individually.

I feel the need to take a moment to describe my home network so that you can understand the lay of the land. On my network, IP addresses are assigned both statically and dynamically by my custom Linux router/server. My Linux router provides DHCPv4, DHCPv6 (my ISP gives a /64 block), DNS, firewall, and some other services. If you do not have a machine that can be dedicated as a router a good alternative is to buy a router (https://openwrt.org/supported_devices) that can run OpenWRT (https://openwrt.org/).

The part that really matters here is that my router runs Linux and that IP addresses are consistent in my network. This is important because the way we will be handling the rate limiting is by IP address. If we have short-lived IP addresses we will have to update the config often for both the IPs that we want in the “unlimited” group and those in the rate limited group. That just sounds like a lot of busy work and I don’t have time for such things.
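As an example of keeping addresses consistent, if your router happens to use dnsmasq for DHCP, a static lease for an “unlimited” device plus a dynamic pool for everyone else is just a couple of lines (the MAC address and file layout here are illustrative):

```
# /etc/dnsmasq.conf
# Pin the work laptop into the unlimited range (below .100)
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.20
# Everyone else gets a lease from the rate-limited pool
dhcp-range=192.168.1.100,192.168.1.199,12h
```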

The behind-the-scenes magic is handled in the Linux kernel. The way we tell it what we want is through the traffic control (tc) tool. I wrote a small GNU Bash script that sets up two network groups, one unlimited and one limited, and handles setting up the traffic control options.
Here is the entire script for your enjoyment:
#!/bin/bash
# To check the status try something like: tc class show dev $NETDEV

# The LAN-facing network device we are shaping
NETDEV=eno1

# Reinit: clear any existing root qdisc (this errors harmlessly on first run)
tc qdisc del dev $NETDEV root handle 1

# Create the root qdisc using hierarchy token bucket (HTB); traffic that
# matches no filter falls into the default class 1:1
tc qdisc add dev $NETDEV root handle 1: htb default 1

# Note: in tc, "mbit" means megabits per second while "mbps" means
# megaBYTES per second, so we use mbit here to get the intended rates.
tc class add dev $NETDEV parent 1: classid 1:1 htb rate 1000mbit
tc class add dev $NETDEV parent 1: classid 1:10 htb rate 250mbit ceil 500mbit

# Put the dynamic pool 192.168.1.100-199 into the limited class 1:10
for IP in {100..199}
do
    tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.$IP flowid 1:10
    tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip dst 192.168.1.$IP flowid 1:10
done


Let’s go over the script so you can understand what is happening. The first line that does not start with a hash/pound “#” symbol is “NETDEV=eno1”. This sets the name of the network interface card (NIC) that is connected internally to our local area network (LAN.)

Then we have “tc qdisc del dev $NETDEV root handle 1”, which clears any existing traffic control configuration on the device so we start from a clean slate. This is followed by “tc qdisc add dev $NETDEV root handle 1: htb default 1”, which sets up the root device to use hierarchy token bucket (HTB), allowing us to control bandwidth through classes.

The following 2 lines set the speed (rate) of the root class (1:1), which will be used by the “unlimited” group, to 1 gigabit per second (the maximum download speed my ISP provides), and then set the “limited” group (1:10) to a base speed (rate) of 250 megabits per second with a burstable ceiling of 500 megabits per second.

The last section is a scripted way of putting the IP addresses between 192.168.1.100 and 192.168.1.199 into the “limited” group. These are the addresses my router assigns dynamically. The line with “src” matches the source address and the “dst” line matches the destination address; this way we control the speed (rate) of both incoming and outgoing traffic for a given IP address. Any IP address that is not between 192.168.1.100 and 192.168.1.199 automatically falls into the “unlimited” group. I have my DHCP server set up to assign 192.168.1.10 to 192.168.1.99 statically to the devices and machines that I do not want rate limited. This group includes my work laptop, my home servers, and some of my projects.

The above script can be run directly or you could drop it in somewhere to be executed at system boot. Another option, for systemd users, is to put it in /etc/networkd-dispatcher/routable.d/09-my-traffic-controller.sh which will allow it to run as soon as your network devices reach a routable state.
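For the networkd-dispatcher route, installing the script is just a copy with execute permissions; the source filename here is a placeholder for wherever you saved it:

```shell
SRC=my-traffic-controller.sh   # hypothetical local filename
sudo install -m 0755 "$SRC" \
  /etc/networkd-dispatcher/routable.d/09-my-traffic-controller.sh
```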

As you can see by the example script, bandwidth limiting is fairly easy and does not require a specialized degree from your local technical college. It allows everyone to have internet access but keeps the kids from sucking down the whole internet watching Disney+. We can even expand the script further and add additional groups. Maybe we want to add a new group for the Xbox that is neither in the “unlimited” group nor the “limited” group because we don’t want game updates to take hours. We can accomplish that by adding a new group, say 1:11, that is defined with more bandwidth. It is also worth noting that we could increase the ceiling for this new group so that if nobody is using the internet except the Xbox it can have a larger burst. This can come in handy if the kids have gone off to bed and you are on the couch with your significant other and the two of you want to play a quick match in Fortnite, but darn it, there’s an update you have to install first — burstable ceiling to the rescue!
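A sketch of that third group, with assumed numbers (a 400 megabit base, a 900 megabit ceiling, and a placeholder address for the console), might look like this appended to the script:

```shell
NETDEV=eno1               # same LAN-facing NIC as in the script above
CONSOLE_IP=192.168.1.50   # placeholder address for the Xbox

# New class 1:11: more base bandwidth than the limited group, plus a
# high ceiling so updates can burst when the link is otherwise idle.
tc class add dev $NETDEV parent 1: classid 1:11 htb rate 400mbit ceil 900mbit
tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip src $CONSOLE_IP flowid 1:11
tc filter add dev $NETDEV protocol ip parent 1:0 prio 1 u32 match ip dst $CONSOLE_IP flowid 1:11
```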

The options that tc provides are quite flexible, and it is worth the time to take a quick peek at the man page. It has certainly made the quality of my work-from-home life better and kept me from pulling out all my hair (only some!) I hope this article finds you well and has provided some useful information on what traffic control looks like in Linux and how to get started with it. I enjoy receiving feedback; be it suggestions, corrections, or questions. Feel free to drop some love, be safe, and hack away!