Tag: linux

Using Linux nftables to block traffic outside of a VPN tunnel

For systems that commonly connect to untrusted networks, such as laptops, it can be useful to allow outgoing traffic only through a pre-configured, known-trusted (to the extent that such a thing exists) VPN tunnel. This ensures that unprotected traffic isn’t routed through an unknown, potentially adversarial uplink provider.

Fortunately, Linux’s nftables functionality provides everything we need for that.

Usually, nftables is configured in such a way that incoming traffic is filtered, but outgoing traffic is implicitly trusted. Take, for example, Debian 11/Bullseye’s /usr/share/doc/nftables/examples/workstation.nft:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
	chain input {
		type filter hook input priority 0;

		# accept any localhost traffic
		iif lo accept

		# accept traffic originated from us
		ct state established,related accept

		# activate the following line to accept common local services
		#tcp dport { 22, 80, 443 } ct state new accept

		# accept neighbour discovery otherwise IPv6 connectivity breaks.
		ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept

		# count and drop any other traffic
		counter drop
	}
}

In effect, the result is the same as if an empty output (and forward) chain existed as well:

table inet filter {
	chain output {
		type filter hook output priority 0;
	}
}

Since this chain doesn’t have any policy, the default policy accept applies. In other words, everything is allowed.
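To see this for yourself on a test machine (note that loading the example flushes any existing rules), you can load the ruleset and then list what the kernel actually holds; both commands require root:

```shell
# Load Debian's example ruleset, then print the active rules
# to confirm that outgoing traffic is not being filtered.
sudo nft -f /usr/share/doc/nftables/examples/workstation.nft
sudo nft list ruleset
```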

To block the unwanted traffic, we need to identify the traffic that does need to be allowed. There are three kinds of traffic that need to be allowed to flow outside of the VPN tunnel:

  • Traffic for the purpose of bringing the interface up (DHCP, IPv6 neighbor discovery, …)
  • Traffic for the purpose of bringing the VPN tunnel up (DNS)
  • The VPN tunnel itself (Wireguard, OpenVPN, …)

Begin by determining on which interfaces you want to be able to establish an outgoing VPN connection. For some people this will be the wired interface, for some the wireless interface, and for some both. Running ip addr sh in a terminal is one way to find the actual interface name, which will be needed in a moment. Also open the nftables configuration file (likely /etc/nftables.conf, but check your distribution’s documentation) in a text editor. If you don’t have one yet, you can start out with the following, which is Debian’s example stripped of comments but with the implicit chains included:
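For a more compact view, ip can also print one interface per line; the names in the output will of course differ per machine:

```shell
# Brief listing of interfaces and their state; look for names
# such as enp3s0 (wired) or wlp2s0 (wireless).
ip -br link show
```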

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
	chain input {
		type filter hook input priority 0;
		iif "lo" accept
		ct state established,related accept
		ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept
	}

	chain forward {
		type filter hook forward priority 0;
	}

	chain output {
		type filter hook output priority 0;
	}
}

For our purposes, we will be focused on the output chain, so I will be eliding the other parts of the configuration.

It’s useful to allow traffic that is routed locally on the host, for example for inter-process communication, so immediately after the type stanza, add a rule to allow traffic over the loopback interface (oif is output interface):

oif "lo" accept

Since not all interfaces may have been brought up yet by the time the nftables rules are initially loaded, use oifname instead of oif for the next several stanzas. oif matches by interface index, which is resolved when the ruleset is loaded and therefore fails for interfaces that don’t exist yet; oifname compares the interface name when each packet is evaluated, which costs a little performance but keeps working for interfaces that come and go.

First, allow DHCP traffic, which uses UDP with source and destination ports both either 67 or 68:

oifname { "en...", "wl..." } udp sport { 67, 68 } udp dport { 67, 68 } accept

Replace the "en...", "wl..." part with the name of the interface(s) in question.
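If the same interface list is going to appear in several rules, nftables lets you factor it out with a define; the interface names enp3s0 and wlp2s0 below are hypothetical and should be replaced with your own:

```
# Declare the uplink interfaces once, near the top of the file.
define uplink_ifs = { "enp3s0", "wlp2s0" }

# Rules then reference the variable instead of repeating the list.
oifname $uplink_ifs udp sport { 67, 68 } udp dport { 67, 68 } accept
```

Later rules can reference $uplink_ifs the same way, so adding or renaming an interface only requires touching one line.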

Second, allow DNS traffic for initial name resolution, which uses UDP or TCP with a destination port of 53. If you configure your VPN tunnel with an IP address as a target instead of a DNS name, then you don’t need this.

oifname { "en...", "wl..." } meta l4proto { tcp, udp } th dport 53 accept

As an alternative, you can create two rules, one each for TCP and UDP; doing so will have the same effect, at a slight performance and maintenance penalty:

oifname { "en...", "wl..." } tcp dport 53 accept
oifname { "en...", "wl..." } udp dport 53 accept

Then add rules to allow traffic to the VPN concentrator. The more tightly scoped you can make this, the better. For example, if you know the IP address and the port used, you can add a stanza such as:

oifname { "en...", "wl..." } ip daddr 192.0.2.128 udp dport 29999 accept

If the VPN concentrator runs on either a standard port that is rarely used for other purposes (such as OpenVPN’s default 1194) or an uncommon port (as is often the case with Wireguard) but you don’t know its exact IP address ahead of time, you can either use a set, or elide the IP address specification:

oifname { "en...", "wl..." } ip daddr { 192.0.2.128/28, 198.51.100.0/27 } udp dport 29999 accept

or

oifname { "en...", "wl..." } udp dport 29999 accept
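The anonymous set from above can also be written as a named set, which can then be modified at runtime without reloading the whole ruleset; the addresses are documentation-range placeholders as before:

```
table inet filter {
	set vpn_servers {
		type ipv4_addr
		# "flags interval" allows CIDR prefixes as elements
		flags interval
		elements = { 192.0.2.128/28, 198.51.100.0/27 }
	}

	chain output {
		# ... other rules ...
		oifname { "en...", "wl..." } ip daddr @vpn_servers udp dport 29999 accept
	}
}
```

Elements can then be added on the fly, for example with nft add element inet filter vpn_servers '{ 203.0.113.5 }'.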

Then allow traffic as needed through the VPN tunnel interface. The exact name of this interface will vary with the VPN technology you’re using; for example, Wireguard tunnels typically allow you to specify the interface name, whereas OpenVPN tunnels use a semi-unpredictable interface name. For this, the ability of oifname to match a prefix by appending * can be useful. For example, for OpenVPN you might use:

oifname "tun*" accept

whereas for a Wireguard tunnel you might end up with:

oifname "wgmyvpn" accept

As a final touch, add a policy to block traffic not matched by other rules. Since every output rule specifies the interfaces on which traffic may flow, this blocks everything outside of the VPN tunnel except the traffic that is explicitly allowed.

The policy typically goes at the top, just below the type stanza, whereas the reject stanza must appear below all other rules.

policy drop;
reject

The purpose of also having a reject stanza is to provide more immediate feedback. In its absence, packets will simply be dropped, resulting in long wait times before attempts time out; with it, clients will be notified immediately that the connection failed and can report this back to the user.
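Optionally, the reject statement can also name the ICMP type used for the notification; in an inet table, the icmpx keyword covers both IPv4 and IPv6:

```
# Reject with an explicit "administratively prohibited" message
# instead of the default port-unreachable.
reject with icmpx type admin-prohibited
```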

The final output chain might look something like:

chain output {
	type filter hook output priority 0;
	policy drop;

	oif "lo" accept
	oifname { "en...", "wl..." } udp sport { 67, 68 } udp dport { 67, 68 } accept
	oifname { "en...", "wl..." } meta l4proto { tcp, udp } th dport 53 accept
	oifname { "en...", "wl..." } ip daddr { 203.0.113.113, 203.0.113.114 } udp dport 1194 accept
	oifname "tun*" accept
	reject
}

Reload the nftables rule set (sudo nft -f /etc/nftables.conf) and verify that you can connect to the VPN and access the Internet (or the remote network) through it. Disconnect the VPN and verify that traffic is blocked, for example by attempting to reload a web page.

Reboot the computer and verify that the network interface comes up and that you can connect to the VPN, access the Internet through it, and that traffic is again blocked when you disconnect from the VPN.
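When troubleshooting, it helps to inspect the chain that is actually loaded and to probe connectivity with a short timeout; the target URL here is only an illustrative placeholder:

```shell
# Show the output chain as the kernel sees it (requires root).
sudo nft list chain inet filter output

# With the VPN down, this should fail quickly thanks to the
# reject rule, rather than hanging until a timeout.
curl --max-time 5 https://example.org
```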

Keep in mind that this ruleset isn’t perfect. For example, if routes aren’t set up properly when starting the VPN tunnel, traffic can leak through ordinary DNS queries outside of it; and it relies on interface name matching which can match unexpected interfaces. Therefore, this does not serve as a proper “kill switch” for all traffic. However, it does form a decent second (or third) line of defense against unexpected but not actively malicious traffic leaks outside of the VPN tunnel, which for a system that would otherwise allow everything going out is very much an improvement.

Sound “clicks” on Debian 10, 11 Linux with ALSA and PulseAudio

Under some conditions, there can be repeated, clearly audible “clicks” in sound on at least Debian 10 and 11 (Buster and Bullseye) GNU/Linux, accompanied by momentary audio output device switches. Web searches indicate that other distributions (at least Debian derivatives) are affected as well; I have found reports from both Ubuntu and Mint users describing the same issue.

I haven’t dug very deeply into exactly why this happens, but it seems to be related to ALSA port-availability changes, which is odd when it happens without any change in the available hardware.

The fix, however, is actually quite simple. Open /etc/pulse/default.pa in an editor running as root:

$ sudo nano /etc/pulse/default.pa

Locate the line

load-module module-switch-on-port-available

Prepend a # to comment it out:

#load-module module-switch-on-port-available

Save the file and exit the editor (in nano, press Ctrl+O and Enter to save, then Ctrl+X to exit). Then, under the user account suffering from this problem, stop the running PulseAudio daemon:

$ pulseaudio --kill

A new PulseAudio instance should start as soon as it is needed, reading the new configuration as it does so.
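To verify that the module is gone after the daemon restarts, you can search the loaded module list; no output means the module is no longer loaded:

```shell
# pactl ships with PulseAudio; "list short modules" prints one
# loaded module per line.
pactl list short modules | grep module-switch-on-port-available
```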

This should resolve the issue.

Turning off fwupdmgr and lvfs automatic updates on Debian 11/Bullseye

Debian Bullseye ships with the Linux Vendor Firmware Service (LVFS) fwupdmgr enabled by default.

There are many situations in which that’s a good thing; firmware is a central part of today’s hardware and software ecosystem, and you generally want to use the latest version available.

However, even though actual updates supposedly need to be triggered manually (I haven’t yet been in a situation to actually experience this), there are situations in which you want to reduce polling of external systems, especially when such polling could be used to deduce whether or not you have a particular piece of hardware.

Fortunately, it’s easy to disable the automatic checks in Debian. Simply enough:

sudo systemctl mask fwupd-refresh.timer

(For some reason, it is insufficient to simply disable the timer.)

You can still perform a manual check when appropriate by simply starting the unit that would normally be started by the timer:

sudo systemctl start fwupd-refresh.service

To see the result of the check, look at the unit log:

sudo journalctl --unit=fwupd-refresh.service
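fwupdmgr can also refresh the metadata and check for updates directly, independent of the systemd timer:

```shell
# Fetch fresh metadata from LVFS, then list any applicable updates.
sudo fwupdmgr refresh
sudo fwupdmgr get-updates
```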
