The case for an evacuation checklist

Allow me to dwell for a moment on a subject that no one wants to think much about, and that few people will ever actually need to act on.

Specifically: that of having to evacuate, and being prepared for the possibility.

There are two major types of evacuations that one might end up facing. One is when you absolutely have to leave right now and can bring, at most, what is within easy, immediate reach on your way out; the other is when you have some time to get out, but still have to leave promptly. In both cases, this being an evacuation, you likely don’t know exactly how long you will be unable to return.

There is also the case where you can see a situation developing which may prompt a later evacuation, but which does not present an immediate threat. I’m not covering that here, but many of the points might still apply.

The “must leave right now” scenario can perhaps be exemplified by a typical fire drill at one’s workplace: you are working along normally when, with no previous warning, the fire alarm starts blaring, and you are expected to leave immediately and go to some known location outside to meet up with colleagues and bosses, hopefully to be told “this was just an exercise, go back to your workplace”.

For that, it may be beneficial to have a pre-packed, easy-to-reach way of grabbing some useful items as you are leaving, whether at home or at work. Depending on your situation, this might be for example a backpack with such things as:

  • A phone charger (possibly a power bank)
  • A paper notebook and pens
  • Some area maps or a road atlas
  • Spare car keys
  • An extra credit card, some small-denomination cash
  • An extra ID card or a passport
  • Some of any medications you need (make sure to rotate them so that they remain current)
  • A spare pair of glasses
  • Some high-energy snacks
  • A change of clothes

If you use a laptop and doing so does not slow you down on the way out, then this might also include something like closing the laptop lid, grabbing it and its charger and dropping both into the backpack on the way out, relying on either automatic hibernation or simply turning it off later. The focus in this situation, though, is to immediately get out of danger. A computer is just a computer; you will survive without it, and it can be replaced.

Your personal circumstances will heavily influence and ultimately dictate what items should go into such a “bug out bag”. Also take a moment to consider the possibility of an emergency coinciding with bad weather; it might be a good idea to put some or even all of the items in individual resealable, transparent or clearly marked plastic bags (ziplock or similar). Doing so also keeps cables from becoming entangled.

Keep in mind that in an actual emergency, cellular networks are likely to be heavily congested, so you cannot necessarily count on being able to do anything that requires external connectivity, whether calling, text messaging or large-scale data traffic. You can simulate this by placing your phone in airplane mode and see what works; can you still view maps, get directions, look up phone numbers, or do other things that you may reasonably need to do during an evacuation?

The other case is that of needing to leave promptly, but not necessarily immediately. This is a situation which allows you to take steps and gather items which will be helpful, but you still need to get away from where you are, you don’t have a lot of time to do it, and you can’t predict when it will happen.

Examples of this could be a large fire nearby either threatening to spread to your location or producing obnoxious, possibly toxic, smoke; a large-scale traffic accident whether road or rail, possibly including toxic spills; or even the aftermath of some deliberate criminal actions.

In such a scenario, what items do you want to bring with you? What do you need to do before you leave? (For example, do you need to turn off a gas supply, or a water pump? How do you do that?)

Since I am a stickler for checklists, particularly for potentially stressful situations, I suggest going around your home with a notebook and writing down the really important things that you would need or want to bring or do as you are leaving in such a situation. Don’t worry at first about prioritizing the list; just write down what would be important to remember. This will probably start with something like the bug out bag contents list I mentioned above (which, if you have that bug out bag, you will already have in a single place), and might continue with things like:

  • If you have kids, one or two of the kids’ favorite toys, stuffed animals, or the like. Maybe dedicated kids’ tablets or phones. (You will probably survive without them, but having them can make an already stressful situation more bearable.)
  • If you have pets, things like a bag of pet food, carriers/crates, leashes/collars/harnesses, coats, favorite toys, vaccination records, identification documents (EU pet passport, kennel club registration certificate, …), insurance details. Also, remember to actually bring your pets (don’t laugh; people have been known to set out for a dog show and realize some time after leaving home that they forgot to bring the dog).
  • Cash, bank cards.
  • Keys (mechanical or electronic), including spare keys, neighbors’ keys, work keys, and keys to any property you own or rent elsewhere. If you have a safe deposit box, keys to it.
  • Cell phone, including a headset if you have one. Tablet or ebook reader, if you have either.
  • Laptop, if you use one. Also its charger, especially if it does not charge over USB-C PD. (If you’re not sure, just bring its dedicated charger.)
  • Computer external backup drive (you do make regular backups, don’t you?), and any dedicated power supply it might have.
  • Important papers and personal documents: ID, driver’s license, passport, deeds, land titles, car registration papers, insurance documents, bank account details, … (repeat for everyone in the household)
  • Some more clothes, including weather- and season-appropriate items; for example, a simple baseball cap can be a lifesaver if you find yourself outside in rain, and it stuffs into a pocket when not needed.
  • Bank physical authentication tokens, such as authentication code generators and corresponding smartcards.
  • Power bank to provide extra battery life for your cell phone.
  • Physical 2FA tokens, such as physical FIDO2/U2F/WebAuthn/Passkey tokens.
  • Flashlight or headlamp, batteries (especially if it uses less common batteries such as 18650 LiIon cells), any dedicated charge cable or charger. Pack these carefully so that they cannot short-circuit.
  • A power strip (lets you charge multiple devices from a single electrical outlet), possibly one with both USB and AC outlets.
  • Any necessary steps to take before you leave, and how and where to do that. For example, if you have gas heating, then maybe you will need to turn off the gas supply before you leave; if so, where do you do that and how do you do it?

Keep in mind that the above is not intended to be an exhaustive list, nor does every item I list necessarily apply to your situation. These are merely examples; adjust as appropriate!

Next up, once you have the raw list, it’s time to prioritize it. Consider the possibility that you can do only one thing, or bring only one item, on that list, and ask yourself what is truly most important. That goes on top. Then look at the rest of the list and ask the same question about the remainder; that becomes number two. And so on, until you have prioritized the whole list.

Also consider that some items on the list might benefit from consolidation, either on the list or physically. If you list three items to grab that are physically located together, then it makes sense to group them together on the list even though objectively they might not be of similar importance; or, if you can put all of the documents you list in one binder ahead of time so that you can simply grab the binder, then it might very well make sense to do so and put something like “in the bookcase just inside the living room door, bring the blue binder marked ‘important documents’ (placed at eye level)” on the list.

When prioritizing, it helps to consider what you can more easily replace or recover from the loss of. For example, if your phone charges over plain USB (it has a mini-USB, micro-USB or USB-C charge port), you can probably pick up a workable charger at a reasonable price from just about any electronics store if you simply have a means to pay for it; but if you lose your phone, that might put you in a worse situation. Similarly, if you have good ID, then you can probably talk to your bank and get a new authentication token (though they might charge you for it), but if you don’t have ID, a lot of things become more complicated.

Once you have a prioritized evacuation checklist, date it, then print it out (whatever causes you to need to evacuate may very well coincide with a power outage) and place the printed version in some location where you will know, even under stress, where it is. Also set up a calendar reminder, perhaps yearly, to go through it regularly and update it as necessary.

Chances are quite good that you won’t need to refer to that evacuation checklist in a real emergency, because in all honesty, most people probably never find themselves in a situation where evacuating is the correct answer. But in the highly unlikely situation that you do at some point need to evacuate, at least you will have a step-by-step checklist of what to do and what to bring which you can trust that you have thought through in advance.

Indicating that a domain never legitimately originates email

There are situations in which domains should never legitimately be used to originate email. One example might be domain or host names that simply aren’t used for email at all: web-only domains, administrative host names (such as ssh.example.com), and so on.

Note that this does not necessarily apply to domains which need to either send or receive email (just not both), nor, obviously, domains that are used for two-way email traffic. Some of it may apply, but the thrust of this is about DNS names that are never used for either.

For domains that truly should never originate email, there are a few relatively trivial things that a domain owner can do to indicate this to mail servers elsewhere on the Internet. All of the techniques listed here require control over the domain’s DNS records, and, especially when used together, can significantly reduce the risk of the domain being used as an origin address for spam, with the resulting loss of trust or backscatter bounces.

All of these are also trivial to reverse, should you later on change your mind and decide that the domain in question really should be used for email.

Publish a null MX

RFC 7505 defines a “null MX” resource record. Whereas a domain name with no MX (mail exchanger) resource record (RR) in DNS can still both send and receive mail, publishing a null MX RR explicitly states that this domain does not accept (receive) mail.

In part because this also precludes even the possibility of sending a bounce, many mail servers take it as a strong signal not to accept any mail claiming to come from that domain. This behavior is even spelled out in the standard, which says, in “SHOULD NOT” language, that domains that originate email should not publish a null MX record.

To publish a null MX, publish one single MX record on the domain or host name in question, with preference (priority) 0 and a zero-length MX host name, written in zone files as a single dot (.). In BIND DNS master file syntax, this becomes something very similar to:

@ IN MX 0 .

Publish a deny-all SPF policy

Sender Policy Framework (SPF) is a way for domain owners to specify which hosts are allowed to use a domain name in the envelope sender address, which is used during the SMTP exchange, typically for such purposes as sending bounces. The current SPF standard is RFC 7208 with updates, and it covers primarily the SMTP HELO/EHLO host name and the MAIL FROM (envelope sender) address. SPF results can also be used as a spam-scoring signal for the user-visible sender address, which can differ from the envelope sender address, although DMARC (see below) is specifically designed for use with the user-visible sender address.

An SPF record is technically a DNS TXT (text) type resource record on the host name that would be used as the domain part of the envelope sender address, or as the HELO/EHLO host name.

There can be only one SPF TXT record on a single host name; having more than one is an error and can lead to unpredictable or undesirable behavior, especially if the records conflict. The contents of this record start with v=spf1 to identify it as an SPF version 1 record, followed by a whitespace-separated list of authorization terms (which the standard refers to as “mechanisms”).

SPF includes a term, all, which always matches, regardless of where the SMTP connection is coming from. Each term may be preceded by a qualifier; the qualifier - means “fail”. Thus, the authorization term -all means “always fail”. If this is the only authorization term published, then SPF indicates that no host anywhere is allowed to use this domain as the domain part of the envelope sender.

Again in BIND DNS master file syntax:

@ IN TXT "v=spf1 -all"
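For contrast (not something you would publish for a no-email domain), a record that authorizes one specific server and fails everything else could look like the following sketch; the IP address is from the documentation range and purely illustrative:

```dns
@ IN TXT "v=spf1 ip4:192.0.2.25 -all"
```

Here the ip4 mechanism matches connections from that one address, and the trailing -all fails everything that did not match an earlier term.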

Publish a deny-all DMARC policy

Whereas SPF is designed for use with the SMTP-level domain and host names, DMARC (RFC 7489 with updates) applies to the user-visible (RFC 5322) sender address. It is designed to prevent phishing in which an email is made to appear to come from an address that the sender is not authorized to use as a user-visible sender address.

DMARC is potentially more complex than SPF, but in its simplest form, a deny-all policy needs to include only the DMARC version and a simple policy. DMARC also supports, for example, applying a policy to only a sampled subset of traffic, or applying a different policy to subdomains.

The definition of the DMARC TXT record is given primarily in sections 6.1 and 6.3 of the RFC. Unlike SPF, it uses a dedicated label, _dmarc, under the domain name in question. Therefore, for example, the policy for a user-visible sender address of some.one@email.example.com is found at _dmarc.email.example.com.

Again in BIND DNS master file syntax, and assuming that the zone origin is set correctly (in other words, that @ would map to the user-visible sender address domain name of interest):

_dmarc IN TXT "v=DMARC1;p=reject;"
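As an illustration of the optional features mentioned earlier (not needed for the deny-all case), a policy could apply quarantine to only half of the failing traffic while rejecting outright for subdomains; the tag values here are arbitrary examples of mine:

```dns
_dmarc IN TXT "v=DMARC1;p=quarantine;sp=reject;pct=50;"
```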

In summary

You can indicate that a domain name will never legitimately originate emails, and that any emails that claim to originate from it should be rejected, by publishing the following three DNS records at the domain name in question:

@ IN MX 0 .
@ IN TXT "v=spf1 -all"
_dmarc IN TXT "v=DMARC1;p=reject;"
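Once the records are published and have propagated, you can sanity-check all three from any host with dig, assuming the dig utility is available; example.com stands in for your own domain, and the actual output depends on your zone and on DNS caching:

```shell
# Query each of the three records in turn; expect the null MX,
# the deny-all SPF record, and the reject DMARC policy.
dig +short MX example.com
dig +short TXT example.com
dig +short TXT _dmarc.example.com
```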

Given this, most modern mail servers on the Internet should reject email claiming to come from this domain outright, regardless of its origin on the network.

Should you later change your mind, then simply delete or update the above records as appropriate and wait for any DNS caching to expire.

Using per-account SSH key files with OpenSSH

OpenSSH is an SSH server and client, in its current incarnation originating on OpenBSD but used on many Unix-like operating systems, and a common choice of both SSH server and client on many Linux systems.

Unfortunately, in its default configuration and typical use, it (and perhaps other SSH clients as well) has a somewhat nasty information leak when used with key pair authentication.

This is because of the interaction between three things.

First, an SSH client will, during authentication, offer a series of keys to the server, effectively asking for each one: “will you let me authenticate as this user using this key?”

Second, the initial exchange that offers each key in turn contains enough information that the key can, effectively, be uniquely identified. It must, for the server to be able to give a meaningful answer.

Third, the OpenSSH SSH client will, by default, try every key that it knows about to find one that the server is willing to accept for a connection attempt.

All of this would already be bad enough from an information leak perspective, but in isolation it still largely only allows a rogue server to learn what keys exist on the connecting system and user account while the user is actively connecting to it, and nothing more about them or the context those keys exist within. Not great, but not horrible.

However, additional information exists. For example, as noted by Andrew Ayer, GitHub actually publishes each user’s authorized SSH keys. This in itself isn’t a huge problem either; only the public keys are published, so as long as the keys are secure enough, there’s no real risk of compromise of a person’s GitHub access.

Put all of this together, though, and it becomes quite possible for an SSH server to derive the GitHub username of a connecting user, if that person uses OpenSSH with its defaults.

All of a sudden, an SSH server can potentially deanonymize a connecting person by, with a rather high degree of certainty, associating them with a GitHub user account.

Similarly, if multiple services publish keys in this manner, it’s fairly easy to collate them and look for matches. If the same key is authorized for more than one account, or for accounts on more than one service, there is a rather high probability that those accounts belong to the same person, even if nothing else suggests this.

A necessary first step to protect against this information leak is to use different key pairs for each such service. ssh-keygen has -f to specify the base file name to which to save the newly generated key pair; ssh has -i to specify the identity file to use; and ssh_config (usually ~/.ssh/config and /etc/ssh/ssh_config) has the IdentityFile directive. However, this doesn’t necessarily prevent the SSH client from presenting other known keys during connection key exchange.

To prevent the latter, use the IdentitiesOnly yes directive in ssh_config. This causes the SSH client to present only explicitly configured identities during public key authentication, preventing the server you are connecting to from learning more about what keys exist on the system you are connecting from than you intended.

Unfortunately, setting these on a per-host basis in the SSH client configuration quickly gets tiresome if you have multiple accounts, and is error-prone.

Thankfully, OpenSSH offers macro expansion in the IdentityFile value based on information about, among other things, your local user account and the connection you are making. (See the ssh_config(5) manual page for a full list and description of the macro expansion tokens.) This is especially useful in conjunction with wildcard Host stanzas to provide a set of defaults.

Putting all this together you can, for example, put at the bottom of your ssh_config something like

Host *
IdentitiesOnly yes
IdentityFile %d/.ssh/keys/%h/%r/current
PasswordAuthentication yes
PubkeyAuthentication yes
PreferredAuthentications publickey,password
User nobody

and, together with it, a Host stanza that simply sets the correct username (I prefer placing such stanzas above, with the Host * stanza at the bottom providing the defaults, since the SSH client uses the first value obtained for each parameter):

Host ssh.example.com
User myself

With this in place, when you connect to ssh.example.com, OpenSSH will offer only the key pair in ~/.ssh/keys/ssh.example.com/myself/current for authentication. (%d expands to the path to your local home directory; %h expands to the name of the host you are connecting to; and %r expands to the remote username.)

To then add keys for a new account, use something like

$ mkdir -p ~/.ssh/keys/sftp.example.net/u1234567
$ ssh-keygen -f ~/.ssh/keys/sftp.example.net/u1234567/current

and either specify the username when connecting (for example, sftp u1234567@sftp.example.net ...), or add another Host stanza to your ssh_config specifying the username

Host sftp.example.net
User u1234567

If you don’t do either, the OpenSSH client will try to read the key pair from ~/.ssh/keys/sftp.example.net/nobody/current (because of the Host * stanza’s User nobody), find nothing at that file location, and not offer any key pair at all for authentication. In the example above, it will then fall back to password authentication. Since the nobody account likely doesn’t have a valid password, this effectively blocks the login attempt in a non-destructive manner while leaking minimal information, either over the network or to the remote server.

Setting this in your ssh_config as defaults like this also neatly fits into many tools’ SSH integration, where it can be tricky to pass additional parameters, especially if those are dependent on for example where you are connecting to.

With this in place, you can still use the same key pair for more than one account, by keeping the actual key pair files in one location and symlinking to them from the locations that the IdentityFile directive expands to. However, instead of the same key pair being used by default for every account everywhere unless you take special care to use separate key pairs, sharing a key pair between accounts now becomes an active choice rather than the passive default.
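The symlink approach might look something like this sketch; the shared directory name, the host/user names, and the empty passphrase are hypothetical choices of mine for illustration:

```shell
# Keep the real key pair in one shared location (hypothetical path)...
mkdir -p ~/.ssh/keys/shared ~/.ssh/keys/ssh.example.com/myself
# ...generate it there (empty passphrase purely for this illustration)...
ssh-keygen -t ed25519 -N '' -f ~/.ssh/keys/shared/mykey
# ...and symlink both halves into the per-account location that
# IdentityFile %d/.ssh/keys/%h/%r/current expands to:
ln -sf ../../shared/mykey ~/.ssh/keys/ssh.example.com/myself/current
ln -sf ../../shared/mykey.pub ~/.ssh/keys/ssh.example.com/myself/current.pub
```

Repeating the two ln -sf lines for another host/user directory shares the same key pair with that account as well, as a deliberate decision.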

It also becomes much easier to rotate a key pair if you ever have reason to, because with this in place, you don’t need to stop to consider where it’s used; where it’s stored locally tells you the one remote account for which it’s being used.

Someone could still look at your authorized SSH keys on GitHub, but now each is little more than an anonymous blob of encoded public key data that can’t be matched against any other keys that they might encounter.

Fixing pfSense 22.05 to 23.01 upgrade breaking OpenVPN tunnels

After upgrading pfSense from 22.05 to 23.01, some users report that OpenVPN tunnels fail to establish with a “Cannot open TUN/TAP dev /dev/tun*: No such file or directory” error. I, too, ran full speed into this issue during what initially looked like it should have been a relatively straightforward upgrade. In my case, I got that error soon after “Peer Connection Initiated” for a case where the pfSense instance acts as an OpenVPN client.

One reason this can happen is that, under some conditions, the linker.hints file isn’t refreshed to match an upgraded kernel. Since the upgrade from 22.05 to 23.01 includes an upgrade of the underlying operating system from FreeBSD 12 to FreeBSD 14, the kernel is naturally also upgraded.

To fix this in the short term, one can use the web interface command line tool (found as Diagnostics > Command Prompt in the menu) to manually regenerate the linker.hints file by running the command:

kldxref /boot/kernel

and rebooting the instance. This is suggested by Netgate employee stephenw10, and several affected users have subsequently confirmed in that forum thread that doing so solved the problem. It also solved the problem for me.

Supposedly a fix to automatically regenerate the linker.hints file on each reboot will be included in the next scheduled pfSense update, which will very likely be 23.05.

It’s just too bad that this still isn’t mentioned in the 23.01 release notes and errata, which is where information about this type of potential issue should be collected. That way, users wouldn’t have to go digging around the forums after the fact when they are hit by an already-known issue.

Preventing DNS leaks with Linux NetworkManager VPN connections

A good software design principle is that of least surprise. Software should do what one can reasonably expect it to do in response to user actions and any configuration that has been made.

Another good design principle is to fail safely (or securely). If for some reason a program cannot perform a requested action, it should put itself, or the system, into a known-safe state. That state probably won’t be that which the user was seeking to achieve, but it should be one that does not cause the user to unexpectedly do anything dangerous or which might endanger the system further.

When used with VPN connections (both OpenVPN and Wireguard), the Linux NetworkManager tool unfortunately comes up horribly short in both areas.

In short: absent special configuration, DNS queries (and responses) can quite easily leak outside of the VPN tunnel to the DNS resolver provided by whatever network you are on. These DNS queries can allow whoever operates that DNS resolver to see the host names you are connecting to.

If you are on a trusted network, such as at home, while surprising, this is probably not a major issue.

If you are on an untrusted network, such as a public network at a café, an airport, or a hotel, where you might want to use a general-purpose VPN for traffic confidentiality, this can be a much larger issue.

By default, when connecting to a VPN, NetworkManager will combine the DNS servers provided by that network (either through fixed configuration or obtained dynamically via DHCP) with whatever resolver configuration existed previously, likely obtained when connecting to the network you are already on and connecting to the VPN through.

Consequently, if an attacker can disrupt the VPN traffic at the right moment, they can cause DNS requests to be made to the lower-priority DNS resolver: that on the local network, outside the VPN.

Additionally, if you mistype a host name, any configured search suffix can cause the lookup to go to whichever DNS resolver happens to be used, leading to a slight loss of anonymity against that resolver’s operator.

Worse, there is no way to configure this behavior through the NetworkManager GUI.

Fortunately, it’s easy to configure through nmcli. Open a terminal window, and check the current settings:

$ nmcli con show "vpn connection name" | grep 'ipv.\.dns-priority'

This will most likely show two lines of output, similar to:

ipv4.dns-priority:     50
ipv6.dns-priority:     50

(The exact value for both of these can vary.)

The actual encoding of the priority value is a little peculiar. In particular:

  • Lower values are considered higher priority
  • Negative values exclude configurations with higher values
  • DNS configurations obtained through networks with the same priority value are combined

The value itself is a signed 32-bit integer, so the valid range is -2147483648 through +2147483647.

To force only the DNS servers obtained through this particular connection to be used while it is active, use nmcli to set both of these properties to a very large negative value such as -2147483647. This makes it practically certain that no other connection has a more negative priority value, so the DNS configuration for this particular connection always takes priority.

$ nmcli con modify "vpn connection name" ipv4.dns-priority -2147483647
$ nmcli con modify "vpn connection name" ipv6.dns-priority -2147483647

For OpenVPN connections, after modifying the connection in this manner, you will need to provide the VPN user’s password (not your local login password) when connecting to it the next time.

Note that if you have multiple connections with the same value for the respective dns-priority properties, and connect to those networks simultaneously, the configurations are combined. Therefore, you do not want to set this on any potentially untrusted network that you might be connected to at the same time as the VPN connection.

Having made the configuration change, connect and disconnect the VPN connection repeatedly and observe the effect on the system name resolver configuration in /etc/resolv.conf:

$ watch cat /etc/resolv.conf

If everything is working as intended, you will see the set of search and nameserver directives being replaced as you connect and disconnect the VPN connection, instead of amended.

Server-side port knocking with Linux nftables

Port knocking is a technique to selectively allow for connections by sending a semi-secret sequence of packets, often called a “knocking” sequence.

While port knocking can very easily cut down on the amount of noise seen in logs, it’s important to keep in mind that it does not provide any significant security against a well-positioned adversary, since the knocking is done in the clear. The service hidden behind the port knocking still needs to be able to deal with being accessible from the outside network.

It used to be fairly complex to implement port knocking on Linux without relying on dedicated software that listens for the knocking packets and modifies the firewall rules accordingly (software which typically had to run as root, something usually best avoided). Thankfully, with nftables, it’s relatively straightforward to implement port knocking without ever leaving the firewall configuration, by using nftables’ set support.

The idea is to maintain two lists (sets) for a particular service: one of currently knocking clients, and one of clients that have successfully completed the knocking sequence.

In nftables syntax, it boils down to something similar to:

table inet filter {
  define ssh_knock_1 = 10000
  define ssh_knock_2 = 2345
  define ssh_knock_3 = 3456
  define ssh_knock_4 = 1234
  define ssh_service_port = 22

  set ssh_progress_ipv4 {
    type ipv4_addr . inet_service;
    flags timeout;
  }
  set ssh_clients_ipv4 {
    type ipv4_addr;
    flags timeout;
  }

  chain input {
    type filter hook input priority 0; policy drop;

    # ... other rules as needed ... #

    tcp dport $ssh_knock_1 update @ssh_progress_ipv4 { ip saddr . $ssh_knock_2 timeout 5s } drop
    tcp dport $ssh_knock_2 ip saddr . tcp dport @ssh_progress_ipv4 update @ssh_progress_ipv4 { ip saddr . $ssh_knock_3 timeout 5s } drop
    tcp dport $ssh_knock_3 ip saddr . tcp dport @ssh_progress_ipv4 update @ssh_progress_ipv4 { ip saddr . $ssh_knock_4 timeout 5s } drop
    tcp dport $ssh_knock_4 ip saddr . tcp dport @ssh_progress_ipv4 update @ssh_clients_ipv4 { ip saddr timeout 10s } drop

    ip saddr @ssh_clients_ipv4 tcp dport $ssh_service_port ct state new accept

    # ... other rules as needed ... #
  }
}

This works as follows; each time a TCP connection attempt is received on one of the knock ports:

  • check that this particular knock is in @ssh_progress_ipv4 (except for the first knock in the sequence)
  • for all but the last knock in the sequence, store the next expected knock in @ssh_progress_ipv4, and drop the knock packet (so that, to an outside observer, it looks no different from any other port)
  • for the last knock in the sequence, store the connecting IP address in @ssh_clients_ipv4, and drop the knock packet
  • when the actual connection attempt arrives, accept it only if the connecting IP address exists in @ssh_clients_ipv4

The knocking status is stored with a brief timeout (in the example above: 5 seconds during knocking, and 10 seconds on successful completion), ensuring that any lingering knockers are evicted promptly from the status sets.

The above example is for a four-port knocking sequence, but it could easily be made both shorter and longer. The only state transition rules that are special are the very first one and the last one before the final decision rule.

To also support port knocking and connections over IPv6, duplicate the two state sets (but use ipv6_addr for those instead of ipv4_addr), and duplicate the respective state transition and final decision stanzas (but use ip6 instead of ip).
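Following that recipe, the IPv6 additions might look something like this sketch, assuming the same table, chain, and knock-port definitions as the IPv4 example above (the set names are my own choice):

```nft
  # Additional sets, inside table inet filter:
  set ssh_progress_ipv6 {
    type ipv6_addr . inet_service;
    flags timeout;
  }
  set ssh_clients_ipv6 {
    type ipv6_addr;
    flags timeout;
  }

  # Additional rules, inside chain input, mirroring the IPv4 ones:
  tcp dport $ssh_knock_1 update @ssh_progress_ipv6 { ip6 saddr . $ssh_knock_2 timeout 5s } drop
  tcp dport $ssh_knock_2 ip6 saddr . tcp dport @ssh_progress_ipv6 update @ssh_progress_ipv6 { ip6 saddr . $ssh_knock_3 timeout 5s } drop
  tcp dport $ssh_knock_3 ip6 saddr . tcp dport @ssh_progress_ipv6 update @ssh_progress_ipv6 { ip6 saddr . $ssh_knock_4 timeout 5s } drop
  tcp dport $ssh_knock_4 ip6 saddr . tcp dport @ssh_progress_ipv6 update @ssh_clients_ipv6 { ip6 saddr timeout 10s } drop

  ip6 saddr @ssh_clients_ipv6 tcp dport $ssh_service_port ct state new accept
```

Because the ip saddr and ip6 saddr expressions implicitly restrict each rule to its own protocol family, the IPv4 and IPv6 rules can coexist on the same knock ports.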

Client-side TCP port knocking in PowerShell, *nix

Port knocking is a technique to selectively allow for connections by sending a semi-secret sequence of packets, often called a “knocking” sequence.

On a *nix system, nc (netcat) is useful for port knocking. An individual knock can be sent with nc -w 1 -z host port; this sends a TCP connection attempt to the specified host and port, with a timeout of 1 second, and without sending any data.

To use nc to send a TCP knock sequence of ports 10000, 2345, 3456, 1234 to 192.0.2.234, you might do something like

nc -w 1 -z 192.0.2.234 10000
nc -w 1 -z 192.0.2.234 2345
nc -w 1 -z 192.0.2.234 3456
nc -w 1 -z 192.0.2.234 1234

Doing the same thing in Microsoft’s PowerShell is rather more verbose:

Start-Job -ScriptBlock {Test-NetConnection -ComputerName "192.0.2.234" -Port 10000 -InformationLevel Quiet} | Wait-Job -Timeout 1

Start-Job -ScriptBlock {Test-NetConnection -ComputerName "192.0.2.234" -Port 2345 -InformationLevel Quiet} | Wait-Job -Timeout 1

Start-Job -ScriptBlock {Test-NetConnection -ComputerName "192.0.2.234" -Port 3456 -InformationLevel Quiet} | Wait-Job -Timeout 1

Start-Job -ScriptBlock {Test-NetConnection -ComputerName "192.0.2.234" -Port 1234 -InformationLevel Quiet} | Wait-Job -Timeout 1

A simple bash shell script to perform port knocking and then connect and hand a connected pipe to the calling process might look something like:

#!/bin/bash
# Usage: portknock-connect host knock1 [knock2 ...] connectport
host=$1
port=$2
nport=$3
# As long as there is a port after the current one, the current port is a knock.
while test -n "$nport"
do
  nc -w 1 -z "$host" "$port"
  shift
  port=$2
  nport=$3
done
# The last port is the service port; pass 0 as the last port to knock without connecting.
test "$port" != "0" && exec nc "$host" "$port"

The above script takes the host name or IP address of the remote host as the first parameter, followed by a series of TCP port numbers; the last port number is the final connection port. This can be used for example with OpenSSH’s ProxyCommand directive:

$ cat .ssh/config
Host 192.0.2.234
  ProxyCommand ~/.local/bin/portknock-connect %h 10000 2345 3456 1234 %p
$

Linux KVM + host nftables + guest networking

The difficulties of getting the combination of Linux KVM, host-side modern nftables packet filtering, and guest-side networking to work together without resorting to firewalld on the host are fairly well documented. The recommended solution usually involves going back to iptables on the host, and sometimes defining libvirt-specific nwfilter rules. While that might be tolerable for dedicated virtualization hosts, it’s less than ideal for systems that also see other uses, especially uses where nftables’ expressive power and relative ease of use are desired.

Fortunately, it can be worked around without giving up on nftables.

I’m assuming that you have already set up a typical basic nftables client-style ruleset on the host, something along the lines of:

#!/usr/bin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state invalid drop
        ct state established accept
        ct state related accept
        iifname "lo" accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}

Start out by setting the KVM network to start automatically on boot. The network startup will also cause libvirt to create some NAT post-routing rules through iptables, which through the magic of conversion tools get transformed into a corresponding nftables table ip nat. This might cause an error to be displayed initially, but that’s OK for now. Reboot the host, run virsh net-list --all to confirm that the network is active, and nft list table ip nat to confirm that the table and chains were created. It should all look something like:

$ sudo virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

$ sudo nft list table ip nat
table ip nat {
    chain LIBVIRT_PRT {
        ... a few moderately complex masquerading rules ...
    }
    chain POSTROUTING {
        type nat hook postrouting priority srcnat; policy accept;
        counter packets 0 bytes 0 jump LIBVIRT_PRT
    }
}
$

Letting libvirt’s magic and the iptables-to-nftables conversion tools handle the insertion of the routing filters makes it less likely that issues will develop later on due to, for example, changes in the rules that newer versions need. An alternative approach, which currently works for me but might not work for you or in the future, is to manually create a postrouting chain; the nftables incantation can be reduced to something similar to:

table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        ip saddr 192.168.122.0/24 masquerade
    }
}

(In the above snippet, 192.168.122.0/24 maps to the details from the <ip> node in the output of virsh net-dumpxml <name> for each network listed by virsh net-list earlier.)
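For reference, the relevant portion of virsh net-dumpxml default for a stock libvirt NAT network typically looks something like the following (the addresses may differ on your system):

```
<network>
  <name>default</name>
  ...
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```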

You do, however, need to add some rules to the table inet filter to allow incoming packets from the virtual bridge interface (virbr0), and forwarded packets between it and the physical network interface (eth0 here; substitute as appropriate; ip addr sh will tell you the interface names):

table inet filter {
    chain input {
        # ... add at some appropriate location ...
        iifname "virbr0" accept
    }
    chain forward {
        # ... add at some appropriate location ...
        iifname "virbr0" oifname "eth0" accept
        iifname "eth0" oifname "virbr0" accept
    }
}

The forward chain rules probably aren’t necessary if your forward chain has the default accept policy, but it’s generally better to have a drop or reject policy and only allow the traffic that is actually needed.

The finishing touch is to make sure that sysctl net.ipv4.ip_forward = 1 on the host; without it, IPv4 forwarding won’t work at all.
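To make that setting persist across reboots, the conventional approach is a drop-in file under /etc/sysctl.d (the file name below is arbitrary):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```

followed by sudo sysctl --system (or a reboot) to apply it.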

Unfortunately, libvirt still tries to use iptables to create a NAT table when its network is started, and that fails when an nftables NAT table already exists. The table ip nat portion, if manually configured, therefore needs to go into an nftables script that is loaded after the KVM network is started, replacing the automatically generated chain. Most distributions, however, are set up to load the normal nftables rule set quite early during the boot process, likely and hopefully before basic networking is even fully up and running (to close the window of opportunity for traffic to sneak through). The easiest way to deal with this is very likely to just let the iptables compatibility tools handle the NAT table for you when the KVM network is started, and to accept the need for a reboot during the early KVM configuration process. The most likely scenario in which this simple approach won’t work is if you are already using nftables to do other IP forwarding magic as well; in that case, you may need to resort to a split nftables configuration, loading the post-routing NAT ruleset late during the boot process, such as perhaps through /etc/rc.local (which is typically executed very late during boot). If so, then it’s probably worth the trouble to rewrite one or the other in terms of nft add commands instead of a full-on, atomic nft -f script.

With all this in place, KVM guests should now be able to access the outside world over IPv4, NATed through the host, including after a reboot of the host.

A huge tip of the proverbial hat to user regox on the Gentoo forums, who posted what I was able to transform into most of the above.

Using Linux nftables to block traffic outside of a VPN tunnel

For systems that commonly connect to untrusted networks, such as laptops, it can be useful to only allow outgoing traffic through a pre-configured, known-trusted (to the extent that such is a thing) VPN tunnel. This serves to ensure that unprotected traffic isn’t routed through a potentially unknown, potentially adversarial uplink provider.

Fortunately, Linux’s nftables functionality provides everything we need for that.

Usually, nftables is configured in such a way that incoming traffic is filtered, but outgoing traffic is implicitly trusted. Take, for example, Debian 11/Bullseye’s /usr/share/doc/nftables/examples/workstation.nft:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
	chain input {
		type filter hook input priority 0;

		# accept any localhost traffic
		iif lo accept

		# accept traffic originated from us
		ct state established,related accept

		# activate the following line to accept common local services
		#tcp dport { 22, 80, 443 } ct state new accept

		# accept neighbour discovery otherwise IPv6 connectivity breaks.
		ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept

		# count and drop any other traffic
		counter drop
	}
}

This implicitly creates an output (and forward) chain as well:

table inet filter {
	chain output {
		type filter hook output priority 0;
	}
}

Since this chain doesn’t have any policy, the default policy accept applies. In other words, everything is allowed.

To block the unwanted traffic, we need to identify the traffic that does need to be allowed. There are three kinds of traffic that need to be allowed to flow outside of the VPN tunnel:

  • Traffic for the purpose of bringing the interface up (DHCP, IPv6 neighbor discovery, …)
  • Traffic for the purpose of bringing the VPN tunnel up (DNS)
  • The VPN tunnel itself (Wireguard, OpenVPN, …)

Begin by determining on which interfaces you want to be able to establish an outgoing VPN connection. For some people this will be the wired interface, for some the wireless interface, and for some it might be both. Running ip addr sh in a terminal is one way to find the actual interface name, which will be needed in a moment. Also open the nftables configuration file (likely /etc/nftables.conf, but check your distribution’s documentation) in a text editor. If you don’t have one yet, you can start out with this, which is Debian’s example stripped of comments but with the implicit chains included:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
	chain input {
		type filter hook input priority 0;
		iif "lo" accept
		ct state established,related accept
		ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept
		counter drop
	}

	chain forward {
		type filter hook forward priority 0;
	}

	chain output {
		type filter hook output priority 0;
	}
}

For our purposes, we will be focused on the output chain, so I will be eliding the other parts of the configuration.

It’s useful to allow traffic that is routed locally on the host, for example for inter-process communication, so immediately after the type stanza, add a rule to allow traffic over the loopback interface (oif is output interface):

oif "lo" accept

Since not all interfaces may have been brought up by the time the nftables rules are initially loaded, use oifname instead of oif for the next several stanzas. The use of oifname comes with a slight performance penalty, but it is more flexible, especially with interfaces that aren’t always present.

First, allow DHCP traffic, which uses UDP with source and destination ports both either 67 or 68:

oifname { "en...", "wl..." } udp sport { 67, 68 } udp dport { 67, 68 } accept

Replace the "en...", "wl..." part with the name of the interface(s) in question.

Second, allow DNS traffic for initial name resolution, which uses UDP or TCP with a destination port of 53. If you configure your VPN tunnel with an IP address as a target instead of a DNS name, then you don’t need this.

oifname { "en...", "wl..." } meta l4proto { tcp, udp } th dport 53 accept

As an alternative, you can create two rules, one each for TCP and UDP; doing so will have the same effect, at a slight performance and maintenance penalty:

oifname { "en...", "wl..." } tcp dport 53 accept
oifname { "en...", "wl..." } udp dport 53 accept

Then add rules to allow traffic to the VPN concentrator. The more tightly scoped you can make this, the better. For example, if you know the IP address and the port used, you can add a stanza such as:

oifname { "en...", "wl..." } ip daddr 192.0.2.128 udp dport 29999 accept

If the VPN concentrator runs on either a standard port that is rarely used for other purposes (such as OpenVPN’s default 1194) or an uncommon port (as is often the case with Wireguard) but you don’t know its exact IP address ahead of time, you can either use a set, or elide the IP address specification:

oifname { "en...", "wl..." } ip daddr { 192.0.2.128/28, 198.51.100.0/27 } udp dport 29999 accept

or

oifname { "en...", "wl..." } udp dport 29999 accept

Then allow traffic as needed through the VPN tunnel interface. The exact name of this interface will vary with the VPN technology you’re using; for example, Wireguard tunnels typically allow you to specify the interface name, whereas OpenVPN tunnels use a semi-unpredictable interface name. For this, the ability of oifname to match a prefix by appending * can be useful. For example, for OpenVPN you might use:

oifname "tun*" accept

whereas for a Wireguard tunnel you might end up with:

oifname "wgmyvpn" accept

As a final touch, add a policy to block traffic not matched by other rules. Since all output rules specify on which interfaces traffic is allowed to flow, this blocks traffic outside of the VPN tunnel except for the traffic that is explicitly allowed to flow outside of the VPN tunnel.

The policy typically goes at the top, just below the type stanza, whereas the reject stanza must appear below all other rules.

policy drop;
reject

The purpose of also having a reject stanza is to provide more immediate feedback. In its absence, packets will simply be dropped, resulting in long wait times before attempts time out; with it, clients will be notified immediately that the connection failed and can report this back to the user.

The final output chain might look something like:

chain output {
	type filter hook output priority 0;
	policy drop;

	oif "lo" accept
	oifname { "en...", "wl..." } udp sport { 67, 68 } udp dport { 67, 68 } accept
	oifname { "en...", "wl..." } meta l4proto { tcp, udp } th dport 53 accept
	oifname { "en...", "wl..." } ip daddr { 203.0.113.113, 203.0.113.114 } udp dport 1194 accept
	oifname "tun*" accept
	reject
}

Reload the nftables rule set (sudo nft -f /etc/nftables.conf) and verify that you can connect to the VPN and access the Internet (or the remote network) through it. Disconnect the VPN and verify that traffic is blocked, for example by attempting to reload a web page.

Reboot the computer and verify that the network interface comes up and that you can connect to the VPN, access the Internet through it, and that traffic is again blocked when you disconnect from the VPN.

Keep in mind that this ruleset isn’t perfect. For example, if routes aren’t set up properly when starting the VPN tunnel, traffic can leak through ordinary DNS queries outside of it; and it relies on interface name matching which can match unexpected interfaces. Therefore, this does not serve as a proper “kill switch” for all traffic. However, it does form a decent second (or third) line of defense against unexpected but not actively malicious traffic leaks outside of the VPN tunnel, which for a system that would otherwise allow everything going out is very much an improvement.

Exposing pfSense uplink information to LAN hosts

Sometimes, it’s beneficial to be able to programmatically tell from a client which uplink connection is being used by pfSense to route traffic, or simply to have access to the current value of some property that maps to each respective uplink. This can be the case if, for example, there is a desire to pause certain network-intense activities running on a client when a metered, data-capped, or lower-bandwidth uplink (for example, mobile broadband) is in use.

Unfortunately, this information is not readily exposed in any way I have been able to find. However, it also isn’t that difficult to get at.

This post is aimed mainly at simple primary/backup multi-homed configurations, not load-balancing configurations or primary/backup load-balanced configurations. Some adjusting may be required if your multi-homed pfSense configuration includes load-balancing.

On FreeBSD (on which pfSense is based), the way to print the routing table is netstat -r -n. Add -4 or -6 to print only the IPv4 or IPv6 routing table, respectively; by default, both are printed.

The uplink being used by pfSense at any given time will typically be the one carrying the default IP route. The default route, when printing the routing table through netstat -r -n, will have a first field with the value default.

To view the full output through the web interface, use Diagnostics > Command Prompt > Execute Shell Command. Be very careful; a typo or errant whitespace can be critical!

pfSense also includes awk, which is quite handy for filtering table-like text output such as that produced by netstat. We are primarily interested in the “Netif” (network interface) column of the output, for the line where the “Destination” field (the first one) has the value default.

Log in to the administration interface. If you haven’t already installed the Cron package, do so first through System > Package Manager.

Once Cron is installed, go to Services > Cron > Settings, and add a new entry. The command to be executed should be something very similar to:

/usr/bin/netstat -rn4 | /usr/bin/awk '($1 == "default" && $4 == "mvnetaMM") { print "ONE" } ($1 == "default" && $4 == "mvnetaNN") { print "OTHER" }' >/usr/local/www/uplink.local.txt

This will write ONE to /usr/local/www/uplink.local.txt if the default route is through the interface mvnetaMM, and will write OTHER if the default route is through mvnetaNN. The directory /usr/local/www, in turn, is exposed to local clients as / by the built-in administration interface web server.
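The awk filter itself can be sanity-checked locally against canned netstat-style lines before setting up the cron job. A sketch (mvnetaMM and mvnetaNN remain placeholder interface names, as above):

```shell
# Wrap the awk program from the cron job in a function so it can be fed
# sample input on stdin; $1 is the "Destination" field, $4 is "Netif".
classify_route() {
  awk '($1 == "default" && $4 == "mvnetaMM") { print "ONE" }
       ($1 == "default" && $4 == "mvnetaNN") { print "OTHER" }'
}

printf 'default 192.0.2.1 UGS mvnetaMM\n' | classify_route   # prints ONE
printf 'default 192.0.2.1 UGS mvnetaNN\n' | classify_route   # prints OTHER
```

Lines whose first field isn’t default produce no output, so the marker file ends up empty if no default route is present.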

You can add additional mappings (from physical interface name to an arbitrary value) in the same awk command if you have additional uplink interfaces. Look at Interfaces > Assignments in the administration web interface to see which physical interface name maps to which mnemonic name, and then decide what to expose if the default route is through that interface.

To avoid issues with quoting and encoding, I suggest only using US-ASCII alphanumeric characters in the awk print statements.

Do note that because Cron can only be configured to execute commands at a minute granularity, there will be a slight delay before a change in the default route is reflected in the file that is accessible from clients.

With the cron job in place, make the client request /uplink.local.txt from the firewall (no authentication required!) and take whatever action is desired based on its contents, or the change in its contents. For example, on Linux, you might do:

wget -q -O - --no-check-certificate https://pfsense.home.arpa/uplink.local.txt

or

curl -s --insecure https://pfsense.home.arpa/uplink.local.txt

The --no-check-certificate or --insecure respectively is needed if the respective tool does not trust the TLS certificate for the pfSense host. If your client trusts the certificate, it’s better to remove that part.
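On the client side, the fetched contents can then be mapped to an action. A minimal sketch in shell (the ONE and OTHER markers match the cron job example above; pause, resume, and unknown are stand-ins for whatever your actual actions are):

```shell
# Decide what to do based on the uplink marker fetched from the firewall.
uplink_action() {
  # Strip whitespace so a trailing newline in the fetched file is tolerated.
  case "$(printf '%s' "$1" | tr -d '[:space:]')" in
    ONE)   echo resume ;;    # primary (unmetered) uplink
    OTHER) echo pause ;;     # backup (metered) uplink
    *)     echo unknown ;;   # empty or unexpected contents
  esac
}

# In practice, something like:
#   uplink_action "$(curl -s --insecure https://pfsense.home.arpa/uplink.local.txt)"
uplink_action "OTHER"   # prints pause
```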
