Local IP port redirection using Linux nftables

It can occasionally be useful to expose a (TCP or UDP) port on a different port, without passing traffic on to a different host as is usually the case with port forwarding. In effect, changing the destination port of TCP or UDP traffic.

With Linux nftables, this is most easily done in a prerouting chain.

Starting with a typical example nftables configuration:

#!/usr/bin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state invalid drop
        ct state established accept
        ct state related accept
        iifname "lo" accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}

Add a prerouting chain with the correct priority:

table inet filter {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
    }
}

Add a rule to this new chain which results in a redirect action:

table inet filter {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        tcp dport 80 redirect to :22
    }
}

With the chain in place, to add the redirect rule programmatically, use something like:

# nft add rule inet filter prerouting tcp dport 80 redirect to :22

where inet filter maps to the table specification, and prerouting is the name of the chain to which to add the rule.
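Should you later want to inspect or remove the rule, rule handles come in handy. A minimal sketch (the handle number below is only an example; actual handle numbers vary per system):

```shell
# List the chain including per-rule handles (-a); handle numbers vary
nft -a list chain inet filter prerouting

# Delete a rule by its handle; "handle 4" here is just an example
nft delete rule inet filter prerouting handle 4
```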

With this in place, any packets arriving at TCP port 80 will, in your subsequent input chain (and by the time they reach userspace), have a destination port of 22, thereby exposing your SSH server (listening on port 22) on the standard HTTP port (80) but subject to all normal tcp dport 22 conditions (and none of the tcp dport 80 ones) within the input chain.

In the words of the nft(8) man page:

The redirect statement is a special form of dnat which always translates the destination address to the local host’s one. It comes in handy if one only wants to alter the destination port of incoming traffic on different interfaces.

And yes, it works even with sysctl net.ipv4.ip_forward = 0.

Respecting DNT / GPC HTTP headers by only serving session cookies from Apache

The EU General Data Protection Regulation (GDPR) specifies that a person may object to data processing “by automated means using technical specifications”. (GDPR Article 21(5).)

In the context of web browsing, two such technical specifications are the Do Not Track (DNT) and Global Privacy Control (Sec-GPC) HTTP headers, which browsers can be configured to send. Germany’s Berlin Regional Court has recently ruled that under the GDPR, a web browser sending a DNT header provides sufficient notice of such data processing objection by the individual using that browser.

Cookies are widely used across the web to enable session-like behavior over an inherently sessionless protocol (HTTP). Every HTTP cookie optionally has a maximum lifetime and/or an expiration point in time, after which the cookie is considered expired. A cookie where the server specifies neither of these is considered a “session” cookie and will typically be deleted either when the browser is closed normally, or sooner; a cookie where either is specified is requested by the server to be kept until that time has expired, although the actual cookie eviction policy is up to the client.

Cookies are also often used to enable persistent tracking of individual users, which has earned them quite a bad name. This is a particular issue with persistent (non-session) cookies, as those generally have a longer lifetime. Of course, not all cookies are used for tracking purposes, but especially for long-lived cookies it can be very difficult to tell from the outside whether this is the case. (The same is, naturally, true also for session cookies; but since session cookies by their very nature are not persistent, they cannot by themselves be used to track an individual between browser sessions.)

To turn a persistent cookie into a session cookie, the Set-Cookie HTTP response header must be edited to remove both the Expires and the Max-Age settings.

With the Apache web server, this can be done conditional on whether the client sends either the DNT or Sec-GPC headers by using mod_headers together with the If directive. First enable the mod_headers module, and then add to the HTTP server configuration:

<If "req_novary('Sec-GPC') == '1' || req_novary('DNT') == '1'">
  Header always edit Set-Cookie "^([^;]+; *)(Expires=[^;]+;?)(.+)?$" "$1$3"
  Header always edit Set-Cookie "^([^;]+; *)(Max-Age=[0-9]+;?)(.+)?$" "$1$3"
  Header always edit* Set-Cookie "; +" "; "
  Header always edit Set-Cookie "^([^;]+;)* *$" "$1"
  Header always edit Set-Cookie ";$" ""
</If>

(Please do mind the whitespace; it is important. You probably want to copy and paste the above, not re-type it.)

This will, just before the HTTP response is sent over the network (because the early directive is not specified and thus late processing is requested), if in the request the user’s browser sent either a DNT or GPC header indicating a preference not to be tracked, rewrite all Set-Cookie headers in the response to turn any persistent cookies into session cookies by deleting the Expires and Max-Age specifications.

The last three edit statements in the above snippet collapse any double whitespace between fields and delete any lingering whitespace or field separators at the end of the header. The formal Set-Cookie header syntax requires that any semicolons within the cookie value are percent-encoded and that the last attribute’s value is not terminated with a semicolon.
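The effect of these edits can be sketched outside of Apache. The following approximates the same regular expressions with sed -E; the cookie value is a made-up example, and sed's ERE syntax happens to be close enough to the PCRE syntax Apache uses for these particular patterns:

```shell
# A made-up persistent cookie, as a server might set it
cookie='id=abc123; Expires=Wed, 21 Oct 2026 07:28:00 GMT; Path=/; HttpOnly'

# Apply roughly the same edits as the Header directives above:
# drop Expires, drop Max-Age, collapse double spaces, drop trailing ;
session_cookie=$(printf '%s\n' "$cookie" \
  | sed -E 's/^([^;]+; *)(Expires=[^;]+;?)(.+)?$/\1\3/' \
  | sed -E 's/^([^;]+; *)(Max-Age=[0-9]+;?)(.+)?$/\1\3/' \
  | sed -E 's/; +/; /g' \
  | sed -E 's/;$//')

echo "$session_cookie"   # id=abc123; Path=/; HttpOnly
```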

Do note that this can break functionality which relies on the server indicating that a cookie should be evicted by setting it to already having expired, but without also simultaneously clearing the value of the cookie. At least some applications use Max-Age=0 to indicate a request for cookie eviction, which technically is against the formal header syntax (the value for Max-Age is required to be one non-zero digit followed by any number of digits). To special-case that and more closely match only values for Max-Age which are valid according to the header syntax, you can replace the Max-Age line in the snippet above with:

Header always edit Set-Cookie "^([^;]+; *)(Max-Age=[1-9][0-9]*;?)(.+)?$" "$1$3"

Doing so will let Set-Cookie headers through with Max-Age=0, but delete any non-zero Max-Age value specification.

Applying OpenSSH host settings on a per-network basis

Up through the 1980s, the Internet was a neatly organized place, clearly split into “class A”, “class B” and “class C” networks. In this classful environment, numerical IP addresses were split between the network and the host portion at an octet boundary, and where this boundary was placed was defined by the first few bits of the first octet. For example, was by definition host 100 on the “class C” network 192.0.2. Even today, people occasionally speak of “class A”, “class B” or “class C” networks when what they really mean is a /8, /16 or a /24 network respectively.

In the early 1990s, we got what is today known as CIDR, or classless inter-domain routing. This is the slash-notation for network prefix length that is commonly seen today, and which does not need to line up with an octet boundary at all or have any connection with the bits of the first octet;,,,, 2001:200::/48 and 2001:db8::/33 are all examples of perfectly valid CIDR network prefix specifications, but only the first matches the old classful assignment scheme (as 192 was part of the old class C space, and what used to be termed a class C network in modern terms is called a /24).

Also, at times, it is useful to apply specific OpenSSH configuration directives to every host on a network. For example, you might want StrictHostKeyChecking yes for a production network, but StrictHostKeyChecking ask or maybe even no for a testing network where you regularly replace machines.

If these networks are segregated on an IP address octet boundary, it’s fairly easy. For example, if testing is, you can feel fairly certain that OpenSSH will do the right thing if you give it something like:

Host 10.99.*.*
StrictHostKeyChecking ask

Host *
StrictHostKeyChecking yes

However, if your testing network is, say, an IPv4 /28 which straddles a hundreds boundary, it quickly gets unwieldy; already for, you might need something like:

Host 192.0.2.9? 192.0.2.10? 192.0.2.110 192.0.2.111 !192.0.2.90 !192.0.2.91 !192.0.2.92 !192.0.2.93 !192.0.2.94 !192.0.2.95
StrictHostKeyChecking ask

Thankfully, OpenSSH can execute external commands to determine whether a configuration block should be applied. Enter Match exec combined with grepcidr.

Match exec "echo %h | grepcidr -sx >/dev/null 2>/dev/null"
StrictHostKeyChecking ask

The redirections are needed because unfortunately grepcidr has no equivalent to GNU grep’s -q (quiet or, if you wish, query) option.

Some sites suggest using bash-isms:

Match exec "grepcidr -sx <(echo %h) &>/dev/null"

which I found works well when ssh is run from a typical interactive shell, but not so well when ssh is run from automated scripts or background processes such as cron or at jobs, presumably because the command is then executed by a shell which does not understand the bash-specific syntax. This can affect even connections which wouldn’t match that network at all.

There are two big things to keep in mind with this.

First, in OpenSSH configuration parlance, %h expands to the host name as given (either on the command line or through for example a Hostname directive); if this is not an IP address but rather a DNS name, the above examples won’t match because grepcidr does no DNS resolution, resulting in those settings not being applied to the connection. If this is a consideration, you’ll probably need to wrap it in some kind of helper script to detect a non-IP-address and resolve it before passing the resultant IP addresses to grepcidr. If you do, then beware of time-of-check-to-time-of-use issues!

Second, and simpler, the above snippets all rely on the user’s $PATH to find the tools involved. Especially if putting this in a system-wide configuration file, it’s advisable to spell out the full path to each respective binary; which might be, for example, /bin/echo and /usr/bin/grepcidr.

Book review: Löpa varg, by Kerstin Ekman 🇸🇪

A review turned up somewhere in my feed that made me curious about the book Löpa varg by Kerstin Ekman. When it went on sale as an ebook for 55 kronor during the annual book sale, I took the opportunity; then it sat for a while before I got around to actually reading it.

I might as well say it right away: I was disappointed. From a well-established author, published through a major publisher, I expected better.

The book opens more or less with the main character, the 70-year-old Ulf Norrstig, sitting out on his land when he catches sight of a wolf.

Apart from the short description of that encounter at a distance, a large part of perhaps the first third of the book is about him looking back at his early career as a hunter. Through his hunting journals and memories, we follow his first years as a hunter.

But for me at least, an answer to the central opening question is almost entirely missing: so what? In my view, Ekman fails completely at establishing why this is a person the reader should be particularly interested in following, and the fact that such a large part of the book is about the person he was, with no particularly clear connection to the person he is or the person he becomes toward the end of the book, hardly helps matters.

We follow the main character out on his land hunting with his hunting companions, his decision to step down as leader of the hunting team, his stay in hospital, and his thoughts when a licensed wolf hunt is announced in the area. We follow him as he chooses not to share with the police things he knows, and realizes are relevant, to a police investigation into among other things an aggravated hunting offence, and his thoughts on the real-world Swedish forest fires of 2018 and the heat a few years later.

Much of this had potential. Rather than fairly briefly noting that he chose not to share the information, plus a short passage of dialogue with the person concerned, this is something that could have been developed and problematized much further from many different angles. Instead, the space in the book is spent on reflections based on short notes in 50-to-60-year-old hunting journals and old books about wild animals. The space used to repeatedly describe how, in his professional life, he used a quote from a particular translation of Kipling’s The Jungle Book could have been used to develop the exchange of words toward the end of the book with the person he eventually concluded was the one guilty of the hunting offence, and, perhaps even more interesting, his own thoughts before and during that situation. The real-world recent extreme heat and forest fires which appear in the story could have been used to frame reflections on a whole range of questions, everything from “it has never been like this before” to thoughts about the situation of wild animals; but the closest we get to anything of the sort is probably a reflection that the forest has, after all, always burned now and then even if this seemed worse than usual, that his wife had managed to get hold of the last fan in the shop, and that the dog couldn’t manage walks as long as usual.

On top of that, the ending felt very abrupt. The feeling was almost one of “what, it ends here?” and of the story being left unfinished.

This is a story about a retiree looking back at what he has accomplished during his life. As such it may be somewhat worth reading, but the same story could probably have been written with some other animal observation at the start, or for that matter some entirely different opening event. And this particular book would probably have benefited from a few solid rounds with a good pair of scissors or a red pen, followed by various additions here and there. Probably around 20% of the text could have been cut without any reader noticing after a little editing, and that space could instead have been used precisely to dive deeper into the rather big questions which are, after all, touched upon.

Something else that dragged down the overall impression considerably was the technical quality of the ebook file. Chapter headings were missing; quotation marks or other indications of what was said, thought, and so on were missing. If I buy a book released by an established publisher (Albert Bonnier in this case) through an established retailer (Adlibris), I expect a book where it is at least clear what is what in the text. Here I sometimes had to guess what was descriptive text and what was something one of the characters said or thought. Ten seconds of human quality control would probably have caught these errors. It becomes extra embarrassing when the end of the ebook clearly states that it is “provided and reproduced unchanged” compared to the printed book.

Debian 12 with encrypted /boot, GRUB2 asking for root file system LUKS passphrase

After upgrading from Debian 11 to Debian 12 (as part of which was an upgrade of GRUB 2 from 2.06-3~deb11u5 to 2.06-13) on a system with separately encrypted / and /boot both using LUKS, GRUB began prompting for the LUKS passphrase to unlock the container holding the root file system even though it had no need for it (and in fact booted perfectly fine if I just pressed Enter at that prompt).

The relevant part of the file system layout is:

  • GPT partitioning
    • partition 2
      • LUKS1 container
        • /boot
    • partition 3
      • LUKS2 container
        • /

This setup is based on the description of setting up encrypted /boot with GRUB 2 >=2.02~beta2-29 on Debian.

Repeated web searches did not bring up anything relevant, so armed with the LUKS container UUID (from cryptsetup luksDump) I started sleuthing through /boot/grub/grub.cfg to see where it referenced the LUKS container holding /. Surprisingly, I found it near the top, generated through /etc/grub.d/00_header, in a seemingly unrelated place: code intended to load fonts. This was somewhat unexpected because the second prompt actually appeared after a replacement font appeared to already have been loaded.

Looking through /etc/grub.d/00_header and trying to match what I was seeing in grub.cfg against its generation logic, I found that the location of the container UUID within grub.cfg matched a prepare_grub_to_access_device call described in an immediately preceding comment as “Make the font accessible”.

That, in turn, was controlled by $GRUB_FONT.

With this newfound knowledge, I took a stab at /etc/default/grub and noted the commented-out GRUB_TERMINAL=console, described as “Uncomment to disable graphical terminal”.

Well, I’m fine with an 80×25 text menu and the BIOS font for GRUB, so I figured it was worth a try. After creating a new file /etc/default/grub.d/console-terminal.cfg setting that variable and running update-grub, the generated /boot/grub/grub.cfg no longer referenced that LUKS container; and on rebooting, GRUB again prompted me only for the LUKS passphrase for /boot.
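On Debian, update-grub (grub-mkconfig) sources /etc/default/grub followed by any *.cfg files under /etc/default/grub.d/, so the override can be kept in its own file. A minimal sketch (the file name is arbitrary, as long as it ends in .cfg):

```shell
# /etc/default/grub.d/console-terminal.cfg
# Use the plain text terminal; with no graphical terminal, 00_header
# emits no font-loading code, and thus no reference to the LUKS
# container holding the root file system.
GRUB_TERMINAL=console
```

Remember to run update-grub after creating or changing the file.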


Extracting TLS certificate fingerprint on the Linux command line

It is sometimes helpful to get the details, not least the fingerprint, of a remote server’s TLS certificate from the command line.

Unfortunately, I’m not aware of any tool which is readily available on the typical Linux system to make this particularly easy.

Fortunately, one can be cobbled together using OpenSSL, which is rather universally available.

The first step is to get the certificate data itself in PEM format:

true | openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null

The true | at the beginning simply provides an empty standard input to the OpenSSL connecting process, and I explicitly specify -no_ign_eof to make sure it will exit once there is no more data to be read from standard input (which in this case will be immediately). The 2>/dev/null silences the complete output from the certificate chain validation; you can use -verify_quiet instead, which in the absence of certificate chain problems has almost, but not quite, the same effect.

Note that since openssl s_client is a general-purpose debugging tool, the TCP port number must be specified. For HTTPS web sites, the typical port number is 443. If you are connecting by IP address, you can use -servername something.example.com to set the SNI server name in the TLS session.

Given the certificate data in PEM format from the above command, openssl x509 can be used to display information about the certificate:

openssl x509 -in filename.pem -noout -text -sha256 -fingerprint

where filename.pem contains the output from the previous command. If -in is not specified, then certificate data is read from standard input.

Useful variations are -text to print lots of technical details from the certificate, and -sha256 -fingerprint to print the SHA-256 fingerprint. Including both will cause both to be printed. If for some reason you need the insecure MD5 fingerprint, use -md5 instead of -sha256. Fingerprints are printed in colon-separated hexadecimal notation.
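To try the openssl x509 step without any network connection, you can generate a throwaway self-signed certificate to feed it. The file names and subject below are arbitrary examples:

```shell
# Generate a throwaway self-signed certificate and key (arbitrary
# subject and validity; do not use these files for anything real)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=test.example' \
    -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null

# Print its SHA-256 fingerprint in colon-separated hexadecimal
openssl x509 -in /tmp/test-cert.pem -noout -sha256 -fingerprint
```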

Putting all of this together, we get the following somewhat long command:

true | openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null | openssl x509 -noout -sha256 -fingerprint

If you want to introduce something like torsocks to this, it should generally go with the openssl s_client command, as that is the part that is actually making the outbound network connection:

true | torsocks openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null | openssl x509 -noout -sha256 -fingerprint

Both of these will, if successful, print the SHA-256 fingerprint of the TLS certificate received from the server. Currently, this results in this single line of output:

sha256 Fingerprint=5E:F2:F2:14:26:0A:B8:F5:8E:55:EE:A4:2E:4A:C0:4B:0F:17:18:07:D8:D1:18:5F:DD:D6:74:70:E9:AB:60:96

And there it is!

Back up your Mastodon!

A lot of people have joined the Fediverse lately, and many have done so through one of the many Mastodon instances, foremost among but certainly not the only one of which is mastodon.social.

Especially if you’re not self-hosting Mastodon or paying for a personal instance, the risk always exists that your chosen instance will shut down. For example, the large instance home.social recently effectively shut down, and social.vivaldi.net had difficulties from which the team were able to recover. Many Mastodon instances are run by individuals on what is effectively a best-effort basis and while responsible administrators will provide advance notice in case of a shutdown, things can happen that will cause an instance to shut down with very little or even no advance notice.

It is possible to migrate accounts from one Mastodon instance to another, but the process is lossy and requires the cooperation of the old instance.

For this reason, it’s a good idea to regularly back up your profile and to have at least one backup account on a different instance, run by different people, to which you can move should something happen to your preferred instance.

I speak specifically of Mastodon in this post, but the same general principle also applies to other Fediverse applications.

I recommend setting a calendar reminder for going through the backup process. Depending on how prolific a user you are, you may want to do it once every few weeks or maybe once every six months.

Here’s how you can do it.

Making a backup

  1. Log in normally to your Mastodon account.
  2. In the left-hand side bar and next to your profile picture, click Edit profile.
  3. Go to Import and export, then Data export, then click Request your archive. This will take a while to complete, but it runs in the background on the server and you will be notified when it finishes.
  4. Still under Data export, there is a series of CSV download links. Use each in turn to save the file somewhere you will be able to find it easily if you need it.
  5. Go to Filters, and then click on Edit filter for each in turn. You can open these in tabs if you want to. Save each page to a file locally; you can save just the web page without graphics (what Firefox calls “Web page, HTML only”), but it will look better if you also save the rest of it (“Web page, complete”). You can also use a browser add-on such as SingleFile for Firefox to save each page as a single self-contained file.
  6. Click Back to Mastodon, click the button near the Edit profile link you used previously, then click Followed hashtags in the menu that opens. On the resulting page, scroll all the way down to the bottom (you may need to scroll multiple times for this). Select all of the content, copy it to a text file and save that file together with the rest of your files.
  7. By now, it’s quite possible that your archive is complete. Go back to Edit profile, then Import and export, then Data export, and look at the bottom of the page. You should see a Download your archive link near the current date and time. Download that file and save it with the rest of your files.

At this point, you have a copy of most of what makes up your Mastodon experience. If you want to, you can also go through the rest of the profile settings and save those.

Set up a second account

If you haven’t already, consider setting up a second account on a completely different instance, which you can switch to in case your preferred instance becomes unusable for some reason. Just sign up on a different instance with the same (or a different) email address. It’s helpful, but not a requirement, if this account has the same username (the part between the two @ signs, like example in @example@mastodon.social) as your existing account, as well as the same avatar. You can use the display name or bio field to indicate that this is a secondary account belonging to the same person as your existing (named) account, and otherwise leave the account dormant if you want to.

Restoring on a different account

The best way to migrate to a different instance is by using the account migration feature built into Mastodon. However, if that isn’t possible for some reason, you can use your backup to get back up and running with a new account.

To restore your backup onto a different account, go to Edit profile, then Import and export, then Import. For each entry under Import type except for following list, select the corresponding file that you downloaded previously, and choose whether to merge or overwrite what’s in the account you are importing to, then upload. Again: don’t upload your following list just yet!

Once that is done, go to Filters and set up your filters according to the files you saved previously.

Finally, go to Profile and update your display name and/or bio as appropriate.

Then go back to Mastodon, and for each tag in your followed tags list, paste it into the search box, then navigate to the correct tag and follow it.

Post a short #introduction saying that you are the same as whatever your previous handle was. Remember, from an outside point of view, this profile is empty!

Once that is done, go back to Edit profile, Import and export, Import and import your following list.

It will take a short while for your home feed to repopulate, but this should get you back mostly to where you were, except for old posts which will still be on the old instance (or gone, if the old instance shut down).

Forget what everyone tells you makes a password strong

Yes, the title is a little bit click-baity. But please bear with me for a moment.

The Web is replete with the traditional advice on “how to create a strong password”. A quick web search for secure passwords brought up, among many others, page after page of just that kind of advice.

Really, I could go on. Except I won’t.

I’m here to tell you that adding random symbols to your password does not make it appreciably more secure. Even mixing letter case (lowercase and uppercase) doesn’t help a lot.

In my password tips, I mention a few different ways of generating passwords which have a work factor of approximately 2^77, which is plenty enough for most people even if the place where the password is used messes up the basics of handling passwords. In brief, the work factor is simply a number that expresses how hard a password (or other secret) is to guess; the higher the work factor, the more secure it is.

Assuming that a password is generated at random, mathematically, the work factor is simply the size of the symbol set to the power of the length of the password. The work factor is commonly expressed in bits (as a power of two), in which case you need to take the base-2 logarithm of this value. Really, this likely sounds more complicated than it is. Again, just keep in mind that as long as the password is generated at random, the larger the number, the more secure the password is.

An alphabetical password, using the lower-case English letters only (a-z), has a symbol set size of 26. An alphanumeric password with mixed case (a-z, A-Z, 0-9) has a symbol set size of 62 (which is 26+26+10). Assuming 20 symbols (for example, the set !?@#$%&{}[]+-*/\.,<>), this pushes us to a symbol set size of 82.

Now, what does it take to get to 2^77 with each of those?

With a simple alphabetical password, not varying case, 16 characters gives a 2^75 work factor, while 17 characters gives 2^80. Since we don’t have half-characters, I’ll call this 17 characters.

Mixing upper and lower case, 13 characters gives 2^74, and 14 characters gives 2^80. Again, lacking half-characters, I’ll go with 14.

Adding digits, 13 characters gives 2^77 for the upper- and lower-case case, as does 15 characters for single-case alphabetic plus digits.

Adding those 20 symbols to the mixed-case alphanumerics symbol set, 12 characters gives 2^76 (which I figure is close enough to our 2^77 target for a meaningful comparison).

Or, laid out as a table:

Symbol set                 Least characters for ≥2^77  Example password
a-z, 0-9                   15                          leyie7aineih8mu
a-z, A-Z                   14                          voiWahnuuZuxuu
a-z, A-Z, 0-9              13                          Eu0ighaeJ2aex
a-z, A-Z, 0-9, 20 symbols  12                          ye3M&e5rae{f

Password lengths for a given security level, by symbol set
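These numbers are easy to verify; the work factor in bits is the password length multiplied by the base-2 logarithm of the symbol set size, which a shell one-liner can compute (the bits function name is made up for illustration):

```shell
# Work factor in bits = password length * log2(symbol set size)
bits() { awk -v s="$1" -v n="$2" 'BEGIN { printf "%.1f\n", n * log(s) / log(2) }'; }

bits 26 17   # a-z, 17 characters: ~79.9 bits
bits 52 14   # a-z A-Z, 14 characters: ~79.8 bits
bits 62 13   # a-z A-Z 0-9, 13 characters: ~77.4 bits
bits 82 12   # plus 20 symbols, 12 characters: ~76.3 bits
```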

Indeed, compared to only a lower-case English letters password, to keep approximately the same security level, with all of this we still only reduced the length required from 17 characters to 12 characters. And in doing so, we went from a password similar to kohthaephaeguahxe to one similar to Ee*ix&p0chFi.

Although not directly spelled out, this truth of mathematics is almost certainly a part of the reason why NIST (which sets information security standards for US government organizations) in 2017 changed their previous advice on password complexity, and now say, among other things:

Verifiers SHALL require subscriber-chosen memorized secrets to be at least 8 characters in length. Verifiers SHOULD permit subscriber-chosen memorized secrets at least 64 characters in length.


Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets.

NIST Special Publication 800-63B, July 2017 (as updated through 03-02-2020)

Certainly, if you are using a password manager to handle that password (and you almost certainly should), including additional types of symbols in your password won’t exactly hurt. But doing so is not the password strength panacea it is often presented as.

If you are using a password manager to handle the password, then you shouldn’t be typing it in anyway, in which case the length savings for a similar security level is essentially irrelevant.

Also, when people think of “including digits and symbols in the password”, more often than not this means doing things like replacing the letter O with the digit 0, or replacing the letter A with the symbol @, or putting some easily typed symbol like ! or % at the end of the password. Password crackers have been on to this game for years and years. Technically, doing this does increase security, but it does so only by a minuscule amount.

For going beyond even just a-z in passwords to provide any significant benefit, the password must actually be randomly generated, and it only really helps when character count is the limiting factor. Increasing password length provides huge returns in password security even without extending the character set; for example, a 20 characters single-case alphabetic password already has a work factor of 2^94 (2^19, or about 500,000, times stronger than one 16 characters long; a mere four additional letters).

In the introduction to the documentary film Citizenfour about the 2013 Edward Snowden revelations, a passphrase attack rate of one trillion guesses per second is mentioned for PGP secret keys, and has likely served as a good rule of thumb since then. The PGP S2K (string-to-key) function is unfortunately notoriously weak by modern standards, and top-of-the-line CPU transistor count has increased by roughly a factor of 10 since then, so to at present assume a rate of ten trillion guesses per second for a highly motivated, highly resourceful adversary is probably not unreasonable. (Most people don’t need to worry about the NSA trying to figure out their social media password!) This is approximately 2^45/s. Because of how exponents work out, you can simply subtract this exponent from that of the work factor of your password to determine how long it would take to crack.

A 2^77 work factor password, at that attack rate (given present-day technology) would have a reasonably guaranteed breach in 2^(77-45) = 2^32 seconds, and on average succumb to the attacker in half that time. Half of 2^32 seconds is approximately seventy years. And again, this is against a highly motivated, highly resourceful adversary.
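That arithmetic is likewise easy to check; half of 2^(77-45) seconds, using a 365.25-day year:

```shell
# Average time to crack: half of 2^(77-45) seconds, expressed in years
awk 'BEGIN { printf "%.0f\n", (2^32 / 2) / (365.25 * 24 * 3600) }'
```

which prints 68, i.e. roughly seventy years.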

To within experimental error, nobody is going to spend even 70 years on cracking that one password. And if you are worried, add two more letters to it for an 18-19 characters password; two more lower-case letters multiply the work factor by 676, which brings the average out to somewhere around 46,000 years.

None of this is to say that you should not use strong passwords; you absolutely should. But do know that simply adding a non-alphanumeric character to a password won’t necessarily significantly increase its security, and doing so properly (with the symbols chosen at random) will likely make your password a good bit harder to type correctly.

Forcing reasonable scroll bar colors in Firefox

One of the more recent fads in web design is to use custom colors for scroll bars. Sometimes this is done to good effect, but more often than not, it causes more problems than it solves. For example, in both its light and dark themes, Mastodon has extremely low contrast between the scrollbar background (the track) and the draggable scrollbar handle, which in modern user interfaces both indicates the current position within a scrollable area (such as a web page) and gives an approximation of the size of the scrollable area.

After being annoyed at this for some time, I decided to solve the problem. There are probably browser add-ons for this, but I am wary of installing more add-ons than necessary, especially ones which by necessity must have the ability to muck around with the content of every single page I browse to.

First, locate your Firefox profile directory. On a Linux system, for example, your browser profile will likely be stored in a directory .mozilla/firefox/abcdefgh.* within your home directory, where abcdefgh is a random identifier and the part after the period is some moderately descriptive name (possibly default). Regardless of the platform you are using Firefox on, you should be able to navigate to about:support and look at the Profile Directory to obtain the full path to your profile directory.

Second, if under it you do not already have such a directory, create a directory named chrome. (This has nothing to do with Google’s web browser, but mostly refers to the parts of the web browser surrounding the actual web page: tabs, address bar, windowing controls, and so on.)

Third, within the chrome directory, create a plain text file named userContent.css and put something like the following in it:

html {
    scrollbar-color: yellow navy !important;
    scrollbar-width: auto !important;
}

For scrollbar-color, the first color is for the handle, and the second color is for the scrollbar background. You can use whichever colors you like, but you should use colors that contrast against colors typically used as background on pages that you regularly visit. You can specify a color in any way that works in CSS, including by CSS color name (as in my example above), a hexadecimal RGB value (such as #d2691e or #ff0), or using the rgb() CSS color function.
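For instance, a hypothetical variant of the rule above, using a hexadecimal value for the handle and the rgb() function for the track (the colors here are arbitrary examples, not recommendations), could look like:

```css
html {
    scrollbar-color: #d2691e rgb(0, 0, 128) !important;
    scrollbar-width: auto !important;
}
```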

If there is, for example, only a single domain which is problematic, you can restrict the rule to only that domain by using an @-moz-document rule, like so:

@-moz-document domain("example.com") {
    html {
        scrollbar-color: yellow navy !important;
        scrollbar-width: auto !important;
    }
}

Fourth, save that file, and then open a Firefox browser window and navigate to about:config. If you have not disabled it, you will get a scary warning about the need to be careful. Proceed past that warning and then, in the search field at the top of the page you get to, enter toolkit.legacyUserProfileCustomizations.stylesheets. Verify that the entry that shows up has that name.

If the value shows as false, then double-click on it to change its value to true. If the value already shows as true, you are done here. Close that tab.

As a last step, completely restart the browser by closing all Firefox windows and then restarting Firefox.

Navigate to a long web page, such as a favorite blog, social media or news site, and look at the scroll bar. Its colors should now reflect your choices, rather than that site’s designer’s.

Disable clipboard sharing (clipboard integration) with QEMU/KVM and SPICE

For some reason that eludes me, sharing of the clipboard between the KVM host and the guest (sometimes referred to as clipboard integration) is on by default with QEMU/KVM and SPICE graphics.

To disable such clipboard sharing:

  1. Edit the guest definition XML; virsh edit $GUEST and locate domain/devices/graphics, or through virt-manager’s View > Details > Display Spice > XML
  2. Add or update an element <clipboard copypaste="no"/> under the <graphics> node
  3. Save/apply the change
  4. Fully shut down the VM (if running)
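After step 2, the resulting <graphics> element might look something like the following sketch (the type and autoport attributes shown are illustrative; keep whatever attributes your existing definition already has):

```xml
<graphics type="spice" autoport="yes">
  <clipboard copypaste="no"/>
</graphics>
```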

If starting the VM from the command line, another option is to try adding -spice disable-copy-paste to the qemu-system-* command line.

Clipboard sharing between the host and the guest will be disabled the next time the VM is powered on.
