Applying OpenSSH host settings on a per-network basis

Up through the 1980s, the Internet was a neatly organized place, clearly split into “class A”, “class B” and “class C” networks. In this classful environment, numerical IP addresses were split between the network and the host portion at an octet boundary, and where this boundary was placed was defined by the first few bits of the first octet. For example, 192.0.2.100 by definition was host 100 on the “class C” network 192.0.2. Even today, people occasionally speak of “class A”, “class B” or “class C” networks when what they really mean is a /8, /16 or a /24 network respectively.

In the early 1990s, we got what is today known as CIDR, or classless inter-domain routing. This is the slash-notation for network prefix length that is commonly seen today, and which does not need to line up with an octet boundary at all or have any connection with the bits of the first octet; 192.0.2.0/24, 10.0.0.0/13, 198.18.0.0/15, 172.16.0.192/27, 2001:200::/48 and 2001:db8::/33 are all examples of perfectly valid CIDR network prefix specifications, but only the first matches the old classful assignment scheme (as 192 was part of the old class C space, and what used to be termed a class C network in modern terms is called a /24).
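As a quick illustration, prefix lengths that don’t line up with an octet boundary are easy to explore with, for example, Python’s standard ipaddress module (my choice of tool here, purely for demonstration):

```python
import ipaddress

# 172.16.0.192/27 from the examples above: 32 - 27 = 5 host bits,
# so the network spans 2**5 = 32 addresses.
net = ipaddress.ip_network("172.16.0.192/27")
print(net.network_address)    # → 172.16.0.192
print(net.broadcast_address)  # → 172.16.0.223
print(net.num_addresses)      # → 32
```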

At times, it is useful to apply specific OpenSSH configuration directives to every host on a network. For example, you might want StrictHostKeyChecking yes for a production network, but StrictHostKeyChecking ask or maybe even no for a testing network where you regularly replace machines.

If these networks are segregated on an IP address octet boundary, it’s fairly easy. For example, if testing is 10.99.0.0/16, you can feel fairly certain that OpenSSH will do the right thing if you give it something like:

Host 10.99.*.*
StrictHostKeyChecking ask

Host *
StrictHostKeyChecking yes

However, if your testing network is, say, an IPv4 /28 which straddles a hundreds boundary, it quickly gets unwieldy; already for 192.0.2.192/28, you might need:

Host !192.0.2.190 !192.0.2.191 192.0.2.19? !192.0.2.208 !192.0.2.209 192.0.2.20?
StrictHostKeyChecking ask

Thankfully, OpenSSH can execute external commands to determine whether a configuration block should be applied. Enter Match exec combined with grepcidr.

Match exec "echo %h | grepcidr -sx 192.0.2.192/28 >/dev/null 2>/dev/null"
StrictHostKeyChecking ask

The redirections are needed because, unfortunately, grepcidr has no equivalent to GNU grep’s -q (quiet) option.

Some sites suggest using bash-isms:

Match exec "grepcidr -sx 192.0.2.192/28 <(echo %h) &>/dev/null"

which I found will probably work well when run from a typical interactive shell, but not so well when ssh is being run from automated scripts or background processes such as cron or at. OpenSSH executes Match exec commands through the user’s shell, so where that shell is not bash (as is typical for cron jobs), the bash-only process substitution fails. This can affect even connections which wouldn’t match that network at all.

There are two big things to keep in mind with this.

First, in OpenSSH configuration parlance, %h expands to the host name as given (either on the command line or through, for example, a Hostname directive); if this is not an IP address but rather a DNS name, the above examples won’t match because grepcidr does no DNS resolution, resulting in those settings not being applied to the connection. If this is a consideration, you’ll probably need to wrap it in some kind of helper script that detects a non-IP-address host name and resolves it before passing the resulting IP addresses to grepcidr. If you do, then beware of time-of-check-to-time-of-use (TOCTOU) issues!
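For example, such a helper might look something like the following sketch, which uses Python’s standard ipaddress and socket modules instead of grepcidr; the script name, its argument order and the example network below are all my own inventions:

```python
#!/usr/bin/env python3
# in-network HOST NETWORK: exit 0 if HOST (an IP address or a resolvable
# DNS name) has at least one address inside NETWORK, non-zero otherwise.
# Hypothetical helper; remember the TOCTOU caveat above, since ssh will
# resolve the name again when actually connecting.
import ipaddress
import socket
import sys

def host_in_network(host: str, network: str) -> bool:
    net = ipaddress.ip_network(network)
    try:
        # getaddrinfo handles both literal addresses and DNS names.
        addrs = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False
    # Strip any IPv6 zone index; an address of the wrong address
    # family simply compares as not being in the network.
    return any(ipaddress.ip_address(a.split("%")[0]) in net for a in addrs)

if __name__ == "__main__" and len(sys.argv) == 3:
    sys.exit(0 if host_in_network(sys.argv[1], sys.argv[2]) else 1)
```

which could then be referenced from the OpenSSH configuration along the lines of Match exec "/usr/local/bin/in-network %h 192.0.2.192/28".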

Second, and simpler: the above snippets all rely on the user’s $PATH to find the tools involved. Especially when putting this in a system-wide configuration file, it’s advisable to spell out the full path to each respective binary; which might be, for example, /bin/echo and /usr/bin/grepcidr.

Book review: Löpa varg, by Kerstin Ekman 🇸🇪

A review that popped up somewhere in my feed made me curious about the book Löpa varg by Kerstin Ekman. When it went on sale as an ebook for 55 kronor during the annual book sale, I took the opportunity; it then sat for a while before I got around to reading it.

I might as well say it outright: I was disappointed. Particularly from a well-established author, published through a major publisher, I expected better.

The book essentially opens with the protagonist, 70-year-old Ulf Norrstig, sitting out on his land when he spots a wolf.

Apart from the short description of that encounter at a distance, a large part of perhaps the first third of the book is about him looking back at his early career as a hunter. Through his hunting journals and memories, we get to follow his first years as a hunter.

But, at least for me, an answer to the central opening question is almost entirely missing: so what? In my opinion, Ekman fails completely at establishing why this is a person the reader should be particularly interested in following, and the fact that such a large part of the book is about the person he was, without any particularly clear connection to the person he is or the person he becomes by the end of the book, hardly helps.

We follow the protagonist as he is out on his land hunting with his hunting companions, as he chooses to step down as leader of the hunting team, as he ends up in the hospital, and his thoughts when a licensed wolf hunt is announced in the area. We follow him as he chooses not to share with the police things he knows and realizes are relevant to a police investigation concerning, among other things, an aggravated hunting crime, and his thoughts on the real world’s Swedish forest fires of 2018 and the heat wave a few years later.

Much of this had potential. Rather than fairly briefly noting that he chose not to share the information, plus a short piece of dialogue with the person concerned, this is something that could have been developed and problematized much further, from many different angles. Instead, the space in the book is spent on reflections based on brief notes in 50-to-60-year-old hunting journals and old books about wild animals. The space spent repeatedly describing how, in his professional life, he used a quote from a particular translation of Kipling’s The Jungle Book could have been used to develop the exchange toward the end of the book with the person he finally concluded was guilty of the hunting crime, and, perhaps even more interestingly, his own thoughts before and during that situation. The real world’s recent extreme heat and forest fires, which are part of the story, could have been used to frame reflections on a whole range of questions, from “it hasn’t been like this before” to thoughts about the situation of wild animals; but the closest we get to anything of the sort is probably a reflection that the forest has, after all, always burned now and then even if this seemed worse than usual, that his wife managed to get hold of the last fan in the store, and that the dog couldn’t walk as far as usual.

The ending, on top of this, felt very abrupt. The feeling was almost one of “wait, it ends here?” and of the story being left unfinished.

This is a story about a retiree looking back at what he has accomplished in his life. As such, it can be somewhat worth reading, but the same story could probably have been written with some other animal observation at the start – or, for that matter, some entirely different opening event. And this particular book would likely have benefited from a few solid rounds with a good pair of scissors or a red pen, followed by various additions here and there. Probably around 20% of the text could have been cut without any reader noticing after some editing, and that space could instead have been used precisely to dive deeper into the fairly big questions that are, after all, touched upon.

Something else that dragged down the overall impression considerably was the technical quality of the ebook file. Chapter headings were missing; quotation marks or other indications of what was said, thought, and so on were missing. When I buy a book published by an established publisher (Albert Bonnier in this case) through an established retailer (Adlibris), I expect a book where it is at least clear what is what in the text. Here I sometimes had to guess what was descriptive text and what was something one of the characters said or thought. Ten seconds of human quality control would probably have caught these errors. It becomes extra embarrassing when the end of the ebook clearly states that it is “provided and reproduced unchanged” compared to the printed book.

Debian 12 with encrypted /boot, GRUB2 asking for root file system LUKS passphrase

After upgrading from Debian 11 to Debian 12 (which included an upgrade of GRUB 2 from 2.06-3~deb11u5 to 2.06-13) on a system with separately encrypted / and /boot, both using LUKS, GRUB began prompting for the LUKS passphrase to unlock the container holding the root file system, even though it had no need for it (and in fact booted perfectly fine if I just pressed Enter at that prompt).

The relevant part of the file system layout is:

  • GPT partitioning
    • partition 2
      • LUKS1 container
        • /boot
    • partition 3
      • LUKS2 container
        • /

This setup is based on the description of setting up encrypted /boot with GRUB 2 >=2.02~beta2-29 on Debian.

Repeated web searches did not bring up anything relevant, so armed with the LUKS container UUID (from cryptsetup luksDump) I started sleuthing through /boot/grub/grub.cfg to see where it referenced the LUKS container holding /. Surprisingly, I found it near the top, generated through /etc/grub.d/00_header, in a seemingly unrelated place: code intended to load fonts. This was somewhat unexpected because the second prompt actually appeared after a replacement font appeared to already have been loaded.

Looking through /etc/grub.d/00_header and trying to match what I was seeing in grub.cfg against its generation logic, I found that the location of the container UUID within grub.cfg matched a prepare_grub_to_access_device call described in an immediately preceding comment as “Make the font accessible”.

That, in turn, was controlled by $GRUB_FONT.

With this newfound knowledge, I took a stab at /etc/default/grub and noted the commented-out GRUB_TERMINAL=console, described as “Uncomment to disable graphical terminal”.

Well, I’m fine with an 80×25 text menu and the BIOS font for GRUB, so I figured it was worth a try. I created a new file /etc/default/grub.d/console-terminal.cfg setting that variable and ran update-grub; the generated /boot/grub/grub.cfg no longer referenced that LUKS container, and on rebooting, GRUB again prompted me only for the LUKS passphrase for /boot.
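For reference, the file contents need be nothing more than the single variable assignment:

```shell
GRUB_TERMINAL=console
```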

Success!

Extracting TLS certificate fingerprint on the Linux command line

It is sometimes helpful to get the details, not least the fingerprint, of a remote server’s TLS certificate from the command line.

Unfortunately, I’m not aware of any tool which is readily available on the typical Linux system to make this particularly easy.

Fortunately, one can be cobbled together using OpenSSL, which is rather universally available.

The first step is to get the certificate data itself in PEM format:

true | openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null

The true | at the beginning simply provides an empty standard input to the OpenSSL connecting process, and I explicitly specify -no_ign_eof to make sure it will exit once there is no more data to be read from standard input (which in this case will be immediately). The 2>/dev/null silences the complete output from the certificate chain validation; you can use -verify_quiet instead, which in the absence of certificate chain problems has almost, but not quite, the same effect.

Note that since openssl s_client is a general-purpose debugging tool, the TCP port number must be specified. For HTTPS web sites, the typical port number is 443. If you are connecting by IP address, you can use -servername something.example.com to set the SNI server name in the TLS session.

Given the certificate data in PEM format from the above command, openssl x509 can be used to display information about the certificate:

openssl x509 -in filename.pem -noout -text -sha256 -fingerprint

where filename.pem contains the output from the previous command. If -in is not specified, then certificate data is read from standard input.

Useful variations are -text to print lots of technical details from the certificate, and -sha256 -fingerprint to print the SHA-256 fingerprint. Including both will cause both to be printed. If for some reason you need the insecure MD5 fingerprint, use -md5 instead of -sha256. Fingerprints are printed in colon-separated hexadecimal notation.
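As an aside, if OpenSSL is not available, roughly the same fingerprint can be computed with Python’s standard library. This is my own sketch, not anything shipped with OpenSSL:

```python
import hashlib
import socket
import ssl

def colon_hex(digest: bytes) -> str:
    # Format a raw digest as colon-separated uppercase hex pairs,
    # mimicking openssl's fingerprint output style.
    return ":".join(f"{b:02X}" for b in digest)

def cert_sha256_fingerprint(host: str, port: int = 443) -> str:
    # Fetch the server's leaf certificate in DER form and hash it.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return colon_hex(hashlib.sha256(der).digest())

# Example use: print(cert_sha256_fingerprint("www.example.com"))
```

Note that, unlike the openssl s_client pipeline, this validates the certificate chain and will fail outright for, say, a self-signed certificate.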

Putting all of this together, we get the following somewhat long command:

true | openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null | openssl x509 -noout -sha256 -fingerprint

If you want to introduce something like torsocks to this, it should generally go with the openssl s_client command, as that is the part that is actually making the outbound network connection:

true | torsocks openssl s_client -connect www.example.com:443 -showcerts -no_ign_eof -certform PEM 2>/dev/null | openssl x509 -noout -sha256 -fingerprint

Both of these will, if successful, print the SHA-256 fingerprint of the TLS certificate received from the server. At the time of writing, this results in this single line of output:

sha256 Fingerprint=5E:F2:F2:14:26:0A:B8:F5:8E:55:EE:A4:2E:4A:C0:4B:0F:17:18:07:D8:D1:18:5F:DD:D6:74:70:E9:AB:60:96

And there it is!

Back up your Mastodon!

A lot of people have joined the Fediverse lately, and many have done so through one of the many Mastodon instances, foremost among them, but certainly not the only one, mastodon.social.

Especially if you’re not self-hosting Mastodon or paying for a personal instance, the risk always exists that your chosen instance will shut down. For example, the large instance home.social recently effectively shut down, and social.vivaldi.net had difficulties from which the team were able to recover. Many Mastodon instances are run by individuals on what is effectively a best-effort basis and while responsible administrators will provide advance notice in case of a shutdown, things can happen that will cause an instance to shut down with very little or even no advance notice.

It is possible to migrate accounts from one Mastodon instance to another, but the process is lossy and requires the cooperation of the old instance.

For this reason, it’s a good idea to regularly back up your profile and to have at least one backup account on a different instance, run by different people, to which you can move should something happen to your preferred instance.

I speak specifically of Mastodon in this post, but the same general principle also applies to other Fediverse applications.

I recommend setting a calendar reminder for going through the backup process. Depending on how prolific a user you are, you may want to do it once every few weeks or maybe once every six months.

Here’s how you can do it.

Making a backup

  1. Log in normally to your Mastodon account.
  2. In the left-hand side bar and next to your profile picture, click Edit profile.
  3. Go to Import and export, then Data export, then click Request your archive. This will take a while to complete, but it runs in the background on the server and you will be notified when it finishes.
  4. Still under Data export, there is a series of CSV download links. Use each in turn to save the file somewhere you will be able to find it easily if you need it.
  5. Go to Filters, and then click on Edit filter for each in turn. You can open these in tabs if you want to. Save each page to a file locally; you can save just the web page without graphics (what Firefox calls “Web page, HTML only”), but it will look better if you also save the rest of it (“Web page, complete”). You can also use a browser add-on such as SingleFile for Firefox to save each page as a single self-contained file.
  6. Click Back to Mastodon, click the button near the Edit profile link you used previously, then click Followed hashtags in the menu that opens. On the resulting page, scroll all the way down to the bottom (you may need to scroll multiple times for this). Select all of the content, copy it to a text file and save that file together with the rest of your files.
  7. By now, it’s quite possible that your archive is complete. Go back to Edit profile, then Import and export, then Data export, and look at the bottom of the page. You should see a Download your archive link near the current date and time. Download that file and save it with the rest of your files.

At this point, you have a copy of most of what makes up your Mastodon experience. If you want to, you can also go through the rest of the profile settings and save those.

Set up a second account

If you haven’t already, consider setting up a second account on a completely different instance, which you can switch to in case your preferred instance becomes unusable for some reason. Just sign up on a different instance with the same (or a different) email address. It’s helpful, but not a requirement, if this account has the same username (the part between the two @ signs, like example in @example@mastodon.social) as your existing account, as well as the same avatar. You can use the display name or bio field to indicate that this is a secondary account belonging to the same person as your existing (named) account, and otherwise leave the account dormant if you want to.

Restoring on a different account

The best way to migrate to a different instance is by using the account migration feature built into Mastodon. However, if that isn’t possible for some reason, you can use your backup to get back up and running with a new account.

To restore your backup onto a different account, go to Edit profile, then Import and export, then Import. For each entry under Import type except for following list, select the corresponding file that you downloaded previously, and choose whether to merge or overwrite what’s in the account you are importing to, then upload. Again: don’t upload your following list just yet!

Once that is done, go to Filters and set up your filters according to the files you saved previously.

Finally, go to Profile and update your display name and/or bio as appropriate.

Then go back to Mastodon, and for each tag in your followed tags list, paste it into the search box, then navigate to the correct tag and follow it.

Post a short #introduction saying that you are the same as whatever your previous handle was. Remember, from an outside point of view, this profile is empty!

Once that is done, go back to Edit profile, Import and export, Import and import your following list.

It will take a short while for your home feed to repopulate, but this should get you back mostly to where you were, except for old posts which will still be on the old instance (or gone, if the old instance shut down).

Forget what everyone tells you makes a password strong

Yes, the title is a little bit click-baity. But please bear with me for a moment.

The Web is replete with the traditional advice on “how to create a strong password”; a quick web search for secure passwords brings up page after page of it. Really, I could go on. Except I won’t.

I’m here to tell you that adding random symbols to your password does not make it appreciably more secure. Even mixing letter case (lowercase and uppercase) doesn’t help a lot.

In my password tips, I mention a few different ways of generating passwords which have a work factor of approximately 2^77, which is plenty enough for most people even if the place where the password is used messes up the basics of handling passwords. In brief, the work factor is simply a number that expresses how hard a password (or other secret) is to guess; the higher the work factor, the more secure it is.

Assuming that a password is generated at random, mathematically the work factor is simply the size of the symbol set raised to the power of the length of the password. The work factor is commonly expressed in bits (as a power of two), in which case you take the base-2 logarithm of this value. This likely sounds more complicated than it is; just keep in mind that as long as the password is generated at random, the larger the number, the more secure the password is.

An alphabetical password, using the lower-case English letters only (a-z), has a symbol set size of 26. An alphanumeric password with mixed case (a-z, A-Z, 0-9) has a symbol set size of 62 (which is 26+26+10). Assuming 20 symbols (for example, the set !?@#$%&{}[]+-*/\.,<>), this pushes us to a symbol set size of 82.

Now, what does it take to get to 2^77 with each of those?

With a simple alphabetical password, not varying case, 16 characters gives a 2^75 work factor, while 17 characters gives 2^80. Since we don’t have half-characters, I’ll call this 17 characters.

Mixing upper and lower case, 13 characters gives 2^74, and 14 characters gives 2^80. Again, lacking half-characters, I’ll go with 14.

Adding digits, 13 characters gives 2^77 for the mixed-case alphabetic set, as does 15 characters for single-case alphabetic plus digits.

Adding those 20 symbols to the mixed-case alphanumeric symbol set, 12 characters gives 2^76 (which I figure is close enough to our 2^77 target for a meaningful comparison).

Or, laid out as a table:

Symbol set                   Least characters for ≥2^77   Example password
a-z                          17                           quoakithoozafebau
a-z, 0-9                     15                           leyie7aineih8mu
a-z, A-Z                     14                           voiWahnuuZuxuu
a-z, A-Z, 0-9                13                           Eu0ighaeJ2aex
a-z, A-Z, 0-9, 20 symbols    12                           ye3M&e5rae{f

Password lengths for a given security level, by symbol set
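These numbers are easy to reproduce: as described above, the work factor in bits is just the password length times the base-2 logarithm of the symbol set size. A quick sketch in Python:

```python
import math

def work_factor_bits(symbol_set_size: int, length: int) -> float:
    # log2(symbol_set_size ** length) == length * log2(symbol_set_size)
    return length * math.log2(symbol_set_size)

# Symbol set sizes from the text (26, 36, 52, 62 and 82 symbols), each
# with the shortest length that reaches, or for 82 nearly reaches, 2^77.
# Prints 2^79.9, 2^77.5, 2^79.8, 2^77.4 and 2^76.3 respectively.
for symbols, length in [(26, 17), (36, 15), (52, 14), (62, 13), (82, 12)]:
    bits = work_factor_bits(symbols, length)
    print(f"{symbols:2d} symbols, {length} characters: 2^{bits:.1f}")
```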

Indeed, compared to only a lower-case English letters password, to keep approximately the same security level, with all of this we still only reduced the length required from 17 characters to 12 characters. And in doing so, we went from a password similar to kohthaephaeguahxe to one similar to Ee*ix&p0chFi.

Although not directly spelled out, this mathematical truth is almost certainly part of the reason why NIST (which sets information security standards for US government organizations) in 2017 changed its previous advice on password complexity, and now says, among other things:

Verifiers SHALL require subscriber-chosen memorized secrets to be at least 8 characters in length. Verifiers SHOULD permit subscriber-chosen memorized secrets at least 64 characters in length.

[…]

Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets.

NIST Special Publication 800-63B, section 5.1.1.2, July 2017 (as updated through 03-02-2020)

Certainly, if you are using a password manager to handle that password (and you almost certainly should), including additional types of symbols in your password won’t exactly hurt. But doing so is not the password strength panacea it is often presented as.

If you are using a password manager to handle the password, then you shouldn’t be typing it in anyway, in which case the length savings for a similar security level is essentially irrelevant.

Also, when people think of “including digits and symbols in the password”, more often than not this means doing things like replacing the letter O with the digit 0, replacing the letter A with the symbol @, or putting some easily typed symbol like ! or % at the end of the password. Password crackers have been on to this game for years and years. Technically, doing this does increase security, but only by a minuscule amount.

For extending the symbol set beyond plain a-z to provide any significant benefit, the password must actually be randomly generated, and even then it only really helps when character count is the limiting factor. Increasing password length provides huge returns in password security even without extending the character set; for example, a 20-character single-case alphabetic password already has a work factor of 2^94, which is 2^19 or about 500,000 times stronger than one 16 characters long – a mere four additional letters.

In the introduction to the documentary film Citizenfour, about the 2013 Edward Snowden revelations, a passphrase attack rate of one trillion guesses per second is mentioned for PGP secret keys, and that figure has likely served as a good rule of thumb since then. The PGP S2K (string-to-key) function is unfortunately notoriously weak by modern standards, and top-of-the-line CPU transistor counts have increased by roughly a factor of 10 since then, so assuming a present-day rate of ten trillion guesses per second for a highly motivated, highly resourceful adversary is probably not unreasonable. (Most people don’t need to worry about the NSA trying to figure out their social media password!) Ten trillion guesses per second is approximately 2^45/s. Because of how exponents work out, you can simply subtract this exponent from that of the work factor of your password to estimate how long it would take to crack.

A password with a 2^77 work factor, at that attack rate (given present-day technology), would have a reasonably guaranteed breach in 2^(77-45) = 2^32 seconds, and on average succumb to the attacker in half that time. Half of 2^32 seconds is approximately seventy years. And again, this is against a highly motivated, highly resourceful adversary.
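The arithmetic is quick to check (2^45 guesses per second against a 2^77 work factor, as above):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.16e7 seconds

# Guaranteed breach: 2^77 guesses at 2^45 guesses/second = 2^32 seconds;
# on average the attacker succeeds in half that time.
guaranteed_seconds = 2 ** (77 - 45)
average_years = (guaranteed_seconds / 2) / SECONDS_PER_YEAR
print(round(average_years))  # → 68, i.e. approximately seventy years
```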

To within experimental error, nobody is going to spend even 70 years on cracking that one password. And if you are worried, add two more letters to it for an 18-19 character password; each additional lower-case letter multiplies the cracking time by 26, so those two letters push the average out to tens of thousands of years.

None of this is to say that you shouldn’t make sure you use strong passwords. But do know that simply adding a non-alphanumeric character to a password won’t necessarily significantly increase its security, and doing it properly will likely make your password a good bit harder to type correctly.

Forcing reasonable scroll bar colors in Firefox

One of the more recent fads in web design is to use custom colors for scroll bars. Sometimes this is done to good effect, but more often than not, it causes more problems than it helps. For example, in both its light and dark themes, Mastodon has extremely low contrast between the scrollbar background (the track) and the draggable scrollbar handle, which in modern user interfaces both indicates the current position within a scrollable area (such as a web page) and gives an approximation of the size of that area.

After being annoyed at this for some time, I decided to solve the problem. There are probably browser add-ons for this, but I am wary of installing more add-ons than necessary, especially ones which by necessity must have the ability to muck around with the content of every single page I browse to.

First, locate your Firefox profile directory. On a Linux system, for example, your browser profile will likely be stored in a directory .mozilla/firefox/abcdefgh.* within your home directory, where abcdefgh is a random identifier and the part after the period is some moderately descriptive name (possibly default). Regardless of the platform you are using Firefox on, you should be able to navigate to about:support and look at the Profile Directory to obtain the full path to your profile directory.

Second, if under it you do not already have such a directory, create a directory named chrome. (This has nothing to do with Google’s web browser, but mostly refers to the parts of the web browser surrounding the actual web page: tabs, address bar, windowing controls, and so on.)

Third, within the chrome directory, create a plain text file named userContent.css and put something like the following in it:

html {
    scrollbar-color: yellow navy !important;
    scrollbar-width: auto !important;
}

For scrollbar-color, the first color is for the handle, and the second color is for the scrollbar background. You can use whichever colors you like, but you should use colors that contrast against colors typically used as background on pages that you regularly visit. You can specify a color in any way that works in CSS, including by CSS color name (as in my example above), a hexadecimal RGB value (such as #d2691e or #ff0), or using the rgb() CSS color function.

If, for example, only a single domain is problematic, you can restrict the rule to that domain by wrapping it in an @-moz-document rule, like so:

@-moz-document domain("example.com") {
    html {
        scrollbar-color: yellow navy !important;
        scrollbar-width: auto !important;
    }
}

Fourth, save that file, and then open a Firefox browser window and navigate to about:config. If you have not disabled it, you will get a scary warning about the need to be careful. Proceed past that warning and then, in the search field at the top of the page you get to, enter toolkit.legacyUserProfileCustomizations.stylesheets. Verify that the entry that shows up has that name.

If the value shows as false, then double-click on it to change its value to true. If the value already shows as true, you are done here. Close that tab.

As a last step, completely restart the browser by closing all Firefox windows and then restarting Firefox.

Navigate to a long web page, such as a favorite blog, social media or news site, and look at the scroll bar. Its colors should now reflect your choices, rather than that site’s designer’s.

Disable clipboard sharing (clipboard integration) with QEMU/KVM and SPICE

For some reason that eludes me, sharing of the clipboard between the KVM host and the guest (sometimes referred to as clipboard integration) is on by default with QEMU/KVM and SPICE graphics.

To disable such clipboard sharing:

  1. Edit the guest definition XML; virsh edit $GUEST and locate domain/devices/graphics, or through virt-manager’s View > Details > Display Spice > XML
  2. Add or update an element <clipboard copypaste="no"/> under the <graphics> node
  3. Save/apply the change
  4. Fully shut down the VM (if running)
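After the change, the relevant part of the guest XML might look something like the following; the type and autoport attributes here are purely illustrative and will vary with your setup:

```xml
<graphics type="spice" autoport="yes">
  <listen type="address"/>
  <clipboard copypaste="no"/>
</graphics>
```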

If starting the VM from the command line, another option is to try adding -spice disable-copy-paste to the qemu-system-* command line.

Clipboard sharing between the host and the guest will be disabled the next time the VM is powered on.

Book series review: Wolves of the South, by Hannah Steenbock

Hannah Steenbock, in the Wolves of the South series of thus far six books (with a seventh being worked on), does somewhat of a headflip of the typical trope of portraying werewolves as bloodthirsty monsters unable to control their instincts and signature traits, not least their shapeshifting.

Instead, we get to follow a race of wolf shifters with a strong sense of family and honor, strong ethics and a clear sense of what they feel is right and wrong, who are just as in control of themselves as are most real-world humans. Struggling to survive.

Hunted by men.

The first book in the series, A Wolf’s Quest (which is available free of charge in ebook form; the rest of the series costs a few dollars or euros per book for the ebooks, and is available DRM-free for reading on almost any type of device without any particular software requirements), focuses primarily on the trials of Ben and Sylvia, whose chance meeting at a gas station eventually grows into a romantic interest. As the series goes on, we get to meet more and more of these wolf shifters, who prefer to refer to themselves as simply “wolves”, and to see their struggle. Naturally, we also get to meet others, including Hunters – humans adamant about exterminating the wolves, who will stop at almost nothing in their efforts to do so – as well as people caught variously in between the sides, who end up having to make difficult choices knowing that they will have to live for the rest of their lives with the consequences of whichever choice they make.

The series is set in a world very similar to our own, geographically in the United States, complete with much of the technology we are used to, including computers and the Internet. Within Steenbock’s world, too, lore about werewolves very similar to that of our real world exists – and, it turns out, not only are the wolves aware of it, but some of it is actually true, and some of it they wish were true. With the exception of the Hunters, however, the generally held belief among humans is that “werewolves” don’t and cannot exist, with predictable results when people realize what these wolves are and that they are actually for real – some humans taking the revelation better than others. The wolves also do their best to fit in, including working with humans as everything from sheepdogs to security personnel, generally without revealing their true nature, since with Hunters around who might learn of it, keeping it concealed can be a matter of life and death for both themselves and their friends.

Although the series is described as not having “steamy” scenes, and the reader doesn’t get to see much when it happens, there is strongly implied sex in several places in the books, to say nothing of how often it’s mentioned that the wolves think largely nothing of being stark naked even in their human form. And while Steenbock describes the series as lacking in “that tacky alpha nonsense”, these books do show the concept of alphas in a manner more reminiscent of real-world wolf packs, which are actually largely just families or occasionally extended families where “alpha” is often simply another word for “parent”: individuals assume a leadership position not by forcing others to do their bidding, but because others choose to follow their lead and defer to their experience. And just as in the real world, theirs too is inhabited by individuals who are more interested in power for personal gain than in doing what’s best for the group, as well as individuals who follow another’s leadership while also questioning the choices of the one they defer to.

The pacing within each book is what I would consider moderate. This is not a hack-and-slash litany of one fight right after another; on numerous occasions the characters are shown doing relatively mundane things, including simply having dinner with their friends, yet there’s always enough going on to drive the narrative forward. This pacing gives the reader sufficient time – and encouragement – to get to know the characters, their motivations, fears and dilemmas, without making it feel like events drag on for significantly longer than the storytelling warrants. Also, if you know your canine behavior and body language, you’ll notice that Steenbock has worked in some just-right little details here and there, as well as details which are decidedly more human than wolf – and some that aren’t quite either.

On the whole, this is a series where it’s easy to get drawn in and care about what happens to the characters.

Unfortunately, when looked at as a series, the pace of the story is slowed down quite a lot by the fact that a good portion of the beginning of two of the books in particular (A Wolf’s Fear and A Wolf’s Honor, the second and third book respectively) is spent simply catching up to the point in time the story had reached at the end of the previous book, but from the point of view of other characters. The upside is that we get to see what has happened through the eyes of, and to, characters with whom we have not yet spent any significant time, with much less head-hopping as it happens, and that it keeps the number of characters introduced in each book more manageable; the downside is that it uses up a good number of pages which could have more directly advanced the story. In the case of A Wolf’s Honor, this adds up to approaching two thirds of the book, which, even though the events depicted were themselves captivating and are referenced a number of times later in the series, still left me with a feeling of so when do we get to where we were? This got perhaps especially frustrating since Steenbock tends to finish each book with a fairly large cliffhanger.

Another thing to be aware of is the head-hopping, with each chapter written from the point of view of a different character. This kind of point-of-view shifting is something a lot of readers will either love or hate, but regardless of one’s take on it, it does allow events to be followed from different characters’ perspectives without relying on, for example, an omniscient third person, which can easily be even more jarring, not to mention feel detached, as if reading a news recap rather than a first-hand or second-hand account of events; or on a single closely followed point-of-view character, possibly also the narrator, describing only what they are able to observe and what they are told as they become aware of events. I don’t mind the technique, but in this specific case, even though each chapter is clearly labelled with the name of the viewpoint character, I found myself on more than one occasion having to think about who the “I” referred to. It’s a small issue, but it did break immersion a little for me at times. More clearly establishing the narrating character within the story itself near the beginning of each chapter probably could have helped.

And then, right at the end of the sixth and currently last book (A Wolf’s Peril), Steenbock pretty much closes with a bombshell revelation that apparently caught even the main characters themselves by total surprise. I won’t ruin anyone’s enjoyment by telling what it is, but it’s something I do hope Steenbock explores a bit further in future books in the series or set in the same world.

Overall, if you enjoy reading paranormal fiction, particularly involving werewolves, but don’t care for Hollywood-style werewolves portrayed as they often are in horror movies, then the Wolves of the South series may very well be money well spent.

Linux gocryptfs on rclone mount giving “operation not permitted” errors

Trying to run a gocryptfs encrypted file system mounted within an rclone mount, with the remote file system itself accessed over SFTP, I ran into an annoying issue where writes would fail with an “Operation not permitted” error, yet the file in question would appear within the gocryptfs file system (so clearly something was working).

Slothing through the output of running rclone mount with -vv but without --daemon to try to find clues as to what was actually going on, I came across these two lines:

yyyy/mm/dd hh:mm:ss ERROR : <encrypted filename>: WriteFileHandle: ReadAt: Can't read and write to file without --vfs-cache-mode >= minimal

yyyy/mm/dd hh:mm:ss DEBUG : &{<encrypted filename> (w)}: >Read: read=0, err=operation not permitted

Well, there’s the error being returned, and the cause.

Per the documentation, rclone mount --vfs-cache-mode takes one of off, minimal, writes, full; and the default, lo and behold, is off, which is clearly less than minimal.

Adding a --vfs-cache-mode minimal to the end of the rclone mount command seems to have fixed it, insofar as the error is gone and writes appear to go through fully as intended.
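Put together, the working setup looks roughly like the sketch below. Note that the remote name, the encrypted directory on the remote, and both mount points are hypothetical placeholders for illustration, not the actual paths from my setup:

```shell
# "myremote" is assumed to be an rclone remote of type sftp, and the
# gocryptfs cipher directory is assumed to live at "crypt" on it.

# Mount the remote with at least minimal VFS caching, so that files
# opened for both reading and writing (as gocryptfs needs when it
# updates its encrypted blocks in place) are supported:
rclone mount --daemon --vfs-cache-mode minimal myremote:crypt /mnt/remote

# Then mount the gocryptfs file system stored inside the rclone mount;
# the decrypted view appears at /mnt/plain:
gocryptfs /mnt/remote /mnt/plain
```

With --vfs-cache-mode off (the default), rclone only supports purely sequential writes to a file opened write-only, which is why gocryptfs’s read-modify-write pattern fails without the cache.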
