Netatalk: AFP over AppleTalk and TCP/IP

Sharing files with retro computers can be problematic. Nowadays Samba and ssh make it easy, but back then such sharing protocols were nowhere near as ubiquitous as they are now.

AppleTalk Filing Protocol (AFP) was a fairly ubiquitous sharing protocol in the Mac world until Apple deprecated it in favour of SMB (the protocol Samba implements). AFP was still around until Apple finally removed the server with the release of macOS Big Sur. I’d used a mixture of AFP, NFS and Samba up to that point for sharing files and music/video, but then I got rid of both NFS and AFP and switched to Samba alone. Since then I’d also set up a new NAS and retired the old one to backup duty. I never bothered configuring AFP on the new NAS.

Anyway, with my new-found interest in Mac OS9 I found myself regretting getting rid of AFP. It’s been around in the Mac world for an age, even before the Mac moved to TCP/IP only, back when AppleTalk was the network protocol.

Netatalk is a project to keep AFP alive. It’s actively developed and packages are available for Debian (which my new NAS runs!). However, when I went to install it, it wanted to install a LOT of other packages too. My NAS is headless and runs plain vanilla Debian, yet Netatalk wanted to pull in a lot of zeroconf stuff and even some audio libraries. Forget that; I want to keep my NAS minimal!

So I switched to Docker, for which the Netatalk project provides official images. After a bit of messing about that runs fine. The image builds your afp.conf file from a series of environment variables that you can set in the Docker Compose file. I will post this in a few weeks when I’ve added a few other cool things for Mac OS9. For example Web Rendering Proxy (WRP), which lets the retro browser on the retro Mac access HTTPS sites by serving an image with clickable areas!
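I’ll post the full Compose file later, but as a rough sketch it looks something like this (the image tag and variable names here are placeholders from memory; check them against the Netatalk Docker documentation rather than copying this verbatim):

```yaml
services:
  netatalk:
    # placeholder tag; I later switched to the Netatalk 2 image for AppleTalk
    image: netatalk/netatalk:latest
    network_mode: host          # AppleTalk discovery needs the host network
    environment:
      AFP_USER: chris           # hypothetical credentials for the share
      AFP_PASS: changeme
      SHARE_NAME: afpshare
    volumes:
      - /mnt/storage/afpshare:/mnt/afpshare
    restart: unless-stopped
```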

Netatalk 3 is mainly aimed at OSX/macOS: it serves AFP over TCP/IP only, the AppleTalk transport having been removed. This works fine for Mac OS9 as long as you manually connect using the IP address, but hey, we want the simplicity of auto-discovery! I struggled with this for some time before realising that I needed Netatalk 2, which can still serve AFP over AppleTalk as well as over TCP/IP, and it is AppleTalk that provides the auto-discovery. All quite confusing. I was further confused by finding that if I used the standard Docker image of Netatalk 3 then Mac OS9 failed to mount the share, whereas if I built the image myself (from the same Git repository the developers build their Docker image from) it worked fine! This red herring wasted quite a bit of my time until I actually read the Netatalk documentation!

Anyway, once I realised this I switched my Docker Compose file to use the netatalk2 image and Mac OS9 immediately saw the share (with Netatalk 3 I had to manually enter the IP address). It even allowed me to save the login details into the Keychain.

Chooser on an old retro Mac OS9 desktop.
Here’s my NAS (U2) in Chooser. Note all my machines are named after whatever music is playing at the time. I used to love U2, but not since the eighties! It was playing on the radio though, so….
An AFP login dialog on an old retro Mac OS9 desktop.
The user/password is still the default one for Netatalk! But it’s added to the Keychain.

The NAS share is also served by Samba, so I can connect to it from my main Mac and Linux workstation. Both protocols give read access to guests but read/write only to a logged-in user.

A network share on an old retro Mac OS9 desktop.
The afpshare folder.
The Network Browser

Adding two SATA SSDs to a six-drive SATA RAID10 array.

I needed a bit more space on my NAS’s RAID10 array, which was 6 x 2TB drives. To be honest I’m not sure why I am using RAID10. The array was initially in my main Linux workstation; then I decided I needed a separate NAS as my existing one was way too small, so I moved the array across. RAID5/6 would give me more space, but I guess I like the flexibility and the ability to survive the failure of two disks (as long as they are not both halves of the same mirror), even though mirroring halves the usable space!
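As a quick sanity check on that halving, usable capacity in a two-copy (near) RAID10 is simply half the raw capacity:

```shell
# Usable space in a 2-copy RAID10 is raw capacity divided by 2
drives=6; per_drive_tb=2
raw=$(( drives * per_drive_tb ))
echo "raw: ${raw}TB, usable: $(( raw / 2 ))TB"
# ...and after growing to 8 drives:
echo "usable after grow: $(( 8 * per_drive_tb / 2 ))TB"
```

That 6TB figure matches the 5860141056 1K-block size mdstat reports for the six-drive array below.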

The server runs Debian and is headless. I know there are loads of NAS OSs, but I do prefer to do things myself. The boot/root partitions are on a single SATA SSD and the (now) eight drives are plugged into an HP Smart HBA H240.

I found two “consumer” Crucial 2TB SSDs on Black Friday for £65, which seemed reasonable. I did wonder how well two SSDs would do in a RAID array alongside spinning drives. Let’s find out! So this is what I did (which I am blogging about so I do not have to remember next time!). Interestingly, the last time I blogged about growing a RAID array was quite some time ago. That’s also the only time I’ve ever had comments on my blog (127 to be exact!). I think growing RAID arrays with mdadm was quite new back then.

The procedure to add the new devices and grow the RAID array is:

Procedure to grow an array with two new disks

  • Physically add the new disks
  • Partition them
  • Add the disks to the array
  • Increase the number of active disks in the array to grow it
  • Grow the filesystem

The new disks are /dev/sda and /dev/sdd.

Partition

For GPT disks, use sgdisk to copy the partition table from an existing disk to a new one.

Backup first of course!

sgdisk /dev/sdX -R /dev/sdY

sgdisk -G /dev/sdY

The first command copies the partition table. So, to copy the partition table of sdb to sda and sdd:

sgdisk /dev/sdb -R /dev/sda

sgdisk /dev/sdb -R /dev/sdd

Now randomise the GUID of each new device:

sgdisk -G /dev/sdd

sgdisk -G /dev/sda

Add new devices

mdadm --add /dev/md1 /dev/sdd1 /dev/sda1

mdadm: added /dev/sdd1

mdadm: added /dev/sda1

These get added as spares, as the number of active devices has not changed. Let’s check:

# cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid10 sda1[8](S) sdd1[7](S) sdc1[5] sdg1[0] sde1[6] sdh1[3] sdb1[1] sdf1[4]
5860141056 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
bitmap: 4/22 pages [16KB], 131072KB chunk

Increase number of devices to include new ones

mdadm --grow --raid-devices=8 --backup-file=/mnt/USB/grown_md1.bak /dev/md1

The --backup-file option creates a backup file in case the power fails. Not essential, as I have a UPS, and the filesystem can stay mounted. However, to speed things up I turned off all services except the DNS/DHCP server; the less disk activity, the quicker the reshape finishes.

The reshaping took about 20 hours. Much less than I thought.

Now we need to resize the filesystem. Unmounting is not essential for growing an ext4 filesystem (although it is for shrinking), but hey, it’s a lot safer, so I shut everything down and unmounted it:

systemctl stop smbd
systemctl stop docker
umount /mnt/storage

resize2fs /dev/md1

This gave an error that the filesystem needed checking first.

e2fsck -f /dev/md1

resize2fs /dev/md1

This took about 30 minutes.

Finishing off.

Now let’s get it all back up and running.

mount /mnt/storage

systemctl start smbd

systemctl start docker

The entry for the mdadm device in mdadm.conf does not need updating. Previously it did, but I think that was when I was using the 0.9 metadata format.

mdadm --detail --scan

cat /etc/mdadm/mdadm.conf

One bizarre issue: when I restarted all the Docker containers they pulled fresh images rather than using the existing ones. I have no idea why that happened.

Plan9 – what I do after an install

This was written in Markdown using acme on Plan9, with an sshfs-mounted folder of my NextCloud setup (via a Linux host). Tidied up in Drafts, then posted from macOS using MarsEdit! This post is really for me… it will also be a WIP.

initial post install stuff

Create a new user

Everybody seems to use the default user, Glenda (from Plan 9 from Outer Space). This is the equivalent of a root* unix user. Except I do not like that.

*Well, almost. There is no all-powerful root user in Plan9.

This also shows how you run a lot of administrative commands by cat’ing text to a running process. /srv/cwfs.cmd is the control file of the file server process that runs the CWFS filesystem (it’s different if you run another filesystem). You can do all of this in one step with the “con” tool, but I prefer to do each step one by one as I remember it better.

echo newuser chris >>/srv/cwfs.cmd

You also need to add the new user to the sys and upas groups.

echo newuser sys +chris >>/srv/cwfs.cmd

echo newuser upas +chris >>/srv/cwfs.cmd

Various how-to pages also suggest the “adm” group, which I think is the equivalent of root. I have not yet added my user to it and have not hit any errors.

term% cat /adm/users
-1:adm:adm:glenda
0:none::
1:tor:tor:
2:glenda:glenda:
3:chris:chris:
10000:sys::glenda,chris
10001:map:map:
10002:doc::
10003:upas:upas:glenda,chris
10004:font::
10005:bootes:bootes:

I’ve no idea what the other groups are for!

Then reboot and log in as the new user. You will see errors, but run the newuser script to set up your $home folder and profile:


/sys/lib/newuser

customise plan9.ini

This requires you to mount the 9fat filesystem, which contains a plain text file (hey, everything is a plain text file!) that configures the boot process.


9fs 9fat
cd /n/9fat


“9fs” is equivalent to mount and “9fat” is the partition.


Nice and simple.


You now need to familiarise yourself with a text editor. Acme is the best. I have also found myself having to use “ed”, a line-based text editor. It’s OK, but very basic and quite painful for anything other than simple emergency edits. Keep backups!

acme plan9.ini

Setup plan9.ini to boot straight into my user account

Once you have your user, you can bypass the boot prompts asking which filesystem to boot from and which user to log in as.
Remember that any time you are prompted for options in the boot process you can type “!rc” at the prompt to launch a minimal shell to fix the issue. And booting from the USB install image always lets you easily mount the 9fat partition and repair things.

change the video driver to IGFX
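Neither of these steps needs more than a line or two in plan9.ini. Mine ended up with entries along these lines (a sketch from memory, based on the 9front FQA; check the values against your own hardware before rebooting):

```
nobootprompt=local!/dev/sdE1/fscache
user=chris
monitor=igfx
vgasize=1280x1024x32
```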

customise your desktop – rio

Rio is the windowing system.

To customise Rio you need to edit your profile

acme $home/lib/profile

I now configure my profile to load $home/bin/rc/riostart, which autostarts a few tools and a few rc shells.
Change

rio

to

rio -i riostart

Mine is:

#!/bin/rc
window 0,0,161,117 stats -lmisce
window -miny 130
window bar
# run a system shell on the serial console
~ $#console 0 || window -scroll console

Bar is a cool little tool you will have to compile and install.

Change the font

Change the following line. There are many different fonts in /lib/font:

#font=/lib/font/bit/vga/unicode.font
font=/lib/font/bit/dejavusans/unicode.14.font

keyboard

Add this line just before loading rio:


cat /sys/lib/kbmap/uk > /dev/kbmap

misc

Setup SSH

The instructions are here. However, there is an omission: “role=client” needs to be added to that first line.

auth/rsagen -t 'service=ssh role=client' >$home/lib/sshkey # generate private key
auth/rsa2ssh $home/lib/sshkey >$home/lib/sshkey.pub # generate public key, if you need to share it
cat $home/lib/sshkey >/mnt/factotum/ctl # put the private key in the password manager
echo 'ssh sha256=DDDDDDDDDDDDDDDDDDDDDDDD server=scotgate.pixies' >> /usr/chris/lib/sshthumbs

a few useful tips

Mounting a linux filesystem over SSH

sshfs is useful.

sshfs chris@scotgate.pixies
cd /n/ssh

..will mount your $home folder on the remote host as /n/ssh

Note that if you are using Drawterm then /mnt/term already carries the root filesystem of the host machine.

Software to install

Treason

Plan9 continues

So Plan9 is becoming a very large time sink! After playing with it in a QEMU VM I decided to buy a cheap mini PC and install it on some real hardware (a Lenovo ThinkCentre with 16GB RAM and an i5; £70 from eBay).

Installing the 9front fork of Plan9 is really easy. I did decide to complicate matters and use two drives, so I had to partition them manually, as the install script is quite simplistic (but really does make life easier!). Plan9 is a distributed OS: it needs a CPU server, a file server and an auth(entication) server, and then clients. Due to its quite ancient heritage, the file system (CWFS) uses a cache and a WORM (write once, read many) partition, which in times gone by could have been an optical disc jukebox. As the name suggests, the WORM partition is read from, with writes instead going to the fscache partition.

Other partitions are:

  • 9fat (DOS partition where the boot parameters are stored in a file called plan9.ini)
  • other
  • nvram (used to simulate a real NVRAM storage where authentication stuff is stored).

Drive/partition naming is very similar to Linux:

/dev/sdE1/fscache
/dev/sdE0/fsworm

So for some inane reason I put the WORM partition on one largish spinning SATA drive and the rest on a different drive. Of course I made life harder for myself. Plan9 prides itself on a simple filesystem layout. It does not really have command completion, as the philosophy is that files should not be squirrelled away in hard-to-remember locations (you do have simple filename completion with Ctrl-f). I’m not sure I quite agree with that, but what it does do is automatically find partitions and mount them (no fstab needed here). But since my main partitions are on a separate drive, which I went and attached to the second SATA port, it cannot find them, so I do have to specify them in plan9.ini.

9fs 9fat
cd /n/9fat
acme /n/9fat/plan9.ini

(More on acme later. A quick-to-learn but wacky text editor.)
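The change I needed was just the boot target pointing at the fscache partition on the second drive; a single line along these lines (a sketch, as the exact device name depends on which SATA port the drive ended up on):

```
bootargs=local!/dev/sdE1/fscache
```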

Of course I found this out the hard way, with an unbootable system, while configuring it to allow remote connections (see next section). But a quick reboot from the install USB stick and I could mount the 9fat partition. Unfortunately, without a graphical environment none of the text editors worked; but sed does. A quick reboot later and all was well. I now create backups whenever I edit plan9.ini!

Now, how do I connect to this headless Plan9 server? With Drawterm of course! Drawterm is an emulated Plan9 terminal. Calling it emulated is not really accurate, as it runs natively on whatever OS you install it on (for me that was Linux) and speaks the 9P protocol to connect.

But to do this you need to configure the authentication server and allow remote connections

Anyway with the networked OS model I got Drawterm working on Linux and the CPU, file and auth servers are all running on the ThinkCentre.

So after quite a few hours I find myself with this!

Drawterm on the right, Plan9 under QEMU on the left

Plan9 running under QEMU, with Drawterm on the right connected to a real Plan9 server.

Spot the difference?

No, me neither!

Taking procrastination further – Installing Plan9 under QEMU

So instead of doing something useful and needed (like gardening!) during a few hours off last week, I decided that what I really needed to do was install Plan9!

A few months back I got various Windows NT and OS/2 VMs running. Then Haiku (an open-source reimplementation of BeOS) the other week, and now this!

I started with the last “official” build of Plan 9. This was difficult, and I had to abandon it as I just could not get it sufficiently configured to launch Rio (the window manager). However, further googling showed there is a fork of Plan9 called 9front. This is under continual development, and the build I installed was just a few months old. Under QEMU (using libvirtd) it was a pretty hassle-free install.

Once installed, I found that there is also a recent port of NetSurf, as the two existing browsers are a few decades behind. Downloading the sources from Git, compiling and installing was quite straightforward. The only hurdle was that text does not auto-scroll in terminals. Not a significant hurdle, you may think, but when a process’s output reaches the bottom of an unscrolled window its writes block, so as soon as the build script’s output got to the bottom of the terminal it just stopped compiling. Luckily there is an auto-scroll option.

  • Middle click and choose SCROLL/NOSCROLL.

Incidentally, you really need a three-button mouse. For example, to open a new terminal:

  • Right click and choose NEW.
  • The cursor changes to a cross. With the right mouse button, drag out where you want the new terminal to be.

    I have no idea what I will do with it. But that goes for more or less all the VMs I have. Install, fire up a few times, then never again.

    A quite enjoyable, but very weird retro computing dream.

I started putting my memories of a dream I had last night in a Mastodon post, but as I typed I realised there was a lot more weirdness to the dream than my initial memories of it as I woke, and it needed many more characters!
I remember the dream starting as I dug up my Amstrad CPC464 (which in reality is long gone), which had morphed into a CPC6128, but with an integrated dot matrix printer rather than its 3” disk drive! Then I opened it up and found my old ZX81 (which I never actually owned but borrowed from school) inside it.
Then I found myself inside an electronics shop from my hometown (that I’m not sure ever existed) and found a ZX Spectrum magazine for sale with a new “circuit board” that plugged into the ZX Spectrum (shades of RPi HATs here) taped to the cover as a free gift. This gave the ZX Spectrum modern GPU power with an HDMI output.
Cue playing Sorcery+ on a 4K monitor powered by the supercharged ZX Spectrum! Note: Sorcery was an Amstrad game and never made it to the Spectrum!
The dream went even deeper down the rabbit hole from that point, as that “plugin board” led me to a whole secret group of makers who add modern hardware to the first round of personal computers to supercharge them. So lots of boxes of various plugin boards for the BBC Model B and more!
Just what had I been eating/drinking/smoking before I went to bed?
Then this morning I saw a new book documenting the era of shareware games on Daring Fireball.
I think the dream started because I was playing Fallout 4 on my Steam Deck just before going to sleep. Ever since I got back into gaming after a 20-year break it has been very rare for me to play games just before bed. But recently I’ve lost control of the Steam Deck to my daughter, who has discovered games! She’s still a few years younger than I was when I had my Amstrad, but she loves games as much as I did then. So my time to play is quite limited.

A years-old wifi issue finally sorted! Yeah!

I’ve always wondered why I had wifi issues on my phone in my garden office, and more generally everywhere else in the garden. It’s only 10m from an access point with good-quality external aerials, which should give decent coverage over the entire garden and offices.

I have a central Draytek 2820n router connected to a Netgear GS724T switch (and then another in my garden office). There are two Draytek access points connected to the switches: an AP-800 and an AP-900. Both legacy products now, but OK.

Wifi reception in the house is generally decent, but in the garden and our garden offices it is rubbish: multiple smart switches that are always disconnecting (in fact I threw one away a few years ago and replaced it with a new one), a pair of Sonos speakers that are always disconnecting (though they should be using SonosNet rather than general wifi), and a Raspberry Pi on the boat that I can rarely reliably SSH to.

Our garden has overhead power cables above it: 11kV, three-phase, powering the entire village.

Yesterday morning I noticed that I was getting a very decent wifi signal on my phone (20Mbps down). Then it dawned on me that the cables were turned off for the day for tree work, with the village being powered by a diesel generator. The naive “clutching at straws because this damned wifi problem is never going away” person in me immediately assumed that interference from the cables had been the issue.

Then people more knowledgeable than me pointed out that a 50Hz electricity cable will not interfere with a 2.4/5GHz wifi signal; the frequencies are too far apart. Still, something was awry. I also noticed that the three smart plugs in my garden that are generally problematic were connected without issue. Same for the two Sonos speakers. I could even connect to the RPi with a decent connection; enough to update Raspbian without having to use tmux.

However, the connection then failed again, with power yet to be restored to the cables. So what was the problem?

    The Draytek router has AP central management and I configure the two Draytek access points using that tool. I’ve rarely looked at the config pages of the actual APs.

Draytek Central AP Management

But now I looked at one. It turns out it was broadcasting two wifi networks with the same SSID.

The AP in the house had both SSIDs connected to LAN-A, but the AP in the garden had one of these “pseudo” networks connected to LAN-B. And LAN-B is not connected to anything!

Draytek AP dual SSIDs

So when the power went off, the APs rebooted, and at first devices got a decent connection. But as the phone roamed it switched to the SSID with no network behind it, so the iPhone went crazy with no internet connection and switched to 4G. I have no idea why the smart switches disconnected though.

    I just turned this off on both APs and everything now connects flawlessly with a strong signal. I have had this problem for years…! Sheesh…!

    UPDATE March 18, 2023

So the wifi failed again. I went back to check, and the multi-SSID option had been re-enabled. It turns out the AP management tool on the router has a profile that enables it, and it kept cloning that to the access points. Now that’s fixed. Fingers crossed.

    UPDATE April 18, 2023

So the problem was not fully solved. The issues above were real, but the heart of the problem was hardware. In these days of reliable hardware, how odd to have an issue that is not PSU-related. I swapped out the AP, but the only spare AP I had did not have a built-in Ethernet switch, and there’s an IP camera attached to the existing AP. So I disabled wifi on the faulty AP (Draytek AP-900), used it as a switch, and plugged in the switch-less AP (Draytek AP-710). For the week before our holiday and the few days since we returned it has proved faultless. I do need to remove the faulty unit completely. The AP in the house is plugged into an adjacent switch and so only needs a single Ethernet port; I’ll swap the two over soon.

    migrated blog to new hosting provider

So, after moving my old hosting provider to a monthly renewal 12 months ago in preparation for the move, I finally found a day to migrate it all over to a new provider on Saturday. I chose Mythic Beasts, a Cambridge-based company I already have a few other domains hosted with, and I’ve been impressed by their plain-talking sales, sensible pricing and proper technical support.

Their support pages suggest a “no downtime” workflow that involves copying the DNS entries for your new site into your existing hosting provider’s DNS records. So even though you do not know when the changes will propagate through the DNS, both your existing provider’s nameservers and the new one’s point at the same new server at your new provider. Very straightforward.

I also discovered that a cool WordPress plugin (UpdraftPlus), which I’ve been using for backups for some time, has a very good restore function with which you can easily and quickly restore a backup to a new site. If I’d known how easy and reliable this was, perhaps I would not have bothered exporting my site and importing it to a backup server. Just in case.

Migrating My Root Drive RAID1 Array to a Pair of NVMe Drives.

I’ve always tried to keep upgrading my own Linux boxes. I enjoy it, and as I found out a few years ago, if I do not keep regularly updating hardware then I completely lose the knowledge of how to do it.
    My latest project is to move all the services from my workstation to a separate box. I’ve got a few RPis around the house running a few services, but I’ve always used my workstation as a games machine/server/development/everything else. In fact for a number of years I stopped using Linux as a workstation at all and this machine was a headless server.
    Because of this cycle of continuous upgrades this computer has existed for probably twenty years. Always running some form of Linux (mainly Gentoo).
Currently it’s using some space heater of a server board: a pair of E5-2697 v2s on a Supermicro X9DRi-LN4 motherboard with 128GB of ECC DDR3 RAM (very cheap!). That’s fine over winter (my office has no heating), but I do need to fully transfer all the services to the low-power Debian box under my desk instead!
    Anyway this computer boots from pair of SATA SSD drives in a RAID1 array, with a six disk RAID10 array for data. That array needs to be replaced by a single large drive when I’ve finished moving services to a new machine….!
The motherboard is too old to EFI-boot from NVMe drives. However, whilst browsing Reddit I came across people talking about using an adaptor card to add four NVMe drives, using bifurcation to give each drive proper access to the four PCIe lanes that NVMe devices need; so x4/x4/x4/x4 instead of x16.
    This was not supported on this board, but turns out Supermicro did release a new BIOS that does support bifurcation.
So I bought the card they suggested and a pair of 1TB NVMe drives. The drives are only PCIe v3, as that’s all the motherboard supports. PCIe is backwards/forwards compatible, but PCIe v4 drives are considerably more expensive than v3 ones. I may as well get a pair of these; by the time I upgrade to a PCIe v4 motherboard the available drives will likely be larger and cheaper!
– Asus Hyper M.2 x16 Gen 4 card
– 2 x WD Blue SN570 NVMe SSD 1TB
The adaptor and drives came. The adaptor has a lovely heatsink that sandwiches the drives in, with a small low-noise fan.
The adaptor took ten minutes to install. Once booted, enabling bifurcation in the BIOS setup was a little tricky, as the slots are numbered from the bottom; this one was CPU1/slot 1.
    I had to recompile the kernel to add NVME support, but once booted the pair of drives were there.
    After many, many years of using /dev/sdX to refer to storage devices (I was using SCSI hardware before SATA), it does seem a little strange to be running parted on /dev/nvme1n1 then getting partition devices like /dev/nvme1n1p2
I know I should likely move to ZFS, but I’m knowledgeable enough about mdadm not to completely mess things up! And replacing a pair of RAID1 devices is just so easy with mdadm.

Workflow is:
– Partition the drive.
– Add it to the RAID1 array as a spare.
– Fail the old drive to be removed, then remove it.
– Wait until the RAID1 array has synced again.
– Repeat with the second drive.
– Resize the array, then resize the filesystem.

    procedure

    fdisk /dev/nvme0n1
    

We can use fdisk again, as fdisk is now GPT-aware. Previously we’d always used parted, but I prefer fdisk as I know it!
– Label the drive as GPT
– Make a 256MB partition and mark it as EFI boot
– Make a second partition with the rest of the drive and mark it as type Linux RAID

Now add the drive to our RAID1 array. For some reason it was not added as a spare but was instead immediately synced, making a three-drive RAID1 array. I think this is because I previously created the array as a three-drive array (for reasons I forget); I guess that’s stored in the array’s metadata.

    mdadm /dev/md127 --add /dev/nvme0n1p2
    

    We can watch the progress:

    watch cat /proc/mdstat
    

    Once completed we can fail and then remove the drive

    mdadm --manage /dev/md127 --fail /dev/sdh3
    mdadm /dev/md127 --remove /dev/sdh3
    

    Then let’s update our mdadm.conf file

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    

Then remove the old lines:

    vi /etc/mdadm/mdadm.conf
    

Finally, let’s wipe the RAID metadata from the old drive so the array does not try to reassemble it:

    wipefs -a /dev/sdh3
    

A reboot is a good idea now, to ensure the array is correctly assembled (and the new partition table is reread).

Now let’s copy the partition table to the second new drive.

    sgdisk /dev/nvme0n1 -R /dev/nvme1n1
    

Then randomise the GUID:

    sgdisk -G /dev/nvme1n1
    

    Check all is OK

    Now repeat adding the second new drive:

    mdadm /dev/md127 --add /dev/nvme1n1p2
    mdadm --manage /dev/md127 --fail /dev/sdg3
    mdadm /dev/md127 --remove /dev/sdg3
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    wipefs -a /dev/sdg3
    

    resize after adding new devices

mdadm --grow --size=max /dev/md127
df -h

Then resize the filesystem:

resize2fs -p /dev/md127

    benchmarks

    dd if=/dev/zero of=/home/chris/TESTSDD bs=1G count=2 oflag=dsync 
    

    2+0 records in
    2+0 records out
    2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.7162 s, 455 MB/s

    dd if=/dev/zero of=/mnt/storage/TESTSDD bs=1G count=2 oflag=dsync
    

    2+0 records in
    2+0 records out
    2147483648 bytes (2.1 GB, 2.0 GiB) copied, 10.0978 s, 213 MB/s

dd if=/home/chris/TESTSDD of=/dev/null bs=8k
    

    262144+0 records in
    262144+0 records out
    2147483648 bytes (2.1 GB, 2.0 GiB) copied, 0.666261 s, 3.2 GB/s

dd if=/mnt/storage/TESTSDD of=/dev/null bs=8k
    

    262144+0 records in
    262144+0 records out
    2147483648 bytes (2.1 GB, 2.0 GiB) copied, 0.573111 s, 3.7 GB/s

I did think the write speed would be faster, but I guess dd is not the most accurate of benchmarking tools.
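Those dd numbers are just bytes copied divided by elapsed time, which is easy to sanity-check (the figures below are the ones dd printed above):

```shell
# dd's reported rate is simply bytes copied / elapsed seconds
bytes=2147483648                 # the 2 GiB test file
awk -v b="$bytes" -v t=4.7162   'BEGIN { printf "write: %d MB/s\n", b/t/1e6 }'
awk -v b="$bytes" -v t=0.666261 'BEGIN { printf "read: %.1f GB/s\n", b/t/1e9 }'
```

For a proper benchmark something like fio would give more trustworthy numbers, as dd issues a single sequential stream.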

    A review of 2022

So, a quick review of 2022. I’ve not done one for at least a decade.
I started writing this during Christmas, then added a few updates. I wanted to add links and photos, but hey, it’s now February and I have not. I should learn from this: I either blog lightly, or not at all!

So I got more and more into regular gaming: from starting again during the pandemic, having not played anything since Quake (the first), to this year regularly playing games and even treating myself to a Steam Deck (which is wonderful). Witcher II, Fallout 4, Elite Dangerous, God of War, and of course Minecraft with my daughter. I’ve got her into gaming as well: starting the year with games she’d played on Apple Arcade, like Sneaky Sasquatch, then moving onto Minecraft with me (and others), then onto the Steam Deck. She loves Alba, Sonic, Stray and Slime Rancher. One game she’s tried but failed to get into is Planet Zoo (which her cousin plays).

Apart from a new GPU (see above for the reasons; an AMD RX 6600 XT) and the Steam Deck (also see above), I’ve hardly done anything: no RPis, Arduinos or anything properly geeky. However, we did upgrade to gigabit FTTP, which is very, very cool. I also switched my office switch from a dumb 16-port Netgear GS116 to a Netgear GS724T. Old tech, but quite reliable (and at £20 on Facebook quite affordable!). This is linked with two aggregated links to the house switch, the same model but an earlier hardware version.

I did switch most home services (TVHeadend, a Minecraft server, Motion cameras and a few other trivial things) to Docker. I also moved these from my Gentoo workstation to a new headless Debian box made from bits and pieces I had lying around (well, I did upgrade the RAM and CPU from eBay!). On that topic, I also finally switched from MythTV to TVHeadend when I added a second satellite dish, as I found that all French TV services are broadcast unencrypted on multistream feeds from the 5°W satellite. I also switched from a pair of TBS tuner cards to a completely separate Digibit R1, which serves its four tuners over IP, meaning TVHeadend can use them as tuners over the LAN. TVHeadend also acts as the recorder. TVHeadend just does not have the legacy baggage that MythTV does: much less functionality, but much simpler to configure (no need for a MySQL db to store settings and recordings, just a fairly simple web interface).

    The Digibit’s firmware is quite ancient, but it is trivial to boot a new firmware from a USB stick (without having to flash the onboard storage), and there are quite a few forks of that older firmware that support multistream.

    Clients are a number of RPis (running OSMC) and a dedicated client from OSMC called the Vero 4K, which is a lovely bit of kit.
    I also got a Quest 2. Facebook blah blah, but it’s cheap and standalone. I’m not easily impressed, but some of the apps are pretty wonderful. The downside is that our house is quite small and we have so few spaces to walk around in VR! I could use the garden, but then I would look a bit of a tool! Resident Evil 4 in VR is bloody scary, but I’ve yet to acquire my VR legs and can only play for 20 minutes before nausea takes over.

    I carried on with the outdoor kitchen and added walls (using rural-style corrugated sheets) plus completely weatherproof kitchen cabinets (using treated wood and marine ply, with all exposed cut edges soaked in epoxy), topped with quarry tiles. I need to do the other side, but perhaps when it’s warmer.


    In sadder news our dog died. Zorro, our ten year old black lab, started wheezing and it turned out he had a mouth melanoma that was affecting his breathing. Treatment was possible but would have meant removing most of his jaw and chemo for months. Nothing you could put a dog through. Sadly he was put to sleep in the boot of a friend’s car and died in my arms.


    I had a pretty bad year for growing stuff. Potatoes, cabbages (in the polytunnel), strawberries and tomatoes (grow bags on the balcony) were great, but everything else did badly. I can only blame my laziness in watering during the heatwave. Even our courgettes failed! Better luck next year.

    Finally managed to get camping, both for a fortnight in France (super warm) and at the Bluedot festival (brilliant music, shitty weather! Again, see below).
    I also upgraded our camp kitchen with a brilliant foldaway kitchen and a camping stove (a Cadac) that has double burners and two griddles, and connects to a gas unit that takes three cheap, ubiquitously available aerosol gas canisters, providing a regulator for them too. I’ve always thought they are a little on the dangerous side, but this bit of kit makes them very usable and very safe.


    We also had a good year for boating. The boat still needs a LOT of work doing, but we had a good few outings, including one with the village.


    Our own health is decent. LN had Covid over NY (which made that quieter). I’ve still not had it (that I am aware of).
    The world continued to become a scarier place, both nationally and internationally. What is happening in Ukraine is beyond horrendous. However, I decided that ranting at the news is pointless. While I’ll never forgive any unrepentant Leave voters, I guess I’m used to it. Still, I had a good day out on the Rejoin March back in August.


    It was a good year for live music. I made the Bluedot festival, which had too many decent bands to list (although Yard Act are brilliant!), and also saw Billy Nomates, Fontaines D.C., Scalping and Working Men’s Club.


    I properly made the move to Mastodon. Twitter was only ever a pleasant social media community in the early years; for some time it’s been essential for keeping up to date with local and national news, but also very unpleasant to be in. Mastodon is some way from replacing that function, but by being a pleasant place to be, it works for me. I think Elon Musk has done us all a favour. Back when I joined Twitter (2006) I thought it was a bad move to put all our communication eggs in one company’s basket, but in those early years it seemed to work, and before long everybody and their rabbits had an account.

    Finally, I decided I need to stop doing as much “community stuff” as I have been. As well as the village newsletter, which I have produced for over ten years, I am also a school parent governor and part of a local campaign to prevent a sewage works being built on Green Belt land just outside the village. I find myself spending a hell of a lot of time on these, and the latter two come with a lot of frustration: one for an incredible lack of communication, the other for manipulative people who are quite incompetent too. If either were a paying job I’d have left both a while ago.