Category Archives: Uncategorized

Writing a rain predictor, preparing the data

The index to the articles in this series is found here.

It’s time to get a baseline reference: the radar image as it would appear with no rain anywhere. I picked out three images that were quite clean. This isn’t trivial, as the radar seems to produce false short-range returns on clear, humid days. I assume this is because, in the absence of any precipitation, there’s no strong reflected signal, and the radar analysis interprets some close-range backscatter from the air as slight rainfall. This means that we often have light blue pixels surrounding the radar station even when there isn’t rain anywhere else. Still, I found three images that, voting together, produce a good consensus.

Here’s the code I used to analyse those .gif files and produce a consensus image:

#! /usr/bin/python3

# This script reads in three .gif files and produces a new file in
# which each pixel is set to the majority value from the three inputs.
# If there is no majority value (i.e. all three files have a different
# value at that point), we exit with an error so that a better set of
# inputs can be found.

# We are using this script to analyse machine-generated files in a
# single context.  While the usual programming recommendation is to be
# very permissive in what formats you accept, I'm going to restrict
# myself to verifying consistency and detecting unexpected inputs,
# rather than trying to handle all of the possible cases.

# This is a pre-processing step that will be used by another script
# that reads .gif files.  Therefore it is reasonable to make this
# script's output be a .gif itself.

# The script takes 4 arguments.  The first three are the names of the
# input files.  The fourth is the name of the output file.

# The script will return '1' on error, '0' for success.

import sys
import gif


class SearchFailed(Exception):
    def __init__(self, message):
        self.message = message


def find_index_of_tuple(list_of_tuples, needle, hint=0):
    # Try the hinted position first; it is usually correct.
    if list_of_tuples[hint] == needle:
        return hint
    # Otherwise scan the whole table for a matching entry.
    for i in range(len(list_of_tuples)):
        if list_of_tuples[i] == needle:
            return i
    raise SearchFailed('Tuple {0} not found in list.'.format(needle))


if len(sys.argv) != 5:
    print ("Require 3 input filenames and 1 output filename.")
    sys.exit(1)

file = [None, None, None]
reader = [None, None, None]

for i in range(3):
    try:
        file[i] = open(sys.argv[i+1], 'rb')
    except OSError as ex:
        print ("Failed to open input file: ", sys.argv[i+1])
        print ("Reason: ", ex.strerror)
        sys.exit(1)
    reader[i] = gif.Reader()
    reader[i].feed(file[i].read())
    if ( not reader[i].is_complete()
         or not reader[i].has_screen_descriptor() ):
        print ("Failed to parse", sys.argv[i+1], "as a .gif file")
        sys.exit(1)

# OK, if we get here it means we have successfully loaded three .gif
# files.  The user might have handed us the same one three times, but
# there's not much I can do about that, it's entirely possible that we
# want to look at three identical but distinct files, and filename
# aliases make any more careful examination of the paths platform
# dependent.

# So, we're going to want to verify that the three files have the same
# sizes.

if ( reader[0].width != reader[1].width
     or reader[1].width != reader[2].width
     or reader[0].height != reader[1].height
     or reader[1].height != reader[2].height ):
    print ("The gif logical screen sizes are not identical")
    sys.exit(1)

for i in range(3):
    if ( len(reader[i].blocks) != 2
         or not isinstance(reader[i].blocks[0], gif.Image)
         or not isinstance(reader[i].blocks[1], gif.Trailer)):
        print ("While processing file: ", sys.argv[i+1])
        print ("The code only accepts input files with a single block of "
               "type Image followed by one of type Trailer.  This "
               "constraint has not been met, the code will have to be "
               "changed to handle the more complicated case.")
        sys.exit(1)
    
    
# Time to vote

try:
    writer = gif.Writer (open (sys.argv[4], 'wb'))
except OSError as ex:
    print ("Failed to open output file: ", sys.argv[4])
    print ("Reason: ", ex.strerror)
    sys.exit(1)

output_width = reader[0].width
output_height = reader[0].height
output_colour_depth = 8
output_colour_table = reader[0].color_table
output_pixel_block = []

for ind0, ind1, ind2 in zip(reader[0].blocks[0].get_pixels(),
                            reader[1].blocks[0].get_pixels(),
                            reader[2].blocks[0].get_pixels()):
    tup0 = reader[0].color_table[ind0]
    tup1 = reader[1].color_table[ind1]
    tup2 = reader[2].color_table[ind2]

    # Voting
    if tup0 == tup1 or tup0 == tup2:
        output_pixel_block.append(ind0)
    elif tup1 == tup2:
        try:
            newind = find_index_of_tuple(output_colour_table,
                                         tup1, ind1)
            output_pixel_block.append(newind)
        except SearchFailed:
            print ('The colour table for file {0} does not hold the '
                   'entry {1} that won the vote.  You may be able '
                   'to fix this problem simply by reordering your '
                   'command-line arguments.'.format(sys.argv[1], tup1))
            sys.exit(1)
    else:
        # All three inputs disagree at this pixel, so there is no
        # majority value.  Exit so a better set of inputs can be found.
        print ('No two input files agree at some pixel.  Try a '
               'different set of input images.')
        sys.exit(1)

writer.write_header()
writer.write_screen_descriptor(output_width, output_height,
                               True, output_colour_depth)
writer.write_color_table(output_colour_table, output_colour_depth)
writer.write_image(output_width, output_height,
                   output_colour_depth, output_pixel_block)
writer.write_trailer()

So, what does this do? After verifying that it received the correct number of arguments, that it can open the three inputs, and that the input files are all valid .gif files, it checks to make sure they all have the same image dimensions.

Now, it would be a bit more work to support multiple image blocks, though the GIF specification does allow them. So, I verified that these files from the government website do not use multiple image blocks, and coded in a check. This script will exit with an error if it is presented with such files. This way I don’t have to write the extra code unless some future change forces me to accept the more complicated format.

Now, the files I chose did not have identical colour tables; the tables differed only in their ordering. This might not always be true, but it is at the moment. I use the colour table from the first input .gif as my output colour table. Then, I walk through the pixels of the three files and look up the colour tuple for each pixel. If the first file agrees with either of the other two on the value of that tuple, then we simply insert the first file’s index into the output. If the first disagrees with both, but the second and third agree with each other, then we have to find the index of the winning tuple in the output colour table. It’s probably at the same position, so we hint with the second file’s index, but the function will walk the entire colour table if it has to, to find an index matching that tuple. If it fails to do so, that’s an error, and we exit.
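The decision rule is compact enough to restate on its own; this sketch mirrors the per-pixel voting logic of the script above:

```python
# Majority vote over one pixel's colour tuples, one from each input
# file.  Returns the winning colour, or None when all three disagree,
# which is the no-majority case that makes the script exit.
def vote(tup0, tup1, tup2):
    if tup0 == tup1 or tup0 == tup2:
        return tup0   # the first file agrees with at least one other
    if tup1 == tup2:
        return tup1   # the first file is outvoted by the other two
    return None       # three distinct values: no majority
```

With three voters, a majority exists exactly when some pair agrees, so two comparisons against the first tuple plus one between the remaining two cover every case.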

Finally, we write out the consensus .gif file, and exit normally.

In the next article we’ll have a discussion of how to set up the neural network.

UPDATE #1 (2019-08-23): Included a link to an index of articles in this series.

A machine learning project

The index to the articles in this series is found here.

Well, four years ago I mentioned that I was going on a brief hiatus, and there hasn’t been very much here since then. Turns out that having a baby in the house does eat into the free time a bit. Now, though, I find myself with some more free time, after the parent company closed the entire Ottawa office and laid off the staff here. If anybody’s looking for an experienced mathematical programmer with a doctorate in physics, get in touch.

So, here’s a project I was about to start four years ago. I had collected some training data, but never got the project itself started.

I like to bicycle in the summer time, but I don’t like to ride in the rain. So, when I remember, I check the local weather radar and look for active precipitation moving toward the city. I can decide from that whether to go for a bicycle ride, and whether to ride to work, or find another way to get to the office.

The weather radar website, https://weather.gc.ca/radar/index_e.html?id=XFT, shows an hour of rain/snow detection at 10 minute intervals, played on a loop. You can look at the rain and guess how long it will take to get to the city. This won’t help you if rain forms directly over the city, but most of the time the rain moves into town, rather than beginning here.

The interpretation of these sequences seemed to me to be something I could automate. Maybe have a program that sends a warning or email to my cellphone if rain is imminent, in case I’m out on the bike.

I collected over 11000 .gif files by downloading individual files via a cron job. The images don’t have an embedded copyright message, and are government-collected data, but I’m not confident that this gives me the right to make this dataset available online, so I will satisfy myself with reproducing a single example for illustrative purposes. Here is a typical downloaded image:

The city of Ottawa is located roughly North-East of the white cross, just South of the Ottawa river that runs dominantly West to East. Near the right edge of the active region you can see the island of Montreal.

The very light blue represents light rainfall, something you might barely notice while riding a bicycle. Anything at the bright green level or higher is something I would try to wait out by sheltering under a bridge or similar construction. Weather patterns in this area, as in much of the continent, dominantly move from West to East, though there are some exceptions, and we will, very occasionally, have storms blow in from the East.

So, here’s the project. I haven’t actually written code yet, so we’ll explore this together. I would like to set up a neural network that can watch the radar website, downloading a new image every 10 minutes, and use this to predict 10 binary states. The first five values will be the network’s confidence (I’m not going to call it probability) that there will be any rain at all in the next 0 to 1 hours, 1 to 2 hours, 2 to 3 hours, and so on out to 5 hours. The next five values will be the confidence of heavy rain, defined as rain at the bright green or higher level, in the same intervals.
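To train such a network, those ten outputs have to be computed from what the radar actually showed afterward. Here is a minimal sketch of that label generation; the (minutes_ahead, level) encoding, the make_targets name, and the HEAVY threshold are all placeholder assumptions of mine, not anything settled yet:

```python
# Sketch only: derive the ten 0/1 training targets described above
# from a list of future observations.  Each observation is a
# (minutes_ahead, level) pair, where level 0 means no rain and
# levels at or above HEAVY correspond to bright green or worse.
HEAVY = 3  # assumed position of bright green on the intensity scale

def make_targets(future):
    targets = [0] * 10
    for minutes, level in future:
        bucket = int(minutes // 60)      # which one-hour interval
        if bucket >= 5 or level <= 0:
            continue                     # beyond 5 hours, or no rain
        targets[bucket] = 1              # any rain in this interval
        if level >= HEAVY:
            targets[5 + bucket] = 1      # heavy rain in this interval
    return targets
```

Light rain 30 minutes out and heavy rain 2.5 hours out, for example, would set the first, third, and eighth targets.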

Ideally, this network would also update itself continuously, as more data became available.

This isn’t a substitute for the weather forecasts made by the experts at Environment Canada; they use a lot more than just the local weather radar to inform their forecasts. My project aims to answer a different question: it will try to estimate only the confidence of rain specifically in the city of Ottawa, over a relatively short projection interval of no more than 5 hours. It’s a more precise question, and I hope it turns out to give me useful information.

Now, we might be tempted to just throw the raw data at a neural network, along with indications of whether a particular image shows rain in Ottawa, but we don’t have an unlimited data set, and we can probably help the process along quite a bit with some preliminary analysis. This isn’t feature selection; our input set is really a bit too simple for meaningful feature selection, but we can give the algorithm a bit of a head start.

The first thing we’ll want to do is to pull out the background image. The radar image shows precipitation as colours overlaid on a fixed background. If we know what that background is in the absence of any rain, we can call that ‘0’ everywhere in the inputs, and any pixels that differ will be taken as coming from rain, with a value that increases as we climb that scale on the right side of the sample image.
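As a sketch of that per-pixel comparison: the colours in RAIN_SCALE below are hypothetical placeholders, ordered weakest to strongest (the real RGB values would come from the colour table of the actual radar images):

```python
# Sketch only: convert one pixel into a small rain-intensity integer
# by comparing it against the same pixel of the consensus baseline.
# RAIN_SCALE is a hypothetical stand-in for the radar's real scale.
RAIN_SCALE = [(153, 204, 255),   # assumed: very light blue, light rain
              (0, 255, 0)]       # assumed: bright green, heavy rain

def rain_intensity(pixel, baseline_pixel):
    if pixel == baseline_pixel:
        return 0                            # background: no rain here
    if pixel in RAIN_SCALE:
        return 1 + RAIN_SCALE.index(pixel)  # position on the scale
    return 0                # unrecognized colour: treat as no rain
```

The network then sees a grid of small integers, zero everywhere the image matches the baseline, rather than raw colour values.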

I’ll pick out three images that are rain-free to my eye. There might be tiny pockets of precipitation that escape my notice, but by choosing three that appear clean and letting them vote on pixel values, I should have a good base reference.

We’ll be writing this project in Python3, with Keras interfacing onto TensorFlow.

The next posting will cover the baseline extraction code.

UPDATE #1 (2019-08-20): I’ve made the source files I’m posting in this series available on github. You can download them from https://github.com/ChristopherNeufeld/rain-predictor. I’ll continue to post the source code in these articles, but I may not post patches; I’ll just direct you back to the github tree for history and changes.

UPDATE #2 (2019-08-23): Added a link to an index page.

A home NAS

So, my MythTV box was starting to fill up.  It had three 3TB drives in it.  I also had three 3TB drives in my main desktop machine to hold a backup of the Myth box.  With space running low, and with the cases pretty full of hard drives, it was time to do something.

I decided I would build a NAS, and buy some 6TB drives.  My reasoning was that I could stripe pairs of 3TB drives together into logical 6TB drives.  I would then have a Myth box with three 6TB drives, and a NAS with six 3TB drives and one 6TB drive.  The NAS drives would be set up to look like four 6TB drives, and I could do a RAIDZ1 on those.

Over time, as the 3TB drives failed, I would buy 6TB drives when necessary, and pair together the survivors of the striped pairs that had failed.  Eventually, I would have four 6TB drives in the box.

One reason for having a NAS is that I could just pick up the whole box, carry it to a relative’s house, and leave it there if I was going on a vacation.  I don’t like having all my backups in the same place, even if important files are backed up in a fireproof/waterproof safe.

I do have sensitive files, so it was also important that the contents of the NAS be unavailable at boot, until a passphrase is supplied.

So, I took my starting point from two articles on Brian Moses’ blog.  https://blog.brianmoses.net/2016/02/diy-nas-2016-edition.html and https://blog.brianmoses.net/2017/03/diy-nas-2017-edition.html.

I bought some hardware.

I installed FreeNAS-11.0-U2 (e417d8aa5) on the boot USB drive, and started configuring.

I wasn’t able to figure out how to stripe the 3TB drives together into 6TB logical discs for the RAID array, and asked on the forum: https://forums.freenas.org/index.php?threads/using-striped-disks-as-raid-members.57245/.

It turns out you can’t do that, but the suggestion I received there, to partition the 6TB drive into two 3TB logical units and then put everything together as RAIDZ2, was a workable alternative.

So, here’s the procedure I worked out.  My 3TB drives are on ada0, ada1, ada2, ada5, ada6, and ada7.  My 6TB drive is on ada8.  A lot of this had to be done on the command line.  As I was eventually to figure out, there’s also a lot of stuff that I traditionally do on the command line that can’t be done that way anymore.

First, I create an encryption key:

dd if=/dev/random of=MainPool.key bs=64 count=1

I uploaded this key to Google Drive, as an off-site backup.  The passphrase that is used with the key means that the key isn’t particularly useful by itself.

So, we create the encrypted drives:

for i in ada0 ada1 ada2 ada5 ada6 ada7 ada8
do
    geli init -s 4096 -K MainPool.key /dev/$i
done

It will ask for the passphrase twice for each drive, so 14 times.  Then we attach the encrypted devices.

for i in ada0 ada1 ada2 ada5 ada6 ada7 ada8
do
    geli attach -k MainPool.key /dev/$i
done

It will ask for the passphrase once for each drive.

Next, we put a single partition on each of the 3TB drives:

for i in ada0 ada1 ada2 ada5 ada6 ada7
do
    gpart create -s gpt /dev/${i}.eli
    gpart add -t freebsd-zfs -b 128 /dev/${i}.eli
done

Now, we have to partition the 6TB encrypted drive.  Note that the size of the drive looks different on the encrypted device than on the bare device; I think the block sizes are different.  So, I used this sequence of commands, with the argument on the third line being half of the size reported by the ‘show’ command on the second line:

gpart create -s gpt /dev/ada8.eli
gpart show /dev/ada8.eli
gpart add -t freebsd-zfs -s 732565317  /dev/ada8.eli
gpart add -t freebsd-zfs /dev/ada8.eli

Running glabel status allows me to identify the gptids of the partitions.  That’s important because I don’t know whether the adaN identifiers change when drives are removed or based on boot-time hardware probing.  So, gptids are UUID labels that we can use to identify the partitions unambiguously.

Next, we create the pool:

zpool create MainPool raidz2 \
      gptid/8377bd7e-8d2c-11e7-8faf-d05099c2b71d \
      gptid/83d3e21a-8d2c-11e7-8faf-d05099c2b71d \
      gptid/8425b23d-8d2c-11e7-8faf-d05099c2b71d \
      gptid/8478358d-8d2c-11e7-8faf-d05099c2b71d \
      gptid/84d9db6d-8d2c-11e7-8faf-d05099c2b71d \
      gptid/852b9055-8d2c-11e7-8faf-d05099c2b71d \
      gptid/e7e945b1-8d2c-11e7-8faf-d05099c2b71d \
      gptid/eb13e372-8d2c-11e7-8faf-d05099c2b71d

Of course, you substitute your own gptids there.

Finally, we export the pool with zpool export MainPool.

All of this happened on the command line, but we can now switch over to the GUI.  In the GUI, go to “Storage”, and select “Import volume”.  Choose the encrypted pool option.  You’ll have to supply the key (MainPool.key), which has to be on the computer that is running your web browser, so you can upload it.  You will then be asked for the passphrase, which you type in.  The system ponders for a while, and then the pool appears.

Next, I created datasets for the backups of the computers in my house.  The main computer is called Londo, so I created a Londo-BACKUP dataset.  Underneath that, I created separate datasets for my root partition, my home partition, and my encrypted partition.  The MythTV box is called “mythtv”, so I created a MythTV-BACKUP dataset, and underneath that, separate datasets for the non-media partitions and one for each media partition.  I turned off compression on the media partition datasets, as that wasn’t going to achieve anything on H.264 files.  With this granularity of datasets, I can snapshot the root filesystem and the user files separately, and I can avoid snapshotting the MythTV media partitions, which see a lot of churn in multi-GB files.

We now move on to the configuration.  This was more difficult than it had to be, mostly because by now, having set up the drives that way, I was primed to use the command line for things.  I know how to configure ssh, rsync, and so on, and I made my changes, but nothing worked.  Turns out that the FreeNAS system overwrites those configuration files when it wants to, so my changes were being silently discarded.  Many of these settings have to be altered through the GUI, not from the shell.

After FreeNAS installation, the sshd is set up to refuse port forwarding requests, which I wanted for my rsync jobs.  I would alter the /etc/ssh/sshd_config file, and the changes did nothing, even though the file itself was untouched.  Turns out that there are two sshd_config files.  The /etc/ssh directory, while present and populated, is unused; the actual location of the sshd files is /etc/local/ssh, and those are subject to being overwritten, so I used the GUI to turn on port forwarding instead.

I was getting very slow throughput on ssh, about 1/8 of the wire speed.  I confirmed that wget ran at the expected speed, and that two simultaneous ssh sessions each got 1/8 of the wire speed, so I was CPU bound on the decryption of the data stream.  That was a bit surprising; it’s been a while since I saw a CPU that couldn’t handle a vanilla ssh session at gigabit ethernet speeds.  So, I checked to see what ciphers were supported on the FreeNAS sshd, and tested them for throughput.  I settled on “aes128-gcm@openssh.com”, which allowed me to reach half the wire speed.  Good enough, though the initial backup would take over 40 hours, rather than just 20.  I avoided that by backing up different datasets in parallel over three separate ssh channels, so I could go at full wire speed.

On to rsync.  I like to have security inside my firewall, and don’t like the idea of backups being sent over unencrypted channels.  I also don’t want the compromise of a single machine to endanger other machines if that’s at all avoidable.  So, simple rsync over port 873 wasn’t what I was looking for.  I also wanted the machines in the house to be able to decide if and when to perform a backup, rather than having the NAS pull backups.  That way my scripts could prepare the backup and ensure that their filesystems are healthy before starting to write to the NAS.  The obvious choice, then, is rsync tunneled over ssh.

First, I generated an rsync key:

ssh-keygen -b 521 -t ecdsa -N "" -C "Rsync access key" -f rsync_key

I copied the private key to all the machines needing to make backups, and I put the public key into the authorized_keys file:

no-pty,command="/bin/echo No commands permitted" ecdsa-sha2-nistp521 <KEYTEXT> Rsync access key

The default umask in bash on the FreeNAS box is 0022, so you have to be careful with permissions.  Make sure to set files in .ssh to 0400 or 0600, to ensure that they are not ignored.

This authorized_keys file does not allow the user of the key to execute any commands.  They can open a connection, and they can forward ports.  So, each machine on my network can use this same key to open an encrypted connection to the rsyncd port on the NAS.

Next, we have to set up rsyncd.  This has to happen in the GUI.  There’s a box labelled “auxiliary parameters”; you just copy everything into there.  The box opens in the global section, so put in the individual section headers yourself, and you can append anything you like to the base rsyncd setup.  Here’s mine:

address = 127.0.0.1

[Londo]
path = /mnt/MainPool/Londo-BACKUP
use chroot = yes
numeric ids = yes
read only = no
write only = no
uid = 0
gid = 0
auth users = londobackup
secrets file = /root/rsync-secrets.txt

[MythTV]
path = /mnt/MainPool/MythTV-BACKUP
use chroot = yes
numeric ids = yes
read only = no
write only = no
uid = 0
gid = 0
auth users = mythbackup
secrets file = /root/rsync-secrets.txt

[Djinn]
path = /mnt/MainPool/Djinn-BACKUP
use chroot = yes
numeric ids = yes
read only = no
write only = no
uid = 0
gid = 0
auth users = djinnbackup
secrets file = /root/rsync-secrets.txt

EDIT #1 (2017-09-05): Originally, I had the secrets file at /usr/local/etc/rsync/secrets.txt, but it turns out that, on reboot, extra files in the /usr/local/etc/rsync directory are deleted, so my secrets file disappeared and my backups failed.  I have moved it to the parent directory now.

EDIT #2 (2017-09-07): Turns out the /usr/local/etc directory isn’t safe either.  After performing an upgrade, I lost the passwords file again.  I have moved it to /root.

With the ‘address’ keyword in the global section, we restrict access to the rsyncd to localhost.  That’s fine for us; we’ll be coming in through ssh, so our connections will appear to come from 127.0.0.1, and other accesses will be blocked.  I use chroot and numeric IDs because I do not have common UIDs between machines in my home network anyway, so I don’t care to remap IDs on the NAS.  I run as UID/GID zero so that the backup has permission to create any files and ownerships that are appropriate.  There is a secrets file that contains the plaintext passwords needed for access to each module.  Mine looks like this:

mythbackup:<PASSWORD1>
londobackup:<PASSWORD2>
djinnbackup:<PASSWORD3>

The appropriate password is also copied into a file on each machine being backed up; I’ve chosen to put it in ~/.ssh/rsync-password.  Only the password goes in the file, not the username.  Make sure the permissions are 0400 or 0600.

Now, the backup configuration.  Let’s look at the MythTV box.  Here’s its /root/.ssh/config file:

host freenas-1
hostname freenas-1.i.cneufeld.ca
compression yes
IdentityFile /root/.ssh/rsync_key
protocol 2
RequestTTY no
Ciphers aes128-gcm@openssh.com
LocalForward 4322 127.0.0.1:873

This says that if the root user on the MythTV box just types “ssh freenas-1”, it will go to the correct machine, with ssh stream compression, using the rsync key, protocol 2, no terminal, and the cipher we identified as acceptably fast, and it will open a local forward on port 4322 of the MythTV box that encrypts all traffic and sends it to port 873 on the NAS box.

Now, the backup script:

 

#! /bin/bash
#
# Note: pushd and popd below are bash built-ins, so this script needs
# bash rather than a plain POSIX sh.

RSYNC_OPTS="--password-file=/root/.ssh/rsync-password -avx --del -H"

ls /myth/tv1/xfs-vol \
   /myth/tv2/xfs-vol \
   /myth/tv3/xfs-vol > /dev/null 2>&1 || exit 1

ssh -N freenas-1 &
killme=$!

sleep 5

/root/bin/generate-sql-dump.sh

rsync ${RSYNC_OPTS} / \
      rsync://mythbackup@127.0.0.1:4322/MythTV/Non-media_Filesystem
rsync ${RSYNC_OPTS} --exclude=.mythtv/cache \
      --exclude=.mythtv/Cache* \
      /home \
      rsync://mythbackup@127.0.0.1:4322/MythTV/Non-media_Filesystem/home
rsync ${RSYNC_OPTS} /data/srv/mysql \
      rsync://mythbackup@127.0.0.1:4322/MythTV/Non-media_Filesystem/data/srv/mysql
rsync ${RSYNC_OPTS} \
      /data/storage/disk0 \
      rsync://mythbackup@127.0.0.1:4322/MythTV/Non-media_Filesystem/data/storage/disk0


pushd /myth
rsync ${RSYNC_OPTS} . rsync://mythbackup@127.0.0.1:4322/MythTV/Media_Disk_1
popd

pushd /myth/tv2
rsync ${RSYNC_OPTS} . rsync://mythbackup@127.0.0.1:4322/MythTV/Media_Disk_2
popd

pushd /myth/tv3
rsync ${RSYNC_OPTS} . rsync://mythbackup@127.0.0.1:4322/MythTV/Media_Disk_3
popd

kill $killme

What does this do?  First, it verifies that all the media drives are mounted.  I created directories called xfs-vol on each drive.  If those directories are not all present, it means that at least one partition is not correctly mounted, and we don’t want to run a backup.  If a power spike bounced the box while killing a drive, it would start up, but maybe /myth/tv3/ would be empty.  I don’t want the backup procedure to run, and delete the entire /myth/tv3 backup.

Next, we create the ssh connection to the NAS and record the PID of the ssh.  We wait a few seconds for the connection to complete.

We generate a mysql dump.  Backing up the raw MySQL data files is rarely a good strategy; the resulting files generally can’t be used.  The mysql dump is an ASCII snapshot of the database at a given instant, and it can be used to rebuild the database during a restore from backup.

Because I use the ‘-x’ switch in rsync, each partition has to be explicitly backed up; rsync doesn’t descend into mount points.  The next 4 lines send 4 non-media partitions into a single backup directory and dataset on the NAS.  The “mythbackup” user is the username in the /root/rsync-secrets.txt file on the NAS box; it need not exist in /etc/passwd on either box.

Next are the media partitions.  They are mounted on /myth, /myth/tv2, and /myth/tv3.  To avoid leading cruft, we chdir to each mount point and then send the backup to the appropriate subdirectory of the module.  Once everything’s backed up, we kill the ssh tunnel.

That’s pretty well everything for now.  I might write another little article soon about what I did with the pair of 250GB SSDs.  They’re for swap, ZIL, and L2ARC (see this article), with about 120GB left over for a mirrored, unencrypted pool holding things like family photos that can be recovered even if the decryption key is lost.

Building a puzzle

I was able to find time over several nights to assemble a metal puzzle.  In a Google+ posting, Linus Torvalds mentioned working on a Star Wars model, and I was curious to see what other models were available from Fascinations.  I eventually ordered one of their models.

It sat, unopened, on my desk for a few months, but I finally tore open the packaging and started putting it together.  After about 15 minutes, I realized I wasn’t going to be able to do it with just fingernails; I needed a proper tool, very fine needle-nosed pliers.  So, I ordered those, and waited a few days.

Some of the tabs were very hard to place; I needed to borrow a magnifying visor to get the tabs into their slots, but things went relatively smoothly.  I broke three pieces.  Two of the breaks aren’t noticeable, as there were enough tabs to hold everything together.  The third piece broke when I was trying to remove it from the metal sheet; you can see a gap near the front of the upper cupola on the model where some of the length of that piece broke off.  Here’s the final version:

Overall, it was a fun experience.  Not frustrating, but it did require patience.

Poking head up for a moment

So, it turns out that the birth of a new baby does reduce the amount of time available for other activities.  With the baby turning 2 years old soon, and starting to do things on her own, I might be able to get back to writing here again in a while.

So, what have I been up to?  My stack of books to be read continues to grow much more quickly than I can possibly read them.  Right now, I’m in the middle of one of them.

I’ve had to cut my list of authors to read down quite a bit recently, just to keep my stack of books from growing too much.  I still prefer physical books to e-books, though there are many times I’ve wished that I could continue a book on my phone while waiting for something.  I’m not willing to pay twice for the same book for that privilege, and the Amazon Unlimited books don’t seem to have much overlap with my reading list, so I haven’t signed up for that.

Meanwhile, Chin Yi and I managed to watch a movie this week.

With the baby going to sleep around 10:00, and various chores to finish off after that, it can take a few nights to get through a movie.

I’m posting now mostly because I want to continue using my Amazon API account when running the tellico program to keep track of books and movies that I own.  They changed their terms today, and so I had to sign up as an Associate, to put in advertising links to their products.