On the occasion of my 33rd birthday, I announced on social media (the irony is not lost on me) that Rachel and I were contemplating how we could keep in touch with our family, friends, and coworkers in deeper and more meaningful ways, and reconsidering the effects of social media in general.
I proposed leaving Facebook, Instagram, etc., and creating a more private setting (e.g. a monthly email and/or a friends-and-family-only blog), hoping you would come along for the ride.
This revamped version of our site, https://AndrewAndRachel.com, attempts to do at least part of that: it’s a private setting that we are encouraging our family and friends (which includes co-workers!) to sign up for to get updates on our life.
To make it as easy as possible, you’ll also be able to receive daily or weekly updates (or both, I suppose).
Note that I will still continue to post here on AFdN; however, those posts will be of a more technical flavor.
This is an engineering experiment — not to be confused with a rigorous scientific experiment — so we’ll see how it goes and tweak things as needed.
Having a backup of your data is important, and for me it’s taken several different forms over the years — morphing as my needs have changed, as I’ve gotten better at doing backups, and as my curiosity has compelled me.
For various reasons that will become clear, I’ve iterated through yet another backup system/strategy which I think would be useful to share.
The Backup System That Was
The most recent incarnation of my backup strategy was centered around CrashPlan and looked something like this:
Atlas is my NAS and where a bulk of the data I care about is located. It backs up its data to CrashPlan Cloud.
Andrew and Rachel are our laptops. I care about that data too, so they also back up to CrashPlan Cloud. Additionally, they back up to Atlas using CrashPlan’s handy peer-to-peer system.
Brother and Mom are extended family members’ laptops that just back up to CrashPlan Cloud.
Fremont is the web server (recently decommissioned); it used to back up to CrashPlan as well.
This all worked great because CrashPlan offered a (frankly) unbelievably good CrashPlan+ Family Plan deal that allowed up to ten computers and “unlimited” data — which CrashPlan took to mean somewhere around 20TB of total backups1 — for $150/year. In terms of pure data storage cost this was $0.000625/GB/month2, which is an order of magnitude less than Amazon Glacier’s cost of $0.004/GB/month3.
And then one year ago CrashPlan announced:
we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires.
… To allow you time to transition to a new backup solution, we’ve extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/28/2018.
Important Things In A Backup System
3-2-1-Bang
First, a quick refresher on how to back up. Arguably the best method is the 3-2-1-bang strategy: “three total copies of your data of which two are local but on different mediums (read: devices), and at least one copy offsite.” Bang represents the inevitable scenario where you have to use your backup.
This can be as simple as backing up your computer to two external hard drives — one you keep at home and back up to weekly, and one you leave at a friend’s house and back up to monthly.
Of course, it can also be more complex.
Considerations
Replacing CrashPlan was hard because it has so many features for its price point, especially:
Encryption
Snapshots
Deduplication
Incremental backup
Recentness
…these would become my core requirements, along with needing to understand how the backup software works (which is why I strongly prefer open source).
I also had additional considerations I needed to keep in mind:
How much data I needed to back up:
Atlas: While I have 12TB of usable space (of which I’m using 10TB), I only had about 7TB of data to back up.
What kind of failure I was protecting against: an event that could impact all devices in the region (~ 1000 km radius)
How much touch-time I wanted to put in to maintain the system:
As little as possible (< 10 hours/year)
The New Backup System
There’s no single key to the system and this is probably the way it should be. Instead, it’s a series of smaller, modular elements that work together and can be replaced as needed.
My biggest concern was cost, and the primary driver for cost was going to be where to store the backups.
Where to put the data?
I did look at off-the-shelf options, and my first consideration was just staying with CrashPlan and moving to their Small Business plan, but at $120/device/year I was looking at $360/year just to back up Atlas, Andrew, and Rachel.
Carbonite, a CrashPlan competitor (and the company CrashPlan has partnered with to transition its home users), has a “Safe” plan for $72/device/year, but it was a non-starter: they don’t support Linux, have a 30-day limit on file restoration, and do silly things like not automatically backing up files over 4GB and not backing up video files.
I decided I could live with Backblaze Backups to handle the off-site copies for the laptops, at least for now. I was back to the drawing board for Atlas though.
The most challenging part was creating a cost-effective solution for highly-recent off-site data backup. I looked at various cloud storage options4, setting up a server at a friend’s house (high initial costs, challenging hands-on maintenance, not enough bandwidth), and using external hard drives (the backups would never be recent enough).
I was dreading how much data I had as it looked like backing up to the cloud was going to be the only viable option, even if it was expensive.
In an attempt to reduce my overall amount of data hoarding, I looked at the different kinds of data I had and noticed that only a relatively small amount changed on a regular basis — 2.20% within the last year, and 4.70% within the last three years.
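One way to get numbers like that is to compare the size of recently-modified files against the total; a rough sketch, with a placeholder path:

# Total size of the data set
du -sh /srv/atlas/data
# Size (in GB) of files modified within the last year and the last three years
find /srv/atlas/data -type f -mtime -365 -printf '%s\n' | awk '{t+=$1} END {printf "%.1f GB\n", t/1e9}'
find /srv/atlas/data -type f -mtime -1095 -printf '%s\n' | awk '{t+=$1} END {printf "%.1f GB\n", t/1e9}'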
The majority5 was “archive data” that I still want to have immediate (read-only) access to, but was not going to change, either because they are the digital originals (e.g. DV, RAW, PDF) or other files I keep for historic reasons — by the way, if I’ve ever worked on a project for you and you want a copy because you lost yours there’s a good chance I still have it.
Since archive data wasn’t changing, recentness would not be an issue and I could easily store external hard drives offsite. The significantly smaller amount of active data I could now back up in the cloud for a reasonable cost.
Backblaze’s B2 has the lowest overall cost for cloud storage: $0.005/GB/month with a retrieval fee of $0.01/GB6.
Assuming I’m only backing up the active data (~300GB) and I have a 20% data change rate over a year (i.e. 20% of the data will change over the year which I will also need to backup) results in roughly $21.60/year worth of costs. Combined with two external WD 8TB hard drives for rotating through off-site storage and the back-of-the-envelope calculations were now in the ballpark of just $85/year when amortized over five years.
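For the curious, the back-of-the-envelope math goes roughly like this (the per-drive price is an assumption made to illustrate the arithmetic; only the ~$85/year total comes from the figures above):

# B2: ~300 GB of active data plus ~20% churn ≈ 360 GB retained over the year
echo "360 * 0.005 * 12" | bc    # ≈ $21.60/year of B2 storage
# Two 8TB external drives (assume ~$160 each) amortized over five years
echo "2 * 160 / 5" | bc         # ≈ $64/year for the drives, bringing the total to ~$85/year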
How to put the data?
I looked at, tested, and eventually passed on several different programs:
duplicacy…doesn’t support restoring files directly to a directory outside of the repository7
To be clear: these are all very good programs and in another scenario I would likely use one of them.
Also, deduplication was probably the biggest issue for me, not so much because I thought I had a lot of files that were identical (or even parts of files) — I don’t — but because I knew I was going to be re-organizing lots of files and when you move a file to a different folder the backup program (without deduplication capability) doesn’t know that it’s the same file8.
I eventually settled on Duplicati — not to be confused with duplicity or duplicacy — because it ticks all the right boxes for me (a rough command-line sketch follows the list):
open source (with a good track record and actively maintained)
client side (i.e. does not require server-side software)
incremental
block-level deduplication
snapshots
deletion
supports B2 and local storage destinations
multiple retention policies
encryption (including ability to use asymmetric keys with GPG!)
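As a rough illustration rather than a copy of my actual configuration, a Duplicati command-line backup to B2 looks something like the following; the bucket, credentials, paths, and retention policy are placeholders, and the exact flags and URL format are worth checking against the Duplicati documentation:

# Back up the active data set to a B2 bucket with deduplication, snapshots, and retention rules
duplicati-cli backup \
  "b2://my-backup-bucket/atlas?auth-username=B2_ACCOUNT_ID&auth-password=B2_APPLICATION_KEY" \
  /srv/atlas/active-data \
  --passphrase="CHANGE_ME" \
  --dblock-size=50mb \
  --retention-policy="1W:1D,4W:1W,12M:1M"

(The built-in AES passphrase shown here is the default approach; the GPG asymmetric setup described below replaces it.)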
The default settings appear to be pretty good and I didn’t change anything except for:
Adding SSL encryption for the web-based interface
Duplicati uses a web-based interface9 that is only designed to be used on the local computer — it’s not designed to be run on a server with the GUI then accessed remotely through a browser. Because it was only designed to be accessed from localhost, it sends passwords in the clear, which is a concern, but one that has already been filed as an issue and can be mitigated by using HTTPS.
Unfortunately, the OMV Duplicati plugin doesn’t support enabling HTTPS as one of its options.
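As a workaround sketch (not something wired into OMV), the standalone Duplicati server can be handed a certificate directly so the interface is served over HTTPS; the .pfx path and password below are placeholders, and the option names should be verified against your Duplicati version:

# Serve the Duplicati web interface over HTTPS using a PKCS#12 certificate bundle
duplicati-server \
  --webservice-interface=any \
  --webservice-port=8200 \
  --webservice-sslcertificatefile=/etc/duplicati/duplicati.pfx \
  --webservice-sslcertificatepassword="PFX_PASSWORD"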
Normally Duplicati uses symmetric keys. However, when doing some testing with duplicity I was turned on to the idea of using asymmetric keys.
If you generated the GPG key on your server then you’re all set. However, if you generated it elsewhere you’ll need to move it over to the server and then import it:
# Import the private key on the server
gpg --import private.key
# Mark the imported key as trusted so gpg will encrypt to it without complaining
gpg --edit-key {KEY} trust quit
# enter 5<RETURN> (ultimate trust)
# enter y<RETURN> (confirm)
Once you have your GPG key on the server you can then configure Duplicati to use them. This is not intuitive but has been documented:
Note: the recipient can either be an email address (e.g. andrew@example.com) or it can be a GPG Key ID (e.g. 9C7F1D46).
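For reference, the documented approach boils down to a handful of advanced options (set either in the web interface or on the command line); this sketch uses the example recipient from the note above, and the option names are worth double-checking against your Duplicati version:

# Switch from the built-in AES module to GPG and use asymmetric (public-key) encryption
--encryption-module=gpg
--gpg-encryption-command=--encrypt
--gpg-encryption-switches=--recipient "andrew@example.com"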
The last piece of the puzzle was how to manage local backups for the laptops. I’m currently using Arq and Time Machine to make nightly backups to Atlas on a trial basis.
Final Result
The resulting setup actually ends up being very similar to what I had with CrashPlan, with the exception of adding two rotating external drives, which brings me into compliance with the “3 total copies” rule — something that was previously lacking.
Each external hard drive will spend a year off-site (as the off-site copy) and then a year on-site, where it will serve as the “second” copy of the data (first is the “live” version, second is the on-site backup, and third is the off-site backup).
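Getting the archive data onto a drive doesn’t need anything fancy — a plain one-way copy while the drive is on-site does the job; a sketch with placeholder paths:

# Refresh the on-site drive's copy of the archive data before it rotates back off-site
rsync -avh /srv/atlas/archive/ /mnt/external-a/archive/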
Overall, this system should be usable for at least the next five years — at least in terms of data capacity and wear and tear. Total costs should be under $285/year. However, I’m going to work on getting that down even more over the next year by looking at alternatives to the relatively high per-device cost of Backblaze Backup, which only makes sense if a device is backing up close to 1TB of data — which I’m not.
“While there is no current limitation for CrashPlan Unlimited subscribers on the amount of User Data backed up to the Public Cloud, Code 42 reserves the right in the future, in its sole discretion, to set commercially reasonable data storage limits (i.e. 20 TB) on all CrashPlan+ Family accounts.” Source↩
my actual usage was closer to 8TB, so my actual rate was ~$0.0015/GB/month…still an amazingly good deal ↩
which also has additional costs associated with retrieval processing that could run up to near $2000 if you actually had to restore 20TB worth of data ↩
very expensive – on the order of $500 to $2500/year for 10 TB ↩
95.30% had not been modified within the last three years ↩
Guys! I bought a 3D printer! It hasn’t even arrived yet, but I already feel like I should have done this ages ago! I ended up going with the Wanhao i3 v2.1 Duplicator. It’s an upgraded version of the v2.0, which is effectively the same model that MonoPrice rebrands and sells as the Maker Select 3D Printer v2.
All around it seems to hit the sweet spot between price and capability. For me, the big selling points are:
Sufficiently large build envelope: 200 mm x 200 mm x 180 mm
Sufficient build resolution: 0.1 mm, but can go down to 0.05 mm!
Multiple-material filament capabilities
Good community support
Easy to make your own improvements/repairs
I had to pay a bit of a premium since I’m in the UK, but I think it will be worth it. The printer arrives tomorrow, and I hope to have a report out soon thereafter.
After having some fun hosting this on a Linode VPS, I’ve decided I don’t really want to be in the server maintenance business. So I’ve moved everything over to SiteGround over the last couple of months. It feels good to have one less thing to worry about, and SiteGround supports Let’s Encrypt! Win-win!
New Theme!
Sadly, Alex King (of Crowd Favorite) passed away in 2015. Unfortunately, his theme, FavePersonal, hasn’t received updates since, and things were starting to break. So: new theme.
Unfortunately, this also means that the social media interoperability has changed. Comments on Facebook and Twitter used to automatically be aggregated on this blog as well. My thinking on this continues to evolve, and while I believe it would be best to have a single commenting ecosystem, I’m more at ease with allowing separate systems to exist.
Fortunately, I’m still pushing blog posts to Facebook and Twitter since I know that’s a primary news source for many people (for better or for worse).
Email “Newsletters”
Ugh…I’ve hated this. I hate posting something and having it go out via email, only to find I made a typo or something. Or wanting to post multiple times in a day and worrying that people would hate all the email. This has honestly been a big mental block for me. Also, the plugin I was using1 was overly complicated and often didn’t render things correctly. I thought hard about getting rid of email subscriptions entirely, but instead I’m going to try something else. You’re welcome, relatives 😉
First, I’ve switched to a new system: MailPoet. We’ll see how this works…it seems to tick all the boxes I need for what I want to do.
Second, everyone who was on the old mailing list has been migrated to the new weekly digest list. If there have been blog posts from the past week, you will get an email on Monday morning with them — in theory.
Apparently I did a lot of flying last year — 101,584 miles’ worth. I was lucky enough to be carried on a British Airways 747-400 for many of my trips (my first trip on a 747 since 2006). I climbed the Alaska Airlines points ladder pretty quickly, finally hit MVP Gold 75K (after a couple of close years), and was in their top 10% of mileage earners for 2016.
Many new cities this year, plus one new state (Hawaii) and one new country (Spain).
Seattle, Washington*
Bournemouth, United Kingdom*
Arbon, Switzerland
Lindau, Germany
Zurich, Switzerland
Oxford, United Kingdom
Retford, United Kingdom*
Tenerife, Spain
Cambridge, United Kingdom
Heathrow, United Kingdom*
Snoqualmie, Washington
Stanwood, Washington
Lucerne, Switzerland
Vancouver, British Columbia
Orange County, California
Jamaica, New York
Kaua’i, Hawaii
One or more nights were spent in each place. Those cities marked with an asterisk (*) were visited multiple times on non-consecutive days. Roughly in order of appearance.
The election was so close that I’ve come to see the result as a bad roll of the dice. A few minor tweaks here and there — a more enthusiastic Sanders endorsement, one fewer of Comey’s announcements, slightly less Russian involvement — and the country would be preparing for a Clinton presidency and discussing a very different social narrative. That alternative narrative would stress business as usual, and continue to obscure the deep social problems in our society. Those problems won’t go away on their own, and in this alternative future they would continue to fester under the surface, getting steadily worse. This election exposed those problems for everyone to see.
It’s been a long time since I’ve run a race—I kind of just stopped running in 20121.
I got back into doing something fitness-related in June 2015 with CrossFit, which has been a boon for me…especially with all the travel I’ve done this year2. I signed up to run a 10K with Rachel and some friends in Vancouver, BC some months ago, and that’s what got me really running again. With help from Coach Monica at Twenty Pound Hammer, I got a running plan together to make this my best race yet.
After doing lots of free runs and looking at the data, I set a goal pace of 5:30 min/km ± 15 seconds3. For me, this amounts to around 80 strides a minute. I built a playlist around this pace that I trained with and raced to.
It’s also cool to see how far technology has come in the last few years. I used to run with a Nike+ sensor that I placed on my shoe to detect steps and eventually moved up to using their iPhone app. I recently switched to using iSmoothRun with Smashrun and Strava and I get so much more data…which is what also helped me pick my goal pace.
I also have sinus tachycardia…nothing serious, just something I have to keep an eye on. I’ve found through trial and error that as long as I can keep my HR below 190 I don’t get winded to the point where I have to slow down (think about how long you can hold a sprint pace). This seems to put me at around 5:30 min/km on flat surfaces, though with more training I’m hoping to best this.