Can anyone explain why, when my internet download speed is testing around 20mbps, if I go to download a file, the actual speed result is more like 1-2mbps?
There are a couple of things going on, but first, a primer on the internet:
For our purposes, think of the internet as several independent networks that are joined together through interconnection points. For our sake, let’s assume that each independent network is physically restricted to a city; so there’s a Seattle Network, a Denver Network, a Minneapolis Network, etc.
Also, each network is only connected to its closest *major* city. So, Seattle and Denver don’t actually connect to each other but instead both connect to the Salt Lake City Network…this is called a hop, and it takes two hops to get from the core of the Seattle Network to the core of the Denver Network (Hop 1: Core Seattle -> Core SLC; Hop 2: Core SLC -> Core Denver).
There are also other ways the Seattle Network could connect to the Denver Network…it could go down the west coast and then back up, but that would take more hops (through Portland, San Fran, LA, etc). Each hop takes time so there’s benefit to keeping the number of hops as low as possible. Also, the connections between any two cities are not infinitely big, but some are bigger than others.
Web servers are located throughout the world, but generally congregate near large cities since they offer the best chance of serving the most people with the fewest hops. If a website has customers in many different cities, it will probably have web servers in each of those cities to try to reduce the number of hops each visitor has to make to get to their server.
As a general internet user, you and I are on the outer fringes of one of these networks. If I want to connect to a server in a different city, I first must get to the core of my network before I can transit across other city networks to get to my destination. This could take several hops just within my city to get to the core of Seattle, and then several more hops to get to a different city if it’s not physically located nearby, and potentially even more hops if the server I want isn’t near a large city.
Okay, that’s the primer and hopefully it makes sense. To answer your specific question:
When you do a speed test, you are generally checking it against a server that’s run by your own ISP. If you look at Speedtest, you have the option to pick a server, and you can see that there are servers run by Frontier, AT&T, CenturyLink, Comcast, Sprint, and a whole bunch of other internet service providers. What you are testing is the connection between you and a point *near* the core of your city network. It probably takes about 6-10 hops. This is also the part of the network that is generally the most underutilized (which is also why ISPs oversubscribe and you get the dreaded 7pm slowdown when everyone is binging Netflix). This is rarely representative of real-world situations.
When you go to download your file, it’s probably hosted across the country and has to make 20+ hops. Any one of those hops may be subject to limits for all sorts of reasons that ultimately result in a slower download speed.
If you want a true test of your download speeds, you need to check it using a site that better represents a real-world situation. I’d suggest trying http://speedtest.fremont.linode.com/ and see how that compares.
If you’re interested I can go waaaay more in depth too and we can even look at exactly what routes your data is taking (it’s actually really fascinating) and maybe even figure out where the bottleneck is happening (though it will be tough, but not impossible, to do anything about it).
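If you want a preview of that, the standard tool for inspecting routes is traceroute. A minimal example, pointed at the Linode host mentioned above (substitute any hostname you like):

```shell
# List every hop between this machine and a remote server,
# with round-trip times for three probes per hop.
traceroute speedtest.fremont.linode.com

# On Windows the equivalent is:
#   tracert speedtest.fremont.linode.com
```

Hops that show a sharp jump in latency (or rows of asterisks) are reasonable bottleneck suspects, though some routers simply deprioritize probe traffic, so a slow-looking hop isn’t proof by itself.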
On the occasion of my 33rd birthday, I announced on social media (the irony is not lost on me) that Rachel and I were contemplating how we can keep in touch with our family, friends, and coworkers in deeper and more meaningful ways, and reconsidering the effects of social media in general.
I proposed leaving Facebook, Instagram, etc., and creating a more private setting (e.g. a monthly email and/or a friends/family-only blog), hoping you would come along for the ride.
This revamped version of our site, https://AndrewAndRachel.com, attempts to do at least part of that: it’s a private setting that we are encouraging our family and friends (which includes co-workers!) to sign up for to get updates on our life.
To make it as easy as possible, you’ll also be able to receive daily or weekly updates (or both, I suppose).
Note that I will still continue to post here on AFdN; however, it will be of a more technical flavor.
This is an engineering experiment (not to be confused with the rigorous scientific method), so we’ll see how it goes and tweak things as needed.
Having a backup of your data is important, and for me it’s taken several different forms over the years — morphing as my needs have changed, as I’ve gotten better at doing backups, and as my curiosity has compelled me.
For various reasons that will become clear, I’ve iterated through yet another backup system/strategy which I think would be useful to share.
The Backup System That Was
The most recent incarnation of my backup strategy was centered around CrashPlan and looked something like this:
Atlas is my NAS and where a bulk of the data I care about is located. It backs up its data to CrashPlan Cloud.
Andrew and Rachel are our laptops. I care about that data too, and they also back up to CrashPlan Cloud. Additionally, they back up to Atlas using CrashPlan’s handy peer-to-peer system.
Brother and Mom are extended family members’ laptops that just back up to CrashPlan Cloud.
Fremont is the web server (recently decommissioned); it used to back up to CrashPlan as well.
This all worked great because CrashPlan offered a (frankly) unbelievably good CrashPlan+ Family Plan deal that allowed up to ten computers and “unlimited” data — which CrashPlan took to mean somewhere around 20TB of total backups ((“While there is no current limitation for CrashPlan Unlimited subscribers on the amount of User Data backed up to the Public Cloud, Code 42 reserves the right in the future, in its sole discretion, to set commercially reasonable data storage limits (i.e. 20 TB) on all CrashPlan+ Family accounts.” Source)) — for $150/year. In terms of pure data storage cost this was $0.000625/GB/month ((my actual usage was closer to 8TB, so my actual rate was ~$0.0015/GB/month…still an amazingly good deal)), which is an order of magnitude less than Amazon Glacier’s cost of $0.004/GB/month ((which also has additional costs associated with retrieval processing that could run up to near $2000 if you actually had to restore 20TB worth of data)).
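For the curious, the per-gigabyte math works out like this (using 1 TB = 1,000 GB to keep the arithmetic round):

```shell
# $150/year spread across 20 TB (20,000 GB) of storage:
awk 'BEGIN { printf "$%.6f/GB/month\n", (150 / 12) / 20000 }'
# prints $0.000625/GB/month

# My actual usage was closer to 8 TB:
awk 'BEGIN { printf "$%.7f/GB/month\n", (150 / 12) / 8000 }'
# prints $0.0015625/GB/month
```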
And then one year ago CrashPlan announced:
we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires.
… To allow you time to transition to a new backup solution, we’ve extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/28/2018.
Important Things In A Backup System
First, a quick refresher on how to back up. Arguably the best method is the 3-2-1-bang strategy: “three total copies of your data, of which two are local but on different mediums (read: devices), and at least one copy offsite.” Bang represents the inevitable scenario where you actually have to use your backup.
This can be as simple as backing up your computer to two external hard drives — one you keep at home and back up to weekly, and one you leave at a friend’s house and back up to monthly.
Of course, it can also be more complex.
Replacing CrashPlan was hard because it has so many features for its price point, especially:
…these would become my core requirements, in addition to needing to understand how the backup software works (because of this, I strongly prefer open-source).
I also had additional considerations I needed to keep in mind:
How much data I needed to backup:
Atlas: While I have 12TB of usable space (of which I’m using 10TB), I only had about 7TB of data to backup.
This could impact all devices in the region (~ 1000 km radius)
How much touch-time did I want to put in to maintain the system:
As little as possible (< 10 hours/year)
The New Backup System
There’s no single key to the system and this is probably the way it should be. Instead, it’s a series of smaller, modular elements that work together and can be replaced as needed.
My biggest concern was cost, and the primary driver for cost was going to be where to store the backups.
Where to put the data?
I did look at off-the-shelf options and my first consideration was just staying with CrashPlan and moving to their Small Business plan, but at $120/device/year I was looking at $360/year just to backup Atlas, Andrew, and Rachel.
Carbonite, a CrashPlan competitor (and the company CrashPlan partnered with to transition its home users), has a “Safe” plan for $72/device/year, but it was a non-starter because they don’t support Linux, have a 30-day limit on file restoration, and do silly things like not automatically backing up files over 4GB and not backing up video files.
I decided I could live with Backblaze Backups to handle the off-site copies for the laptops, at least for now. I was back to the drawing board for Atlas though.
The most challenging part was creating a cost-effective solution for keeping an up-to-date off-site backup. I looked at various cloud storage options ((very expensive – on the order of $500 to $2500/year for 10 TB)), setting up a server at a friend’s house (high initial costs, challenging hands-on maintenance, not enough bandwidth), and using external hard drives (backups would lag too far behind to be useful).
I was dreading how much data I had as it looked like backing up to the cloud was going to be the only viable option, even if it was expensive.
In an attempt to reduce my overall amount of data hoarding, I looked at the different kinds of data I had and noticed that only a relatively small amount changed on a regular basis — 2.20% within the last year, and 4.70% within the last three years.
The majority ((95.30% had not been modified within the last three years)) was “archive data” that I still want to have immediate (read-only) access to, but was not going to change, either because they are the digital originals (e.g. DV, RAW, PDF) or other files I keep for historic reasons — by the way, if I’ve ever worked on a project for you and you want a copy because you lost yours there’s a good chance I still have it.
Since archive data wasn’t changing, recentness would not be an issue and I could easily store external hard drives offsite. The significantly smaller amount of active data I could now backup in the cloud for a reasonable cost.
Assuming I’m only backing up the active data (~300GB) and I have a 20% data change rate over the year (i.e. 20% of the data will change over the year, which I will also need to back up), this results in roughly $21.60/year worth of costs. Combined with two external WD 8TB hard drives for rotating through off-site storage, the back-of-the-envelope calculations were now in the ballpark of just $85/year when amortized over five years.
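As a sanity check, here’s the arithmetic (assuming B2’s then-current rate of $0.005/GB/month and roughly $160 per 8 TB drive; both numbers are my illustrative assumptions):

```shell
# ~300 GB of active data, plus 20% change over the year, stored in B2:
awk 'BEGIN { printf "$%.2f/year\n", 300 * 1.2 * 0.005 * 12 }'
# prints $21.60/year

# Two 8 TB external drives (assumed ~$160 each), amortized over five years:
awk 'BEGIN { printf "$%.2f/year\n", (2 * 160) / 5 }'
# prints $64.00/year

# Ballpark total:
awk 'BEGIN { printf "$%.2f/year\n", 300 * 1.2 * 0.005 * 12 + (2 * 160) / 5 }'
# prints $85.60/year
```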
How to put the data?
I looked at, tested, and eventually passed on several different programs:
duplicacy…doesn’t support restoring files directly to a directory outside of the repository ((though the more I’ve thought about it, the more I question whether this would actually be a problem))
To be clear: these are all very good programs and in another scenario I would likely use one of them.
Also, deduplication was probably the biggest issue for me, not so much because I thought I had a lot of files that were identical (or even parts of files) — I don’t — but because I knew I was going to be re-organizing lots of files and when you move a file to a different folder the backup program (without deduplication capability) doesn’t know that it’s the same file ((it’s basically the same operation as making a copy of a file and then deleting the original version)).
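This is exactly what content-addressed (deduplicating) storage solves: data is identified by a hash of its bytes rather than by its path, so a moved file resolves to a chunk the backup already has. A toy sketch of the idea (real tools chunk files and do far more, but the principle holds):

```shell
# Simulate a "moved" file: to a naive path-based backup this looks
# like a brand-new file appearing in a new folder.
mkdir -p before after
echo "the same bytes" > before/report.pdf
cp before/report.pdf after/report.pdf

# A deduplicating backup hashes the contents instead; identical bytes
# produce an identical hash, so the data is stored exactly once:
sha256sum before/report.pdf after/report.pdf
# both lines print the same hash; only the paths differ
```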
I eventually settled on Duplicati — not to be confused with duplicity or duplicacy — because it ticks all the right boxes for me:
open source (with a good track record and actively maintained)
client side (i.e. does not require server-side software)
supports B2 and local storage destinations
multiple retention policies
encryption (including ability to use asymmetric keys with GPG!)
The default settings appear to be pretty good and I didn’t change anything except for:
Adding SSL encryption for the web-based interface
Duplicati uses a web-based interface ((you can also use the CLI)) that is only designed to be used on the local computer — it’s not designed to be run on a server with the GUI accessed remotely through a browser. Because it was only designed to be accessed from localhost, it sends passwords in the clear, which is a concern, but one that has already been filed as an issue and can be mitigated by using HTTPS.
Unfortunately, the OMV Duplicati plugin doesn’t support enabling HTTPS as one of its options.
Note: the recipient can either be an email address (e.g. email@example.com) or it can be a GPG Key ID (e.g. 9C7F1D46).
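In GPG terms, encrypting to a recipient looks roughly like this (9C7F1D46 is the example key ID from above; backup.tar is a placeholder filename):

```shell
# Encrypt to a public key; only the matching private key can decrypt.
# The recipient can be a key ID...
gpg --encrypt --recipient 9C7F1D46 backup.tar

# ...or an email address associated with a key:
gpg --encrypt --recipient email@example.com backup.tar

# Restoring requires the private key:
gpg --decrypt --output backup.tar backup.tar.gpg
```

The nice property for backups is that the encrypting side only ever needs the public key, so the machine making the backups can’t read old backups even if it’s compromised.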
The last piece of the puzzle was how to manage my local backups for the laptops. I’m currently using Arq and Time Machine to make nightly backups to Atlas on a trial basis.
The resulting setup actually ends up being very similar to what I had with CrashPlan, with the exception of adding two rotating external drives which brings me into compliance with the “3 total copies” rule — something that was lacking.
Each external hard drive will spend a year off-site (as the off-site copy) and then a year on-site where it will serve as the “second” copy of the data (first is the “live” version, second is the on-site backup, and third is the off-site backup).
Overall, this system should be usable for at least the next five years — at least in terms of data capacity and wear/tear. Total costs should be under $285/year. However, I’m going to work on getting that down even more over the next year by looking at alternatives to the relatively high per-device cost of Backblaze Backup, which only makes sense if a device is backing up close to 1TB of data — which I’m not.
Guys! I bought a 3D printer! It hasn’t even arrived yet, but I already feel like I should have done this ages ago! I ended up going with the Wanhao Duplicator i3 v2.1. It’s an upgraded version of the v2.0, which is effectively the same model that MonoPrice rebrands and sells as the Maker Select 3D Printer v2.
All around it seems to hit the sweet spot between price and capability. For me, the big selling points are:
Sufficiently large build envelope: 200 mm x 200 mm x 180 mm
Sufficient build resolution: 0.1 mm, but can go down to 0.05 mm!
Multiple-material filament capabilities
Good community support
Easy to make your own improvements/repairs
I had to pay a bit of a premium since I’m in the UK, but I think it will be worth it. Printer arrives tomorrow, and I hope to have a report out soon thereafter.
After having some fun hosting this on a Linode VPS, I’ve decided I don’t really want to be in the server maintenance business. So I’ve moved everything over to SiteGround over the last couple of months. It feels good to have one less thing to worry about, and SiteGround supports Let’s Encrypt! Win-Win!
Sadly, Alex King (of Crowd Favorite) passed away in 2015. Unfortunately, his theme, FavePersonal, hasn’t been getting updates since, and things were starting to break. So: new theme.
Unfortunately, this also means that the social media interoperability has changed. Comments on Facebook and Twitter used to automatically be aggregated on this blog as well. My thinking on this continues to evolve, and while I believe it would be best to have a single commenting ecosystem, I’m more at ease with allowing separate systems to exist.
Fortunately, I’m still pushing blog posts to Facebook and Twitter since I know that’s a primary news source for many people (for better or for worse).
Ugh…I’ve hated this. I hate posting something and having it go out via email, only to find I made a typo or something. Or wanting to post multiple times in a day and worrying that people would hate all the email. This has honestly been a big mental block for me. Also, the plugin I was using ((Subscribe2 HTML)) was overly complicated and often didn’t render things correctly. I thought hard about getting rid of email subscriptions entirely, but instead I’m going to try something else. You’re welcome, relatives 😉
First, I’ve switched to a new system: MailPoet. We’ll see how this works…it seems to tick all the boxes I need for what I want to do.
Second, everyone who was on the old mailing list has been migrated to the new weekly digest list. If there have been blog posts from the past week, you will get an email on Monday morning with them — in theory.
Apparently I did a lot of flying last year — 101,584 miles worth. I was lucky enough to be carried via British 747-400 for many of my trips (my first trip on a 747 since 2006). I climbed the Alaska Airlines points ladder pretty quickly, finally hit MVP Gold 75K (after a couple of close years), and was in their top 10% of mileage earners for 2016.
Many new cities this year, plus one new state (Hawaii) and one new country (Spain).
Bournemouth, United Kingdom*
Oxford, United Kingdom
Retford, United Kingdom*
Cambridge, United Kingdom
Heathrow, United Kingdom*
Vancouver, British Columbia
Orange County, California
Jamaica, New York
One or more nights were spent in each place. Those cities marked with an asterisk (*) were visited multiple times on non-consecutive days. Roughly in order of appearance.
The election was so close that I’ve come to see the result as a bad roll of the dice. A few minor tweaks here and there — a more enthusiastic Sanders endorsement, one fewer of Comey’s announcements, slightly less Russian involvement — and the country would be preparing for a Clinton presidency and discussing a very different social narrative. That alternative narrative would stress business as usual, and continue to obscure the deep social problems in our society. Those problems won’t go away on their own, and in this alternative future they would continue to fester under the surface, getting steadily worse. This election exposed those problems for everyone to see.
It’s been a long time since I’ve run a race—I kind of just stopped running in 2012 ((2012 July 2nd was my last recorded run)).
I got back into doing something fitness-related in June 2015 with CrossFit, which has been a boon for me…especially with all the travel I’ve done this year ((but that’s another story)). I signed up some months ago to run a 10K with Rachel and some friends in Vancouver, BC, and that’s what got me really running again. With help from Coach Monica at Twenty Pound Hammer, I got a running plan together to make this my best race yet.
After doing lots of free runs and looking at the data, I set a goal pace of 5:30 min/km ± 15 seconds ((8:51 min/mi)). For me, this amounts to around 80 strides a minute. I built a playlist around this pace that I trained with and raced to.
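For anyone wanting to convert paces, min/km to min/mi is just multiplication by 1.609344 (kilometers per mile); a quick check of the 5:30 figure:

```shell
# 5:30 min/km = 5.5 minutes per km; multiply by km-per-mile:
awk 'BEGIN {
  pace = 5.5 * 1.609344                       # 8.851392 minutes per mile
  printf "%d:%02d min/mi\n", int(pace), int((pace - int(pace)) * 60 + 0.5)
}'
# prints 8:51 min/mi
```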
It’s also cool to see how far technology has come in the last few years. I used to run with a Nike+ sensor that I placed on my shoe to detect steps and eventually moved up to using their iPhone app. I recently switched to using iSmoothRun with Smashrun and Strava and I get so much more data…which is what also helped me pick my goal pace.
I also have sinus tachycardia…nothing serious, just something I have to keep an eye on. I’ve found through trial and error that as long as I keep my HR below 190, I don’t get winded to the point of having to slow down (think about how long you can hold a sprint). This seems to put me at around 5:30 min/km on flat surfaces, though with more training I’m hoping to best this.
10K time: 58:40 (5:52 min/km)
5K time: 27:54 (5:35 min/km)
Overall ranking: 850 / 3071
Division Ranking (Male 30-34): 81 / 142 ((Sabo finished 79th and Charlie 80th…even though I didn’t run with them))